| Column | Type | Length (min–max) |
|---|---|---|
| id | string | 10–10 |
| title | string | 7–231 |
| abstract | string | 3–2.43k |
| authors | string | 5–21.5k |
| published_date | string | 20–20 |
| link | string | 33–34 |
| markdown | string | 133–1.92M |
2310.15383
GD-COMET: A Geo-Diverse Commonsense Inference Model
With the increasing integration of AI into everyday life, it's becoming crucial to design AI systems that serve users from diverse backgrounds by making them culturally aware. In this paper, we present GD-COMET, a geo-diverse version of the COMET commonsense inference model. GD-COMET goes beyond Western commonsense knowledge and is capable of generating inferences pertaining to a broad range of cultures. We demonstrate the effectiveness of GD-COMET through a comprehensive human evaluation across 5 diverse cultures, as well as extrinsic evaluation on a geo-diverse task. The evaluation shows that GD-COMET captures and generates culturally nuanced commonsense knowledge, demonstrating its potential to benefit NLP applications across the board and contribute to making NLP more inclusive.
Mehar Bhatia, Vered Shwartz
2023-10-23T22:03:56Z
http://arxiv.org/abs/2310.15383v1
# GD-COMET: A Geo-Diverse Commonsense Inference Model

###### Abstract

With the increasing integration of AI into everyday life, it's becoming crucial to design AI systems that serve users from diverse backgrounds by making them culturally aware. In this paper, we present gd-comet, a geo-diverse version of the COMET commonsense inference model. gd-comet goes beyond Western commonsense knowledge and is capable of generating inferences pertaining to a broad range of cultures. We demonstrate the effectiveness of gd-comet through a comprehensive human evaluation across 5 diverse cultures, as well as extrinsic evaluation on a geo-diverse task. The evaluation shows that gd-comet captures and generates culturally nuanced commonsense knowledge, demonstrating its potential to benefit NLP applications across the board and contribute to making NLP more inclusive.

## 1 Introduction

Culture plays a significant role in shaping an individual's worldviews, beliefs, behaviours, and communication styles (Spradley, 1987). A considerable portion of what is commonly referred to as commonsense knowledge is not universal but rather culture-specific, including social norms, values, traditions, and more. An example of cultural differences is greetings, which may involve a handshake in Western cultures, bowing in some Asian cultures, a 'namaste' gesture in India, or 'wai' in Thailand. With AI systems becoming increasingly ubiquitous in society, it is imperative to go beyond the Western cultural perspective (Hershcovich et al., 2022). Lack of cultural awareness may lead to models perpetuating stereotypes and reinforcing societal inequalities (Hutchinson et al., 2020; Ross et al., 2021; Søgaard, 2022), impeding their effectiveness for users from non-Western countries.

In this paper, we focus on a popular model for commonsense reasoning, COMET (Bosselut et al., 2019), which is based on an English language model (LM) and further trained on commonsense inferences collected from North American crowdsource workers (Sap et al., 2019). Consequently, the model exhibits a certain bias towards the North American cultural perspective. As evidenced by Fig. 1, COMET displays limited familiarity with the concept of a German pancake, erroneously interpreting the term "dutch baby" in a literal sense.

Figure 1: Inferences from COMET and gd-comet for the sentence "PersonX eats a dutch baby", demonstrating lack of cultural awareness in COMET.

We identify a need for more inclusive commonsense reasoning models and propose gd-comet: **G**eo-**D**iverse COMET. As demonstrated in Fig. 1, gd-comet gained the culturally relevant knowledge to interpret "dutch baby" as a legitimate dish. gd-comet is similarly based on an English LM but is trained on a knowledge base of cultural knowledge (Nguyen et al., 2023) prior to training on COMET's original training data. This simple approach is effective, as judged by both human evaluation and extrinsic evaluation on a geo-diverse task (Yin et al., 2021). gd-comet can potentially benefit many downstream NLP applications where the user population is diverse.[^1]

[^1]: Code available at github.com/meharbhatia/GD-COMET

## 2 Background

### Commonsense Inference Models

Many NLP tasks require reasoning beyond what is explicitly stated in the text. People fill in those gaps with their commonsense knowledge. NLP models attempt to do the same by leveraging commonsense knowledge bases (KBs) such as ConceptNet (Speer et al., 2017) and ATOMIC (Sap et al., 2019).
To achieve better coverage, knowledge models such as COMET (Bosselut et al., 2019) are based on pre-trained LMs and further fine-tuned on KBs, enabling contextually-relevant inferences along the KB's dimensions for new contexts. COMET's hybrid approach has proved useful for various tasks (e.g., Chakrabarty et al., 2020; Ammanabrolu et al., 2021; Ravi et al., 2023a). Subsequent versions of COMET have been developed to draw inferences from paragraphs (Gabriel et al., 2021), images (Park et al., 2020), and complex sentences (Ravi et al., 2023b). Further improvements include obtaining additional training data through crowdsourcing (Hwang et al., 2021) or generating synthetic data from LMs (West et al., 2022). COMET and its successors assume the universality of commonsense knowledge, yet much of this knowledge differs among cultures, in traditions (e.g., the duration of a wedding ceremony; Acharya et al., 2021), foods (e.g., what counts as breakfast food; Speer et al., 2017), social norms, and more.

### Culture-Aware NLP

While multilingual NLP is a popular topic, culture-aware NLP is under-explored. It is crucial for language technologies to not only serve speakers of a wide variety of languages but also acknowledge that users come from diverse cultures (Hershcovich et al., 2022). Cultural norms and pragmatic aspects differ across speakers from different cultures (Zhou et al., 2023). Nevertheless, English LMs primarily reflect a North American lens due to training on web data with a US user bias (Cao et al., 2023).

Current work in culture-aware NLP addresses various aspects. One line of work focuses on cultural stereotypes and biases, and ways to measure and mitigate them (e.g., Hutchinson et al., 2020; Ross et al., 2021; Søgaard, 2022). Another line of work analyzes the differences in culture-specific commonsense knowledge, including relational knowledge (Yin et al., 2022), grounding of time expressions (Shwartz, 2022), food-related customs (Palta and Rudinger, 2023), and social values (Lin et al., 2021; Arora et al., 2023). At the same time, there have been efforts to develop benchmarks (Yin et al., 2021; Liu et al., 2021) and to adapt models to new cultures (Zhou et al., 2023; Yin et al., 2023). Finally, there are several recent cultural KBs such as StereoKG (Deshpande et al., 2022), Quasimodo (Romero et al., 2019), and CANDLE (Nguyen et al., 2023). CANDLE, which we use in this work, is the most comprehensive among them, containing 1.1M assertions in English about 386 cultures (e.g., "A Dutch baby is a German pancake that is baked instead of cooked on the stove top"). CANDLE assertions were extracted from a large web corpus and clustered into _5 facets of culture_: food, drinks, clothing, rituals, and traditions.

## 3 gd-comet

We present gd-comet, a geo-diverse version of COMET. The goal of gd-comet is to generate high-quality commonsense inferences for concepts and events pertaining to both Western and non-Western cultures. Rather than collecting a large-scale geo-diverse dataset in the style of ATOMIC, we split the training into two phases: (1) training the underlying LM on geo-diverse data; (2) continuing training on the large-scale original COMET training data. This is motivated by Bosselut et al. (2019), who showed that implicit commonsense knowledge from the underlying LM's pre-training transfers to COMET. We similarly hypothesize that encoding geo-diverse data into the underlying LM prior to training on COMET data will transfer this knowledge to gd-comet.

**Geo-Diverse Training (GD-BART).** We pick 770,000 assertions from CANDLE with a combined score greater than 0.5. This threshold selects highly distinctive assertions that are specific and relevant to their respective regions. We fine-tune BART-Large, the underlying LM of the latest COMET model (Hwang et al., 2021), on this data, using BART's original pre-training objectives (token masking, token deletion, text infilling, and sentence permutation). We save the model checkpoint with the lowest validation loss after training for 50 epochs on two NVIDIA A40 GPUs.

**COMET Training.** We proceed to fine-tune GD-BART on the large-scale ATOMIC-2020 dataset, using the same training method and hyperparameters as Hwang et al. (2021). Appendix A lists the 34 COMET relations used in this paper.
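For illustration, a minimal sketch of the second phase follows: COMET-style seq2seq fine-tuning of the phase-1 GD-BART checkpoint on ATOMIC-2020 triples, assuming HuggingFace Transformers. The checkpoint path, data file, `[GEN]` marker handling, and hyperparameters are illustrative stand-ins, not the paper's exact setup.

```python
# Sketch of phase 2: fine-tune the phase-1 GD-BART checkpoint on ATOMIC-2020
# (head, relation, tail) triples, COMET-style. Paths, the [GEN] marker, and
# hyperparameters are illustrative, not the paper's exact configuration.
from datasets import load_dataset
from transformers import (BartForConditionalGeneration, BartTokenizer,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

tokenizer = BartTokenizer.from_pretrained("./gd-bart")   # phase-1 output (assumed path)
tokenizer.add_tokens(["[GEN]"])                          # generation marker, per COMET-ATOMIC-2020 style
model = BartForConditionalGeneration.from_pretrained("./gd-bart")
model.resize_token_embeddings(len(tokenizer))

def encode(ex):
    # Source: "head relation [GEN]"; target: the tail of the triple.
    enc = tokenizer(f"{ex['head']} {ex['relation']} [GEN]",
                    truncation=True, max_length=64)
    enc["labels"] = tokenizer(ex["tail"], truncation=True,
                              max_length=64)["input_ids"]
    return enc

atomic = load_dataset("csv", data_files={"train": "atomic2020_train.csv"})
train = atomic["train"].map(encode, remove_columns=atomic["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gd-comet", num_train_epochs=3,
                           per_device_train_batch_size=32, learning_rate=1e-5),
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```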
## 4 Intrinsic Evaluation

To evaluate the quality of gd-comet, we construct a set of input sentences pertaining to 5 diverse cultures (Table 1). We sample 5 concepts for each facet and use facet-specific templates (Appendix B) to create 20 sentences for each culture. For each of COMET and gd-comet, we use beam search to generate 5 inferences for each of the 34 dimensions and convert them to natural language statements using relation-specific templates based on prior work (Hwang et al., 2021). The correctness of the inferences from both models was judged by 10 graduate students, two from each of the respective cultures. Annotators were asked to grade inferences along the following criteria on a scale of 0 (worst) to 3 (best):

1. **Cultural Relevance:** The inference is factually accurate and reflects the values, customs, traditions, and societal norms associated with the given culture.
2. **Stereotype Avoidance:** The inference does not perpetuate stereotypes about the culture.
3. **Linguistic Accuracy:** The inference is grammatical, and the vocabulary and idiomatic expressions are appropriate in that culture.

The annotations yielded a substantial inter-annotator agreement with \(\kappa\) = 0.656 for COMET and 0.702 for gd-comet, measured with average Cohen's Kappa (Cohen, 1960) across cultures.

| | 1 | 2 | 3 | Average \(\kappa\) |
|---|---|---|---|---|
| **COMET** | | | | |
| India | 2.32 | 2.16 | 2.65 | 0.71 |
| S. Korea | 1.93 | 1.86 | 2.32 | 0.67 |
| Nigeria | 1.97 | 1.98 | 2.27 | 0.61 |
| Iran | 2.09 | 2.31 | 2.42 | 0.63 |
| Indonesia | 2.28 | 2.36 | 2.55 | 0.66 |
| **gd-comet** | | | | |
| India | 2.62 | 2.54 | 2.73 | 0.74 |
| S. Korea | 2.13 | 1.92 | 2.35 | 0.65 |
| Nigeria | 2.25 | 1.92 | 2.35 | 0.59 |
| Iran | 2.27 | 2.38 | 2.58 | 0.76 |
| Indonesia | 2.43 | 2.46 | 2.58 | 0.77 |

Table 1: Evaluation of COMET and gd-comet inferences along criteria 1–3 above, judged by annotators from the respective cultures.

**Results.** Table 1 reveals that gd-comet consistently outperforms the standard COMET model. Specifically, gd-comet excels in generating culturally aligned inferences across the chosen diverse cultures, and is more likely than COMET to avoid biased assumptions. However, there is still room for improvement for South Korea and Nigeria.
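For concreteness, the agreement statistic used above can be computed as follows; this is a sketch assuming scikit-learn, and the grades in `ratings` are illustrative, not the study's actual annotations.

```python
# Sketch: average Cohen's kappa across cultures, two annotators per culture.
# Assumes scikit-learn; the 0-3 grades below are illustrative placeholders.
from sklearn.metrics import cohen_kappa_score

ratings = {  # culture -> (annotator 1 grades, annotator 2 grades)
    "India": ([2, 3, 2, 1, 3], [2, 3, 1, 1, 3]),
    "Iran":  ([3, 2, 2, 2, 1], [3, 2, 2, 1, 1]),
}
kappas = {c: cohen_kappa_score(a, b) for c, (a, b) in ratings.items()}
print(kappas, "average:", sum(kappas.values()) / len(kappas))
```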
## 5 Extrinsic Evaluation

Traditional benchmarks often fall short in testing models' knowledge and comprehension of diverse cultural contexts. To show gd-comet's utility for downstream tasks, we evaluate on a multimodal task, GD-VCR (Sec 5.1). We develop a model inspired by VLC-BERT (Ravi et al., 2023a) that generates inferences and incorporates them into a vision and language (V&L) model (Sec 5.2). We show that gd-comet improves the performance on GD-VCR over an array of baselines (Sec 5.3) and demonstrate the inferences contributing to the performance gains (Sec 5.4).

### Dataset

Visual Commonsense Reasoning (VCR; Zellers et al., 2019) is a benchmark for testing V&L models' ability to understand and reason beyond a visual scene. Each example consists of an image extracted from movies or TV series and a multiple-choice question about the actions or people depicted in the image. This dataset focuses solely on Western, primarily North American movies. The Geo-Diverse Visual Commonsense Reasoning dataset (GD-VCR; Yin et al., 2021) follows the same setup as VCR but extends to diverse regions. This evaluation-only dataset includes 328 images from movies and TV series in East Asian, South Asian, African, and Western countries (see Appendix C). We follow the original setup and train our model on VCR before testing on GD-VCR.

### Model (VLC-BERT with gd-comet)

We take inspiration from VLC-BERT (Ravi et al., 2023a), which incorporated COMET inferences into VL-BERT (Su et al., 2020). Instead, we integrate gd-comet as a source of contextualized cultural commonsense knowledge for GD-VCR. Figure 2 illustrates the model. We describe below VLC-BERT and where our model deviates from it.

Figure 2: A model using gd-comet for GD-VCR.

**Knowledge Generation and Selection.** VLC-BERT uses the question and the object tags as input to COMET. Instead of object tags, we generate an image caption using BLIP (Li et al., 2023) and extract noun phrases from the caption using spaCy (Honnibal et al., 2020). We found that the noun phrases provide a more detailed description of the depicted activities within the image (e.g., "family, burn" in Fig. 2). We additionally append a country tag to the input. During training on VCR, we use the tag "North America", the primary source of movies in the dataset. For the images in GD-VCR, we extracted country tags from Wikipedia. We use beam search to generate five inferences for each of the 34 dimensions. To select the most relevant inferences, we convert the inferences to natural language sentences using relation-specific templates and select the inferences that are the most similar to the question using SBERT embeddings (Reimers and Gurevych, 2019).
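The selection step can be sketched as follows, assuming the sentence-transformers library; the model name and example strings are illustrative, and `inferences` would come from gd-comet's beam search after applying the relation templates.

```python
# Sketch: keep the templated GD-COMET inferences most similar to the question.
# Assumes sentence-transformers; the SBERT model name and strings are illustrative.
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")

def select_inferences(question: str, inferences: list[str], top_k: int = 5):
    q_emb = sbert.encode(question, convert_to_tensor=True)
    i_emb = sbert.encode(inferences, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, i_emb)[0]          # cosine similarity per inference
    top = scores.topk(min(top_k, len(inferences))).indices
    return [inferences[i] for i in top]

# Example: rank templated inferences against a GD-VCR-style question.
print(select_inferences("Why is the girl's hand covered in henna?",
                        ["PersonX is getting married.",
                         "PersonX wants to eat breakfast."], top_k=1))
```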
**Overall Architecture.** The generic input to VL-BERT for VCR is <question, answer tokens, image regions>. Following Ravi et al. (2023a), we embed each inference with SBERT and summarize them into a single token with a weighted average based on learned attention scores. Finally, we feed the output of the [CLS] token into a classifier to predict a score for each answer choice. We train the model using binary cross-entropy loss for 20 epochs on 4 NVIDIA RTX6000 GPUs.

### Results

Table 2 compares our model's performance on GD-VCR with baselines that: (i) do not make use of commonsense knowledge (VL-BERT); (ii) generate inferences using GD-BART; and (iii) use COMET (VLC-BERT w/ COMET). Note that the same signals (i.e., country tag and noun phrases) were used for the GD-BART and COMET baselines. We also include prior results reported using VisualBERT and ViLBERT for completeness.

| Region | Human | VisualBERT* | ViLBERT* | VL-BERT | VLC-BERT w/ GD-BART | VLC-BERT w/ COMET | VLC-BERT w/ gd-comet |
|---|---|---|---|---|---|---|---|
| **GD-VCR** | 88.84 | 53.27 | 58.47 | 58.63 | 52.69 | 59.59 | **63.51** |
| West | 91.23 | 65.82 | 64.37 | 65.27 | 57.69 | 66.78 | **69.93** |
| South Asia | 92.98 | 52.04 | 62.90 | 64.92 | 54.35 | 64.25 | **68.17** |
| Africa | 87.93 | 51.85 | 62.04 | 58.17 | 51.87 | 57.71 | **64.81** |
| East Asia | 83.05 | 45.39 | 46.45 | 47.88 | 41.87 | 49.64 | **53.07** |

Table 2: Accuracy (%) of the different models on the subset of each region in GD-VCR. We report the average across 3 runs (see Appendix D for the results of individual seeds). Results marked with * were reported in Yin et al. (2021).

VLC-BERT w/ COMET modestly improves upon VL-BERT across most regions, with an overall improvement of 1.2 points in accuracy. This suggests that COMET provides some commonsense inferences that are universal. Conversely, gd-comet shows a substantial improvement of nearly 5 points over VL-BERT and 4 points over VLC-BERT w/ COMET. This highlights the effectiveness of incorporating gd-comet for downstream tasks that require culture-specific knowledge across diverse regions. Furthermore, GD-BART performs less effectively than the other methods, underscoring the importance of training on structured knowledge to generate contextually relevant responses.

### Qualitative Analysis

Figure 3 presents several GD-VCR instances along with the models' predictions and the inferences generated by COMET and gd-comet for them. In Figure 3(a), gd-comet accurately associates a girl wearing henna in Somalia with marriage. In Figure 3(b), it understands that folding palms during an Indian festival signifies a greeting or welcome. Finally, in Figure 3(c), it recognizes that bowing in South Korea is a gesture of apology, making VLC-BERT w/ gd-comet the only model that provides the correct answer. In contrast, COMET's inferences for this example are generic and irrelevant. These examples highlight gd-comet's effectiveness in identifying the cultural context and dynamically generating culturally-relevant commonsense inferences across ATOMIC's relations.

## 6 Conclusion

This work challenges the current notion of universally applicable commonsense knowledge by introducing gd-comet, a geo-diverse variant of COMET. gd-comet can generate culturally-nuanced commonsense inferences for a broad range of cultures. Our comprehensive evaluation confirms the effectiveness of gd-comet in incorporating and leveraging cultural cues. We view our work as a step towards developing more inclusive and culturally-aware AI systems.

### Limitations

While gd-comet represents a significant advancement in incorporating cultural commonsense knowledge into AI models, a few limitations need to be acknowledged. First, the availability of comprehensive, high-quality data remains a challenge in training culturally-aware models. While resources like CANDLE provide a step forward in curating diverse cultural knowledge, it is essential to note that merely capturing the existence of concepts within a culture is insufficient. Future efforts should aim to collect data that not only reflects the presence of certain concepts but also encompasses how people perceive and interpret those concepts within their specific cultural contexts. This would require extensive data collection efforts that go beyond surface-level understanding and delve into the nuances of cultural perspectives.
A second limitation is the availability of suitable benchmarks for testing models' knowledge and understanding of cultural variations. In particular, two such tasks, GD-VCR and MaRVL (Liu et al., 2021), focus on vision and language, while Nguyen et al. (2023) propose a cultural knowledge quiz. We hope to see more language-only datasets developed that go beyond testing models on knowledge about concepts from diverse cultures to understanding cultural nuances.

## Ethics Statement

Despite being designed to be more culturally inclusive, gd-comet runs the risk of unintentionally perpetuating biases present in the CANDLE data. In particular, CANDLE might misrepresent cultures with stereotypes or underrepresent cultures. Addressing these concerns requires proactive measures such as identifying biases using methods such as Mehrabi et al. (2021) and mitigating them through filtering and additional data collection. Additionally, the limited size of evaluation benchmarks means they do not always account for cultural variations within the same region. For example, GD-VCR images in the African region are concentrated in East Africa. Addressing this issue would similarly require additional annotation efforts.

## Acknowledgement

This work was funded, in part, by the Vector Institute for AI, the Canada CIFAR AI Chairs program, an NSERC discovery grant, and a research gift from AI2. Finally, we sincerely thank Sahithya Ravi, Aditya Chinchure, Ward Pennink and Jan Zimny for valuable feedback and discussions.

Figure 3: Attention analysis of commonsense inferences generated by COMET and gd-comet for testing samples in GD-VCR.
2304.06506
DiaTrend: A dataset from advanced diabetes technology to enable development of novel analytic solutions
Objective digital data is scarce yet needed in many domains to enable research that can transform the standard of healthcare. While data from consumer-grade wearables and smartphones is more accessible, there is a critical need for similar data from clinical-grade devices used by patients with a diagnosed condition. The prevalence of wearable medical devices in the diabetes domain sets the stage for unique research and development within this field and beyond. However, the scarcity of open-source datasets presents a major barrier to progress. To facilitate broader research on diabetes-relevant problems and accelerate development of robust computational solutions, we provide the DiaTrend dataset. The DiaTrend dataset is composed of intensive longitudinal data from wearable medical devices, including a total of 27,561 days of continuous glucose monitor data and 8,220 days of insulin pump data from 54 patients with diabetes. This dataset is useful for developing novel analytic solutions that can reduce the disease burden for people living with diabetes and increase knowledge on chronic condition management in outpatient settings.
Temiloluwa Prioleau, Abigail Bartolome, Richard Comi, Catherine Stanger
2023-04-04T00:59:04Z
http://arxiv.org/abs/2304.06506v1
# DiaTrend: A dataset from advanced diabetes technology to enable development of novel analytic solutions

###### Abstract

Objective digital data is scarce yet needed in many domains to enable research that can transform the standard of healthcare. While data from consumer-grade wearables and smartphones is more accessible, there is a critical need for similar data from clinical-grade devices used by patients with a diagnosed condition. The prevalence of wearable medical devices in the diabetes domain sets the stage for unique research and development within this field and beyond. However, the scarcity of open-source datasets presents a major barrier to progress. To facilitate broader research on diabetes-relevant problems and accelerate development of robust computational solutions, we provide the DiaTrend dataset. The DiaTrend dataset is composed of intensive longitudinal data from wearable medical devices, including a total of 27,561 days of continuous glucose monitor data and 8,220 days of insulin pump data from 54 patients with diabetes. This dataset is useful for developing novel analytic solutions that can reduce the disease burden for people living with diabetes and increase knowledge on chronic condition management in outpatient settings.

## Background & Summary

Advanced technologies like continuous glucose monitors (CGMs) and insulin pumps are transforming the standard of care for diabetes management [1, 2, 3]. The ubiquitous nature of these devices enables real-time monitoring and treatment in daily living; this is a huge advantage over single point-in-time alternatives like glucose meters and insulin pens. Research shows that many patients with diabetes achieve better outcomes with CGMs and insulin pumps [4, 5]. However, research also shows that digital data from these devices is significantly underutilized to optimize outcomes [6, 7]. Meanwhile, the next generation of solutions needed to advance diabetes care, such as the hybrid and fully closed-loop artificial pancreas [8, 9], depends substantially on continuous data from CGMs and insulin pumps. A major barrier to progress in this field centers around access to rich datasets that facilitate the development of novel analytic solutions. In addition, there is a large amount of related but disconnected data streams that are not often reviewed or analyzed together, which further limits our understanding of diabetes management and even prevention [10, 11]. To advance research and development of robust analytic solutions for the growing population of people with diabetes, there is a critical need for open datasets to understand outpatient management, develop interventions, and build clinically-relevant decision-support solutions.

Despite the recognized need for open datasets to enable research [12], there are limited datasets for data-driven research in the diabetes domain. One is the OhioT1DM dataset [13], which consists of eight weeks of CGM, insulin pump, physiological sensor, and self-reported event data from 12 people with type 1 diabetes; another is an N-of-1 dataset consisting of two weeks of blood glucose, insulin, and carbohydrate intake logs [14]. To broaden the scope of research on diabetes and chronic conditions in general, and to accelerate development of robust computational solutions, we provide the DiaTrend dataset. The DiaTrend dataset includes CGM and insulin pump data from 54 patients with type 1 diabetes.
This dataset is created from a subset of two larger studies focused on: 1) developing computational tools for self-management of diabetes [6], and 2) evaluating a digital intervention for young adults with type 1 diabetes [15]. The provided dataset includes time-aligned blood glucose samples recorded on average every 5 minutes with FDA-approved CGMs by Dexcom [16], Abbott [17], and Medtronic [18], and insulin pump data comprising basal and bolus insulin doses, carbohydrate intake logs, and other pump settings such as insulin-carb ratio and more. Figure 1 presents an overview of the data collection process and data provided.

The DiaTrend dataset is useful for several research directions, including more common tasks like blood glucose prediction [19, 20, 21, 22, 23, 24, 25, 26], prediction of adverse glycemic events (i.e., hypoglycemia and hyperglycemia) [27, 28, 29, 30], detection of unannounced meals [31, 32, 33, 34, 35], and algorithm development for insulin delivery [36, 37]. However, this dataset is also useful to support further research on less studied topics like discovering digital biomarkers of glycemic control [7], mining patterns/trends in diabetes management [6, 38, 39], understanding adherence to wearable medical devices and patterns of missing data [40, 41], developing novel visual analytic and data visualization solutions [42], and designing decision-support tools through user-centered studies [43, 44, 45, 46]. Additionally, prospective researchers can find more opportunities for artificial intelligence in the diabetes domain through recent reviews in the literature [47, 48, 49].

## Methods

### Participants

The DiaTrend dataset includes CGM and insulin pump data from a total of 54 patients with type 1 diabetes (age: 19 - 74 years; gender: 17 males, 37 females). Table 1 provides an overview of the demographic and clinical characteristics of patients in this dataset, including the distribution across age groups, gender, race, diabetes type, and hemoglobin A1C. Participants were recruited through two independent studies. Study 1 (also known as Digital SMD) recruited patients from Dartmouth Health in 2019, while study 2 (also known as SweetGoals [15]) is an ongoing randomized control trial that recruits patients through social media and online platforms. Both studies were approved by the Committee for Protection of Human Subjects at Dartmouth College, and all participants provided verbal and written consent prior to joining either study. In addition, participants provided consent to share their data openly with the broader research community.

Cohort 1 (n=17), from the Digital SMD study [6], includes persons with type 1 diabetes between the ages of 25 to 74 years old who use a CGM and insulin pump for daily management of their condition and consented to share their retrospective device data for research. Meanwhile, cohort 2 (n=37), from the SweetGoals study [15], includes persons who have had type 1 diabetes for longer than 18 months, are between the ages of 19 to 29 years old, use a Glooko-compatible glucometer or CGM, reported a clinical visit within the previous 6 months of the recruitment date, and self-reported their most recent Hemoglobin A1C (HA1C) value as >7.5%. It is important to note that all device data included in the DiaTrend dataset was collected at baseline (i.e., prior to any intervention). Additionally, each individual's dataset spans varying time periods based on the available retrospective data at the time of recruitment.
Given our focus on advanced diabetes technology for novel analytic solutions, only participants who use CGMs (with <30% missing data) and insulin pumps for daily management are included in the provided dataset.

Figure 1: Overview of the data collection process and data provided in the DiaTrend dataset.

### Dataset Description

The DiaTrend dataset includes a total of 27,561 days of CGM data and 8,220 days of insulin pump data from 54 patients with type 1 diabetes. In addition, the DiaTrend dataset includes demographic and clinical characteristics for each subject, including metrics such as age, gender, race, diabetes type, and HA1C (see Table 1). There is an average of 510 days (range: 31 - 1885 days) of CGM data per subject, and an average of 152 days (range: 31 - 780 days) of insulin pump data per subject (see Fig. 2). Within the insulin pump data, there is an average of 993 total bolus doses per subject (range: 132 - 4939 doses) and an average of 438 total carb inputs per subject (range: 1 - 2310 inputs) (see Fig. 3).

Figure 2: Overview of the number of days of sensor data per patient in the DiaTrend dataset.

Figure 3: Overview of the total number of bolus and carb input data per patient in the DiaTrend dataset.

These data were collected as part of the Digital SMD [6] and SweetGoals [15] studies, during which each patient's retrospective CGM and insulin pump data was downloaded through a third-party application (i.e., Tidepool [50] and Glooko [51]). It is important to note that since the SweetGoals study is a randomized control trial, only retrospective baseline data collected during the initial screening is included as part of the DiaTrend dataset (i.e., the provided data does not include sensor data from the intervention period of that study). In addition, HA1C - the primary clinically-validated metric for assessing glycemic control - was collected via the patient's electronic health record (i.e., the most recent HA1C) in the Digital SMD study and via a mail-in home test in the SweetGoals study at the time of the baseline assessment (approximately the endpoint of the device data).

Numeric values in each sheet should be read as floating point numbers. Table 2 provides a detailed breakdown of each data record, the format, and a description.

| Sheet Name | Column Name | Format | Description |
|---|---|---|---|
| CGM | date | datetime (yyyy-mm-dd HH:MM:SS) | Date and time that glucose reading was recorded |
| CGM | mg/dL | Float64 | Blood glucose reading in mg/dL |
| Bolus | date | datetime (yyyy-mm-dd HH:MM:SS) | Date and time that bolus was administered |
| Bolus | normal | Float64 | Amount of bolus insulin delivered (units) |
| Bolus | carbInput | Float64 | Total carbs announced for bolus (grams) |
| Bolus | insulinCarbRatio | Float64 | Patient setting for grams of carbs covered per one unit of insulin |
| Bolus | — | Float64 | Blood glucose reading at time of bolus (mg/dL) |
| Bolus | recommended.carb | Float64 | Amount of insulin recommended to cover carb intake for normal bolus |
| Bolus | recommended.net | Float64 | Amount of insulin recommended for bolus delivery |
| Bolus | recommended.correction | Float64 | Amount of insulin recommended for correction component of normal bolus |
| Bolus | insulinSensitivityFactor | Float64 | Patient setting for how one unit of insulin lowers blood glucose level |
| Bolus | targetBloodGlucose | Float64 | Target blood glucose value for after bolus delivery |
| Bolus | insulinOnBoard | Float64 | Amount of active insulin remaining from prior insulin doses |
| Basal | date | datetime (yyyy-mm-dd HH:MM:SS) | Date and time of basal infusion |
| Basal | duration | Float64 | Duration of basal infusion (ms) |
| Basal | rate | Float64 | Rate of basal infusion (units/hr) |

Table 2: Overview of the data records, format, and description in the DiaTrend dataset.

## Technical Validation

For each patient included in the DiaTrend dataset, we provide an overview of their blood glucose data using clinically-validated metrics for assessing glycemic control [54, 55]. This includes the percentage of all blood glucose readings in 5 clinically-relevant categories, namely: very low (< 54 mg/dL), low (54 - 69 mg/dL), target range (70 - 180 mg/dL), high (181 - 250 mg/dL), and very high (> 250 mg/dL). From Fig. 4a, we can observe that blood glucose is highly variable and only a minority of patients living with diabetes (less than 10% in our dataset) meet the clinical target of maintaining blood glucose within the target range of 70 - 180 mg/dL for more than 70% of the time [54]. Fig. 4b presents histograms of daily mean blood glucose (mean = 187 mg/dL), daily glycemic variability (mean = 0.33), and daily time in range (mean = 47%). From this figure, we can observe a normal distribution for each clinically-relevant metric in the DiaTrend dataset.

Figure 4: Descriptive summary of CGM data in the DiaTrend dataset. (a) The percent of blood glucose samples in 5 clinically-relevant categories. (b) The distributions of daily mean blood glucose, daily glycemic variability, and daily time in [target] range.
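As an illustration, these summary metrics can be computed from the CGM sheet along the following lines. This is a sketch assuming pandas and the column names of Table 2; the coefficient of variation is used here as a stand-in for the paper's daily glycemic-variability measure.

```python
# Sketch: clinically-relevant CGM summaries, assuming a pandas DataFrame with
# the "date" and "mg/dL" columns of the DiaTrend CGM sheet. The coefficient of
# variation (SD/mean) is an assumed stand-in for glycemic variability.
import pandas as pd

def glycemic_summary(cgm: pd.DataFrame) -> pd.Series:
    bg = cgm["mg/dL"]
    # Category edges from the paper: <54, 54-69, 70-180, 181-250, >250 mg/dL.
    cats = pd.cut(bg, bins=[0, 54, 70, 181, 251, float("inf")], right=False,
                  labels=["very low", "low", "target", "high", "very high"])
    pct = cats.value_counts(normalize=True).mul(100)
    daily = bg.groupby(pd.to_datetime(cgm["date"]).dt.date)
    return pd.Series({
        "daily mean BG (mg/dL)": daily.mean().mean(),
        "daily glycemic variability (CV)": daily.std().div(daily.mean()).mean(),
        "time in range (%)": pct["target"],
        **{f"{k} (%)": v for k, v in pct.items()},
    })
```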
Similarly, we provide an overview of each patient's insulin pump data using box plots and histograms. Fig. 5a and 5b show box plots with descriptive statistics associated with bolus insulin doses and carb inputs, respectively, for each subject. Additionally, Fig. 5c shows the distributions of total daily bolus insulin doses (units) and total daily carb inputs (g). From this figure, we can observe a mean total daily bolus of 24 units and a mean total daily carb input of 115 g, both with a positively skewed distribution. In particular, we observe a high number of days (\(\sim\)1400 days) with very low carb inputs (\(\sim\)0 g); this could be indicative of missed mealtime boluses (i.e., no bolus insulin used during mealtimes), which is a common contributor to poor glycemic outcomes [56, 57, 58].

Figure 5: Descriptive summary of insulin pump data in the DiaTrend dataset. (a) A box plot of all bolus insulin doses per subject. (b) A box plot of all carb input entries per subject. (c) The distributions of total daily bolus insulin and total daily carb inputs across all subjects.
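A sketch of how such daily totals and candidate missed-bolus days might be derived from the Bolus sheet follows; the column names come from Table 2, while the 5 g threshold is an illustrative assumption rather than the paper's definition.

```python
# Sketch: total daily bolus insulin and carb inputs from the Bolus sheet
# (columns "date", "normal", "carbInput" per Table 2). The 5 g threshold for
# flagging candidate missed-bolus days is an illustrative assumption.
import pandas as pd

def daily_totals(bolus: pd.DataFrame) -> pd.DataFrame:
    daily = (bolus.assign(day=pd.to_datetime(bolus["date"]).dt.date)
                  .groupby("day")[["normal", "carbInput"]]
                  .sum()
                  .rename(columns={"normal": "total_bolus_units",
                                   "carbInput": "total_carbs_g"}))
    # Days with near-zero announced carbs may indicate missed mealtime boluses.
    daily["possible_missed_bolus"] = daily["total_carbs_g"] < 5
    return daily
```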
### Limitations

There are some important considerations and limitations associated with the DiaTrend dataset provided in this paper. First, there is imbalance in the representation of subjects across the dimensions of race, gender, and age. More specifically, the majority of patients whose CGM and insulin pump data is provided (i.e., 48 out of 54, or 89%) are non-Hispanic White/Caucasian. Also, this dataset includes a lower representation of males (n=17 out of 54, or 32%) compared to females, and a lower representation of older adults (e.g., for age \(\geq\) 45 years old, n=12, or 22%). The limitation with regards to race (i.e., low representation of participants from non-White/Caucasian races, including Asian and Black/African Americans) is partly due to the geographical location (i.e., New Hampshire) from which some participants were recruited. However, the imbalance in representation also underscores racial disparities identified in prior literature relating to access and use of advanced diabetes technologies [59], particularly CGMs and insulin pumps. Additionally, the limitation with regards to age (i.e., low representation of older adults and higher representation of young adults) is primarily due to the targeted focus on young adults with type 1 diabetes in the SweetGoals study [15].

A second limitation of the DiaTrend dataset is that it lacks full temporal alignment of the CGM and insulin pump data for each participant. This difference is apparent from Fig. 2, which shows more CGM data than insulin pump data for a number of subjects. While the reason for this is unknown, we suspect that it is primarily due to lower data storage capacity on insulin pumps compared to CGMs, which in turn limits the amount of retrospective data available for download from insulin pumps. Third, basal insulin data is not available for subjects from cohort 2 (37 out of 54). This missing data stream might limit research efforts that require basal rates for analysis. However, despite the aforementioned limitations, the DiaTrend dataset represents one of the largest open-source datasets currently available in the diabetes domain. This critical resource provides a unique opportunity to advance development of novel data-driven solutions that can improve the lives of people living with diabetes. In addition, this dataset provides a necessary benchmark to evaluate the generalizability of numerous diabetes-relevant algorithms in the literature [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36].

## Usage Notes

The DiaTrend dataset is provided for research and educational purposes that support the development of novel data-driven solutions for the diabetes community and beyond. Consistent with exemplar studies [60, 13, 61], we have set governance structures in place to balance the need for open datasets that advance research with the need to protect the privacy of participants. Researchers interested in accessing the DiaTrend dataset should complete the following steps:

1. Register for a Synapse account (www.synapse.org).
2. Become a Synapse Certified User with a validated user profile.
3. Submit an Intended Data Use statement.
4. Agree to the Conditions of Use.

The conditions of use are as follows:

* You confirm that you will not attempt to re-identify research participants for any reason, including for re-identification theory research.
* You commit to keeping the DiaTrend dataset confidential and secure.
* You understand that these data may not be used for commercial advertisement or to re-contact research participants.
* You agree to acknowledge the research participants as data contributors, the study investigators, and this paper in all publications or presentations resulting from use of the DiaTrend dataset.

## Code availability

Python was used for all data processing described in this paper. The Python code used to generate all figures in this paper is available on the Augmented Health Lab's GitHub: [https://github.com/Augmented-Health-Lab/Diatrend](https://github.com/Augmented-Health-Lab/Diatrend).
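For prospective users, a subject's data might be loaded along these lines once access is granted. This is a sketch assuming per-subject Excel workbooks laid out as in Table 2; the file name is hypothetical, and (per the Limitations) the Basal stream is unavailable for cohort-2 subjects.

```python
# Sketch: loading one subject's DiaTrend workbook with pandas, assuming
# per-subject Excel files with the CGM/Bolus/Basal sheets of Table 2.
# The file name is hypothetical; basal data is unavailable for cohort 2.
import pandas as pd

sheets = pd.read_excel("Subject1.xlsx",
                       sheet_name=["CGM", "Bolus", "Basal"],
                       parse_dates=["date"])
cgm, bolus, basal = sheets["CGM"], sheets["Bolus"], sheets["Basal"]

print(f"{len(cgm)} glucose readings over {cgm['date'].dt.date.nunique()} days; "
      f"{bolus['normal'].sum():.0f} units of bolus insulin logged")
```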
2307.06056
How Many Papers Should You Review? A Research Synthesis of Systematic Literature Reviews in Software Engineering
[Context] Systematic Literature Review (SLR) has been a major type of study published in Software Engineering (SE) venues for about two decades. However, there is a lack of understanding of whether an SLR is really needed in comparison to a more conventional literature review. Very often, SE researchers embark on an SLR with such doubts. We aspire to provide more understanding of when an SLR in SE should be conducted. [Objective] The first step of our investigation was focused on the dataset, i.e., the reviewed papers, in an SLR, which indicates the development of a research topic or area. The objective of this step is to provide a better understanding of the characteristics of the datasets of SLRs in SE. [Method] A research synthesis was conducted on a sample of 170 SLRs published in top-tier SE journals. We extracted and analysed the quantitative attributes of the datasets of these SLRs. [Results] The findings show that the median size of the datasets in our sample is 57 reviewed papers, and the median review period covered is 14 years. The number of reviewed papers and review period have a very weak and non-significant positive correlation. [Conclusions] The results of our study can be used by SE researchers as an indicator or benchmark to understand whether an SLR is conducted at a good time.
Xiaofeng Wang, Henry Edison, Dron Khanna, Usman Rafiq
2023-07-12T10:18:58Z
http://arxiv.org/abs/2307.06056v1
# How Many Papers Should You Review? A Research Synthesis of Systematic Literature Reviews in Software Engineering

###### Abstract

[Context] Systematic Literature Review (SLR) has been a major type of study published in Software Engineering (SE) venues for about two decades. However, there is a lack of understanding of whether an SLR is really needed in comparison to a more conventional literature review. Very often, SE researchers embark on an SLR with such doubts. We aspire to provide more understanding of when an SLR in SE should be conducted. [Objective] The first step of our investigation focused on the dataset, i.e., the reviewed papers, in an SLR, which indicates the development of a research topic or area. The objective of this step is to provide a better understanding of the characteristics of the datasets of SLRs in SE. [Method] A research synthesis was conducted on a sample of 170 SLRs published in top-tier SE journals. We extracted and analysed the quantitative attributes of the datasets of these SLRs. [Results] The findings show that the median size of the datasets in our sample is 57 reviewed papers, and the median review period covered is 14 years. The number of reviewed papers and the review period have a very weak and non-significant positive correlation. [Conclusions] The results of our study can be used by SE researchers as an indicator or benchmark to understand whether an SLR is conducted at a good time.

_Index terms_: SLR, Systematic Literature Review, Methodological Study, Research Synthesis, Software Engineering

## I Introduction

Systematic literature reviews (SLRs) have a strong presence in Software Engineering (SE) literature, and the number of SLR studies has grown steadily in the last two decades [1]. SLRs, like any research, should be performed carefully, following rigorous processes, and results should be reported and interpreted appropriately. They require considerably more effort than traditional literature reviews [2]. Therefore, SE researchers should not commit to conducting an SLR without understanding whether it is worth doing. The worthiness can be understood from different perspectives, among which an important one is timing, i.e., when is the appropriate time to conduct an SLR? Or is there an appropriate time at all? Despite several guidelines and tertiary studies on SLRs in SE [2, 3, 4], no clear indications are provided on the right time to conduct an SLR on a research question, area, or phenomenon in the SE research field.

We aspire to fill this observed knowledge gap. As the first step of our research, we investigated the datasets, i.e., the reviewed papers, in SLRs in SE. We assumed that analysing the dataset of an SLR can reveal the development status of a research topic or area at the time the SLR was conducted. Therefore, we asked the following research question: _What are the characteristics of the datasets of SLRs in SE?_

To answer the research question, we conducted a research synthesis on a sample of SLRs published in top-tier SE journals. For each of the SLR studies in the sample, we extracted relevant data on the reviewed papers, including the number of reviewed papers and the period covered by these reviewed papers. The collected data was analysed from multiple angles to answer the posed research question. The findings reported in this paper provide insights into the characteristics of the datasets used by SLRs in SE.
SE researchers can take our findings as an indicator or benchmark to understand whether an SLR is conducted at a good time.

The rest of the paper is organised as follows. Section II provides a review of the guidelines and tertiary studies that are relevant to our study. The data collection process we followed to build our study sample is described in Section III. The following section, Section IV, reports the findings, which are discussed in Section V. Lastly, in Section VI, we outline the next steps of our research on understanding the temporal aspects of SLRs in SE.

## II Related Work

The widely used guidelines for SLRs in SE are provided in [2], in which the reasons for performing SLRs and their importance are argued. Later on, guidelines for the search strategy to update SLRs in SE were provided in [5]. Recently, Kitchenham et al. [6] presented an integrated set of guidelines to address reporting problems in secondary SE studies. Apart from these guidelines, several tertiary studies in SE exist in the literature. These studies assess the impact of SLRs and provide an annotated catalogue of SLRs (e.g., [3, 4]), record the reported experiences of conducting SLRs for the benefit of new researchers [7], or review SLRs in a specific SE area (e.g., Software Engineering Education [8]).

Few existing guidelines or tertiary studies in SE suggest the appropriate time to conduct an SLR on a research question or topic. The study of Mendes et al. [9] is the only one that we are aware of investigating the timing aspect of SLRs in SE. Their goal is to understand when is the appropriate time to update SLRs in SE. Using a decision framework employed in other fields, they analysed 20 SLRs which are updates of previously conducted SLRs. The study finds that 14 of the 20 updated SLRs need not have been conducted. The work of Mendes et al. [9] provides good motivation to examine the necessity of conducting first-time SLRs in SE, which is not investigated by these authors or, as far as we are aware, in any existing SE literature. More specifically to the focus of this paper, no suggestion is provided on how many papers should be reviewed in an SLR in SE. Understandably, suggestions like this are difficult to offer, since each research topic or area has a different development pace, has a different number of researchers working on it, and therefore accumulates evidence and knowledge at a different speed. Nevertheless, it would be useful to have an overall understanding of the datasets used by SLRs in SE, since a dataset, i.e., the reviewed papers, in an SLR represents the knowledge accumulated on the research topic under investigation.

## III Research Approach

To answer the research question, we employed research synthesis. Research synthesis is an umbrella term referring to methods used to summarise, integrate, combine, and compare the findings of different studies on a particular topic or research question [10, 11, 12]. Research synthesis aims at analysing and evaluating multiple studies to integrate and provide new interpretative explanations about them [12]. We conducted a research synthesis of a sample of SLRs in SE, focusing on the datasets used in these SLRs, to investigate how many papers should be reviewed in an SLR.

### _Data collection_

#### III-A1 Search strategy

Even though we were not conducting an SLR study, we followed the search strategy defined in [2] to build our sample. We did not attempt to exhaustively search for all relevant SLRs in SE, but rather to sample enough studies for analysis.
Therefore, we focused on SLRs published in top-tier SE journals as identified by Wong et al. [13]. This is a trade-off between considering as much literature as possible and accumulating and extracting reliable information. As reported in [1], more than 600 SLRs were published between 2004 and 2016, and the number has been growing since. Therefore, the SLRs published in journals can already provide enough data for the first step of our study. To build our search string, we combined the journals' titles with the synonyms of "systematic literature review" [14]. Our generic search string is:

_("systematic review" OR "research review" OR "research synthesis" OR "systematic research synthesis" OR "integrative research review" OR "integrative review" OR "systematic literature review" OR "literature review") AND ("Information and Software Technology" OR "Journal of Systems and Software" OR "IEEE Software" OR "IEEE Transactions on Software Engineering" OR "Software: Practice and Experience" OR "Software Testing, Verification and Reliability" OR "Transactions on Programming Languages and Systems" OR "Transactions on Software Engineering and Methodology" OR "Journal of Software: Evolution and Process" OR "International Journal on Software Tools for Technology Transfer" OR "Empirical Software Engineering")_

We ran the search string in Scopus on Feb 24, 2023, and retrieved 412 published papers.
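A query of this shape can be assembled programmatically, which makes the synonym and venue lists easy to audit and extend; a small sketch follows (both lists are abbreviated here for space).

```python
# Sketch: assembling the Scopus query from the synonym and venue lists above
# (both lists abbreviated for space).
synonyms = ['"systematic review"', '"systematic literature review"',
            '"literature review"', '"research synthesis"']
venues = ['"Journal of Systems and Software"',
          '"Information and Software Technology"',
          '"Empirical Software Engineering"']
query = f'({" OR ".join(synonyms)}) AND ({" OR ".join(venues)})'
print(query)
```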
Each paper was inspected by two authors to decide whether it is an SLR study or follows SLR guidelines. In the cases where the two authors did not agree on the decision, a third author's vote was required. We excluded the studies that do not follow SLR guidelines (e.g., conventional/ad-hoc reviews). We also excluded mapping studies, grey literature reviews, multi-vocal literature reviews, tertiary studies, and SLR updates. Some studies published in IEEE Software are typically summaries of existing SLRs that have already been published in other venues. We checked the venues where the original SLRs were published and included the original SLRs in our sample if the venues are among the journals we used to search for SLRs.

#### III-A2 Data extraction

A key element of an SLR is its dataset, i.e., the papers reviewed in the SLR. There are various facets of a dataset that could be relevant to our study. In this first step, we focused on the following three facets:

* The number of reviewed papers in the SLR;
* The earliest publication year of the reviewed papers; and
* The latest publication year of the reviewed papers.

If an SLR does not report the information above or provide detailed information on how to obtain it, we excluded it from our sample. The unit of analysis in our study is the SLR study itself. Therefore, if two SLR studies were conducted and reported in one paper, we considered two data points from that paper. Moreover, if a published paper contains both an SLR and other review studies (e.g., systematic mapping study) or empirical studies (e.g., case studies, experiments, etc.), we only included the paper if we were able to extract the SLR-related data. For each of the identified SLR studies, the meta-data of the paper in which it is published (e.g., title, publication year, authors, publication venue) were extracted automatically through the "export" feature of Scopus. Ultimately, we collected data from 170 SLRs, constituting our final data analysis sample. The final version of the dataset is accessible through a publicly available repository [15].

### _Data analysis_

In the data analysis phase, apart from the meta-data of the publications containing the SLRs, we defined the following two variables directly related to the dataset of an SLR:

* NoRP: the Number of Reviewed Papers in an SLR; and
* RPC: the Review Period Covered by an SLR, computed as _the latest publication year − the earliest publication year of the reviewed papers in an SLR + 1_.

After obtaining the descriptive statistics (min, max, median, mean, and standard deviation) of NoRP and RPC, we explored whether there was any relation between the two variables, that is, whether the number of reviewed papers in an SLR can be indicated by how long the research topic under study has been explored.

## IV Results

### _Sample overview_

Before reporting the results related to the two variables, we provide an overview of the 170 SLRs in our sample, as shown in Fig. 1 and Table I. Fig. 1 shows the distribution of the SLRs across the years. In our sample, the two earliest SLRs were published in IST in 2008. The number of SLRs published in top-tier journals has been growing over the years, despite small dips in certain years. Table I shows the distribution of these SLRs across the journals. It can be seen from Table I that the _Journal of Systems and Software_ has the most SLRs (70), followed by _Information and Software Technology_ (40). The _Journal of Software: Evolution and Process_ and _Empirical Software Engineering_ have similar numbers of SLRs (18 and 16, respectively).

Fig. 1: Number of SLRs per year (n = 170)

### _Characteristics of the datasets of SE SLRs_

Table II lists the descriptive statistics of the two variables, NoRP and RPC. As shown in the first column, "NoRP (n=170)", of Table II, the number of reviewed papers, or the size of the datasets of the SLRs, varies greatly (sd = 95.60). The minimum number of reviewed papers is 6 (in one SLR), and the maximum is 925 (in one SLR). The median size is 57, and the mean value is 80.59, which means the number of reviewed papers is right-skewed. Indeed, after removing the outliers (the four largest values of NoRP) to make Fig. 2 more readable (otherwise, the majority of the data points would be squeezed into a small area of the diagram), the difference between the median and mean values is reduced, as is the standard deviation, as shown in the second column, "NoRP (n=166)", of Table II. To show the distribution of NoRP more clearly, we plotted the histogram using the sample of 166 SLRs, as shown in Fig. 2. The red line indicates the mean value. It can be seen in Fig. 2 that dataset sizes ranging from 53 to 57 reviewed papers are the most common, used by fourteen SLRs. The other common size ranges are between 28 and 32 (thirteen SLRs), between 33 and 37 (twelve SLRs), and between 68 and 72 reviewed papers (twelve SLRs).

Fig. 2: The distribution of the number of reviewed studies in SLRs (n=166)

As shown in the third column, "RPC (n=170)", of Table II, this variable's median and mean values converge to 14 years, with a standard deviation of 8.22. The longest review period covered by the reviewed papers in an SLR is 41 years. The SLR with the longest review period was published in TSE in 2021. One hundred and sixty-six papers were reviewed in this SLR, ranging from 1977 to 2017. What is somewhat surprising is the shortest review period (the min value of RPC), which is 2 years. The SLR with the shortest review period was published in _Software Testing, Verification and Reliability_ in 2014.
Despite the short review period, the number of reviewed papers is fifty-four, close to the median dataset size. These fifty-four reviewed papers were published between 2009 and 2010.

Fig. 3 shows the distribution of RPC, the review periods covered by the reviewed papers in the SLRs, using the sample of 170 SLRs. No outlier is perceived, since all values are within a reasonable range (between 2 and 41). The red line indicates the mean value. As shown in Fig. 3, fifteen SLRs reviewed papers published within 14 years, which is the most common review period covered and also the median value of RPC. The next most common review period covered is 6 years (thirteen SLRs), followed by 11 years (twelve SLRs).

Fig. 3: The distribution of the year span of reviewed studies in SLRs (n=170)

Fig. 4 is the scatterplot of the two variables (NoRP vs RPC) using the sample of 170 SLRs. It shows no observable relationship between the number of reviewed papers in an SLR and the review period covered by that collection of reviewed papers. The scatterplot can be better observed using the sample of 166 SLRs, as shown in Fig. 5.

Fig. 4: The relation between the number of reviewed papers and review period covered in SLRs (n = 170)

Fig. 5: The relation between the number of reviewed papers and review period covered in SLRs (n = 166)

Using both samples, we tested the correlation between NoRP and RPC. Since the two variables are not normally distributed (based on the results of the Shapiro-Wilk test [16]), we tested their correlation using the Spearman rank correlation coefficient [17] with a 0.95 confidence level. For the sample of 170 SLRs, the results indicate a very weak positive correlation between the two variables (rho = 0.1310, p-value = 0.0886). Similar results were obtained using the sample of 166 SLRs (rho = 0.1357, p-value = 0.0814). However, in both cases, the p-value is above 0.05, which indicates that there is insufficient evidence to support a correlation between the two variables in either sample.
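For concreteness, the normality check and rank correlation above can be reproduced along these lines; this is a sketch assuming pandas and SciPy, with an illustrative file and column names standing in for the published repository's export.

```python
# Sketch: Shapiro-Wilk normality tests, then Spearman's rank correlation
# between NoRP and RPC. The CSV file and column names are illustrative
# stand-ins for the published repository's export [15].
import pandas as pd
from scipy import stats

slrs = pd.read_csv("slr_sample.csv")  # hypothetical export: one row per SLR
norp = slrs["NoRP"]                                    # number of reviewed papers
rpc = slrs["latest_year"] - slrs["earliest_year"] + 1  # review period covered

# Shapiro-Wilk: p < 0.05 rejects normality, motivating a rank-based correlation.
print(stats.shapiro(norp).pvalue, stats.shapiro(rpc).pvalue)

rho, p = stats.spearmanr(norp, rpc)
print(f"Spearman rho = {rho:.4f}, p-value = {p:.4f}")
```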
## V Discussion The quantitative analysis conducted on the datasets used by the SLRs in our sample shows that there is no single magic number that SE researchers can rely on to decide whether it is an appropriate time to conduct an SLR. It evidently depends on the research question or topic under investigation. However, the median number of reviewed papers in the SLRs (57) and the typical review period covered (14 years) can serve as a first useful indicator or benchmark to evaluate whether the research on a given topic has accumulated enough studies to warrant an SLR. SE researchers can estimate the dataset they will obtain, or compare what they have already obtained, to understand whether they are dealing with a smaller or larger dataset than the average ones used by the SLRs in SE. They should be more cautious when the dataset is extremely small or large, which may signal a potential issue in the literature search or inclusion/exclusion processes. Additionally, when the number is extremely small, it may mean that the research field is not mature enough and an SLR is not needed at that point in time. On the contrary, when the number is extremely large, it indicates that the SLR should have been conducted earlier. One major limitation of our study is that we constrained our SLR sampling to those published in a selected list of top-tier SE journals. We did not include SLRs published in SE conferences. Therefore, the findings cannot be generalised to the SLRs published in those venues. Another limitation is that we used the Number of Reviewed Papers (NoRP) as an indicator. This number is only obtainable after the relevant papers are retrieved and inclusion/exclusion criteria are applied, which means a significant amount of effort has already been invested before the NoRP can be known. This limits the usefulness of NoRP as an early-stage indicator of "when" to conduct an SLR. ## VI Next Steps and Future Work This paper reports the initial findings of our study on the temporal aspects of SLRs in SE. Our eventual goal is to understand when it is an appropriate time to conduct an SLR on an SE research topic. In the first step, we used the number of reviewed papers and the review period covered by these papers as the indicators. In the next steps, we will investigate other data, e.g., the number of retrieved papers after applying the search string (assuming a good one), as an earlier indicator of whether an SLR is being conducted in a timely manner. We also need to explore the factors that affect the size of SLR datasets, such as the number of libraries used in the search and the search strategies used (such as closed vs. open period). Additionally, we will collect more data about different facets of the dataset of an SLR: the distribution of the reviewed papers over years and venues, and the types of papers included in a dataset (conference or journal paper, research methodology used, and so on). We will explore the patterns in these data and the relations among different facets. Another avenue for future work is to broaden our sample by collecting and analysing the SLRs published in SE conferences. By contrasting and comparing the SLRs published in these two different types of venues, we can improve the generalisability of our findings. Our study focused only on quantitative SLR data. In the future, qualitative analysis can be conducted on SLRs. For example, one can investigate which SE topics have been systematically reviewed and published. One can also map the topics of SLRs to the SE knowledge areas [18] to provide a bigger picture of SE research and its change over time. This could help SE researchers to find the relevant SLRs on their topics and decide if an SLR on their topic is needed. Even though we focused on SLRs, we believe our research question is relevant to other literature review methods, such as systematic mapping studies or multivocal reviews. Therefore, researchers could replicate our approach to advance our knowledge in these related areas. ## Acknowledgement This work has been supported by ELLIIT; the Swedish Strategic Research Area in IT and Mobile Communications.
2307.01739
Spatial organization of slit-confined melts of ring polymers with non-conserved topology: A lattice Monte Carlo study
We present Monte Carlo computer simulations for melts of semiflexible, randomly knotted and randomly concatenated ring polymers on the fcc lattice and in slit confinement. Through systematic variation of the slit width at fixed melt density, we first explore the influence of confinement on single-chain conformations and inter-chain interactions. We demonstrate that confinement makes chains globally larger and more elongated, while enhancing both their contact and knotting propensities. As for multi-chain properties, we show that ring-ring contacts decrease with the confinement, yet neighbouring rings overlap more as confinement grows. These aspects are reflected in the decrease of link formation between pairs of rings. The results suggest that confinement can be used to fine-tune the mechanical properties of the polymer network. In particular, confinement biases the synthesis of networks that are softer under mechanical stress. Finally, in connection with a previous study of ours and recent simulations on two-dimensional polymer melts, our findings suggest that entanglements in polymer melts arise from pairwise ring-ring links alone.
Mattia Alberto Ubertini, Angelo Rosa
2023-07-04T14:16:11Z
http://arxiv.org/abs/2307.01739v1
Spatial organization of slit-confined melts of ring polymers with non-conserved topology: A lattice Monte Carlo study ###### Abstract We present Monte Carlo computer simulations for melts of semiflexible, randomly knotted and randomly concatenated ring polymers on the fcc lattice and in slit confinement. Through systematic variation of the slit width at fixed melt density, we first explore the influence of confinement on single-chain conformations and inter-chain interactions. We demonstrate that confinement makes chains globally larger and more elongated, while enhancing both their contact and knotting propensities. As for multi-chain properties, we show that ring-ring contacts decrease with the confinement, yet neighbouring rings overlap more as confinement grows. These aspects are reflected in the decrease of link formation between pairs of rings. The results suggest that confinement can be used to fine-tune the mechanical properties of the polymer network. In particular, confinement biases the synthesis of networks that are softer under mechanical stress. Finally, in connection with a previous study of ours and recent simulations on two-dimensional polymer melts, our findings suggest that entanglements in polymer melts arise from pairwise ring-ring links alone. ## I 1. Introduction Recent years have witnessed a growing interest in the design of so-called _smart_ materials, such as polycatenanes and polyrotaxanes [1; 2], whose microscopic components are constituted by ring polymers interlocked to each other by topological links that can be artificially synthesised following precise chemical routes. Interestingly, similar devices can also be prepared by employing biological components, mainly DNA plasmid rings [3] which interlock to each other through the action of the enzyme _topoisomerase-II_ (TopoII) and form a molecular state termed _Olympic_ hydrogel, first theorized by de Gennes in 1997 [4]. Remarkably, similar molecules can also be found in Nature: a classical example is the kinetoplast DNA [5] present in the mitochondria of certain _Trypanosoma_ parasites. Similarly to the covalent bonds stabilizing the shape of a molecule, topological links remain stable at room temperature, which guarantees that the corresponding molecule maintains a relatively well characterized spatial conformation. On the other hand, since the single ring constituents are not rigid objects but fluctuate [6] as ordinary polymers typically do [7; 8], these molecules display unusual mechanical properties under stress and tunable viscoelasticity that can be exploited in a wide range of practical applications (molecular machines and drug delivery [9; 10], to name a few), thus justifying the adjective "smart" employed for these materials. The preparation of topological materials with well-designed properties requires a delicate balance between many parameters: indeed, several numerical studies [11; 12; 13; 14] have characterized the topological state of systems made up of randomly concatenated and knotted polymer rings, and have shown that the resulting networks can be controlled using experimentally tunable parameters such as the length of the polymer chain, the density of the polymer solution, and the bending stiffness of the polymer fiber. So far, though, _geometric confinement_ as a way to drive the synthesis of concatenated ring networks has received considerably less attention.
Yet, recent experiments [15] performed on kinetoplast DNA [5] at varying degrees of _slit confinement_ have suggested the possibility of exploiting geometric constraints to bias the synthesis of a DNA-based network, similarly to the one discussed in Ref. [3]. In this work, we explore how geometric constraints, in the form of slit confinement, can affect the structural properties of systems of strand-crossing rings. To this purpose, we perform extensive dynamical simulations of highly entangled systems of randomly concatenated and knotted rings, employing the kinetic Monte Carlo algorithm introduced by us [13] for studying these systems at bulk conditions. Varying the degree of confinement, we quantify its influence on the metric properties of the rings, which display an interesting non-monotonous behaviour, as well as on the topological ones: in particular, the knotting probability is strongly enhanced by reducing the height of the slit, while the linking between the rings is diminished. These findings suggest that geometric confinement can be used as a powerful tool to control the topology of the resulting networks and their elastic properties. The paper is structured as follows. In Sec. 2 we present and discuss the Monte Carlo lattice polymer model, we introduce the notation, and we explain how to detect and compute topological invariants for the characterization of knots and links in the system. In Sec. 3 we present the main results of our work, while in Sec. 4 we provide some discussion and conclusions regarding the role of slit confinement in shaping both single-chain and inter-chain properties of the resulting polymer networks. Additional figures have been included in the Supporting Information (SI) file. ## II 2. Model and methods ### 2.1. Polymer model We consider polymer melts made of \(M\) randomly concatenated and randomly knotted ring polymers of \(N=320\) monomers each on the fcc lattice; the fcc unit step \(a\) is taken as our unit length. The simulations are based on the kinetic Monte Carlo (kMC) algorithm introduced by us in [13]. Since then, the algorithm has been variously applied to study melts of non-concatenated and unknotted rings [16] and the connection between entanglements and physical links in semiflexible chain melts [14]. In this article we limit ourselves to summarizing the essential details of the numerical protocol, while referring the reader to our past works for more details. Essentially, the polymer model takes into account: (i) chain connectivity, (ii) bending stiffness, (iii) excluded volume, (iv) topological rearrangement of polymer chains. Finally, and for the first time, in this work we consider (v) slit confinement in the model. For the implementation of chain dynamics, the following combination of MC moves is used: * Topology-_preserving_ moves (termed _Rouse-like_ and _reptation-like_, see [13]) that automatically enforce excluded volume interactions. By construction, these moves enable two (and no more than two) consecutive bonded monomers along each single chain to occupy the same lattice site: by allowing contour length to be stored along the polymer filament, this numerical "trick" makes the chains locally elastic and facilitates global chain equilibration. Because of that, the bond length is a fluctuating quantity with mean value \(\langle b\rangle\): in particular, the latter is insensitive to confinement (the measured values for \(\langle b\rangle\) are reported in Table 1).
In this way, the mean polymer contour length is \(L=N\langle b\rangle\) and, similarly, the mean contour length of a subchain of \(n\) monomers is \(\ell=n\langle b\rangle\). * Topology-_changing_ moves [13] that induce random strand crossings between nearby polymer filaments at a tunable rate: here, strand crossings are attempted once every \(10^{4}\) kMC elementary steps, consistent with our previous works [13; 14; 16]. Strand-crossings between filaments of the same ring can result in the creation or destruction of knots, while inter-ring crossings may cause either catenation or decatenation. The model has been shown to exhibit dynamical behavior consistent with the experiments [3], specifically dynamic "fluidization" of the rings due to topological violations through strand-crossings. Thus, by performing simulations of strand-crossing rings, we sample the ensemble of the network structures formed by randomly concatenated and knotted rings at the given density and in slit confinement (see below for details). Then, bending stiffness is modelled in terms of the Hamiltonian (in Boltzmann units, \(k_{B}T\)): \[\frac{\mathcal{H}}{k_{B}T}=-\kappa_{\text{bend}}\sum_{i=1}^{N\langle b\rangle/a}\cos\theta_{i}\,,\] where \(\kappa_{\text{bend}}=2\) is the bending stiffness and \(\theta_{i}\) is the angle between consecutive bonds along the chain, with periodic conditions - due to ring geometry - assumed for all the chains. By fixing the monomer number per fcc lattice site equal to \(\frac{5}{4}=1.25\)[13; 14; 16], the chosen bending stiffness corresponds to the chain Kuhn segment \(\ell_{K}/a=3.4\)[16], which is high enough to guarantee that distinct polymers are in an effectively highly entangled state. Finally, ring polymers are subject to slit confinement. This particular form of constraint is imposed by forcing the chains to move on the fcc lattice, with periodic boundary conditions in the \(xy\)-plane and hard boundaries in the \(z\)-direction placed at \(z=0\) and \(z=\text{H}\). We vary the height of the box H to study different confinement regimes, while adjusting the lateral box sides \(L_{x}=L_{y}\) to keep the density constant. The degree of confinement is quantified by the ratio \(\hat{\text{H}}=\text{H}/\sqrt{\langle R_{g}^{2}\rangle_{\text{bulk}}}\), expressing the ratio between the height, or width, of the slit H and the root-mean-square gyration radius (see definition (3)), \(\sqrt{\langle R_{g}^{2}\rangle_{\text{bulk}}}/a=\sqrt{49.66}\)[16], of rings in bulk conditions. We investigate the system's behavior from highly confined (\(\hat{\text{H}}\simeq 0.30\)) to mildly confined (\(\hat{\text{H}}\simeq 2.91\)) regimes and systematically compare the results with the corresponding values in bulk. Wherever appropriate, we have also compared the systems here with melts of unknotted and non-concatenated rings in bulk [16]. \begin{table} \begin{tabular}{c c c c} \hline \hline H/\(a\) & \(\hat{\text{H}}\) & \(M\) & \(\langle b\rangle/a\) \\ \hline 2.12 & 0.30 & 420 & 0.656 \\ 3.53 & 0.50 & 422 & 0.658 \\ 4.95 & 0.70 & 420 & 0.659 \\ 6.36 & 0.90 & 427 & 0.659 \\ 7.78 & 1.10 & 420 & 0.660 \\ 10.61 & 1.51 & 420 & 0.660 \\ 13.43 & 1.91 & 422 & 0.660 \\ 17.68 & 2.51 & 430 & 0.660 \\ 20.51 & 2.91 & 433 & 0.660 \\ bulk & – & 420 & 0.663 \\ \hline \hline \end{tabular} \end{table} Table 1: Values of physical parameters for the ring polymer melts investigated in this paper. \(a\) is the unit distance of the fcc lattice and the monomer number per fcc lattice site is equal to \(\frac{5}{4}=1.25\), see text and Refs. [13; 14; 16] for details. (i) H, height of the slit. (ii) \(\hat{\text{H}}=\frac{\text{H}}{\sqrt{\langle R_{g}^{2}\rangle_{\text{bulk}}}}\), ratio between the height of the slit and the root-mean-square gyration radius of rings in bulk (_i.e._, no confinement) conditions. (iii) \(M\), total number of simulated chains in the melt. (iv) \(\langle b\rangle\), mean bond length [17].
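To make the bending term above concrete, here is a minimal Python sketch of how \(\mathcal{H}/k_{B}T\) could be evaluated for a single ring; it assumes ordinary Cartesian monomer coordinates and simply skips the zero-length bonds that the kMC moves allow, so it is an illustration rather than the actual lattice implementation.

```python
import numpy as np

def bending_energy(coords, k_bend=2.0):
    """H/(k_B T) = -k_bend * sum_i cos(theta_i) for a closed ring,
    where theta_i is the angle between consecutive bond vectors
    (periodic indexing because of the ring geometry)."""
    bonds = np.roll(coords, -1, axis=0) - coords   # bond vectors
    lengths = np.linalg.norm(bonds, axis=1)
    mask = lengths > 0                             # drop zero-length bonds
    unit = bonds[mask] / lengths[mask][:, None]
    cos_theta = (unit * np.roll(unit, -1, axis=0)).sum(axis=1)
    return -k_bend * cos_theta.sum()
```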
We simulate \(M\simeq 420\) chains, comprising a total of \(N\times M\simeq 134400\) monomers, with \(M\) slightly adjusted to maintain a constant density (see Table 1 for the specific numbers). To assess meaningful chain statistics and, as in our other works [13; 14] on similar polymer systems, we run simulations long enough to obtain properly equilibrated melts. This is visualized in Figure S1 in SI, which shows plots of the monomer time mean-square displacement in the frame of the centre of mass of the corresponding chain (the so-called \(g_{2}\)[18]) as a function of the MC simulation time \(\tau_{\rm MC}\). As is known, provided long-enough simulations are available, \(g_{2}\) displays a plateau that is indicative of the equilibration of the system. All our systems display the corresponding plateaus, which demonstrates that equilibration has been reached for all the cases considered. Accordingly, the portion of the trajectory preceding the plateau has been discarded from the computation of the corresponding observables. ### 2.2. Detection of knots and links In order to characterize the topological states of the rings in the melt, we follow closely the pipeline recently developed by us [14]. Specifically, we employ a numerical algorithm which "shrinks" or simplifies each ring to its "primitive" shape, _i.e._ without violating topological constraints: in this way we detect knots and links at any order, _i.e._ pairwise links as well as three-chain links like the _Borromean_ ring configuration \(6_{2}^{3}\) (see Sec. 2.3). The algorithm returns the irreducible knotted or linked structures, which we further characterize by computing their topological invariants. For knots, in particular, we compute the corresponding Jones polynomial [19] using the Python package _Topoly_[20]. Instead, for two-body links we compute the Gauss linking number (GLN): \[{\rm GLN}\equiv\frac{1}{4\pi}\oint_{\mathcal{C}_{1}}\oint_{\mathcal{C}_{2}} \frac{(\vec{r}_{2}-\vec{r}_{1})\cdot(d\vec{r}_{2}\wedge d\vec{r}_{1})}{|\vec{ r}_{2}-\vec{r}_{1}|^{3}}\,, \tag{1}\] which gives the number of times two closed loops \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), parametrized respectively by coordinates \(\vec{r}_{1}\) and \(\vec{r}_{2}\), wind around each other. While unconcatenated rings have \({\rm GLN}=0\), it is known that there exist concatenated pairs with \({\rm GLN}=0\) (for instance, the so-called _Whitehead_ link configuration \(5_{1}^{2}\)). In these "pathological" cases, the links detected via our shrinking algorithm have been subsequently identified by computing the Jones polynomial using _Topoly_ again. We compute the Jones polynomials also for three-body irreducible links (for instance, Borromean rings), where a pairwise topological invariant such as the GLN fails (see Sec. 3.2.2).
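For illustration, Eq. (1) can be discretized directly for the polygonal ring conformations produced by the simulation. The sketch below uses a simple midpoint rule over segment pairs (an approximation; exact solid-angle formulas per segment pair also exist) and is checked on a Hopf link, for which \(|{\rm GLN}|=1\).

```python
import numpy as np

def gauss_linking_number(c1, c2):
    """Discretized Gauss double integral (Eq. 1) for two closed
    polygonal curves, each given as an (N, 3) array of vertices.
    Each segment pair contributes r12 . (dr2 x dr1) / |r12|^3,
    evaluated at the segment midpoints (midpoint rule)."""
    d1 = np.roll(c1, -1, axis=0) - c1            # bond vectors dr1
    d2 = np.roll(c2, -1, axis=0) - c2            # bond vectors dr2
    m1, m2 = c1 + 0.5 * d1, c2 + 0.5 * d2        # segment midpoints
    r12 = m2[None, :, :] - m1[:, None, :]        # pairwise separations
    dist3 = np.linalg.norm(r12, axis=-1) ** 3
    cross = np.cross(d2[None, :, :], d1[:, None, :])
    integrand = np.einsum('ijk,ijk->ij', r12, cross) / dist3
    return integrand.sum() / (4.0 * np.pi)

# Quick check on a Hopf link: two unit circles in orthogonal planes.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
ring_a = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
ring_b = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
print(round(gauss_linking_number(ring_a, ring_b)))   # prints +1 or -1
```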
### 2.3. Notation As for rings' metric properties, for some observables \(\mathcal{O}\) which can be expressed as a function of the monomers' coordinates we study separately the contributions \(\mathcal{O}_{\perp}\) and \(\mathcal{O}_{\parallel}\), respectively perpendicular (or transverse) and parallel to the plane of the slit (which, by construction (see Sec. 2.1), coincides with the \(xy\)-plane). As for rings' topological properties, in referring to a given knot or link we employ the conventional notation illustrated in the book by Rolfsen [21]. Namely, a knot or a link is defined by the symbol \(K_{i}^{p}\) where: \(K\) represents the number of irreducible crossings of the knot (or the link), \(p\) is the number of rings which take part in the topological structure (_e.g._, \(p=2\) for two-chain links) and \(i\) is an enumerative index assigned to distinguish topologically _non-equivalent_ structures having the same \(K\) and \(p\). ## III 3. Results ### 3.1. Single-chain properties #### 3.1.1. Rings' size and shape First, we characterize the impact of slit confinement on the size and shape of the rings. To this purpose, for each ring of the system we compute the \(3\times 3\) symmetric gyration tensor \(Q_{\alpha\beta}=Q_{\beta\alpha}\) (\(\alpha,\beta=x,y,z\)) defined as: \[Q_{\alpha\beta}=\frac{1}{N}\sum_{m=1}^{N}\left(r_{m,\alpha}-r_{\rm CM,\alpha} \right)\left(r_{m,\beta}-r_{\rm CM,\beta}\right)\,, \tag{2}\] where \(r_{m,\alpha}\) is the \(\alpha\)-th Cartesian component of the spatial position \(\vec{r}_{m}\) of monomer \(m\) and \(\vec{r}_{\rm CM}\equiv\frac{1}{N}\sum_{m=1}^{N}\vec{r}_{m}\) is the center of mass of the chain. The mean values of the eigenvalues of \(Q\) ordered in descending order, \(\langle\lambda_{1}^{2}\rangle\geq\langle\lambda_{2}^{2}\rangle\geq\langle \lambda_{3}^{2}\rangle\), quantify the mean spatial elongations of the polymers along the corresponding principal axes, while the mean value of the trace of \(Q\), \(\langle{\rm tr}Q\rangle=\sum_{\alpha=1}^{3}\langle\lambda_{\alpha}^{2}\rangle\), is equal to the mean-square gyration radius or size, \[\langle R_{g}^{2}\rangle\equiv\frac{1}{N}\sum_{m=1}^{N}\langle(\vec{r}_{m}- \vec{r}_{\rm CM})^{2}\rangle=\langle{\rm tr}Q\rangle=\sum_{\alpha=1}^{3} \langle\lambda_{\alpha}^{2}\rangle\,, \tag{3}\] of the chain. The results for \(\langle R_{g}^{2}\rangle\) (Eq. (3)) and the perpendicular and parallel components, \(\langle R_{g,\perp}^{2}\rangle\) and \(\langle R_{g,\parallel}^{2}\rangle\), are reported in Fig. 1. As H decreases, the transverse component \(\langle R_{g,\perp}^{2}\rangle\) decreases (green curve in Fig. 1(a)), as expected. Conversely, the parallel component \(\langle R_{g,\parallel}^{2}\rangle\) grows with confinement (red curve in Fig. 1(a)) because the ring is forced to spread along the plane of the slit. Together, these two effects produce a characteristic non-monotonic behavior in the overall \(\langle R_{g}^{2}({\rm H})\rangle\) (blue curve in Fig. 1(a)), with the minimum attained around \(\hat{\rm H}\simeq 0.7\), _i.e._ where confinement effects are expected to become more pronounced. Interestingly, for high confinement (\(\hat{\rm H}\lesssim 0.3\)), the rings are markedly larger than the bulk reference (blue dotted curve in Fig. 1(a)). In a previous study [22] of randomly concatenated rings under slit confinement, the non-monotonic behavior was also observed, but the swelling compared to the bulk state was not seen.
We attribute this discrepancy to the fact that, in the previous work, rings without excluded volume were considered, which could have favored more compact conformations. Beyond average values, we have also computed the corresponding probability distributions, \(P(R_{g})\), \(P(R_{g,\perp})\) and \(P(R_{g,\parallel})\), and represented each of them (see Fig. 1, panels (b) to (d)) in the corresponding scaled variable to ease comparison. While the distributions of the parallel component of the gyration radius are fundamentally unaffected by confinement (Fig. 1(c)), those of the normal component (see Fig. 1(d)) undergo a significant change in shape as the confinement becomes stronger, in particular becoming more peaked. Together, these changes produce an interesting effect on the distributions of the full gyration radius (Fig. 1(b)), which are characterized by higher tails for the systems under confinement. This suggests that, under confinement, rings assume more heterogeneous sizes.

Figure 1: (a) Ring mean-square gyration radius (\(\langle R_{g}^{2}\rangle\)) with its parallel (\(\langle R_{g,\parallel}^{2}\rangle\)) and transverse (\(\langle R_{g,\perp}^{2}\rangle\)) components as a function of the degree of confinement \(\hat{\rm H}\) (see Sec. 2.1 for definition). The dashed lines are for the values of the bulk system (_i.e._, no confinement). Error bars are smaller than the symbol size. (b, c, d) Scaling plots for, respectively, distribution functions of the ring gyration radius (\(P(R_{g}/\sqrt{\langle R_{g}^{2}\rangle})\)) and of its parallel (\(P(R_{g,\parallel}/\sqrt{\langle R_{g,\parallel}^{2}\rangle})\)) and transverse (\(P(R_{g,\perp}/\sqrt{\langle R_{g,\perp}^{2}\rangle})\)) components, at different degrees of confinement \(\hat{\rm H}\) (see legend in panel (b)). The dashed line in each panel corresponds to the reference distribution in bulk conditions.

We then study the rings' shapes and anisotropies by looking at the ratios: (i) \(\langle\lambda_{1}^{2}\rangle/\langle\lambda_{2}^{2}\rangle\), (ii) \(\langle\lambda_{1}^{2}\rangle/\langle\lambda_{3}^{2}\rangle\) and (iii) \(\langle\lambda_{2}^{2}\rangle/\langle\lambda_{3}^{2}\rangle\). The first ratio indicates the elongation or "asphericity" of the ring mean shape, while the other two measure the extent to which rings become effectively flat due to slit confinement. Results are shown in Fig. 2, where it is clear that at mild confinement \(\hat{\rm H}\gtrsim 1.5\) rings attain the same shape as the bulk ones. At higher confinement (\(\hat{\rm H}\lesssim 0.7\)), the ratios to the smallest eigenvalue (blue and red curves in Fig. 2) are described by characteristic power-law behaviors \(\sim\!\hat{\rm H}^{-\alpha}\) with similar \(\alpha\)'s: more precisely, the exponent for \(\langle\lambda_{1}^{2}\rangle/\langle\lambda_{3}^{2}\rangle\), \(\alpha=1.831\pm 0.001\), is only slightly larger than the exponent for \(\langle\lambda_{2}^{2}\rangle/\langle\lambda_{3}^{2}\rangle\), \(\alpha\simeq 1.781\pm 0.002\). This difference is also evident in the behavior of \(\langle\lambda_{1}^{2}\rangle/\langle\lambda_{2}^{2}\rangle\) (green curve in Fig. 2), which increases slightly with confinement.

Figure 2: \(\langle\lambda_{1}^{2}\rangle/\langle\lambda_{3}^{2}\rangle\) and \(\langle\lambda_{2}^{2}\rangle/\langle\lambda_{3}^{2}\rangle\), average ring shapes expressed as the ratios between the two largest eigenvalues of the ring mean gyration tensor \(Q\) (Eq. (2)) and the smallest one, as a function of the degree of confinement \(\hat{\rm H}\) (see Sec. 2.1 for definition). Dotted lines correspond to the power-law best fits obtained on the first three points of each curve. Dashed horizontal lines correspond to the bulk reference values of the two ratios.

In summary, our analysis shows that rings' flattening due to confinement biases the chains towards more elongated shapes.
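As a concrete illustration of Eqs. (2)-(3), the following minimal Python sketch computes the gyration-tensor eigenvalues and the shape ratios for a single conformation; the random-walk ring used here is only a stand-in for the actual simulated conformations.

```python
import numpy as np

def gyration_eigenvalues(coords):
    """Eigenvalues of the gyration tensor Q (Eq. 2) for one ring,
    sorted in descending order; coords is an (N, 3) array of
    monomer positions (unwrapped across periodic images)."""
    delta = coords - coords.mean(axis=0)         # r_m - r_CM
    q = delta.T @ delta / len(coords)            # 3x3 gyration tensor
    return np.sort(np.linalg.eigvalsh(q))[::-1]  # lambda_1^2 >= lambda_2^2 >= lambda_3^2

rng = np.random.default_rng(0)
ring = np.cumsum(rng.normal(size=(320, 3)), axis=0)  # stand-in conformation
l1, l2, l3 = gyration_eigenvalues(ring)
print("Rg^2 =", l1 + l2 + l3)          # trace of Q, Eq. (3)
print("shape ratios:", l1 / l3, l2 / l3, l1 / l2)
```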
#### 3.1.2. Bond-vector correlation function We now investigate in more detail how the folding of the polymer chains is affected by confinement by looking at the bond-vector correlation function, \[c(\ell)\equiv\frac{\langle\vec{t}(\ell^{\prime})\cdot\vec{t}(\ell^{\prime}+ \ell)\rangle}{\langle t(\ell^{\prime})^{2}\rangle}\,, \tag{4}\] as a function of the polymer contour length \(\ell\). This quantity gives useful insight when applied to bulk \(3d\) melts of unknotted and non-concatenated rings: in particular, its distinct [16] anti-correlation is a symptom of the double folding of the polymer chains at the entanglement scale (dot-dashed line in Fig. 3(a)). In contrast (dashed line in Fig. 3(a)), bulk \(3d\) melts of randomly knotted and concatenated rings exhibit normal exponential decay behavior [14] and are not characterized by double folding, hence the anti-correlation is absent. To investigate the impact of confinement on chain folding, we have computed \(c(\ell)\) for the confined rings. The results (Fig. 3) exhibit several noteworthy effects. Firstly (Fig. 3(a)), for confined rings at small \(\ell\), \(c(\ell)\) decays more slowly than the bulk counterpart. This is the consequence (Fig. 3(b)) of the increase of the mean cosine of the angle between consecutive bond vectors, \(\langle\cos(\theta)\rangle\), as confinement increases: in other words, confined rings are slightly stiffer than the bulk reference, and this confinement-enhanced stiffness grows with the confinement. At the same time, \(c(\ell)\) develops a characteristic anti-correlation that exhibits a non-monotonic dependence on \(\hat{\rm H}\): in particular, the deepest minimum occurs at \(\hat{\rm H}\simeq 0.7\), _i.e._ the same value at which the gyration radius (Fig. 1(a)) attains its minimum value. Moreover, the minimum itself disappears at the highest level of confinement. This peculiar behavior can be explained by considering the individual contributions of the parallel and transverse components of \(c(\ell)\). \(c_{\parallel}(\ell)\) does not exhibit any minima (Fig. 3(c)), while \(c_{\perp}(\ell)\) displays a minimum for all values of \(\hat{\rm H}\) (Fig. 3(d)). The mismatch in the values of \(\ell\) at which \(c_{\perp}(\ell)\) is minimum while \(c_{\parallel}(\ell)\simeq 0\) causes the non-monotonicity of the full \(c(\ell)\). \(c_{\parallel}(\ell)\) goes to zero at similar values of \(\ell\) for all \(\hat{\rm H}\), demonstrating that in-plane correlations grow only mildly with the confinement. In contrast, \(c_{\perp}(\ell)\) shows a minimum for \(\ell\) close to the thickness of the slit H (Fig. 3(d), inset). This is due to the back-folding of the polymer filaments induced by collisions with the impenetrable walls of the slit: of course, this effect is more pronounced under strong confinement conditions, _i.e._ for \({\rm H}/\ell_{K}\leq 1\). Thus, the minima in \(c(\ell)\) appear when H is comparable to the correlation length of \(c_{\perp}(\ell)\), indicating the competition between these two length scales.

Figure 3: (a) \(c(\ell)\), bond-vector correlation function as a function of the contour length distance \(\ell\). Colors are for different confinements, dashed and dot-dashed lines are for bulk melts and melts of non-concatenated and unknotted rings (see legend). (b) \(\langle\cos(\theta)\rangle\), mean cosine value between two consecutive bonds along the chain as a function of the degree of confinement \(\hat{\mathrm{H}}\). (c) \(c_{\parallel}(\ell)\), contribution to the bond-vector correlation function in the \(xy\)-plane parallel to the slit. (d) \(c_{\perp}(\ell)\), contribution to the bond-vector correlation function orthogonal to the plane of the slit; in the inset the same quantity is represented as a function of the ring contour length normalized by the slit thickness, \(\ell/H\). Colors and symbols in panels (c) and (d) are as in panel (a).
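A minimal sketch of how \(c(\ell)\) of Eq. (4) could be estimated for a single ring conformation is given below; splitting the bond vectors into their \(xy\) (parallel) and \(z\) (transverse) components in the same way would yield \(c_{\parallel}(\ell)\) and \(c_{\perp}(\ell)\).

```python
import numpy as np

def bond_correlation(coords):
    """c(l) of Eq. (4) for one ring: correlation between bond vectors
    separated by n bonds, averaged along the chain with periodic (ring)
    indexing; coords is an (N, 3) array of monomer positions. The
    index n maps to the contour distance l = n <b>."""
    bonds = np.roll(coords, -1, axis=0) - coords
    norm = (bonds * bonds).sum()                 # N * <t^2>
    c = np.empty(len(bonds) // 2)
    for n in range(len(c)):
        c[n] = (bonds * np.roll(bonds, -n, axis=0)).sum() / norm
    return c   # c[0] = 1 by construction
```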
#### 3.1.3. Contact probability As just shown, confinement alters the metric properties of the polymers. It is then natural to expect that the consequent reorganization of the chains modifies the intra-chain polymer interactions. To test this hypothesis, we compute the mean contact probability between two monomers at contour length separation \(\ell=n\langle b\rangle\), \[\langle p_{c}(\ell)\rangle=\left\langle\frac{1}{N}\sum_{i=1}^{N}\Theta(r_{c}- |\vec{r}_{i}-\vec{r}_{i+n}|)\right\rangle\,, \tag{5}\] where \(\Theta(x)\) is the Heaviside step function and the "contact distance" \(r_{c}\) is set to the unit lattice size \(a\) (notice also that periodic conditions due to the ring geometry are tacitly assumed in Eq. (5)). Results are shown in Fig. 4, where \(\langle p_{c}\rangle\) is plotted against the "effective" variable \(\xi=\ell(1-\ell/L)\) in order to reduce [23] finite-size effects due to the ring geometry. First, one can notice that in bulk systems, as we let rings perform strand crossings, long-distance contacts decrease (dashed line) with respect to melts of non-concatenated and unknotted rings (dot-dashed line). In contrast, confinement leads to an increase in the tail of the mean contact probability compared to the bulk reference. Notably, at \(\hat{\mathrm{H}}=0.30\), the tail's slope is slightly less steep than in the non-concatenated state. To get more insight, it is interesting to look at the exponent controlling the asymptotic power-law decay, \(\langle p_{c}\rangle\simeq\xi^{-\gamma}\) (Fig. 4, inset). In bulk, strand-crossing rings attain ideal statistics characterized by \(\gamma\simeq 1.5\), as confirmed by our previous findings [13]. In contrast, confinement leads to a decrease in \(\gamma\), which becomes close to the same asymptotic value as the non-concatenated state, \(\gamma\simeq 1.15\). Based on mean-field arguments [24], \(\gamma=d\nu\), where \(d\) is the space dimension and \(\nu\) is the metric exponent of the chain relating [7; 8] the chain mean linear size to the number of monomers (_i.e._, \(\langle R_{g}^{2}\rangle\sim N^{2\nu}\)). Strand-crossing rings in bulk exhibit ideal statistics with \(\nu=1/2\)[13] and are characterized by \(\gamma=\frac{3}{2}\) in three dimensions. In confined systems, however, the rings cannot fold freely in three dimensions, effectively reducing the dimensionality of the system and resulting in a decrease in \(\gamma\).

Figure 4: Mean contact probabilities, \(\langle p_{c}\rangle\) (Eq. (5)), as a function of \(\xi=\ell\left(1-\ell/L\right)\), where \(\ell\) is the contour length separation between monomers and \(L\) is the ring total contour length. Colors are for different confinements, dashed and dot-dashed lines are for bulk melts and melts of non-concatenated and unknotted rings (see legend). Inset: local differential exponent \(\gamma\equiv-\frac{d\log\langle p_{c}\rangle}{d\log\xi}\).
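For illustration, Eq. (5) can be estimated for a single ring conformation as in the sketch below (a simplified version: the actual analysis averages over chains and configurations and uses the lattice contact distance \(r_{c}=a\)).

```python
import numpy as np

def contact_probability(coords, r_c=1.0):
    """<p_c(l)> of Eq. (5) for one ring: the fraction of monomer pairs
    at contour separation n that lie within the contact distance r_c;
    ring periodicity is implied by np.roll."""
    pc = np.empty(len(coords) // 2)
    for n in range(len(pc)):
        sep = np.roll(coords, -n, axis=0) - coords
        pc[n] = (np.linalg.norm(sep, axis=1) <= r_c).mean()  # Heaviside average
    return pc   # plot against xi = l * (1 - l/L) to reduce ring closure effects
```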
#### 3.1.4. Knots statistics In our kMC algorithm, two filaments from the same chain can cross, and this event may induce the formation of a knot along the chain. The characterization of knot spectra in confined systems has been addressed so far mostly for isolated chains [25; 26; 27], while fewer results are available for confined systems at melt conditions. To fill this gap, we have investigated the occurrence of knots by computing the Jones polynomial of each ring of our systems and, for simplicity, we present our results based on the number of irreducible crossings (denoted by \(K\), see Sec. 2.3). Specifically, we have computed the probability, \(P_{\mathrm{knot}}(\hat{\mathrm{H}};K)\), of finding a knot with \(K\) irreducible crossings at given confinement degree \(\hat{\mathrm{H}}\) and the _cumulative_ knotting probability: \[P_{\rm knot}(\hat{\rm H})=\sum_{K=3}^{\infty}P_{\rm knot}(\hat{\rm H};K)\,, \tag{6}\] which gives the probability that a ring in the melt contains a knot (of any type). As shown in Fig. 5(a), \(P_{\rm knot}(\hat{\rm H})\) grows with the confinement and reaches the maximum value of \(\simeq 0.13\) for the smallest \(\hat{\rm H}\), corresponding to an increase of \(\simeq 130\%\) compared to the bulk reference (dashed line). Both in bulk and in confinement, the most common knot type is the simplest one, namely the _trefoil_ knot \(3_{1}\). Overall (Fig. 5(b)), more complex knots are much less probable for all \(\hat{\rm H}\) values, yet their abundance increases with confinement; see Fig. 5(b) for \(P_{\rm knot}(\hat{\rm H};K)\) and Fig. S2 in SI for the relative population of knot types with \(K\) crossings. In conclusion, our analysis points out that confinement enhances the probability of knot formation, yet the overall occurrence of knots (_i.e._, \(P_{\rm knot}\)) remains relatively low (\(\lesssim 0.13\)).

Figure 5: (a) \(P_{\rm knot}(\hat{\rm H})\), ring knotting probability (Eq. (6)) as a function of the degree of confinement \(\hat{\rm H}\). The dashed line corresponds to the value for the bulk melt. (b) \(P_{\rm knot}(\hat{\rm H};K)\), probability of finding a knot with crossing number \(K\). Colors are for different confinements, the dashed line is for bulk melts (see legend). \(K=0\) corresponds to the unknot and \(P_{\rm knot}(\hat{\rm H};K=0)=1-P_{\rm knot}(\hat{\rm H})\) is its corresponding probability. Knots with \(>12\) crossings cannot be distinguished by _Topoly_[20]. Composite knots are knots made up of 2 or more irreducible knots.

### 3.2. Chain-chain correlations #### 3.2.1. Chain neighbours The increase of the long-range intra-chain contacts seen in Fig. 4 may indicate that confinement reduces the overlap between distinct chains or, in other words, that ring-ring contacts should decrease. To test this hypothesis, we introduce the number of neighbors of ring \(i\) (\(i=1,2,...,M\)), \[\rho_{i}^{\rm ring}\equiv\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{M}\Theta\left(2\sqrt{\langle R_{g}^{2}\rangle}-|\vec{r} _{{\rm CM},i}-\vec{r}_{{\rm CM},j}|\right)\,, \tag{7}\] where \(\Theta(x)\) is the Heaviside step function, \(\langle R_{g}^{2}\rangle\) is the mean square gyration radius of the system, and \(\vec{r}_{{\rm CM},j}\) represents the centre of mass position of the \(j\)-th ring. According to Eq. (7), two rings are defined as "neighbors" whenever the spatial distance between their centres of mass is smaller than twice the root mean-square gyration radius of the system. We have measured the distribution function of \(\rho^{\rm ring}\), \(P(\rho^{\rm ring})\), and its mean value, \(\langle\rho^{\rm ring}\rangle\), at different confinements, and we study these quantities in relation to the distribution of spatial distances between the centres of mass \(d_{{\rm CM}-{\rm CM}}\) for neighboring rings, \(P(d_{{\rm CM}-{\rm CM}}|\,{\rm neighbors})\).
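A minimal sketch of Eq. (7) is shown below; it assumes the ring centre-of-mass positions are already available and applies the minimum-image convention only in the periodic \(x\) and \(y\) directions (the \(z\) direction has hard walls).

```python
import numpy as np

def neighbour_counts(com, rg2_mean, box_xy):
    """rho_i^ring of Eq. (7): number of rings whose centre of mass lies
    within 2*sqrt(<Rg^2>) of ring i. com is an (M, 3) array of
    centre-of-mass positions, box_xy the lateral box size L_x = L_y."""
    d = com[:, None, :] - com[None, :, :]
    d[:, :, :2] -= box_xy * np.round(d[:, :, :2] / box_xy)  # xy minimum image
    dist = np.linalg.norm(d, axis=-1)
    cutoff = 2.0 * np.sqrt(rg2_mean)
    return (dist < cutoff).sum(axis=1) - 1   # subtract the self term (dist = 0)
```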
Results are shown in Fig. 6, from which it is evident (panel (a)) that \(\langle\rho^{\rm ring}\rangle\) decreases as confinement increases, with \(\langle\rho^{\rm ring}\rangle\) being always smaller than the bulk reference (dashed line) and even smaller (for the tighter confinements \(\hat{\rm H}\lesssim 1.5\)) than in the non-concatenated and unknotted case (dot-dashed line). At the same time (panel (b)), the distributions of spatial distances \(d_{\rm CM-CM}\) demonstrate that neighboring chains tend to overlap more with each other under stronger confinement. Taken together, these observations let us motivate the reason why the inter-chain contacts decrease in terms of the geometry of the slit. First, confinement can prevent the formation of stacked conformations along the transverse direction (see Fig. S3 in SI), which clearly reduces the inter-chain contacts. Moreover, we observe that, by reducing the width of the slit, inter-ring distances tend to increase, an effect due to the increasing asymmetry of the slit as confinement increases (see Fig. S4 in SI).

Figure 6: (a) Distribution function, \(P(\rho^{\rm ring})\), of the number of neighbors per chain \(\rho^{\rm ring}\). Inset: mean number of neighbors per ring, \(\langle\rho^{\rm ring}\rangle\). (b) Distribution function of the distances between the centres of mass of neighbour chains, \(P(d_{\rm CM-CM}\,|\) neighbours), as a function of the variable normalized to twice the root mean-square gyration radius, \(2\sqrt{\langle R_{g}^{2}\rangle}\) (Eq. (3)), of the rings. Colors are for different confinements, dashed and dot-dashed lines are for bulk melts and melts of non-concatenated and unknotted rings (see legend).

#### 3.2.2. Links The reduction of inter-chain contacts should also have consequences for the _linking_ properties of the confined systems. To explore this aspect, we adopt the approach developed by us in Ref. [14] and compute \(\langle n_{\rm 2link}(|{\rm GLN}|)\rangle\), the mean number of two-chain links at absolute Gauss linking number \(|{\rm GLN}|\), and the mean number of distinct three-chain links, \(\langle n_{\rm 3link}\rangle\), with given chain topology. Results for \(\langle n_{\rm 2link}(|{\rm GLN}|)\rangle\) are summarized in panel (a) of Fig. 7. We notice that ring-ring links are mostly Hopf-like (_i.e._, with \(|{\rm GLN}|=1\)) and that confinement reduces the extent to which the rings are linked, in agreement with the decrease in the overlap between distinct chains. In general, the participation in more complex links decreases exponentially, but the rate of decay depends on the level of confinement in the system. Chains under stronger confinement are characterized by a slower decay, which can be attributed to the fact that neighboring chains penetrate each other more (see Fig. 6(b)). Additionally, links with \(|{\rm GLN}|=0\) (_i.e._, the so-called Whitehead links) have been found, with abundance between those with \(|{\rm GLN}|=2\) and \(|{\rm GLN}|=3\), at all confinements.
We further classify these links by computing their Jones polynomial and determining their relative abundances (panel (a) in Fig. S5 in SI). We found that, even in this case, rings under stronger confinement form more complex links with greater ease. To examine three-chain links, it is necessary to distinguish between two distinct groups of links: those that can be reduced to two-chain links and irreducible ones [14]. The first group includes: (a) _poly(3)catenanes_, chains made of three rings in which two non-concatenated rings are connected to a common ring, and (b) _triangles_, triplets of rings which are all pairwise concatenated. Both (a) and (b) can be detected via pairwise linking. Instead, irreducible three-chain links cannot be detected via pairwise linking and can be further divided into two subtypes: (c) _poly(2)catenane+1-ring_, structures made of a poly(2)catenane plus another ring which is not directly concatenated (in a pairwise manner) to any of the other two, and (d) _Brunnian_ links, non-trivial links which become a set of trivial links whenever one component ring is unlinked from the others (the so-called _Borromean_ conformation, the link \(6^{3}_{2}\), constitutes the simplest example of this kind). By resorting to the shrinking method described in [14], we have detected links belonging to the last two classes and computed \(\langle n_{\rm 3link}\rangle\) for the different types of three-chain links (Fig. 7(b)). It is clear from \(\langle n_{\rm 3link}\rangle\) that links organize into a network made almost entirely via pairwise concatenation, both in bulk and in confinement. Irreducible three-chain links are much rarer and decrease with the degree of confinement; for this reason, the subsequent analysis has been performed neglecting the contribution of these three-chain links. A detailed topological classification of these structures is reported in Fig. S5(b) in SI, and even in this case three-chain links with higher crossings seem to be more likely for more confined systems.

Figure 7: (a) \(\langle n_{\mathrm{2link}}(|\mathrm{GLN}|)\rangle\), mean number of links per ring with absolute Gauss linking number \(|\mathrm{GLN}|\). (b) \(\langle n_{\mathrm{3link}}\rangle\), mean number of different three-chain linked structures per ring. Different colors are for the different confinements, the dashed line is for the bulk system.

#### 3.2.3. Polymer network and entanglements Concatenated rings give rise to a fully connected polymer network [14; 28]. To characterize this network, we define [14] the linking degree \(\mathrm{LD}_{i}\) of ring \(i\), \[\mathrm{LD}_{i}=\sum_{j=1}^{M}\chi_{ij}\,C_{ij}\,, \tag{8}\] where the sum runs over the total number of chains in the melt, and where \(C_{ij}\) is the \(M\times M\) matrix expressing the concatenation status between rings \(i\) and \(j\): \[C_{ij}=\left\{\begin{array}{ll}0\,,&\mbox{if $i=j$}\\ \\ 1\,,&\mbox{if $i\neq j$ and the two rings form a two-chain link}\\ \\ 0\,,&\mbox{otherwise}\end{array}\right. \tag{9}\] The "weight" factor \(\chi_{ij}\) takes into account the "complexity" of two-chain links: \(\chi_{ij}=|\mathrm{GLN}|\) or \(=\frac{K}{2}\), depending on whether \(\mathrm{GLN}\neq 0\) or \(\mathrm{GLN}=0\), respectively.
Here, \(K\) is the number of crossings characterizing the link or, in other words, each crossing of the link contributes \(1/2\) to an entanglement point. This quantity is of special interest, as we have recently shown [14] that the mean value \(\langle\mathrm{LD}\rangle\equiv\langle\frac{1}{M}\sum_{i=1}^{M}\mathrm{LD}_{ i}\rangle\) is directly connected to the entanglement length of the melt, \(N_{e}\), via the relation \(\langle\mathrm{LD}\rangle=N/N_{e}\). To complement this analysis, we have also computed the distribution of the values of \(\mathrm{LD}\) at the single-ring level, \(P(\mathrm{LD})\), which gives us information about the heterogeneity of the network. Results are presented in Fig. 8. \(\langle\mathrm{LD}\rangle\) (panel (a)) decreases as a function of the confinement, up to a reduction of \(\simeq 60\%\) with respect to bulk conditions. Then, by looking at the distribution functions (panel (b)) of the linking degree as a function of \(X=\mathrm{LD}/\langle\mathrm{LD}\rangle\), we see that the curves at mild confinements display the same behavior as in bulk conditions. Conversely, the tails become stronger for more confined systems. This is in agreement with the behaviour seen for the distribution functions of the sizes of the rings (Fig. 1(b)), where the tails are higher for stronger confinements. Fluctuations of ring size may impact concatenation, since smaller rings will be less concatenated, having less possibility to reach other rings, while bigger rings can host more contacts and consequently more concatenations. To sum up, the resulting networks of concatenated rings tend to be more heterogeneous as the confinement becomes stronger, in line with the fluctuations of the rings' sizes.
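For illustration, given the pairwise topological data produced by the detection pipeline of Sec. 2.2, the mean linking degree of Eqs. (8)-(9) could be assembled as in the following sketch; the input matrices are assumptions about how one might store the pairwise GLN values and crossing numbers, not the study's actual data format.

```python
import numpy as np

def mean_linking_degree(gln, crossings):
    """<LD> from Eqs. (8)-(9). gln is a symmetric M x M matrix of Gauss
    linking numbers between ring pairs; crossings holds the crossing
    number K for pairs linked with GLN = 0 (e.g. Whitehead links) and
    zero for unlinked pairs."""
    chi = np.abs(gln).astype(float)              # chi = |GLN| when GLN != 0
    whitehead = (gln == 0) & (crossings > 0)
    chi[whitehead] = crossings[whitehead] / 2.0  # chi = K/2 when GLN = 0
    np.fill_diagonal(chi, 0.0)                   # C_ii = 0 by definition
    ld = chi.sum(axis=1)                         # LD_i per ring
    return ld.mean()                             # equals N / N_e

# The entanglement length then follows as N_e = N / <LD>.
```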
## IV 4. Discussion and Conclusions Our findings illustrate the impact that slit confinement has on the spatial structure of randomly concatenated and knotted ring polymers in melt conditions. At the single-chain level, our investigation shows that as rings flatten with increasing confinement they tend to adopt more elongated conformations. At the same time, rings become slightly more rigid with the confinement, a tendency captured by the increase of the correlation (\(\langle\cos(\theta)\rangle\), Fig. 3(b)) between consecutive bonds along the chain. We have also demonstrated that the competition between the Kuhn length of the polymers, \(\ell_{K}\), and the height of the slit, H, induces a non-monotonous behavior in the bond-vector correlation function, \(c(\ell)\) (see Fig. 3(a, c, d)). In general, the impact of confinement on ring conformations becomes particularly pronounced with respect to the formation of long intra-chain contacts as the slit narrows (see Fig. 4), resulting in more compact rings. Finally, these changes have significant repercussions on the knotting probability, which increases with the confinement and for which we register an increase of \(\simeq 130\%\) compared to the bulk value (see Fig. 5(a)). The effects of slit confinement on the inter-chain statistics are similarly noteworthy. Specifically, as the level of confinement increases, the average number of neighbors per ring, \(\langle\rho^{\rm ring}\rangle\), experiences a considerable decrease (see Fig. 6(a)). This is directly connected to the decrease of the mean linking degree, \(\langle\mathrm{LD}\rangle\), which displays a total reduction of \(\simeq 60\%\) with respect to bulk conditions. This finding has two interesting implications. First, \(\langle\mathrm{LD}\rangle\) being directly related to the mean number of entanglement strands per ring, the decrease of \(\langle\mathrm{LD}\rangle\) means that, at fixed density, confinement alone may alter the entanglement properties of the system, making \(N_{e}\) effectively bigger. This would explain recent findings [29; 30] showing that, for both linear chains and rings in two-dimensional melts, the resulting dynamical quantities display a quite surprising Rouse-like behavior [7; 8] which, ultimately, points towards the effective irrelevance of entanglement effects due to inter-chain interactions. It is worth stressing that, although the mean number of ring-ring concatenations (_i.e._, \(\langle\mathrm{LD}\rangle\)) decreasing with confinement is not entirely surprising (in \(2d\), rings cannot be concatenated), the important point to stress here is that the works [29; 30] and our analysis here and in [14] suggest that entanglements are indeed well captured [31; 28; 32] by two-chain topological links alone. Finally, it is worth recalling that the elastic plateau modulus \(G_{0}\), which quantifies the stress-strain relationship of polymeric materials, is related to the total number of entanglement strands of the melt, \(G_{0}\propto\frac{NM}{N_{e}}\)[8]. Our results then imply that, as confinement grows, the resulting network becomes softer (\(G_{0}\) decreases), highlighting the important role of geometric confinement in the mechanical properties of the stored polymer network. ## V Supporting Information Time mean-square displacement of monomers in the frame of the centre of mass of the corresponding ring; fractional population of knot types; contour plots for the joint distribution function of parallel and transverse components of the distances between the centres of mass of neighboring rings; distribution functions of the distances between the rings' centres of mass; fractional population of two-chain links with \(\mathrm{GLN}=0\). ## VI Acknowledgement The authors acknowledge networking support by the COST Action CA17139 (EUTOPIA).
2307.16245
Bremsstrahlung Cross Section with Polarized Beams for Luminosity Determination at the EIC
The bremsstrahlung cross section is calculated at leading order for polarized beams of electrons and ions, which is needed for luminosity measurements at the upcoming Electron Ion Collider (EIC). Analytic expressions, differential in the emitted photon energy and polar angle, are derived. The component of the cross section which depends on the beam polarizations is found to be highly suppressed with respect to the unpolarized Bethe-Heitler component, owing to the low $q^2$ that characterizes the bremsstrahlung process.
Dhevan Gangadharan
2023-07-30T14:42:01Z
http://arxiv.org/abs/2307.16245v2
# Bremsstrahlung Cross Section with Polarized Beams ###### Abstract The bremsstrahlung cross section is calculated at leading order for polarized beams of electrons and ions, which is needed for luminosity measurements at the upcoming Electron Ion Collider (EIC). Analytic expressions, differential in the emitted photon energy and polar angle, are derived. The component of the cross section which depends on the beam polarizations is found to be highly suppressed with respect to the unpolarized Bethe-Heitler component, owing to the low \(q^{2}\) that characterizes the bremsstrahlung process. ## I Introduction The dominant contribution to the inelastic cross section in lepton-nucleus collisions is the bremsstrahlung process, where a photon is emitted into the final state: \(e^{-}+N\to e^{-}+N+\gamma\). Owing to the QED nature of this process, it can be calculated to high precision, which allowed its use to measure the collider luminosity of HERA [1; 2]. The upcoming Electron Ion Collider (EIC) will similarly collide electrons on protons, as well as heavier ions, and will also make use of the bremsstrahlung process to measure luminosity. Unlike HERA, the EIC will accelerate polarized electrons _and_ polarized nuclei. As the leading-order cross section only depends on the beam polarization if _both_ beams are polarized, the EIC requires calculation of the polarized component of the cross section. The process has been considered before and numerically calculated for \(q^{2}\) values larger than those relevant for luminosity programs [3; 4]. Analytic expressions are presented here for appropriately low \(q^{2}\). Related calculations exist for the cases of polarized photon emission [5] and polarized photoproduction of leptons [6]. ## II Formalism The two Feynman diagrams that contribute to the leading-order bremsstrahlung amplitude are shown in Fig. 1. The corresponding unpolarized cross section was first calculated by Bethe and Heitler in 1934 [7]. A more complete calculation of the cross section that goes beyond the Born approximation (leading order) and is applicable for high atomic number \(Z\) was made by Bethe and Maximon in 1954 [8; 9; 10]. Higher-order contributions enter at the subpercent level for \(ep\) scattering [11]. The amplitude shown in Fig. 1 is given by Eq. 1: \[M = e^{3}\frac{g_{\mu\nu}}{q^{2}}\epsilon_{\sigma}^{*}(k)\left[\bar{u}(r^{ \prime})\gamma^{\mu}u(r)\right]\times\bar{u}(p^{\prime})\left[\gamma^{\sigma}\frac{\not{p}^{\prime}+\not{k}+m_{ e}}{2p^{\prime}k}\gamma^{\nu}-\gamma^{\nu}\frac{\not{p}-\not{k}+m_{e}}{2pk} \gamma^{\sigma}\right]u(p). \tag{1}\] Incoming 4-momenta (energy, momentum) in the laboratory frame are denoted as \(p=(\varepsilon,{\bf p})\) (electron) and \(r=(\varepsilon_{p},{\bf r})\) (proton). Outgoing momenta are given by \(p^{\prime}=(\varepsilon^{\prime},{\bf p}^{\prime})\) (scattered electron), \(r^{\prime}=(\varepsilon^{\prime}_{p},{\bf r}^{\prime})\) (scattered proton), and \(k=(\omega,{\bf k})\) (emitted photon). The momentum of the exchanged photon is given by \(q=p^{\prime}+k-p\). The electron charge is given by \(e\), \(m_{e}\) is the electron mass, \(\epsilon^{*}(k)\) is the photon polarization vector, and \(u(p)\) and \(u(r)\) are the spinors of the incoming electron and proton, respectively. As in the Bethe-Heitler calculation, the same set of approximations is applied to calculate the polarized contribution to the cross section.

Figure 1: Leading-order bremsstrahlung Feynman diagrams for \(ep\) scattering. Incoming momenta denoted by \(p\) (electron) and \(r\) (proton). Outgoing momenta denoted by \(p^{\prime}\) (scattered electron), \(r^{\prime}\) (scattered proton), and \(k\) (emitted photon).
Since the photon polar-angle distribution of the Bethe-Heitler expression is sharply peaked near zero, it follows that the virtuality \(q^{2}\) of the exchange photon is also very small, \(q^{2}\sim m_{e}^{2}\) (Sec. 97 of Ref. [10]). The structure of the nucleus can therefore be neglected, which is justified at low \(q^{2}\). It additionally follows that this process occurs coherently with all charged nucleons in a large nucleus, and so the amplitude scales with the atomic number \(Z\). For simplicity, \(ep\) scattering is considered and \(Z\) is set to unity. Owing to the smallness of the electron mass with respect to the large energies of most experimental measurements, ultrarelativistic approximations (\(p\approx\varepsilon-m_{e}^{2}/(2\varepsilon)\)) are applied to the final expressions. The "no-recoil" approximation (Sec. 97 of Ref. [10]) is applied, where the energy transferred by the exchange photon is neglected: \(q=(0,{\bf q})\). The suitability of this approximation is discussed later in this article. The modulus square of the amplitude in Eq. 1 takes on the following form: \[|M|^{2} \equiv -\frac{e^{6}}{4q^{4}}W^{\mu\alpha}\,w_{\mu\alpha}, \tag{2}\] \[W^{\mu\alpha} = Tr[u(r^{\prime})\bar{u}(r^{\prime})\gamma^{\mu}u(r)\bar{u}(r) \gamma^{\alpha}],\] (3) \[w_{\mu\alpha} = Tr[u(p^{\prime})\bar{u}(p^{\prime})Q_{\mu}^{\sigma}u(p)\bar{u}(p )\bar{Q}_{\sigma\alpha}],\] (4) \[Q_{\mu}^{\sigma} = \gamma^{\sigma}\frac{\not{p}^{\prime}+\not{k}+m_{e}}{p^{\prime}k} \gamma_{\mu}-\gamma_{\mu}\frac{\not{p}-\not{k}+m_{e}}{pk}\gamma^{\sigma}, \tag{5}\] where \(W^{\mu\alpha}\) is the proton tensor and \(w_{\mu\alpha}\) is the electron tensor, and both are expressed as a trace over products of gamma matrices. Typically, the final-state particle polarizations are not experimentally measurable. Accordingly, the photon, scattered electron, and scattered proton polarizations are summed over: \(\sum\limits_{pol}\epsilon_{a}^{*}(k)\epsilon_{b}(k)\to-g_{ab}\), \(\sum\limits_{spin}u(p^{\prime})\bar{u}(p^{\prime})=\not{p}^{\prime}+m_{e}\), \(\sum\limits_{spin}u(r^{\prime})\bar{u}(r^{\prime})=\not{r}^{\prime}+m_{p}\), where \(m_{p}\) is the proton mass. The incoming beam polarizations are measurable, for which the electron and proton spinor products are expressed as [10] \[u(p)\bar{u}(p)=\frac{1}{2}(\not{p}+m_{e})(1-\gamma^{5}\not{a}^{(e)}),\] \[u(r)\bar{u}(r)=\frac{1}{2}(\not{r}+m_{p})(1-\gamma^{5}\not{a}^{(p)}). \tag{6}\] The electron and proton spin 4-vectors (Pauli-Lubanski pseudovectors) are \(a^{(e)}\) and \(a^{(p)}\), respectively. They have the form \((0,\vec{\xi})\) in the particle's rest frame, for which \(\vec{\xi}\) depends on the beam polarization. For the remaining expressions, longitudinal beam polarizations are assumed. In the target (proton) rest frame, \(a^{(e)}=2\mathbb{P}_{e}\frac{E_{e}\,E_{p}}{m_{e}m_{p}}(-1,0,0,+1)\). The parameter \(\mathbb{P}_{e}\) is the electron beam polarization, which is more conveniently defined as that measured in the laboratory frame along the electron's momentum. The beam energies \(E_{e}\) and \(E_{p}\) are likewise defined as measured in the laboratory frame. Terms of \(\mathcal{O}(m_{e}/\varepsilon)\) and \(\mathcal{O}(m_{p}/\varepsilon_{p})\) are neglected in \(a^{(e)}\).
The proton spin 4-vector has the following form in the target rest frame, \(a^{(p)}=\mathbb{P}_{p}\,(0,0,0,+1)\). The proton beam polarization, \(\mathbb{P}_{p}\), is also defined as that measured in the laboratory frame along the proton's momentum. The proton tensor, \(W^{\mu\alpha}\), and the electron tensor, \(w_{\mu\alpha}\), are expressed in terms of their unpolarized (\(\mathcal{U}^{\mu\alpha}\), \(\mathbf{u}_{\mu\alpha}\)) and polarized (\(\mathcal{P}^{\mu\alpha}\), \(\mathfrak{p}_{\mu\alpha}\)) parts: \[W^{\mu\alpha} \equiv \mathcal{U}^{\mu\alpha}+\mathcal{P}^{\mu\alpha}, \tag{7}\] \[w_{\mu\alpha} \equiv \mathbf{u}_{\mu\alpha}+\mathfrak{p}_{\mu\alpha}, \tag{8}\] where the polarized parts arise from the \(\gamma^{5}\) terms in Eq. 6. Evaluating the tensor traces yields the following expressions for the polarized parts: \[\mathcal{P}^{\mu\alpha} = 2i\,m_{p}\,q_{a}a_{c}^{(p)}\,\varepsilon^{a\mu c\alpha}, \tag{9}\] \[\mathfrak{p}_{\mu\alpha} \equiv \mathfrak{p}_{\mu\alpha}^{(1)}+\mathfrak{p}_{\mu\alpha}^{(2)}+ \mathfrak{p}_{\mu\alpha}^{(3)}+\mathfrak{p}_{\mu\alpha}^{(4)},\] (10) \[\mathfrak{p}_{\mu\alpha}^{(1)} = 8i\frac{m_{e}a^{(e),\lambda}}{(p^{\prime}k)^{2}}\varepsilon_{a \mu\lambda\alpha}\left[m_{e}^{2}q^{a}-p^{\prime}k(p^{a}+k^{a})\right],\] (11) \[\mathfrak{p}_{\mu\alpha}^{(2)} = 8i\frac{m_{e}a^{(e),\lambda}}{(pk)^{2}}\Bigg{[}(m_{e}^{2}q^{a}-p^ {\prime\,a}pk)\varepsilon_{a\mu\lambda\alpha}\] (12) \[\qquad\qquad-p^{\prime a}k_{\lambda}(k^{b}-p^{b})\varepsilon_{a \mu b\alpha}\Bigg{]},\] \[\mathfrak{p}_{\mu\alpha}^{(3)} = 8i\frac{m_{e}a^{(e),\lambda}}{(pk)(p^{\prime}k)}\Bigg{[}\frac{q^{ 2}}{2}p^{\prime a}\varepsilon_{\lambda a\mu\alpha}+p^{\prime a}p_{\mu}(k^{b}- p^{b})\varepsilon_{\lambda ab\alpha}\] (13) \[\qquad\qquad\qquad+(p_{\alpha}^{\prime}k^{a}(p^{\prime b}-p^{b})\] \[\qquad\qquad\qquad+p^{b}(p_{\alpha}-k_{\alpha})(p^{\prime a}+k^{a} ))\varepsilon_{\lambda ab\mu}\Bigg{]},\] \[\mathfrak{p}_{\mu\alpha}^{(4)} = -\mathfrak{p}_{\alpha\mu}^{(3)}. \tag{14}\] The polarized tensors, \(\mathcal{P}^{\mu\alpha}\) and \(\mathfrak{p}_{\mu\alpha}\), are antisymmetric, while the unpolarized tensors, \(\mathcal{U}^{\mu\alpha}\) and \(\mathfrak{u}_{\mu\alpha}\), are symmetric. Thus, only \(\mathcal{U}^{\mu\alpha}\mathbf{u}_{\mu\alpha}\) and \(\mathcal{P}^{\mu\alpha}\mathfrak{p}_{\mu\alpha}\) contribute to the cross section. However, at higher orders with loop corrections, single-spin asymmetries emerge [12]. We have: \[|M|^{2} = -\frac{e^{6}}{4q^{4}}\left[\mathcal{U}^{\mu\alpha}\mathbf{u}_{ \mu\alpha}+\mathcal{P}^{\mu\alpha}\mathfrak{p}_{\mu\alpha}\right] \tag{15}\] \[= -\frac{(4\pi)^{3}\,\alpha\,r_{e}^{2}\,m_{e}^{2}}{4q^{4}}\left[\mathcal{U}^{\mu\alpha}\mathbf{u}_{\mu\alpha}+\mathcal{P}^{\mu\alpha} \mathfrak{p}_{\mu\alpha}\right], \tag{16}\] where the fine-structure constant is given by \(\alpha=\frac{e^{2}}{4\pi}\) and the classical electron radius is given by \(r_{e}=\frac{e^{2}}{4\pi\,m_{e}}\). ## III Differential cross sections The fully differential cross section expressed in the target rest frame is \[d\sigma = \frac{1}{(4\pi)^{5}}|M|^{2}\frac{|\mathbf{p}^{\prime}|\omega d \omega}{|\mathbf{p}|m_{p}^{2}}d\Omega^{\prime}d\Omega_{k}, \tag{17}\] \[\equiv d\sigma_{\mathcal{U}}+d\sigma_{\mathcal{P}}. \tag{18}\] Inserting the polarized part of Eq. 16 into Eq. 17 gives \[d\sigma_{\mathcal{P}}=\frac{-\alpha\,r_{e}^{2}\,m_{e}^{2}}{4(4\pi)^{2}q^{4}\,m_{ p}^{2}}\mathcal{P}^{\mu\alpha}\mathfrak{p}_{\mu\alpha}\frac{|\mathbf{p}^{\prime}| \omega d\omega}{|\mathbf{p}|}d\Omega^{\prime}d\Omega_{k}. \tag{19}\]
\tag{19}\] The unpolarized term, \(d\sigma_{\mathcal{U}}\), corresponds to the usual Bethe-Heitler expression and was re-derived as a cross check. The angular phase spaces of the scattered electron and emitted photon are denoted by \(d\Omega^{\prime}\) and \(d\Omega_{k}\), respectively. In order to provide a practical expression, integration over the angles in the final state is needed. Evaluating the contraction of Levi-Civita symbols in \({\cal P}^{\mu\alpha}\mathfrak{p}_{\mu\alpha}\) leads to the following: \[{\cal P}^{\mu\alpha}\mathfrak{p}_{\mu\alpha} = -32m_{e}m_{p}\Bigg{[}\frac{1}{(pk)^{2}}\frac{q^{2}}{2}ka^{(e)}\Big{(}qa^{(p)}-2p^{\prime}a^{(p)}\Big{)}\] \[+ \Big{(}\frac{1}{(pk)^{2}}+\frac{1}{(p^{\prime}k)^{2}}\Big{)}\Big{(}qa^{(e)}qa^{(p)}m_{e}^{2}-a^{(e)}a^{(p)}q^{2}m_{e}^{2}\Big{)}\] \[+ \frac{1}{(pk)(p^{\prime}k)}\Big{(}-q^{4}a^{(e)}a^{(p)}-2qa^{(e)}a^{(p)}m_{e}^{2}\] \[+\frac{q^{2}}{2}\Big{(}2p^{\prime}a^{(e)}(p^{\prime}a^{(p)}-pa^{(p)})\] \[+ka^{(e)}(3qa^{(p)}+2pa^{(p)})\] \[+4m_{e}^{2}a^{(e)}a^{(p)}\Big{)}\Big{)}\] \[+ \Big{(}\frac{1}{p^{\prime}k}-\frac{1}{pk}\Big{)}\Big{(}(qa^{(e)}+ka^{(e)})qa^{(p)}-2q^{2}a^{(e)}a^{(p)}\Big{)}\Bigg{]}. \tag{20}\] Integration of the differential cross section over the scattered electron angles, \(d\Omega^{\prime}\), can be performed analytically [5] and is done first. For longitudinally polarized beams, all 4-vector products involving the scattered electron momentum, \(p^{\prime}\), in Eq. 20 can be expressed in terms of two basis vectors, \({\bf a}\) and \({\bf b}\), defined through \(q^{2}\) and \(p^{\prime}k\): \[q^{2} = -{\bf q}^{2}=-({\bf p}^{\prime}-{\bf p}+{\bf k})^{2}, \tag{21}\] \[\equiv -({\bf p}^{\prime}-{\bf T})^{2}=-({\bf p}^{\prime 2}+T^{2})(1-{\bf p}^{\prime}{\bf a}),\] \[p^{\prime}k = \omega\varepsilon^{\prime}-{\bf p}^{\prime}{\bf k}\equiv\omega\varepsilon^{\prime}(1-{\bf p}^{\prime}{\bf b}), \tag{22}\] \[{\bf a} = \frac{2{\bf T}}{{\bf p}^{\prime 2}+T^{2}}, \tag{23}\] \[{\bf b} = \frac{{\bf k}}{\omega\varepsilon^{\prime}}, \tag{24}\] with \({\bf T}\equiv{\bf p}-{\bf k}\). The other 4-vector products containing \(p^{\prime}\), namely \(p^{\prime}a^{(e)}\) and \(p^{\prime}a^{(p)}\), are also expressed with terms containing \((1-{\bf p}^{\prime}{\bf a})\) and \((1-{\bf p}^{\prime}{\bf b})\). Ultimately, the integral over the scattered electron angles is of the form: \[I_{m,n}=\int d\Omega^{\prime}(1-{\bf p}^{\prime}{\bf a})^{-m}(1-{\bf p}^{\prime}{\bf b})^{-n}. \tag{25}\] Note that by convention a factor of \(1/(2\pi)\) as defined for \(I_{m,n}\) in Ref. [5] is not included here. The array of integrals present is: \(I_{0,0}\), \(I_{0,1}\), \(I_{1,0}\), \(I_{-1,1}\), \(I_{1,-1}\), \(I_{1,1}\), \(I_{2,0}\), \(I_{0,2}\), \(I_{2,1}\), \(I_{1,2}\), \(I_{2,-1}\), \(I_{2,-2}\), \(I_{2,2}\). Integrals with negative indices can be expressed in terms of the other integrals. For the non-trivial integrals, \(I_{1,1}\), \(I_{2,1}\), \(I_{1,2}\), \(I_{2,2}\), Feynman parameters are first used to combine denominators. A strategically chosen polar axis then makes the azimuthal integrations trivial. Finally, a table of integrals [13] is used for the final integration over the Feynman parameter. Integration over \(d\Omega_{k}\) similarly starts with a strategically chosen polar axis (\(\hat{\bf p}\)) that makes the azimuthal integration trivial. Assembly of all terms resulting from each integral is algebraically quite laborious. Mathematica is used to assemble and simplify the algebra. The expressions for the double- and single-differential cross sections are given in the laboratory frame.
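To make the structure of the solid-angle integrals in Eq. 25 concrete, the short sketch below evaluates \(I_{m,n}\) by brute-force quadrature. The magnitude \(|{\bf p}^{\prime}|\) and the vectors \({\bf a}\), \({\bf b}\) are hypothetical stand-ins chosen only so the integrand stays finite; in the physical case they are fixed by Eqs. 21-24.

```python
import numpy as np
from scipy.integrate import dblquad

# Hypothetical inputs: |p'| and unit vectors a, b (so |p'.a|, |p'.b| < 1).
p_mag = 0.9
a = np.array([0.0, 0.0, 1.0])
b = np.array([0.6, 0.0, 0.8])

def I(m, n):
    """Evaluate I_{m,n} of Eq. 25 by direct 2D quadrature over d(Omega')."""
    def integrand(phi, theta):
        pp = p_mag * np.array([np.sin(theta) * np.cos(phi),
                               np.sin(theta) * np.sin(phi),
                               np.cos(theta)])
        return np.sin(theta) * (1 - pp @ a)**(-m) * (1 - pp @ b)**(-n)
    val, _ = dblquad(integrand, 0, np.pi, 0, 2 * np.pi)
    return val

print(I(0, 0))   # trivial case: the full solid angle, 4*pi
print(I(1, 1))   # one of the non-trivial integrals discussed above
```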
For the small angles that characterize the Bethe-Heitler expression at ultrarelativistic energies, the Lorentz transformation from the lab frame (LF) to the target rest frame (TRF) is especially simple: \(E_{TRF}=\frac{2\varepsilon_{p}}{m_{p}}E_{LF}\) for \(E\in\{\varepsilon,\varepsilon^{\prime},\omega\}\). After transforming to the lab frame, the beam parameters of \(a^{(e)}\) can be mapped to other variables: \(E_{e}\rightarrow\varepsilon\) and \(E_{p}\rightarrow\varepsilon_{p}\). The resulting polarized cross section, double-differential in the photon polar angle and energy up to \({\cal O}(m_{e}^{2})\), in the laboratory frame, is given by \[\frac{d\sigma_{\cal P}}{d\omega d\delta} = \mathbb{P}_{e}\mathbb{P}_{p}\frac{\alpha\,r_{e}^{2}\,m_{e}^{2}\,\delta}{\omega\varepsilon^{3}\varepsilon^{\prime}\varepsilon_{p}(1+\delta^{2})^{2}}\Bigg{[}\delta^{2}\Bigg{(}4L_{1}\varepsilon\varepsilon^{\prime 2} \tag{26}\] \[+2\omega\Big{(}(L_{1}+L_{2}-L_{\theta})\varepsilon^{2}+L_{1}\varepsilon\varepsilon^{\prime}-L_{2}\varepsilon^{\prime 2}\Big{)}\] \[-2\omega^{3}-5\varepsilon^{\prime}\omega^{2}-6\varepsilon^{\prime 2}\omega-4\varepsilon^{\prime 3}\Bigg{)}\] \[+2\varepsilon\Bigg{(}(1+2L_{\theta})\varepsilon^{\prime 2}+(1-L_{1}-L_{2}+L_{\theta})\omega^{2}\] \[+\varepsilon^{\prime}\omega\Big{(}-L_{1}-L_{2}+2L_{\theta}+\frac{1}{1+\delta^{2}}\Big{)}\Bigg{)}\Bigg{]},\] where \(L_{1}=\ln\frac{4\,\varepsilon^{\prime}\,\varepsilon_{p}}{m_{e}m_{p}}\), \(L_{2}=\ln\frac{4\,\varepsilon\,\varepsilon^{\prime}\,\varepsilon_{p}}{\omega\,m_{e}m_{p}}\), and \(L_{\theta}=\ln\left(1+\delta^{2}\right)\). One observes that, like the Bethe-Heitler spectrum, the angular spectrum is sharply peaked for \(\delta=\theta_{k}\frac{\varepsilon}{m_{e}}\lesssim 1\). Finally, integrating over the photon polar angle yields the single-differential polarized cross section \[\frac{d\sigma_{\cal P}}{d\omega}=\mathbb{P}_{e}\mathbb{P}_{p}\frac{4\alpha r_{e}^{2}}{\omega}\frac{\varepsilon^{\prime}}{\varepsilon}\frac{m_{e}^{2}}{\varepsilon\,\varepsilon_{p}}\left(F_{1}+\frac{\varepsilon}{4\varepsilon^{\prime}}F_{2}+\frac{\varepsilon^{\prime}}{8\varepsilon}F_{3}+\frac{\varepsilon^{2}}{2\varepsilon^{\prime 2}}F_{4}\right), \tag{27}\] where \[F_{1} = \frac{1}{8}\left(7+L_{2}(2-4L_{3})-4L_{3}+L_{1}(-2+4L_{3})\right),\] \[F_{2} = -3+L_{1}+2L_{2}+L_{3}(1-2L_{2}+2L_{3}),\] \[F_{3} = (-1+2L_{2})(-1+2L_{3}),\] \[F_{4} = (-1+L_{3})(-2+L_{1}+L_{2}-L_{3}),\] and \(L_{3}=\ln\frac{\pi\varepsilon\,\varepsilon_{p}}{m_{e}m_{p}}\). The polarized cross section is to be compared to the unpolarized Bethe-Heitler expression [7]: \[\frac{d\sigma_{\cal U}}{d\omega}=\frac{4\alpha r_{e}^{2}}{\omega}\frac{\varepsilon^{\prime}}{\varepsilon}\left(\frac{\varepsilon}{\varepsilon^{\prime}}+\frac{\varepsilon^{\prime}}{\varepsilon}-\frac{2}{3}\right)\left[L_{2}-\frac{1}{2}\right]. \tag{28}\] It is clear that the polarized component of the bremsstrahlung cross section is highly suppressed with respect to the unpolarized component by a factor of \(m_{e}^{2}/(\varepsilon\,\varepsilon_{p})\). Figures 2 and 3 show the energy spectra of the polarized and unpolarized components, respectively, for \(\varepsilon=18\) GeV and \(\varepsilon_{p}=275\) GeV (top EIC energies). The sharp turn in the spectrum near the photon energy limit (18 GeV) in Fig. 2 occurs when logs with different signs change in magnitude in \(F_{4}\). Furthermore, the polarized component becomes negative very near to the energy limit, which can occur for perturbative calculations at finite order.
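To get a feel for the size of this suppression, the following sketch evaluates the ratio of Eq. 27 to Eq. 28 at the top EIC energies quoted above. The photon energy \(\omega=1\) GeV and fully polarized beams (\(\mathbb{P}_{e}=\mathbb{P}_{p}=1\)) are illustrative assumptions; the common prefactor \(4\alpha r_{e}^{2}\,\varepsilon^{\prime}/(\omega\varepsilon)\) cancels in the ratio.

```python
import math

m_e, m_p = 0.000511, 0.938272   # masses in GeV
eps, eps_p = 18.0, 275.0        # beam energies from the text (GeV)
Pe = Pp = 1.0                   # fully polarized beams (assumption)
omega = 1.0                     # illustrative photon energy (GeV)
eps_pr = eps - omega            # scattered-electron energy, eps' = eps - omega

L1 = math.log(4 * eps_pr * eps_p / (m_e * m_p))
L2 = math.log(4 * eps * eps_pr * eps_p / (omega * m_e * m_p))
L3 = math.log(math.pi * eps * eps_p / (m_e * m_p))  # as given in the text

F1 = (7 + L2 * (2 - 4 * L3) - 4 * L3 + L1 * (-2 + 4 * L3)) / 8
F2 = -3 + L1 + 2 * L2 + L3 * (1 - 2 * L2 + 2 * L3)
F3 = (-1 + 2 * L2) * (-1 + 2 * L3)
F4 = (-1 + L3) * (-2 + L1 + L2 - L3)

# Polarized (Eq. 27) over unpolarized (Eq. 28), common prefactor cancelled
pol = Pe * Pp * (m_e**2 / (eps * eps_p)) * (
    F1 + eps / (4 * eps_pr) * F2 + eps_pr / (8 * eps) * F3
    + eps**2 / (2 * eps_pr**2) * F4)
unpol = (eps / eps_pr + eps_pr / eps - 2.0 / 3.0) * (L2 - 0.5)

print(f"suppression scale m_e^2/(eps*eps_p) = {m_e**2/(eps*eps_p):.2e}")
print(f"polarized/unpolarized ratio         = {pol/unpol:.2e}")
```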
A similar negative feature near the upper energy limit is also observed in a numerical calculation at larger \(q^{2}\) [3]. Very near to the upper energy limit, one should also keep in mind that the ultrarelativistic limit for the scattered electron breaks down. Additionally, inclusion of the ion's small recoil energy may alter this observed feature. The recoil of the ion was neglected in this calculation, as it was in the original Bethe-Heitler calculation: \(q=(0,{\bf q})\). To a better approximation, one may instead employ the relation \(q=({\bf q}^{2}/2m_{p},{\bf q})\), owing to the smallness of the momentum transferred to the nucleus. As a consequence, terms containing \(1/q^{2}\) in Eq. 20 (once inserted into Eq. 19) can be expanded in a power series, for which the first two terms are \(-1/{\bf q}^{2}-1/(4m_{p}^{2})\). The latter extra term is suppressed by \(m_{e}^{2}/m_{p}^{2}\) with respect to the first term. Additionally, in order to simplify the algebra in this calculation, frequent use was made of the simple relation \(\varepsilon^{\prime}+\omega-\varepsilon=0\), which would have to be promoted to \(\varepsilon^{\prime}+\omega-\varepsilon=-{\bf q}^{2}/2m_{p}\). Due to the similarly small scale of the calculated polarized cross section, it is not clear how much the no-recoil approximation affects its functional form. However, the polarized component is expected to remain suppressed. For the case of longitudinally polarized electrons and _transversely_ polarized protons, as expected for the EIC, the polarized component vanishes exactly when integrating over azimuthal angles of the final-state particles. Additionally, the vectors \({\bf a}\) and \({\bf b}\) do not form a complete basis to express the 4-vector products in Eq. 20, which further complicates analytic integration. Transverse polarization is not considered further here. With regard to the general luminosity program at the EIC, where high-\(Z\) heavy ions will be accelerated, it should be noted that the Born approximation underlying the Bethe-Heitler expression is known to be inadequate. The Bethe-Heitler expression should be replaced by the Bethe-Maximon expression [8; 9; 10]: \[\frac{d\sigma_{\mathcal{U}}}{d\omega} = \frac{4Z^{2}\alpha r_{e}^{2}}{\omega}\frac{\varepsilon^{\prime}}{\varepsilon}\left(\frac{\varepsilon}{\varepsilon^{\prime}}+\frac{\varepsilon^{\prime}}{\varepsilon}-\frac{2}{3}\right)\left[L_{2}-\frac{1}{2}-f(\alpha Z)\right], \tag{29}\] \[f(\alpha Z) = (\alpha Z)^{2}\sum_{n=1}^{\infty}\frac{1}{n(n^{2}+(\alpha Z)^{2})}. \tag{30}\] For heavy nuclei such as uranium, the Bethe-Maximon and Bethe-Heitler expressions differ by about 2%. For light nuclei, \(\alpha Z\ll 1\) and \(f(\alpha Z)\approx 1.2(\alpha Z)^{2}\). ## IV Conclusion The leading-order bremsstrahlung cross section for polarized incoming beams has been calculated in anticipation of luminosity measurements at the EIC. An analytic expression has been derived, which shows that the polarized component is highly suppressed with respect to the usual unpolarized Bethe-Heitler expression. The suppression is linked to the low \(q^{2}\) that characterizes the Bethe-Heitler process, which is \(\sim m_{e}^{2}\). Further work is needed to estimate the effect of the no-recoil approximation, although the polarized component is expected to remain suppressed. ###### Acknowledgements. I would like to thank Andrei Afanasev, Wim Cosyn, and Katarzyna Wichmann for useful discussions. This work is supported by US DOE Nuclear Physics Grant No. DE-FG02-07ER41521.
2306.01410
On the orders of composition factors in completely reducible groups
We obtain an asymptotic upper bound for the product of the $p$-parts of the orders of certain composition factors of a finite group acting completely reducibly and faithfully on a finite vector space of order divisible by a prime $p$. An application is given for the diameter of a nondiagonal orbital graph of an affine primitive permutation group.
Attila Maróti, Saveliy V. Skresanov
2023-06-02T10:02:21Z
http://arxiv.org/abs/2306.01410v1
# On the orders of composition factors in completely reducible groups ###### Abstract. We obtain an asymptotic upper bound for the product of the \(p\)-parts of the orders of certain composition factors of a finite group acting completely reducibly and faithfully on a finite vector space of order divisible by a prime \(p\). An application is given for the diameter of a nondiagonal orbital graph of an affine primitive permutation group. Key words and phrases: simple group of Lie type, composition factor, completely reducible, orbital graph. 2020 Mathematics Subject Classification: 20C33, 20E34. The project leading to this application has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 741420). The first author was also supported by the National Research, Development and Innovation Office (NKFIH) Grant No. K138596, No. K132951 and Grant No. K138828.

## 1. Introduction

For a prime \(p\) and a positive integer \(x\), let \(v_{p}(x)\) denote the exponent of \(p\) in \(x\). For a finite group \(G\) with composition series \(1=G_{0}<\cdots<G_{m}=G\), let \(c_{p}(G)\) denote the sum of \(v_{p}(|G_{i}/G_{i-1}|)\) over those \(i\in\{1,\ldots,m\}\) for which \(G_{i}/G_{i-1}\) is not isomorphic to a finite simple group of Lie type in characteristic \(p\); by the Jordan-Hölder theorem this does not depend on the choice of composition series. The main result of this paper is the following.

**Theorem 1.1**.: _There exists a universal constant \(C\) such that the following holds. Let \(q\) be a power of a prime \(p\) and let \(V\) be a finite vector space of dimension \(n\) over the field of size \(q\). If \(H\) is a subgroup of \(\operatorname{GL}(V)\) acting completely reducibly on \(V\) with \(r\) irreducible summands, then \(c_{p}(H)\leq C\cdot\frac{n-r}{p-1}\)._

Theorem 1.1 has an application to orbital graphs of affine primitive permutation groups.
Let \(G\) be a transitive permutation group on a finite set \(X\). The orbits of \(G\) acting coordinatewise on \(X\times X\) are called orbitals, and each orbital gives rise to an orbital graph with vertex set \(X\) whose edge set is the orbital. The orbital graph corresponding to the diagonal orbital \(\{(x,x):x\in X\}\) is called a diagonal orbital graph. A criterion of Higman [6] states that a transitive permutation group is primitive if and only if all its nondiagonal orbital graphs are connected. Liebeck, Macpherson and Tent [11] described finite primitive permutation groups whose nondiagonal orbital graphs have bounded diameter (we note that in [11] orbital graphs are considered to be undirected). See also the papers of Sheikh [17] and Rekvenyi [14]. The paper [12] contains upper bounds for the diameters of nondiagonal orbital graphs of affine primitive permutation groups.
Improving on a bound in [12], the second author [18] proved that there exists a universal constant \(C\) such that the diameter of a nondiagonal orbital graph for an affine primitive permutation group \(G\) of degree \(p^{n}\), for a prime \(p\) and an integer \(n\), is at most \(Cn^{3}\), provided that a point-stabilizer of \(G\) has order divisible by \(p\). As an application of Theorem 1.1, we obtain a strong upper bound for the orbital diameter of an affine primitive permutation group \(G\) with point-stabilizer \(H\), under the condition that \(c_{p}(H)\geq 1\), where \(p\) is the prime dividing the degree of \(G\). **Corollary 1.2**.: _There exists a universal constant \(C\) such that whenever \(G\) is an affine primitive permutation group of degree \(p^{n}\), where \(p\) is a prime and \(n\) is an integer, with a point-stabilizer \(H\) satisfying \(c_{p}(H)\geq 1\), then the diameter of any nondiagonal orbital graph of \(G\) is less than \(Cn^{2}/c_{p}(H)\)._ Note that if the composition factors of \(H\) belong to a list of known finite simple groups, then Corollary 1.2 is independent of the classification of finite simple groups. ## 2. Bounds on prime divisors of the orders of finite simple groups The purpose of this section is to establish Theorem 1.1 in the special case when \(H\) is a quasisimple group acting irreducibly on \(V\). The main result of the section is Proposition 2.5. The proof relies on bounds for prime divisors of the orders of finite simple groups of Lie type. Similar results have been obtained in [1, 2, 13], but we will require finer bounds in terms of the dimensions of irreducible projective modules of groups of Lie type. We need the following corollary of a result of Artin [1]. **Lemma 2.1**.: _Let \(r\) be a nonnegative integer, and let \(p\) be a prime. If \(a=\pm r\) or \(a=r^{2}\) then_ \[v_{p}\left(\prod_{i=1}^{m}(a^{i}-1)\right)\leq 2\frac{\log{(r+1)^{m}}}{\log{p}}.\] Proof.: In [1, p. 463], cf. [2, Lemma 4.2], it was shown that \[p^{v_{p}\left(\prod_{i=1}^{m}(a^{i}-1)\right)}\leq\begin{cases}3^{m/2}(r+1)^{m},&\text{ if $r$ is even, $a=\pm r$ or $a=r^{2}$},\\ 2^{m}(r+1)^{m},&\text{ if $r$ is odd, $a=\pm r$},\\ 4^{m}(r+1)^{m},&\text{ if $r$ is odd, $a=r^{2}$}.\end{cases}\] The right-hand side can be bounded above by \((r+1)^{2m}\). The claim follows by taking base \(p\) logarithms. Our notation for finite simple groups of Lie type follows [8]. **Lemma 2.2**.: _Let \(G\) be \(\mathrm{L}_{m}(r)\), \(\mathrm{U}_{m}(r)\), \(\mathrm{PSp}_{2m}(r)\), \(\Omega_{2m+1}(r)\), or \(\mathrm{P}\Omega_{2m}^{\pm}(r)\). If \(p\) is a prime not dividing \(r\), then_ \[v_{p}(|G|)\leq 3\frac{\log(r+1)^{m}}{\log p}.\] Proof.: We use Lemma 2.1 with \(a=r\) for linear groups, \(a=-r\) for unitary groups, and \(a=r^{2}\) for the symplectic, spinor and orthogonal groups; see [8, Table 5.1.A] for the order formulae for these groups. For all cases except the orthogonal groups in even dimension that gives us the bound \[v_{p}(|G|)\leq 2\frac{\log(r+1)^{m}}{\log p}.\] In the case of \(\mathrm{P}\Omega_{2m}^{\pm}(r)\), the prime \(p\) may divide \(r^{m}\pm 1\) and \(\prod_{i=1}^{m-1}(r^{2i}-1)\). Since \(v_{p}(r^{m}\pm 1)\leq\log{(r+1)^{m}}/\log p\) we get the final bound. **Lemma 2.3**.: _Let \(G\) be an exceptional finite simple group defined over the field of order \(r\). If \(p\) is a prime not dividing \(r\), then_ \[v_{p}(|G|)\leq 30\frac{\log(r+1)}{\log p}.\] Proof.: We use the order formulae for the exceptional groups, see [8, Table 5.1.B].
For \({}^{2}B_{2}(r)\), \({}^{2}G_{2}(r)\), \({}^{2}F_{4}(r)\), and \({}^{3}D_{4}(r)\) we estimate the \(p\)-part of the order from above by \((r+1)^{16}\), so \(v_{p}(|G|)\leq 16\log{(r+1)}/\log p\) in this case. For the other groups we use Lemma 2.1 with the following parameters: \[G_{2}(r),\,a=r^{2},\,m=3,\] \[F_{4}(r),\,a=r^{2},\,m=6,\] \[E_{6}(r),\,a=r,\,m=12,\] \[E_{7}(r),\,a=r^{2},\,m=9,\] \[E_{8}(r),\,a=r^{2},\,m=15,\] \[{}^{2}E_{6}(r),\,a=-r,\,m=12.\] Clearly the \(E_{8}(r)\) case dominates the rest, which gives us the claimed bound. The next lemma shows that the dimensions of cross-characteristic modules for a group of Lie type are large in comparison to the prime divisors of the order of the group. **Lemma 2.4**.: _There exists a universal constant \(C\) such that the following is true. Let \(G\) be a nonabelian finite simple group of Lie type defined over a field of order \(r\) having an irreducible projective representation of dimension \(n\) over a field of characteristic \(p\). If \(p\) divides \(|G|\) and does not divide \(r\), then \(p\leq C\cdot n\). Moreover, the following are true:_ 1. _If_ \(G\) _is_ \(\mathrm{L}_{m}(r)\)_,_ \(\mathrm{PSp}_{2m}(r)\)_,_ \(\mathrm{U}_{m}(r)\)_,_ \(\mathrm{P}\Omega_{2m}^{\pm}(r)\)_, or_ \(\Omega_{2m+1}(r)\)_, then_ \(r^{m-1}\leq C\cdot n\)_._ 2. _If_ \(G\) _is an exceptional group, then_ \(r\leq C\cdot n\)_._ Proof.: Assume first that \(G\) is \(\mathrm{L}_{m}(r)\), \(\mathrm{PSp}_{2m}(r)\), \(\mathrm{U}_{m}(r)\), \(\mathrm{P}\Omega_{2m}^{\pm}(r)\), or \(\Omega_{2m+1}(r)\). We claim that for every type of the group (linear, symplectic, unitary or orthogonal) the dimension \(n\) is bounded from below by \(C_{1}\cdot r^{\alpha m+\beta}\) where \(C_{1}\) is some universal constant and \(\alpha,\beta\) depend only on the type of the group. For example, if \(G\simeq\mathrm{U}_{m}(r)\) and \(m\) is even, then by [8, Table 5.3.A], we have \(n\geq(r^{m}-1)/(r+1)\). Therefore \(n\geq\frac{1}{2}r^{m-1}\), so \(\alpha m+\beta\) is \(m-1\) in this case. The lower bounds on \(n\) extracted from [8, Table 5.3.A] are collected in the third column of Table 1. In the table below we list the expressions \(\alpha m+\beta\) such that \(n\geq C_{1}\cdot r^{\alpha m+\beta}\) for classical groups: \[\begin{array}{c|ccccc}\text{Group}&L_{m}(r)&\mathrm{PSp}_{2m}(r)&U_{m}(r)&\mathrm{P}\Omega_{2m}^{\pm}(r)&\Omega_{2m+1}(r)\\ \hline\text{Bound}&m-1&m&m-1&2m-3&2m-2\end{array}\] Clearly, for some constant \(C\), we have \(r^{m-1}\leq C\cdot n\), proving (1). Since \(p\) divides \(|G|\), it divides at least one of the factors from the order formula for \(|G|\), see [8, Table 5.1.A]. In the second column of Table 1 we list the largest such factors, that is, only those which are not dominated by the lower bound on the dimension \(n\). For instance, if \(G\simeq\Omega_{2m+1}(r)\), then \(p\) divides one of \(r^{2i}-1\), \(i=1,\ldots,m\). We know that \(n\geq C_{1}\cdot r^{2m-2}\) from the table above, so \(r^{2i}-1\leq C_{1}^{\prime}\cdot n\) for \(i=1,\ldots,m-1\) and some universal constant \(C_{1}^{\prime}\). Hence we put the factor \(r^{2m}-1\) in Table 1. Note that \(r^{2m}-1\) factorizes as \((r^{m}-1)(r^{m}+1)\), so \(p\) divides one of the factors, and therefore one has \(p\leq C_{1}^{\prime}\cdot n\). Similar factorizations can be used for other classical groups, so we derive that \(p\leq C\cdot n\) for some universal constant \(C\). Assume now that \(G\) is an exceptional group of Lie type.
The dimension \(n\) can be bounded from below by \(C_{2}\cdot r^{\alpha}\) for some universal constants \(C_{2}\) and \(\alpha\) depending only on the type of the group by [8, Table 5.3.A]. We list the corresponding \(\alpha\) for the exceptional groups in the following table: \[\begin{array}{c|ccccccccc}\text{Group}&E_{6}&E_{7}&E_{8}&F_{4}&{}^{2}E_{6}&G_{2}&{}^{3}D_{4}&{}^{2}F_{4}&\text{Sz}&{}^{2}G_{2}\\ \hline\text{Bound}&11&17&29&8&11&3&5&5&1&2\end{array}\] It immediately follows that \(r\leq C\cdot n\) for some constant \(C\), proving (2). The prime \(p\) divides the order of the group and, hence, divides some factor in its order formula, see [8, Table 5.1.B]. As in the previous case, in the second column of Table 1 we list the largest such factor. Note that for the group \({}^{3}D_{4}(r)\) there are two factors not dominated by the lower bound for \(n\). We factorize the polynomials from the order formulae in order to obtain a bound of the form \(p\leq C\cdot n\) for some universal constant \(C\). For example, if \(G\simeq E_{6}(r)\) and \(p\) divides \(r^{12}-1\), we derive that \(p\) divides one of \(r^{6}-1\) or \(r^{6}+1\), each of which is smaller than \(r^{11}\). The only nontrivial cases arise when \(G\) is \({}^{3}D_{4}(r)\), \({}^{2}F_{4}(r)\) or \(\text{Sz}(r)\). If \(G\simeq{}^{3}D_{4}(r)\) and \(p\) divides \(r^{8}+r^{4}+1\), we use the factorization \[r^{8}+r^{4}+1=(r^{4}+r^{2}+1)(r^{4}-r^{2}+1),\] hence \(p\leq 3\cdot r^{5}\). If \(G\simeq{}^{2}F_{4}(r)\) and \(p\) divides \(r^{6}+1\), then we use \(r^{6}+1=(r^{2}+1)(r^{4}-r^{2}+1)\), so \(p\leq 3\cdot r^{5}\). Finally, if \(G\simeq\mathrm{Sz}(r)\) and \(p\) divides \(r^{2}+1\), then recall that \(r=2^{2e+1}\) for some integer \(e\) and we have \[r^{2}+1=(r+1-\sqrt{2r})(r+1+\sqrt{2r}).\] Therefore \(p\leq 3\cdot r\) in this case, finishing the proof of the lemma. Notice that in the setting of the lemma we also have bounds of the form \(p-1\leq C^{\prime}(n-1)\), \(r^{m-1}-1\leq C^{\prime}(n-1)\) in case (1), and \(r-1\leq C^{\prime}(n-1)\) in case (2) for some universal constant \(C^{\prime}\). The following result will be used in the main proof. Recall that \(G\) is quasisimple if it is perfect and \(G/Z(G)\) is nonabelian simple. **Proposition 2.5**.: _There exists a universal constant \(C\) such that the following is true. Let \(G\) be a quasisimple group such that \(G/Z(G)\) is not isomorphic to a group of Lie type in characteristic \(p\)._
If \(G\) has an irreducible projective representation of dimension \(n\) over a field of characteristic \(p\), then_ \[v_{p}(|G|)\leq C\cdot\frac{n-1}{p-1}.\] \begin{table} \begin{tabular}{l|l|l} Group & Largest factors & Lower bounds \\ \hline \(L_{2}(r)\) & \(r^{2}-1\) & \((r-1)/\gcd(2,r-1)\) \\ \(L_{m}(r)\), \(m\geq 3\) & \(r^{m}-1\) & \(r^{m-1}-1\) \\ \(\mathrm{PSp}_{2m}(r)\), \(m\geq 2\) & \(r^{2i}-1\), \(m<2i\leq 2m\) & \((r^{m}-1)/2\), \(r\) odd \\ & & \(r^{m-1}(r^{m-1}-1)(r-1)/2\), \(r\) even \\ \(U_{m}(r)\), \(m\geq 3\) & \(r^{m}-(-1)^{m}\) & \(r(r^{m-1}-1)/(r+1)\), \(m\) odd \\ & & \((r^{m}-1)/(r+1)\), \(m\) even \\ \(\mathrm{P}\Omega^{+}_{2m}(r)\), \(m\geq 4\) & \(r^{2m-2}-1\) & \((r^{m-1}-1)(r^{m-2}+1)\), \(r\neq 2,3,5\) \\ & & \(r^{m-2}(r^{m-1}-1)\), \(r=2,3,5\) \\ \(\mathrm{P}\Omega^{-}_{2m}(r)\), \(m\geq 4\) & \(r^{2m-2}-1\) & \((r^{m-1}+1)(r^{m-2}-1)\) \\ \(\Omega_{2m+1}(r)\), \(m\geq 3\), \(r\) odd & \(r^{2m}-1\) & \(r^{2m-2}-1\), \(r>5\) \\ & & \(r^{m-1}(r^{m-1}-1)\), \(r=3,5\) \\ \hline \(E_{6}(r)\) & \(r^{12}-1\) & \(r^{9}(r^{2}-1)\) \\ \(E_{7}(r)\) & \(r^{18}-1\) & \(r^{15}(r^{2}-1)\) \\ \(E_{8}(r)\) & \(r^{30}-1\) & \(r^{27}(r^{2}-1)\) \\ \(F_{4}(r)\) & \(r^{12}-1\) & \(r^{6}(r^{2}-1)\), \(r\) odd \\ & & \(r^{7}(r^{3}-1)(r-1)/2\), \(r\) even \\ \({}^{2}E_{6}(r)\) & \(r^{12}-1\) & \(r^{9}(r^{2}-1)\) \\ \(G_{2}(r)\) & \(r^{6}-1\) & \(r(r^{2}-1)\) \\ \({}^{3}D_{4}(r)\) & \(r^{8}+r^{4}+1\), \(r^{6}-1\) & \(r^{3}(r^{2}-1)\) \\ \({}^{2}F_{4}(r)\) & \(r^{6}+1\) & \(r^{4}\sqrt{r/2}(r-1)\) \\ Sz(r) & \(r^{2}+1\) & \(\sqrt{r/2}(r-1)\) \\ \({}^{2}G_{2}(r)\) & \(r^{3}+1\) & \(r(r-1)\) \\ \end{tabular} \end{table} Table 1. Largest factors in order formulae and lower bounds of dimensions of representations for groups of Lie type Proof.: By [8, Corollary 5.3.3], the degree of a minimal projective \(p\)-modular representation of \(G\) is bounded below by the corresponding number for \(G/Z(G)\). We may thus replace \(G\) by \(G/Z(G)\) and assume that \(G\) is simple. Let \(C\) be a large fixed constant (how to specify \(C\) will be clear from the proof). Notice that by choosing \(C\) large enough we may assume that \(G\) is not a sporadic group. If \(G\) is isomorphic to \(\operatorname{Alt}(m)\), \(m\geq 5\), then by [8, Proposition 5.3.7] one has \(n\geq m-4\). Thus by Legendre's formula \[v_{p}(|\operatorname{Alt}(m)|)\leq\frac{m-1}{p-1}\leq 5\frac{n-1}{p-1},\] where the last inequality uses the fact that \(n+3\leq 5(n-1)\) for \(n\geq 2\). Now the claimed inequality follows for \(C\geq 5\). Now we assume that \(G\) is a group of Lie type not in characteristic \(p\). We first consider classical groups. Fix \(r\) and \(m\) as in Lemma 2.2, and notice that for \(r\geq 2\) and \(m\geq 2\) we have \((r+1)^{m}\leq 9(r^{m-1}-1)^{2}\). Lemma 2.2 implies \[v_{p}(|G|)\leq 3\frac{\log(9(r^{m-1}-1)^{2})}{\log p}.\] If \(p\geq\sqrt{3(r^{m-1}-1)}\), then \[v_{p}(|G|)\leq 3\frac{\log(9(r^{m-1}-1)^{2})}{\log\sqrt{3(r^{m-1}-1)}}=12\leq C \cdot\frac{n-1}{p-1},\] where the last inequality holds for \(C\) large enough by Lemma 2.4. If \(p<\sqrt{3(r^{m-1}-1)}\), then \[v_{p}(|G|)\leq 3\frac{\log(9(r^{m-1}-1)^{2})}{\log 2}<C_{1}\sqrt{r^{m-1}-1}<C_{2 }\cdot\frac{r^{m-1}-1}{p-1},\] for some constants \(C_{1},C_{2}\). By Lemma 2.4 (1), we have \(r^{m-1}-1\leq C_{3}\cdot(n-1)\) for some \(C_{3}\). Therefore \[v_{p}(|G|)\leq C_{2}\cdot C_{3}\cdot\frac{n-1}{p-1}\leq C\cdot\frac{n-1}{p-1},\] whenever \(C\geq C_{2}\cdot C_{3}\). We turn to the exceptional groups. 
If \(r\) is the order of the defining field, then \(r+1\leq 3(r-1)\) and Lemma 2.3 imply \[v_{p}(|G|)\leq 30\frac{\log(r+1)}{\log p}\leq 30\frac{\log 3(r-1)}{\log p}.\] If \(p\geq\sqrt{3(r-1)}\), then \[v_{p}(|G|)\leq 30\frac{\log 3(r-1)}{\log\sqrt{3(r-1)}}=60\leq C\cdot\frac{n-1}{p-1},\] where the last inequality uses Lemma 2.4. If \(p<\sqrt{3(r-1)}\), then \[v_{p}(|G|)\leq 30\frac{\log 3(r-1)}{\log 2}<C_{1}^{\prime}\sqrt{r-1}<C_{2}^{\prime}\cdot\frac{r-1}{p-1},\] for some constants \(C_{1}^{\prime},C_{2}^{\prime}\). By Lemma 2.4 (2), we have \(r-1\leq C_{3}^{\prime}\cdot(n-1)\), hence \[v_{p}(|G|)\leq C_{2}^{\prime}\cdot C_{3}^{\prime}\cdot\frac{n-1}{p-1}\leq C\cdot\frac{n-1}{p-1},\] for \(C\geq C_{2}^{\prime}\cdot C_{3}^{\prime}\). ## 3. Nonabelian composition factors For a finite group \(G\) with composition series \(1=G_{0}<\cdots<G_{m}=G\) let \(\overline{c}_{p}(G)\) be the sum of \(v_{p}(|G_{i}/G_{i-1}|)\) over those \(i\in\{1,\ldots,m\}\) for which \(G_{i}/G_{i-1}\) is nonabelian and not isomorphic to a finite simple group of Lie type in characteristic \(p\). The main result of [3] bounds the number of composition factors isomorphic to cyclic groups of order \(p\), so in order to bound \(c_{p}(G)\) we may focus on bounding \(\overline{c}_{p}(G)\) first. **Proposition 3.1**.: _There exists a universal constant \(C\) such that the following holds. Let \(q\) be a power of a prime \(p\) and let \(V\) be a finite vector space of dimension \(n\) over the field of size \(q\). If \(H\) is a subgroup of \(\operatorname{GL}(V)\) acting completely reducibly with \(r\) irreducible summands, then_ \[\overline{c}_{p}(H)\leq C\cdot\frac{n-r}{p-1}.\] Proof.: Let \(H\leq\operatorname{GL}(V)\) be a counterexample to the statement of the proposition with \(n\geq 2\) minimal. Under this condition, assume that \(|H|\) is as small as possible. The proof proceeds in several steps; we choose the constant \(C=\max\{20/3,\,C_{1}\}\), where \(C_{1}\) is the constant \(C\) from Proposition 2.5. **Step 1: \(H\) acts irreducibly on \(V\).** Assume that \(W\) is a nonzero proper irreducible submodule of \(V\). Let \(K\) be the centralizer of \(W\) in \(H\). The factor group \(H/K\) acts irreducibly and faithfully on \(W\). Thus \(\overline{c}_{p}(H/K)\leq C\cdot\frac{m-1}{p-1}\) by the minimality of \(n\), where \(m\) is the dimension of \(W\) over the field of size \(q\). Since \(H\) acts completely reducibly on \(V\), there exists a submodule \(U\) of \(V\) such that \(V=W\oplus U\). The group \(K\) acts faithfully on \(U\). Since \(K\) is normal in \(H\), it acts completely reducibly on \(U\) by Clifford's theorem. By the minimality of \(n\) again, we have \(\overline{c}_{p}(K)\leq C\cdot\frac{(n-m)-(r-1)}{p-1}\). These give \[\overline{c}_{p}(H)=\overline{c}_{p}(H/K)+\overline{c}_{p}(K)\leq C\cdot\frac{m-1}{p-1}+C\cdot\frac{(n-m)-(r-1)}{p-1}=C\cdot\frac{n-r}{p-1},\] a contradiction to the minimality of \(H\). **Step 2: \(H\) is perfect.** Since \(H\) acts irreducibly on \(V\), its derived subgroup \([H,H]\) acts completely reducibly. Now, \(\overline{c}_{p}(H)=\overline{c}_{p}([H,H])\) and we may assume that \(H=[H,H]\) by the minimality of \(|H|\). **Step 3: \(H\) acts primitively on \(V\).** Assume that \(H\) acts imprimitively on \(V\), that is, \(H\) preserves a decomposition \(V=V_{1}\oplus\ldots\oplus V_{t}\) of the vector space \(V\) into (proper) subspaces \(V_{i}\) of the same size where \(1\leq i\leq t\) for some integer \(t>1\). Let the kernel of the action of \(H\) on \(\{V_{1},\ldots,V_{t}\}\) be \(B\).
We have \(\overline{c}_{p}(H/B)\leq(t-1)/(p-1)\) by considering the \(p\)-part of \(t!\). Since \(B\) is a proper normal subgroup of \(H\), we have \(\overline{c}_{p}(B)\leq C\cdot\frac{n-t}{p-1}\). These give \[\overline{c}_{p}(H)=\overline{c}_{p}(H/B)+\overline{c}_{p}(B)\leq\frac{t-1}{p -1}+C\cdot\frac{n-t}{p-1}\leq C\cdot\frac{n-1}{p-1},\] where \(C\geq 1\) is used. **Step 4: \(H\) acts absolutely irreducibly on \(V\).** Let \(E=\operatorname{End}_{H}(V)\). This is a field extension of the field of order \(q\). Let the order of \(E\) be \(q^{e}\). The group \(H\) may be viewed as a subgroup of \(\operatorname{GL}(V)\) where \(V\) is the vector space of dimension \(n/e\) over the field \(E\). The \(EH\)-module \(V\) remains irreducible. Let \(e>1\). The minimality of \(n\) gives \[\overline{c}_{p}(H)\leq C\cdot\frac{n/e-1}{p-1}<C\cdot\frac{n-1}{p-1}.\] A contradiction. **Step 5: \(H\) does not preserve any proper field extension.** Assume that \(H\) preserves a field extension structure on \(V\) over the field of order \(q^{e}\) for some \(e>1\). The group \(H\) may be embedded in \(\operatorname{GL}(n/e,q^{e}).e\) and since \(H\) is perfect, \(H\) lies in \(\operatorname{GL}(n/e,q^{e})\). By the argument in [5, p. 1028], the group \(H\) acts irreducibly (and faithfully) on \(V\) viewed as a vector space of dimension \(n/e\) over the field with \(q^{e}\) elements. These give \[\overline{c}_{p}(H)\leq C\cdot\frac{n/e-1}{p-1}<C\cdot\frac{n-1}{p-1}.\] A contradiction. **Step 6: The group \(H\) is quasisimple.** By the argument in [5, p. 1029], for every normal subgroup \(R\) of \(H\) every irreducible constituent of the \(R\)-module \(V\) is absolutely irreducible. Since \(H\) acts primitively on \(V\), every normal subgroup of \(H\) acts homogeneously on \(V\) by Clifford's theorem. In particular, every abelian normal subgroup of \(H\) is cyclic by Schur's lemma and is central by the previous paragraph. Let \(R\) be a normal subgroup of \(H\) minimal subject to being noncentral. The center \(Z(R)\) of \(R\) is contained in \(Z(H)\) and \(R/Z(R)\) is characteristically simple. As in the proof of [5, Theorem 4.1], the group \(R\) is either a central product of say \(t\) quasisimple groups \(Q_{i}\) (with the \(Q_{i}/Z(Q_{i})\) all isomorphic) or \(R/Z(R)\) is an elementary abelian \(r\)-group for some prime \(r\). In the second case \(R\) is an \(r\)-group with \(r\) different from \(p\) and it may be proved that \(R\) is of symplectic type with \(|R/Z(R)|=r^{2a}\) for some integer \(a\). We follow the proof of [5, Theorem 4.1] and introduce some notation. Let \(J_{1},\dots,J_{k}\) denote the distinct normal subgroups of \(H\) that are minimal with respect to being noncentral in \(H\). Let \(J=J_{1}\cdots J_{k}\) be the central product of these subgroups. Let \(W\) be an irreducible constituent of the \(J\)-module \(V\). Then \(W=U_{1}\otimes\cdots\otimes U_{k}\) where \(U_{i}\) is an irreducible \(J_{i}\)-module. If \(J_{i}\) is the central product of \(t\) copies of a quasisimple group, then \(\dim(U_{i})\geq 2^{t}\) and if \(J_{i}\) is of symplectic type with \(J_{i}/Z(J_{i})\) of order \(r^{2a}\), then \(\dim(U_{i})=r^{a}\). The group \(H/(Z(H)J)\) embeds into the direct product of the outer automorphism groups of the \(J_{i}\). Let \(J_{i}\) be a central product of say \(t\) quasisimple groups \(Q\). The outer automorphism group \(\operatorname{Out}(J_{i})\) in this case may be viewed as a subgroup of \(\operatorname{Out}(Q/Z(Q))\wr{\operatorname{Sym}(t)}\). 
Since \(\operatorname{Out}(Q/Z(Q))\) is solvable by Schreier's conjecture, \[v_{p}(|\operatorname{Out}(J_{i})/\operatorname{Sol}(\operatorname{Out}(J_{i}) )|)\leq v_{p}(|\operatorname{Sym}(t)|)\leq\frac{t-1}{p-1},\] where \(\operatorname{Sol}(X)\) denotes the solvable radical of a finite group \(X\). Now let \(J_{i}\) be a group of symplectic type with \(|J_{i}/Z(J_{i})|=r^{2a}\) for some prime \(r\) and integer \(a\). In this case \(\operatorname{Out}(J_{i})\) may be viewed as a subgroup of \(\operatorname{Sp}_{2a}(r)\) and so \[v_{p}(|\operatorname{Out}(J_{i})|)\leq v_{p}(|\operatorname{Sp}_{2a}(r)|),\] which is at most \(\frac{(4/3)r^{a}-1}{p-1}\) by [3, (3)]. Since \(n=\dim(V)\geq\dim(W)=\prod_{i}\dim(U_{i})\geq\sum_{i}\dim(U_{i})\), we have \[\overline{c}_{p}(H/(Z(H)J))\leq\frac{(4/3)n-1}{p-1}\leq\frac{5}{3}\cdot\frac{n -1}{p-1} \tag{1}\] by the previous paragraph and the fact that \(n\geq 2\). We claim that exactly one of the \(J_{i}\) is nonsolvable with a nonabelian composition factor of order divisible by \(p\) but different from a group of Lie type in characteristic \(p\). Suppose otherwise. If there is no such \(J_{i}\), then \(\overline{c}_{p}(Z(H)J)=0\) and so \[\overline{c}_{p}(H)\leq\overline{c}_{p}(H/(Z(H)J))+\overline{c}_{p}(Z(H)J) \leq\frac{5}{3}\cdot\frac{n-1}{p-1}<C\cdot\frac{n-1}{p-1}, \tag{2}\] by (1) and the fact that \(C\geq 5/3\), a contradiction. Let the number of such \(J_{i}\) be \(m>1\). Without loss of generality, let these be \(J_{1},\ldots,J_{m}\). We have \(\overline{c}_{p}(Z(H)J)=\sum_{i=1}^{m}\overline{c}_{p}(J_{i})\). For each \(i\) with \(1\leq i\leq k\), let \(\dim(U_{i})=n_{i}\). By the minimality of \(n\), we find that \[\sum_{i=1}^{m}\overline{c}_{p}(J_{i})\leq C\cdot\frac{(\sum_{i=1}^{m}n_{i})-m} {p-1}.\] If \(m\geq 3\) or \(m=2\) and \(\max\{n_{1},n_{2}\}\geq 4\), then \(\sum_{i=1}^{m}n_{i}\leq\frac{3}{4}\prod_{i=1}^{m}n_{i}\leq\frac{3}{4}n\), hence \[\overline{c}_{p}(H)=\overline{c}_{p}(Z(H)J)+\overline{c}_{p}(H/(Z(H)J))\leq \frac{3C}{4}\cdot\frac{n-1}{p-1}+\frac{5}{3}\cdot\frac{n-1}{p-1}\leq C\cdot \frac{n-1}{p-1},\] where the last inequality holds since \(C\geq 20/3\). A contradiction. If \(m=2\) and \(\max\{n_{1},n_{2}\}\leq 3\), then \(\overline{c}_{p}(\operatorname{Out}(J_{i}))=0\) for \(i=1,2\) and hence \[\overline{c}_{p}(H/(Z(H)J))\leq\frac{(4/3)\sum_{i=3}^{k}n_{i}-1}{p-1},\] so \[\overline{c}_{p}(H)\leq C\cdot\frac{(n_{1}+n_{2})-1}{p-1}+\frac{(4/3)\sum_{i=3 }^{k}n_{i}-1}{p-1}\leq C\cdot\frac{n-1}{p-1},\] by the minimality of \(n\), where the last inequality holds since \(C\geq 4/3\). A contradiction. We thus have \(m=1\). We claim that \(k=1\). Assume that \(k\geq 2\). By the previous paragraph and without loss of generality, \(\overline{c}_{p}(J_{1})\geq 1\) and \(\overline{c}_{p}(J_{i})=0\) for every \(i\) with \(2\leq i\leq k\). By the minimality of \(n\) and the fact that \(k\geq 2\) and \(n_{2}\geq 2\), we have \[\overline{c}_{p}(Z(H)J)=\overline{c}_{p}(J_{1})\leq C\cdot\frac{n_{1}-1}{p-1} \leq\frac{C}{2}\cdot\frac{n-1}{p-1}.\] This together with the bound (1) and \(C\geq 10/3\) give \(\overline{c}_{p}(H)<C\cdot\frac{n-1}{p-1}\), a contradiction. The group \(J=J_{1}\) is a central product of say \(t\) quasisimple groups \(Q_{i}\) (with the \(Q_{i}/Z(Q_{i})\) all isomorphic). We claim that \(t=1\). Assume for a contradiction that \(t\geq 2\). Let \(W\) be an irreducible constituent of the \(J\)-module \(V\). 
Then \(W=W_{1}\otimes\cdots\otimes W_{t}\) where \(W_{i}\) is an irreducible \(Q_{i}\)-module for every \(i\) with \(1\leq i\leq t\) by [8, Lemmas 5.5.5 and 2.10.1]. For each \(i\) with \(1\leq i\leq t\), let \(m_{i}\) be \(\dim(W_{i})\geq 2\). We have \(n\geq\prod_{i=1}^{t}m_{i}\geq\sum_{i=1}^{t}m_{i}\). We get \(\overline{c}_{p}(J)\leq C\cdot\frac{n-t}{p-1}\) by the minimality of \(n\) and \(\overline{c}_{p}(H/(Z(H)J))\leq\frac{t-1}{p-1}\) by Schreier's conjecture. This is a contradiction since \(C\geq 1\). We conclude that \(t=1\). Since \(H\) is perfect, \(H=JZ(H)\) and so \(H=J\) is quasisimple. Since \(H\) acts absolutely irreducibly on \(V\) and is quasisimple, the final contradiction follows from Proposition 2.5. ## 4. Proofs of the main results Proof of Theorem 1.1.: Let \(V\) be a finite vector space of dimension \(n\) over the field of size \(q\). Let \(H\) be a subgroup of \(\operatorname{GL}(V)\) acting completely reducibly on \(V\). Let \(r\) be the number of irreducible summands of the \(H\)-module \(V\). We claim that \(c_{p}(H)\leq C\cdot\frac{n-r}{p-1}\) for some universal constant \(C\). We prove the bound by induction on \(n\). If \(n=1\), then the size of \(H\) is not divisible by \(p\) and so \(c_{p}(H)=0\). Assume that \(n\geq 2\) and that the claim is true for \(n-1\). If the \(H\)-module \(V\) contains an irreducible summand \(W\) of dimension \(1\) and \(K\) denotes the centralizer of \(W\) in \(H\), then \(c_{p}(H)=c_{p}(K)\leq C\cdot\frac{(n-1)-(r-1)}{p-1}\) by the induction hypothesis. We may thus assume that every irreducible summand of the \(H\)-module \(V\) has dimension at least \(2\). In particular, \(r\leq n/2\). The number of composition factors of \(H\) isomorphic to the cyclic group of order \(p\) is at most \(((4/3)n-r)/(p-1)\) by [3, Theorem 1]. This is at most \(\frac{8}{3}\frac{n-r}{p-1}\) since \(r\leq n/2\). Thus \[c_{p}(H)\leq\frac{8}{3}\frac{n-r}{p-1}+\overline{c}_{p}(H)\leq C\cdot\frac{n-r}{p-1},\] where \(C\) is \(8/3\) plus a constant whose existence is assured by Proposition 3.1. Proof of Corollary 1.2.: Let \(C\) be a constant whose existence is assured by Theorem 1.1. Let \(G\) be an affine primitive permutation group of degree \(p^{n}\) where \(p\) is a prime and \(n\) is an integer. Let \(H\) be a point-stabilizer in \(G\) satisfying \(c_{p}(H)\geq 1\). The diameter of any nondiagonal orbital graph of \(G\) is at most \((p-1)n\) by [12, Proposition 3.2]. On the other hand, \(p-1\leq C(n-1)/c_{p}(H)\) by Theorem 1.1. Combining the two bounds, the diameter of any nondiagonal orbital graph of \(G\) is at most \(Cn(n-1)/c_{p}(H)\), which is less than \(Cn^{2}/c_{p}(H)\).
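As a quick numerical sanity check of the valuation bound in Lemma 2.1, the short sketch below verifies \(v_{p}\left(\prod_{i=1}^{m}(a^{i}-1)\right)\leq 2m\log(r+1)/\log p\) for \(a=\pm r\) and \(a=r^{2}\); the parameter ranges are illustrative choices, not from the paper.

```python
import math

def vp(x: int, p: int) -> int:
    """p-adic valuation of a nonzero integer x."""
    x, v = abs(x), 0
    while x % p == 0:
        x //= p
        v += 1
    return v

# Check Lemma 2.1 for a = r, a = -r, and a = r^2 over small parameter ranges.
for r in range(2, 10):
    for m in range(1, 9):
        for a in (r, -r, r * r):
            prod = math.prod(a**i - 1 for i in range(1, m + 1))
            for p in (2, 3, 5, 7, 11, 13):
                lhs = vp(prod, p)
                rhs = 2 * m * math.log(r + 1) / math.log(p)
                assert lhs <= rhs + 1e-9, (r, m, a, p, lhs, rhs)
print("Lemma 2.1 bound holds on all sampled cases")
```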
2310.14076
On the Relationship Between Relevance and Conflict in Online Social Link Recommendations
In an online social network, link recommendations are a way for users to discover relevant links to people they may know, thereby potentially increasing their engagement on the platform. However, the addition of links to a social network can also have an effect on the level of conflict in the network -- expressed in terms of polarization and disagreement. To this date, however, we have very little understanding of how these two implications of link formation relate to each other: are the goals of high relevance and conflict reduction aligned, or are the links that users are most likely to accept fundamentally different from the ones with the greatest potential for reducing conflict? Here we provide the first analysis of this question, using the recently popular Friedkin-Johnsen model of opinion dynamics. We first present a surprising result on how link additions shift the level of opinion conflict, followed by explanation work that relates the amount of shift to structural features of the added links. We then characterize the gap in conflict reduction between the set of links achieving the largest reduction and the set of links achieving the highest relevance. The gap is measured on real-world data, based on instantiations of relevance defined by 13 link recommendation algorithms. We find that some, but not all, of the more accurate algorithms actually lead to better reduction of conflict. Our work suggests that social links recommended for increasing user engagement may not be as conflict-provoking as people might have thought.
Yanbang Wang, Jon Kleinberg
2023-10-21T17:52:58Z
http://arxiv.org/abs/2310.14076v4
# On the Relationship Between Relevance and Conflict in Online Social Link Recommendations ###### Abstract In an online social network, link recommendations are a way for users to discover relevant links to people they may know, thereby potentially increasing their engagement on the platform. However, the addition of links to a social network can also have an effect on the level of conflict in the network -- expressed in terms of polarization and disagreement. To this date, however, we have very little understanding of how these two implications of link formation relate to each other: are the goals of high relevance and conflict reduction aligned, or are the links that users are most likely to accept fundamentally different from the ones with the greatest potential for reducing conflict? Here we provide the first analysis of this question, using the recently popular Friedkin-Johnsen model of opinion dynamics. We first present a surprising result on how link additions shift the level of opinion conflict, followed by explanation work that relates the amount of shift to structural features of the added links. We then characterize the gap in conflict reduction between the set of links achieving the largest reduction and the set of links achieving the highest relevance. The gap is measured on real-world data, based on instantiations of relevance defined by 13 link recommendation algorithms. We find that some, but not all, of the more accurate algorithms actually lead to better reduction of conflict. Our work suggests that social links recommended for increasing user engagement may not be as conflict-provoking as people might have thought. ## 1 Introduction Recent years have seen an explosion in the usage of social media and online social networks. In 2022, an estimated 4.6 billion people worldwide used online social networks regularly [1]. Online social networks such as Facebook, LinkedIn, and Twitter are transforming society by enabling people to exchange opinions and knowledge at an unprecedented rate and scale. However, there are also significant concerns about the disruptive effects of social media, as people observe the growing level of polarization and disagreement in online communities. Conflicts arise over topics ranging from politics [2; 3], to entertainment [4; 5] and healthcare [6]. For our purposes in this paper, we will adopt terminology from the social sciences in distinguishing between polarization and disagreement as follows: _polarization_ will refer to how much people's opinions deviate from the average, and _disagreement_ will refer to how much people directly connected to each other in the social network differ in their opinions. Reducing polarization and disagreement can be a benefit both to individual users of these platforms, who would experience less conflict and stress, and potentially also to the platforms themselves, to the extent that too much contentiousness can make the platform unattractive to users. The root causes of increased polarization and disagreement have been the subject of much concern and debate. One popular idea is Eli Pariser's "filter bubble" theory [7]. The theory attributes the intensification of online conflict to the recommendation algorithms deployed by providers for maximizing user engagement.
It conjectures that a person becomes more biased and cognitively blinded because recommendation algorithms keep suggesting new connections and new pieces of content associated with like-minded people, thus solidifying the person's pre-existing bias. The "filter bubble" theory is influential, but the empirical evidence supporting it has been limited. For example, [8] conducted hypothesis-testing experiments on the effect of personalization on consumer fragmentation, concluding that there was not enough evidence to support the "filter bubble" theory; [9] also employed a data-driven method by examining the data from a movie recommender system and checked whether users gradually become exposed to more diverse content; [10] surveyed a number of active Twitter users and suggested that the offline habits of online users may actually play a more important role in creating "filter bubbles". These empirical studies indicate that the magnitude of the "filter bubble" effect in social recommendation systems may not be as significant as the theory's popularity implies. However, a more principled understanding of the strength of the "filter bubble" effect caused by social recommendations is missing from the picture. This paper aims to explore theoretical evidence and principled characterizations of the relationship between relevance and conflict in online social link recommendations. We seek to understand **how social links recommended for maximizing user engagement (_i.e._ relevance) may shape the polarization and disagreement (_i.e._ conflict) landscape of a social network.** We note that the "filter bubble" theory essentially counter-poses two important aspects of social link recommendations: "relevance" and "reduction of conflict". The former has been well-studied as the classic problem of link prediction [11, 12, 13]; the latter involves the social accountability of link recommendations, and has received rapidly increasing attention in recent years [14, 15]. To date, however, there has been very little attempt to study the relationship between these two aspects, in contrast to the well-established paradigms for researching the relationship between relevance and novelty [16, 17], relevance and diversity [18, 19], and relevance and serendipity [20, 21] in recommendations. To theoretically analyze how opinions change in response to the addition of new links, it is necessary for us to choose a base model that specifies the basic rules of (at least approximately) how opinions propagate in social networks. There are several options for this purpose, including the Friedkin-Johnsen (FJ) model [22], the Hegselmann-Krause (HK) model [23], the voter model [24], etc. In this work, we choose the popular FJ model, which will be formally introduced in Sec. 2. There are two reasons for the FJ model to be the most suitable base model for our analytical purpose. The first is its outstanding empirical validity and practicability: according to recent surveys [25], the FJ model is the only opinion dynamics model to date on which a sustained line of human-subject experiments has confirmed the model's predictions of opinion changes [26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. The second reason is the availability of necessary notions and definitions associated with the conflict measure: to the best of our knowledge, the FJ model is also the only opinion dynamics model upon which social tensions like polarization and disagreement have been rigorously defined [14] and widely accepted [36, 37, 38, 15, 39].
Therefore, it is most meaningful to conduct in-depth theoretical analysis based on the FJ model. **Proposed Questions.** Three questions are central to our research of the relationship between relevance and minimization of opinion conflict (polarization and disagreement) in link recommendations: 1. What are the structural features of the links that can reduce conflict most effectively? Are the features aligned with those of relevant links (_i.e._ links most likely to be accepted by users)? 2. What is the empirical degree of alignment between relevance and conflict minimization for various link recommendation algorithms executed on real-world data? 3. What are the limitations of our theoretical analysis? **Main Results.** For **Q1**, we first study the amount of change in opinion conflict (polarization + disagreement) caused by general link additions. We derive closed-form expressions for this, which reveal the perhaps surprising fact that purely adding social links cannot increase opinion conflict. Because link additions essentially improve network connectivity, we further present a theorem that uses connectivity terms to characterize opinion conflict, leading to a conclusion aligned with the surprising fact. To interpret the structural features of links that can reduce opinion conflict most effectively, we conduct a series of explanation work on the closed-form expressions derived for conflict change. We manage to associate the expressions' components with various types of graph distances, which are then summarized into two criteria for finding conflict-minimizing links in social networks: (1) for a single controversial topic, conflict-minimizing links should have both end nodes as close as possible in the network and their expressed opinions on that topic as different as possible; (2) for a random distribution of controversial topics, conflict-minimizing links should have both end nodes in different "clusters" of the network while still remaining fairly well-connected with each other. For **Q2**, we introduce a model-agnostic measure called conflict awareness to empirically evaluate a recommendation model's ability to reduce conflict. We measure conflict awareness for many link recommendation algorithms on real-world social networks. We find that some, but not all, of the more accurate recommendation algorithms reduce conflict more effectively. For **Q3**, we discuss a limitation of analyzing the change of opinion conflict using the FJ model, presented in the form of a paradox. The paradox indicates that reducing conflict on social networks by suggesting friend links could actually make people more stressed and upset about their social engagement. This leaves an interesting topic for future study. ## 2 Preliminaries ### Social Network Model We consider a social network modeled by an undirected graph \(G=(V,E)\) where \(V\) is the set of nodes and \(E\) is the set of links. The adjacency matrix \(A=(a_{ij})_{|V|\times|V|}\) is a symmetric matrix with \(a_{ij}=1\) if link \(e=(i,j)\in E\), and \(a_{ij}=0\) otherwise. In the more general case \(a_{ij}\) can also be a non-negative scalar representing the strength of social interaction between \(i\) and \(j\). The Laplacian matrix \(L\) is defined as \(L=D-A\) where \(D=\text{diag}((\sum_{j=1}^{|V|}a_{ij})_{i=1,\ldots,|V|})\) is the degree matrix.
The incidence matrix \(B\) is a \(|V|\times|E|\) matrix all of whose elements are zero except that for each link \(e=(i,j)\in E\), \(B_{ie}=1\), \(B_{je}=-1\) (\(i\), \(j\) interchangeable as \(G\) is undirected). Given a link \(e=(i,j)\), its edge (indicator) vector \(b_{e}\) is the column vector in \(B\) whose column index is \(e\), \((b_{e})_{i}=1,(b_{e})_{j}=-1\). Note that modeling a social network by an undirected graph does _not_ restrict the social influence between two connected people to be symmetric in both directions. In the Friedkin-Johnsen model introduced below, the amount of influence carried by a link is normalized by the social presence (node degree). Symmetry is thus broken because the two endpoint nodes can have different degrees. ### Friedkin-Johnsen Opinion Model The Friedkin-Johnsen (FJ) model is one of the most popular models for studying opinion dynamics on social networks in recent years. Its basic assumption is that each person \(i\) has two opinions: an initial ("innate" [39]) opinion \(s_{i}\) that remains fixed, and an expressed opinion \(z_{i}\) that evolves by iteratively averaging \(i\)'s initial opinion and its neighbors' expressed opinions at each time step: \[z_{i}^{(0)}=s_{i},\ \ \ \ z_{i}^{(t)}=\frac{s_{i}+\sum_{j\in N_{i}}a_{ij}z_{j}^{(t-1)}}{1+\sum_{j\in N_{i}}a_{ij}} \tag{1}\] where \(N_{i}\) is the set of neighbors of node \(i\); \(a_{ij}\) is the interpersonal interaction strength: in the simplest form, it takes binary (0/1) values indicating whether \(i,j\) are friends with each other; more sophisticated rules for assigning continuous values also exist. It can be proved that the expressed opinions will eventually reach equilibrium, expressed in vector form: \(z^{(\infty)}=(I+L)^{-1}s\), where \(z^{(\infty)},s\in\mathbb{R}^{|V|}\) are the opinion vectors. For simplicity, this paper writes \(z\) in place of \(z^{(\infty)}\) as the primary focus is the equilibrium state of \(z\). While expressed opinions are guaranteed to reach equilibrium, they rarely reach global consensus. Previous studies [14; 36] have extended \(z,s\), and their interplay with \(G\) into a plethora of measures to reflect various types of tension over the social network, among which the most important ones are: * **Disagreement:**\(\mathcal{D}(G,s)=\sum_{(i,j)\in E}a_{ij}(z_{i}-z_{j})^{2}=s^{T}(I+L)^{-1}L(I+L)^{-1}s\); * **Polarization:**\(\mathcal{P}(G,s)=\sum_{i\in V}(z_{i}-\bar{z})^{2}=\tilde{z}^{T}\tilde{z}=\tilde{s}^{T}(I+L)^{-2}\tilde{s}\); * **Conflict:**\(\mathcal{C}(G,s)=\mathcal{D}(G,s)+\mathcal{P}(G,s)=s^{T}(I+L)^{-1}s\); where the mean \(\bar{z}=\frac{\sum_{i=1}^{|V|}z_{i}}{|V|}\), and the zero-centered vector \(\tilde{z}=z-\bar{z}\mathbf{1}\); likewise for \(\bar{s},\tilde{s}\). In fact, all \(s\) on the right-hand-side above can be replaced by \(\tilde{s}\) and the equations still hold [14]. Again, for simplicity we follow the convention of assuming \(s\) to be always zero-centered. The tilde accent is thus omitted. ### Spanning Rooted Forest A _forest_ is an acyclic graph. A _tree_ is a connected forest. A _rooted tree_ is a tree with one marked node as its root. A _rooted forest_ is a forest with one marked node in each of its components. In other words, a rooted forest is a union of disjoint rooted trees. Given a graph \(G=(V,E)\), its _spanning rooted forest_ is a rooted forest with node set \(V\). Later in this paper, we will use various counts of spanning rooted forests to help interpret mathematical quantities arising from our analysis.
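As a minimal illustration of these definitions, the following sketch computes the FJ equilibrium and checks that disagreement plus polarization equals the conflict \(s^{T}(I+L)^{-1}s\). The toy 4-node path graph and the opinion vector are assumptions chosen purely for illustration.

```python
import numpy as np

# Toy social network: a path graph on 4 nodes (illustrative assumption).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A
n = len(A)

s = np.array([1.0, 0.5, -0.5, -1.0])    # initial opinions
s = s - s.mean()                        # zero-centered, per the convention above

z = np.linalg.solve(np.eye(n) + L, s)   # FJ equilibrium z = (I + L)^{-1} s

# Conflict measures from Sec. 2.2 (sum over unordered edges for disagreement)
disagreement = sum(A[i, j] * (z[i] - z[j])**2
                   for i in range(n) for j in range(i + 1, n))
polarization = ((z - z.mean())**2).sum()
conflict = s @ np.linalg.solve(np.eye(n) + L, s)

print(disagreement + polarization, conflict)  # the two numbers should match
```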
## 3 Conflict Change Caused by Link Additions
### Link Additions Help Reduce Conflict
We start by analyzing the effect of link additions on the conflict measure of social networks. The following theorem provides closed-form expressions for both the conflict change and its expected value (over random distributions of initial opinions) caused by the addition of a new link. We assume the link to have unit weight, but the result can be easily generalized to the case of continuous weights. Surprisingly, our theorem shows that both the conflict change and its expected value are non-positive no matter which link is added.
**Theorem 1**.: _Given initial opinions \(s\) and social network \(G=(V,E)\) with Laplacian matrix \(L\), let \(G_{+e}=(V,E\cup\{e\})\) denote the new social network. The change of conflict of expressed opinions caused by adding \(e\) is given by_ \[\Delta_{+e}\mathcal{C}=\mathcal{C}(G_{+e},s)-\mathcal{C}(G,s)=-\frac{(z_{i}-z_{j})^{2}}{1+b_{e}^{T}(I+L)^{-1}b_{e}}\leq 0 \tag{2}\] _The opinion term \(s\) can be marginalized by considering initial opinions as independent samples from a random distribution with finite variance, i.e. \(s_{i}\sim\mathcal{D}(0,\sigma^{2})\) i.i.d. The expected conflict change can be expressed as:_ \[\Delta_{+e}\mathbb{E}_{s}[\mathcal{C}]=\mathbb{E}_{s\sim\mathcal{D}(0,\sigma^{2})}[\mathcal{C}(G_{+e},s)-\mathcal{C}(G,s)]=-\frac{\sigma^{2}|(I+L)^{-1}b_{e}|_{2}^{2}}{1+b_{e}^{T}(I+L)^{-1}b_{e}}\leq 0 \tag{3}\]
Marginalizing \(s\) out of the expected conflict change allows us to focus on the effect of network structure when considering conflict change. It also reflects the fact that people may hold different initial opinions \(s\) on different controversial topics. We also computationally validate the theorem, in particular checking that the conflict change caused by link additions is indeed non-positive. See Appendix C for more details.
### Network Connectivity Helps Contract Conflict
The addition of links always improves the connectivity of a social network. Therefore, to understand the effects of link additions on conflict, it also helps to examine what network connectivity implies about conflict in general. We now show that letting opinions propagate on a better-connected social network helps "contract" more conflict. The setup is as follows. We create a "control group" in which the effect of idea exchange over social networks is eliminated: imagine that the same group of people under study are instead totally disconnected from each other. Their expressed opinions \(z\) would then stay consistent with their initial opinions \(s\), since no pressure is felt from the outside; meanwhile, their disagreement term no longer exists, because that term is defined only for connected people. Therefore, the conflict of this control group, \(\mathcal{C}(G_{0},s)\), is just the polarization of the initial opinions, \(s^{T}s\), where \(G_{0}=(V,\emptyset)\) denotes the network with all links removed. Corresponding to this control group is the "treatment group" where opinions propagate on the network, with conflict \(\mathcal{C}(G,s)=s^{T}(I+L)^{-1}s\).
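Before stating the formal bound, here is a quick numeric illustration (our own sketch, on a randomly generated toy graph) of the control-vs-treatment ratio \(\mathcal{C}(G_{0},s)/\mathcal{C}(G,s)=s^{T}s\,/\,s^{T}(I+L)^{-1}s\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30

# Toy treatment network: a sparse random graph.
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

s = rng.normal(size=n)
s -= s.mean()  # zero-centered initial opinions

control = s @ s                                   # C(G_0, s): no links at all
treatment = s @ np.linalg.inv(np.eye(n) + L) @ s  # C(G, s): opinions propagate
print(control / treatment)  # always >= 1: idea exchange contracts conflict
```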
We now compare the two groups via the following conflict contraction theorem:
**Theorem 2**.: _Given initial opinions \(s\) and social network \(G=(V,E)\) with Laplacian matrix \(L\), we can bound the ratio of conflict between the control group and the treatment group:_ \[1+\max_{(i,j)\in E}(d_{i}+d_{j})\geq\frac{\mathcal{C}(G_{0},s)}{\mathcal{C}(G,s)}\geq 1+\frac{1}{2}d_{\min}h_{G}^{2}\geq 1 \tag{4}\] _where \(d_{i}\), \(d_{j}\) are the degrees of nodes \(i,j\); \(d_{\min}\) is \(G\)'s minimum node degree; \(h_{G}\) is \(G\)'s Cheeger constant._
Theorem 2 shows the range of the influence that a social network can have on public opinion conflict. Both bounds are expressed in relation to network connectivity measures. Remarkably, the lower bound \(\geq 1\) implies that facilitating idea exchange almost always contracts conflict, and the range of the contraction rate depends on the connectivity bottleneck terms \(d_{\min}\) and \(h_{G}\). In general, a larger \(h_{G}\), meaning a better-connected network with less of a bottleneck, leads to a larger contraction rate, a more ideal case in terms of public benefit. See Appendix C for computational validations of the theorem.
## 4 Interpreting Features of Conflict-Minimizing Links
The last section showed that recommending new links to people helps reduce opinion conflict in general. We now proceed to investigate how different links may reduce different amounts of opinion conflict. A natural question in that regard is **(Q1)**: how can we characterize the _conflict minimization_ feature (_i.e._ reducing the most conflict) of social links, especially in terms of its relationship with the _relevance_ feature (_i.e._ the likelihood that users will accept and like the recommended link)?
### Conflict-Minimizing Links
Our characterization of conflict-minimizing links extends from Theorem 1: \(\Delta_{+e}\mathcal{C}=-\frac{(z_{i}-z_{j})^{2}}{1+b_{e}^{T}(I+L)^{-1}b_{e}}\) and \(\Delta_{+e}\mathbb{E}_{s}[\mathcal{C}]=-\frac{\sigma^{2}|(I+L)^{-1}b_{e}|_{2}^{2}}{1+b_{e}^{T}(I+L)^{-1}b_{e}}\). It is straightforward to see that the numerator \((z_{i}-z_{j})^{2}\) is the difference between the expressed opinions of nodes \(i\) and \(j\). The remaining two terms, \(b_{e}^{T}(I+L)^{-1}b_{e}\) and \(\sigma^{2}|(I+L)^{-1}b_{e}|_{2}^{2}\), can be interpreted by the following two theorems.
**Theorem 3**.: _Given a social network \(G\) and a link \(e=(i,j)\) to add, the term \(b_{e}^{T}(I+L)^{-1}b_{e}\) measures a type of graph distance between nodes \(i,j\). The distance can be interpreted by the following quantity:_ \[b_{e}^{T}(I+L)^{-1}b_{e}\equiv\mathcal{N}^{-1}(\mathcal{N}_{ij}+\mathcal{N}_{ji}) \tag{5}\] * \(\mathcal{N}\) _is the total number of spanning rooted forests of_ \(G\)_;_ * \(\mathcal{N}_{xy}\) _is the total number of spanning rooted forests of_ \(G\) _in which node_ \(x\) _is the root of the tree to which_ \(x\) _belongs, and_ \(y\) _belongs to a different tree than that_ \(x\)_-rooted tree._
Together with the interpretation of \((z_{i}-z_{j})^{2}\), Theorem 3 gives the two criteria for finding conflict-minimizing links over fixed initial opinions \(s\): 1. The two end nodes should be as close as possible in the network (so that \(1+b_{e}^{T}(I+L)^{-1}b_{e}\) is small); 2. The expressed opinions at the end nodes should be as different as possible (so that \((z_{i}-z_{j})^{2}\) is large). Criterion 1 is perhaps a bit surprising, as one may think connecting two remote people should introduce more balanced perspectives to both of them.
On the other hand, the two criteria still make much sense when viewed together: there must be something unusual about the network structure when two close friends, with supposedly strong influence on each other, actually hold very different opinions. The suggested conflict-minimizing link can be seen as a correction of this structural unusualness.
**Theorem 4**.: _Given a social network \(G=(V,E)\) and a link \(e=(i,j)\) to add, the term \(\sigma^{2}|(I+L)^{-1}b_{e}|_{2}^{2}\) also measures a type of graph distance between nodes \(i\) and \(j\). The distance can be interpreted by the following quantity:_ \[\sigma^{2}|(I+L)^{-1}b_{e}|_{2}^{2}\equiv\sigma^{2}\mathcal{N}^{-2}\sum_{k\in V}\left(\mathcal{N}_{ik}-\mathcal{N}_{jk}\right)^{2} \tag{6}\] _where \(\mathcal{N}\) and \(\mathcal{N}_{xy}\) follow the definitions in_ **Theorem 3**_._
Similar to Theorem 3, Theorem 4 also explains \(\sigma^{2}|(I+L)^{-1}b_{e}|_{2}^{2}\) as a type of graph distance. However, note that there is a subtle difference between the two types of distance. The subtlety is especially important to distinguish because \(\Delta_{+e}\mathbb{E}_{s}[\mathcal{C}]\) is the ratio between the two terms.
**Corollary 1**.: \(\Delta_{+e}\mathbb{E}_{s}[\mathcal{C}]\) _can be completely expressed by counts of the different types of spanning rooted forests defined in Theorem 3:_ \[\Delta_{+e}\mathbb{E}_{s}[\mathcal{C}]\equiv-\sigma^{2}\mathcal{N}^{-1}(\mathcal{N}+\mathcal{N}_{ij}+\mathcal{N}_{ji})^{-1}\sum_{k\in V}\left(\mathcal{N}_{ik}-\mathcal{N}_{jk}\right)^{2} \tag{7}\]
We use Corollary 1 to give intuitive interpretations of \(\Delta_{+e}\mathbb{E}_{s}[\mathcal{C}]\). First, notice that given a social network \(G\), the number of \(G\)'s spanning rooted forests \(\mathcal{N}\) in the denominator is a constant. Therefore, the competing terms in \(\Delta_{+e}\mathbb{E}_{s}[\mathcal{C}]\) are \(\mathcal{N}_{ij}+\mathcal{N}_{ji}\) and \(\sum_{k\in V}\left(\mathcal{N}_{ik}-\mathcal{N}_{jk}\right)^{2}\). According to Theorem 3, \(\mathcal{N}_{xy}\) essentially measures the distance between \(x\) and \(y\) by counting \(x\)-rooted spanning forests that separate \(x\) and \(y\) into different components. Therefore, \(\mathcal{N}_{ij}+\mathcal{N}_{ji}\) emphasizes "local disconnectedness" between \(i\) and \(j\), while \(\sum_{k\in V}(\mathcal{N}_{ik}-\mathcal{N}_{jk})^{2}\) emphasizes \(i\) and \(j\)'s "global position gap", where "global" means that \(i\) and \(j\)'s position is defined relative to (every node of) the entire network. The following example further illustrates their difference.
**Example 1**.: _Consider a random network generated from the stochastic block model with node partitions \([N_{A},N_{B},N_{C},N_{D}]=[100,100,10,10]\) and the block matrix shown in Figure 1 (left). The diagram in Figure 1 (middle) illustrates the network: A, B are the two main clusters with high link density (0.5); the other two clusters, C, D, are much smaller and have lower link densities (0.1, 0.3) both internally and to A, B._
_Structurally speaking, C, D serve as two "bridges" between A and B. Now consider adding links to three groups of previously disconnected node pairs, defined as:_ * _Group 1: both end nodes in Cluster A, e.g. \((a_{3},a_{4})\)_ * _Group 2: one end node in Cluster A, one end node in Cluster B, both linked to some other node(s) in Cluster C, e.g. \((a_{1},b_{1})\)_ * _Group 3: one end node in Cluster A, one end node in Cluster B, both linked to some other node(s) in Cluster D, e.g. \((a_{2},b_{2})\)_
Our simulation shows that Group 1 introduces the least conflict reduction on average, in fact much less than the other two groups do. Since each pair of nodes in Group 1 comes from the same densely connected cluster, this means that the numerator term \(\sum_{k\in V}(\mathcal{N}_{ik}-\mathcal{N}_{jk})^{2}\) dominates \(\Delta_{+e}\mathbb{E}_{s}[\mathcal{C}]\). Therefore, we may conclude that in general a link reduces more conflict if it involves two nodes that are globally distant. Comparing Groups 2 and 3, we can further see that node pairs in Group 2 have stronger local connectivity than those in Group 3 ("bridge" C has higher link densities than "bridge" D). The fact that Group 2 reduces more conflict on average shows that stronger local connectivity actually contributes positively to conflict minimization. We have now found the following two features of conflict-minimizing links, subject to a random sample of initial opinions \(s\) under very mild conditions: 1. On a global scale, the two end nodes belong to different clusters or have relatively disjoint neighborhoods (so that \(\sigma^{2}|(I+L)^{-1}b_{e}|_{2}^{2}\) is large); 2. On a local scale, the two ends are indirectly but still decently well connected with each other (so that \(1+b_{e}^{T}(I+L)^{-1}b_{e}\) is small).
Figure 1: A barbell-like social network with cluster and bridge structures, generated from the stochastic block model. Different groups of links have different structural features, producing different expected conflict change when added to the network, shown in the right panel. The sample means and their \(95\%\) intervals are reported based on repeated simulations. Note that this example uses a special network for illustrative purposes, but our interpretations of Theorems 3, 4 and Corollary 1 apply to any network.
### Relating Conflict Minimization to Relevance
Relevant links are links that are likely to be accepted by users. Over the past two decades, relevant links have been extensively studied as the core subject of the link prediction problem in many different contexts [11, 12, 13]. We know that the relevance of social links has a strong correlation with small graph distance between the two end nodes [12, 40]. Comparing this existing knowledge with our characterizations of conflict-minimizing links, it is not hard to perceive that relevance and conflict minimization are not always incompatible with each other. Instead, they can have a decent degree of alignment in some cases.
## 5 Measuring the Degree of Alignment Between Relevance and Conflict Minimization on Real-World Data
Sec. 4's analysis shows that the two features of relevance and conflict minimization are _not_ strictly incompatible with each other. This section discusses how we can measure the two features' degree of alignment on real-world data.
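As a bridge from the closed-form theory to the empirical measurements, here is a small helper (a sketch of our own, not the paper's released code) implementing the per-link conflict changes of Theorem 1, which the conflict-awareness measurements below build on:

```python
import numpy as np

def conflict_change(L, s, i, j, sigma2=1.0):
    """Closed-form Delta_{+e}C and Delta_{+e}E_s[C] for adding link e=(i,j)."""
    n = L.shape[0]
    M = np.linalg.inv(np.eye(n) + L)  # (I + L)^{-1}
    b = np.zeros(n)
    b[i], b[j] = 1.0, -1.0            # edge indicator vector b_e
    z = M @ (s - s.mean())            # equilibrium expressed opinions
    denom = 1.0 + b @ M @ b
    d_fixed = -(z[i] - z[j]) ** 2 / denom                # Eq. (2): fixed s
    d_expected = -sigma2 * np.sum((M @ b) ** 2) / denom  # Eq. (3): random s
    return d_fixed, d_expected
```

Both returned quantities are non-positive for any candidate link, matching Theorem 1.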
### Definition of Conflict Awareness
We start by formulating link additions. A link addition function \(f\) is defined as \(f(e;G,\beta):(V\times V)\rightarrow[0,+\infty)\), where the function parameters are a given social network \(G=(V,E)\) and a budget \(\beta\) for adding links. \(f\) is defined on node pairs, subject to the budget constraint \(\sum_{e\in V\times V}f(e;G,\beta)=\beta\). Among all possible link addition functions, there is a conflict-minimizing function \(f^{*}(e;G,\beta)\), the function that reduces the most conflict under budget \(\beta\). We use \(\Delta_{f}\mathcal{C}\) and \(\Delta_{f}\mathbb{E}_{s}[\mathcal{C}]\) to denote the conflict change and expected conflict change caused by applying \(f\) over the network \(G\). The two terms are related by \(\Delta_{f}\mathbb{E}_{s}[\mathcal{C}]\equiv\int_{s}\rho(s)\,\Delta_{f}\mathcal{C}\,ds\), where \(\rho(s)\) is the probability density function of \(s\). We further use \(L_{f}\) to denote the Laplacian of the network formed by only the new links added by \(f\).
**Definition 1**.: _Given a social network \(G\), initial opinions \(s\), and a positive budget \(\beta\), the conflict awareness (CA) of a link addition function \(f(e;G,\beta)\) is defined as the conflict reduced by applying \(f\) to add links, divided by the best possible conflict reduction obtained by applying \(f^{*}\) to add links:_ \[\textbf{CA}(f)=\Delta_{f}\mathcal{C}\,/\,\Delta_{f^{*}}\mathcal{C} \tag{8}\] _where_ \[\Delta_{f}\mathcal{C}=s^{T}(I+L+L_{f})^{-1}s-s^{T}(I+L)^{-1}s \tag{9}\] \[\Delta_{f^{*}}\mathcal{C}=\min_{L_{f}}\ \Delta_{f}\mathcal{C} \tag{10}\] \[\text{subject to}\quad L_{f}\in\mathcal{L}\ (\textit{Laplacian constraint}) \tag{11}\] \[\text{Tr}(L_{f})\leq 2\beta\ (\textit{budget constraint}) \tag{12}\]
\(L+L_{f}\) is the Laplacian matrix of the network after being modified by \(f\), so the definition of \(\Delta_{f}\mathcal{C}\) is straightforward. \(\Delta_{f^{*}}\mathcal{C}\) is defined as the objective value of an optimization problem, which essentially looks for the best network with total link weight \(\beta\) to be superimposed over the original network \(G\). The conflict awareness measure is useful for two reasons. First, CA directly quantifies how well (or how poorly) any link recommendation algorithm does with regard to minimizing opinion conflict. By letting \(f\) be a function that recommends relevant links, CA immediately becomes a quantifier of how much mismatch exists between \(f\)-suggested relevant links and conflict-minimizing links. Second, CA is a better measure than the raw conflict change \(\Delta_{f}\mathcal{C}\): CA is normalized to \([0,1]\), which allows meaningful comparisons across different social networks. We will see that this property becomes especially helpful in practice for characterizing a link recommendation algorithm. We can further show that CA also has a computationally desirable property.
**Proposition 1**.: _In Definition 1, \(\Delta_{f^{*}}\mathcal{C}\) is the objective of a convex optimization problem._
We can generalize Definition 1 to the expectation of conflict awareness over a distribution of initial opinions \(s\) and prove its convexity. See Appendix B.6 for details.
### Measuring Conflict Awareness on Real-World Social Networks
#### 5.2.1 Motivations and Experimental Setup
We use the measure of conflict awareness defined in Sec.
5.1 to empirically investigate two important questions on the relationship between relevance and conflict reduction in link recommendations: * What is the conflict awareness of some of the popular off-the-shelf link recommendation algorithms? High conflict awareness (_e.g._ close to \(1.0\)) means that the algorithm is effective at reducing conflict, and low conflict awareness (_e.g._ less than \(0.2\)) means the opposite. * Does a more accurate link recommendation algorithm have higher conflict awareness? If we observe that for most recommendation algorithms the answer is affirmative, it means that relevance and conflict reduction are positively correlated; if we observe the opposite, it means relevance and conflict reduction are incompatible in practice.
**Datasets.** We use two real-world datasets, Reddit and Twitter, collected by [39], one of the pioneering works that conduct empirical studies on the FJ model. See Appendix C.3 for more details on how the network data and the initial opinions are generated for the two datasets.
**Baselines.** We test three groups totaling 13 different link recommendation methods. * Unsupervised distance-based measures: Personalized PageRank [41], Katz Index [42], Jaccard Index [43], Adamic Adar [44], Common Neighbors [45], Preferential Attachment [45], Resource Allocation [46]. These are among the most classic methods for link recommendation, whose distance-based heuristics underpin many deep-learning-based link recommendation methods. * Self-supervised graph learning: Logistic Regression (with Node2Vec [47] embeddings as input node features), Graph Convolutional Network (GCN) [48], Relational Graph Convolutional Network (R-GCN) [49], Graph Transformer [50], SuperGAT [51]. The last four are all GNN-based methods that achieved previous state-of-the-art results on the link prediction task. * Conflict minimization solver: This solves the convex optimization problem defined by Eq.(10)-(12); it finds optimal weights for links to be added with the sole goal of minimizing conflict. However, it is crucial to note that this solver is totally ignorant of relevance, _i.e._ it cannot tell positive and negative links apart. Therefore, it may actually end up recommending many negative (_i.e._ invalid) links that have no effect on conflict.
**Evaluation pipeline.** For each dataset, we randomly sample \(\beta=100\) positive links from the edge set, and \(\beta\cdot\eta\) negative links from all disconnected node pairs; the negative sampling rate \(\eta\in[1,10]\) is a hyperparameter. The positive links are then removed from the network and reserved for testing, together with the negative links. It is crucial to note that in this setting (and in the real world), only a positive link can be added to the social network. If a negative link gets recommended (_i.e._ assigned positive weight), it cannot be added to the network. The whole process is repeated 10 times with different random seeds. All links in the original network have unit weights, although the recommender is allowed to assign continuous weights to new links. As there are \(\beta\) positive links, we require that the total recommended weights sum to \(\beta\), which is enforced through linear scaling of \(f\)'s output. See Appendix C.5.
**Evaluation metrics.** Besides conflict awareness, we also measure the recall and precision@10 of each link recommendation method as proxies for "relevance".
**Reproducibility:** Our code and data can be downloaded from here.
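For reference, the conflict minimization solver can be prototyped with an off-the-shelf convex solver. The sketch below (our own CVXPY illustration, not the authors' exact implementation; it assumes an SDP-capable backend such as SCS is installed) parameterizes \(L_{f}\) by non-negative weights on a candidate link set, which keeps the problem convex in line with Proposition 1:

```python
import cvxpy as cp
import numpy as np

def solve_conflict_min(L, s, candidates, beta):
    """Optimal weights on candidate links minimizing post-addition conflict."""
    n = L.shape[0]
    w = cp.Variable(len(candidates), nonneg=True)
    # L_f = sum_e w_e * b_e b_e^T is affine in w, so the problem stays convex.
    Lf = 0
    for k, (i, j) in enumerate(candidates):
        b = np.zeros(n)
        b[i], b[j] = 1.0, -1.0
        Lf = Lf + w[k] * np.outer(b, b)
    # s^T (I + L + L_f)^{-1} s is the matrix-fractional function, convex in L_f.
    objective = cp.Minimize(cp.matrix_frac(s, np.eye(n) + L + Lf))
    # Tr(L_f) = 2 * sum(w), so Tr(L_f) <= 2*beta becomes sum(w) <= beta.
    problem = cp.Problem(objective, [cp.sum(w) <= beta])
    problem.solve()
    return w.value
```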
Other configurations of the numerical experiment can be found in Appendix C.4.
#### 5.2.2 Result Analysis
Fig. 2 shows the measurement results of both conflict awareness and recall for the 13 link recommendation algorithms on the two datasets. The precision@10 measurement results are in Appendix C. The results allow us to make the following observations.
Figure 2: Measurement of conflict awareness and recall for 13 link recommendation algorithms on samples of the Reddit and Twitter social networks. The x-axis is \(\eta=\frac{\#\text{negative links}}{\#\text{positive links}}\), which controls class imbalance in the test set.
First, we observe that conflict awareness can vary a lot across algorithms and settings. For example, on Reddit's social network, R-GCN's conflict awareness can be as high as 0.95 with \(\eta=1\), while Jaccard Index's conflict awareness is mostly below 0.2. Second, we examine the effect of \(\eta\). In principle, a larger \(\eta\) means it is harder to identify positive links in the test set. For both networks, we can see that the conflict awareness of most algorithms drops as \(\eta\) goes up, though the four GNN-based methods seem to be slightly less affected. Similar trends can be observed in the recall plots (b) and (d), where the GNN-based algorithms stay quite robust as the task gets increasingly harder. These observations suggest that the ability to suggest "relevant" links can be crucial for maintaining good conflict awareness, especially when the recommendation task is hard. Third, we make an interesting observation about the "Conflict Minimization" algorithm: on both networks, this algorithm consistently produces the worst recall among all algorithms, barely usable from the perspective of relevance. However, although there are not many relevant links in its recommendations, the algorithm still has the best conflict awareness most of the time: among the algorithm's recommended links, the few that are actually relevant (positive) are highly effective at reducing conflict. In that sense, we can still clearly see that there exists a certain amount of misalignment between "relevance" and "conflict reduction".
## 6 Limitation: the Paradox of Conflict and Happiness
We found a limitation in the long line of existing works analyzing opinion conflict based on the FJ model, which our work also inherits. The conflict measure we have discussed so far may _not_ reflect users' happiness in their social engagement. Originally proposed in the seminal work [52], the _happiness_ measure (or equivalently the _unhappiness_ measure) quantifies the amount of mental pressure felt by people in their social engagement, as the sum of two terms: the amount of disagreement with friends (_i.e._ our disagreement term), and the amount of opinion shift between initial opinions and expressed opinions, _i.e._ the internal conflict, defined as \[\mathcal{I}(G,s)=\sum_{i\in V}(z_{i}-s_{i})^{2}=s^{T}((I+L)^{-1}-I)^{2}s\] We use \(\mathcal{U}(G,s)\) to denote the unhappiness of people over a social network \(G\) and initial opinions \(s\). Fig. 3 illustrates the relationship of the important concepts in the FJ model discussed so far: the common term, disagreement, is shared by both conflict and unhappiness. Therefore, it seems likely that when one measure changes (say conflict drops due to the addition of a new link), the other measure should change in the same direction.
Surprisingly, however, this is not true. In fact, the following theorem shows that the two measures **always** change in opposite directions when the network structure changes!
**Theorem 5**.: _Given initial opinions \(s\) and social network \(G=(V,E)\), we have the following conservation law for conflict and unhappiness (notice that the RHS is independent of the graph structure):_ \[\mathcal{C}(G,s)+\mathcal{U}(G,s)=s^{T}s \tag{13}\]
Theorem 5 reveals a paradox: reducing conflict by modifying the structure of a social network always comes at the expense of more unhappiness. Resolving this paradox goes beyond this paper's scope, but it would be a very interesting topic to study in the future.
## 7 Conclusion
In this work, we analyzed the relationship between relevance and opinion conflict in online social link recommendations. We presented multiple pieces of evidence challenging the view that the two objectives are totally incompatible in link recommendations. For future work, it would be extremely interesting to study recommendation algorithms that combine the two features. We also conjecture that rigorous bounds can be derived for the conflict awareness of some classical link recommendation methods such as Personalized PageRank and Katz Index.
Figure 3: Relationship of the important concepts in the Friedkin-Johnsen opinion model.
## Acknowledgement
The authors thank Eva Tardos and Sigal Oren for their helpful feedback on this work. This work is supported in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, MURI grant W911NF-19-0217, AFOSR grant FA9550-19-1-0183, ARO grant W911NF19-1-0057, a Simons Collaboration grant, and a grant from the MacArthur Foundation.
2306.06394
PEAR: Primitive enabled Adaptive Relabeling for boosting Hierarchical Reinforcement Learning
Hierarchical reinforcement learning (HRL) has the potential to solve complex long horizon tasks using temporal abstraction and increased exploration. However, hierarchical agents are difficult to train due to inherent non-stationarity. We present primitive enabled adaptive relabeling (PEAR), a two-phase approach where we first perform adaptive relabeling on a few expert demonstrations to generate efficient subgoal supervision, and then jointly optimize HRL agents by employing reinforcement learning (RL) and imitation learning (IL). We perform theoretical analysis to $(i)$ bound the sub-optimality of our approach, and $(ii)$ derive a generalized plug-and-play framework for joint optimization using RL and IL. Since PEAR utilizes only a handful of expert demonstrations and considers minimal limiting assumptions on the task structure, it can be easily integrated with typical off-policy RL algorithms to produce a practical HRL approach. We perform extensive experiments on challenging environments and show that PEAR is able to outperform various hierarchical and non-hierarchical baselines on complex tasks that require long term decision making. We also perform ablations to thoroughly analyse the importance of our various design choices. Finally, we perform real world robotic experiments on complex tasks and demonstrate that PEAR consistently outperforms the baselines.
Utsav Singh, Vinay P. Namboodiri
2023-06-10T09:41:30Z
http://arxiv.org/abs/2306.06394v5
# PEAR: Primitive enabled Adaptive Relabeling for boosting Hierarchical Reinforcement Learning
###### Abstract
Hierarchical reinforcement learning (HRL) has the potential to solve complex long horizon tasks using temporal abstraction and increased exploration. However, hierarchical agents are difficult to train as they suffer from inherent non-stationarity due to the continuously changing low-level primitive. We present primitive enabled adaptive relabeling (PEAR), a two-phase approach where we first perform adaptive relabeling on a few expert demonstrations to generate a subgoal supervision dataset, and then employ imitation learning to regularize the HRL agents. We theoretically bound the sub-optimality of our method and devise a practical HRL algorithm for solving complex robotic tasks. We perform experiments on challenging robotic tasks: maze navigation, pick and place, rope manipulation and kitchen environments, and demonstrate that the proposed approach is able to solve complex tasks that require long term decision making. Since our method uses a handful of expert demonstrations and makes minimal limiting assumptions on task structure, it can be easily integrated with typical model-free reinforcement learning algorithms to solve most robotic tasks. We empirically show that our approach outperforms previous hierarchical and non-hierarchical baselines and exhibits better sample efficiency. We also perform real world robotic experiments by deploying the learned policy on a real robotic rope manipulation task and demonstrate that PEAR consistently outperforms the baselines. Here is the link for the supplementary video: [https://tinyurl.com/pearOverview](https://tinyurl.com/pearOverview)
## 1 Introduction
In recent years, reinforcement learning has been successfully applied to a number of short-horizon robotic manipulation tasks [1; 2; 3; 4]. However, long horizon tasks are difficult to train [5] due to inherent issues like credit assignment, and they require long-term planning. These tasks require large amounts of environment interaction for learning, especially in sparse reward scenarios [6]. Hierarchical reinforcement learning (HRL) [7; 8; 9; 10; 11] holds the promise of solving complex tasks using increased exploration and temporal abstraction [12]. In the goal-conditioned feudal architecture [8; 9], the higher level policy predicts subgoals for the lower primitive, which in turn tries to achieve the subgoals by performing atomic actions directly on the environment. Unfortunately, HRL approaches suffer from non-stationarity [13; 14] when multiple hierarchical levels are trained simultaneously. Due to continuously changing lower primitive behavior, the previously collected off-policy transitions are rendered obsolete, leading to unstable higher level state transition and reward functions. A particular class of hierarchical approaches [15; 16; 17] segments expert demonstrations into a subgoal transition dataset, and consequently leverages the subgoal dataset to bootstrap learning. Ideally, the segmentation process should produce subgoals at an appropriate level of difficulty for the lower primitive, in order to properly balance the task split between hierarchical levels. One possible approach to task segmentation is to perform fixed window based relabeling [15] on expert demonstrations.
Despite being simple, this is effectively a brute force segmentation approach, and thus may generate subgoals that are too easy or too hard with respect to the current ability of the continuously changing lower primitive, possibly leading to degenerate solutions. This leads to the question: can we do better than fixed relabeling and devise an efficient task segmentation approach? As the Greek philosopher Heraclitus famously said: _there is nothing permanent except change_. Hence, our idea is to account for the changing lower primitive and dynamically generate efficient subgoals in consonance with the current goal reaching capability of the lower primitive. In our approach, the action value function of the lower level policy is used to perform _adaptive relabeling_ on a handful of expert demonstrations to dynamically generate a curriculum of reachable subgoals for the lower primitive. This subgoal dataset is then used to train an imitation learning based regularizer. Our approach thus combines HRL with primitive enabled imitation learning regularization to devise an elegant HRL algorithm that ameliorates non-stationarity. We call our hierarchical approach _primitive enabled adaptive relabeling (PEAR)_ for boosting hierarchical reinforcement learning. We also derive sub-optimality bounds in section 3.3 to theoretically justify the benefits of adaptive relabeling in our hierarchical framework. We perform extensive experimentation on complex robotic tasks: maze navigation, pick and place, rope manipulation and kitchen environments, and empirically show that our adaptive relabeling based approach clearly outperforms other hierarchical and non-hierarchical baselines on all tasks. The methodology details are provided in section 3, and the experimentation details and results are provided in section 5. In summary, we propose a theoretically justified, practical hierarchical reinforcement learning algorithm which can be easily integrated with off-policy reinforcement learning for solving complex long horizon tasks.
## 2 Background
**Off-policy Reinforcement Learning** We define our goal-conditioned off-policy reinforcement learning setup as follows: _Universal Markov Decision Processes_ (UMDPs) [18] are Markov Decision Processes augmented with a goal space \(G\), where \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma,\mathcal{G})\). Here, \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\mathcal{P}(s^{{}^{\prime}}|s,a)\) is the state transition probability function, \(\mathcal{R}\) is the reward function, and \(\gamma\) is the discount factor. \(\pi(a|s,g)\) represents the goal-conditioned policy, which predicts the probability of taking action \(a\) when the state is \(s\) and the goal is \(g\). We use the off-policy Soft Actor Critic (SAC) [19] algorithm in our off-policy reinforcement learning setup. The overall objective is to maximize the expected discounted future reward \(J=(1-\gamma)^{-1}\mathbb{E}_{s\sim d^{\pi},a\sim\pi(a|s,g),g\sim G}\left[r(s_{t},a_{t},g)\right]\).
**Hierarchical Reinforcement Learning** In our goal-conditioned hierarchical reinforcement learning setup, the overall policy \(\pi\) is divided into multi-level policies. We consider a two level hierarchical setup, where the higher level policy \(\pi^{H}(s_{g}|s,g)\) predicts subgoals [8] \(s_{g}\) for the lower level primitive, and the lower primitive \(\pi^{L}(a|s,s_{g})\) tries to achieve those subgoals by executing primitive actions \(a\) on the environment.
\(\pi^{H}\) generates subgoals \(s_{g}\) after every \(c\) timesteps, and \(\pi^{L}\) tries to achieve \(s_{g}\) within \(c\) timesteps. \(\pi^{H}\) gets a sparse extrinsic reward \(r_{ex}\) from the environment, whereas \(\pi^{L}\) gets a sparse intrinsic reward \(r_{in}\) from \(\pi^{H}\). \(\pi^{L}\) is rewarded with \(0\) if the agent reaches within \(\delta^{L}\) distance of the predicted subgoal \(s_{g}\), and \(-1\) otherwise: \(r_{in}=-\mathbb{1}(\|s_{t}-s_{g}\|_{2}>\delta^{L})\). Similarly, \(\pi^{H}\) gets extrinsic reward \(0\) if the achieved goal is within \(\delta^{H}\) distance of the final goal \(g\), and \(-1\) otherwise: \(r_{ex}=-\mathbb{1}(\|s_{t}-g\|_{2}>\delta^{H})\). We assume access to a small number of directed expert demonstration states (not actions) \(D=\{e^{i}\}_{i=1}^{N}\), where \(e^{i}=(s_{0}^{e},s_{1}^{e},\dots,s_{T-1}^{e})\).
## 3 Methodology
We explain our proposed primitive enabled adaptive relabeling (PEAR) approach, which leverages a handful of expert demonstrations \(D\) to solve long horizon tasks using reinforcement learning and imitation learning. We propose a two step approach: \((i)\) the current lower primitive \(\pi^{L}\) is used to adaptively relabel expert demonstrations to generate efficient subgoal supervision \(D_{g}\), and \((ii)\) the typical reinforcement learning objective is jointly optimized with an additional imitation learning based regularization objective using \(D_{g}\). In this section, we first explain primitive enabled adaptive relabeling. We then explain our joint optimization approach, and finally derive theoretical bounds for our proposed approach with respect to the optimal hierarchical policy.
### Primitive enabled adaptive relabeling
In a typical goal-conditioned RL setting, the action value function \(Q_{\pi^{L}}(s,g,a)\) describes the expected cumulative reward when the starting state is \(s\), the goal is \(g\), the lower primitive takes action \(a\), and then follows policy \(\pi^{L}\) for the rest of the episode. PEAR uses \(Q_{\pi^{L}}(s,g,a)\) to parse the expert demonstration trajectories \(D\) and generate an efficient subgoal transition dataset \(D_{g}\). Intuitively, \(Q_{\pi^{L}}(s,g,a)\) factors in the current goal reaching capability of the lower primitive when selecting good subgoal transitions for \(D_{g}\). When \(s_{i}^{e}\) is provided as subgoal, \(Q_{\pi^{L}}(s,s_{i}^{e},a_{i})\) computes the expected cumulative reward when the start state is \(s\) and the next primitive action is \(a_{i}\). Intuitively, a high value of \(Q_{\pi^{L}}(s,s_{i}^{e},a_{i})\) implies a high probability that \(s_{i}^{e}\) is a good subgoal for the current lower primitive, since the primitive expects to achieve a high intrinsic reward for this subgoal state from the higher policy. Conversely, a low value of \(Q_{\pi^{L}}(s,s_{i}^{e},a_{i})\) implies that the lower primitive considers \(s_{i}^{e}\) hard, since it expects to achieve a low intrinsic reward for \(s_{i}^{e}\). As explained below, we compute efficient subgoal transitions using PEAR and add them to \(D_{g}\). We depict a single pass of adaptive relabeling in Figure 1.
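For concreteness, a minimal Python sketch of one such pass follows (our own paraphrase of Algorithm 1, shown next; `q_fn` and `policy` stand in for \(Q_{\pi^{L}}\) and \(\pi^{L}\)):

```python
def adaptive_relabel(demo, q_fn, policy, q_thresh):
    """One adaptive-relabeling pass over a demo trajectory (list of states).

    q_fn(state, goal, action) -> scalar lower-primitive Q-value.
    policy(state, goal) -> primitive action.
    Returns subgoal transitions (state, subgoal, next_state).
    """
    transitions, init = [], 0
    for i in range(1, len(demo)):
        a_i = policy(demo[i - 1], demo[i])
        if q_fn(demo[init], demo[i], a_i) < q_thresh:
            w = i - 1  # index of the last reachable subgoal
            for j in range(init, w + 1):
                for k in range(init + 1, w + 1):
                    # (current state, subgoal supervision, next state)
                    transitions.append((demo[j], demo[w], demo[k]))
            init = w  # continue with w as the next start state
    return transitions
```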
```
1:Initialize \(D_{g}=\{\}\)
2:for each \(e=(s_{0}^{e},s_{1}^{e},\ldots,s_{T-1}^{e})\) in \(\mathcal{D}\) do
3:  Initial state index \(init\gets 0\)
4:  Subgoal transitions \(D_{g}^{e}=\{\}\)
5:  for \(i=1\) to \(T-1\) do
6:    compute \(Q_{\pi^{L}}(s_{init}^{e},s_{i}^{e},a_{i})\)
7:      where \(a_{i}=\pi^{L}(s_{i-1}^{e},s_{i}^{e})\)
8:    if \(Q_{\pi^{L}}(s_{init}^{e},s_{i}^{e},a_{i})<Q_{thresh}\) then
9:      \(w\leftarrow(i-1)\)
10:     for \(j=init\) to \(w\) do
11:       for \(k=(init+1)\) to \(w\) do
12:         Add \((s_{j},s_{w},s_{k})\) to \(D_{g}^{e}\)
13:     initial state index \(init\gets w\)
14:  \(D_{g}\gets D_{g}\cup D_{g}^{e}\)
```
**Algorithm 1** Adaptive Relabeling
Consider the expert demonstration dataset \(D=\{e^{j}\}_{j=1}^{N}\), where each trajectory \(e^{j}=(s_{0}^{e},s_{1}^{e},\ldots,s_{T-1}^{e})\). We start with an expert state demonstration trajectory \(e^{j}=(s_{0}^{e},s_{1}^{e},\ldots,s_{T-1}^{e})\) and let the initial state be \(s_{0}^{e}\). We incrementally provide states \(s_{i}^{e}\) for \(i=1\) to \(T-1\) as subgoals to the lower primitive action value function \(Q_{\pi^{L}}(s_{0}^{e},g=s_{i}^{e},a_{i})\), where \(a_{i}=\pi^{L}(s_{i-1}^{e},s_{i}^{e})\). At every step, we compare \(Q_{\pi^{L}}(s_{0}^{e},s_{i}^{e},a_{i})\) to the \(Q_{thresh}\) hyperparameter. If \(Q_{\pi^{L}}(s_{0}^{e},s_{i}^{e},a_{i})\geq Q_{thresh}\), we move on to the next expert demonstration state \(s_{i+1}^{e}\). Otherwise, if \(Q_{\pi^{L}}(s_{0}^{e},g=s_{i}^{e},a_{i})<Q_{thresh}\), we take the previous state \(s_{w}^{e}\) with \(w=i-1\) (the last reachable subgoal) to compute subgoal transitions and populate \(D_{g}\). This is explained in Algorithm 1. It is important to note that as the lower primitive is trained, it keeps getting better at achieving harder subgoals, and so the previously collected \(D_{g}\) needs to be updated according to its current goal reaching capability. Accordingly, we clear and re-populate \(D_{g}\) after every \(p\) timesteps (where the hyperparameter \(p\) depends on the environment). This periodic re-population generates a natural curriculum for the lower primitive, which enables \(Q_{\pi^{L}}\) to always pick reachable subgoals. Thus, using adaptive relabeling, PEAR generates a curriculum of subgoals selected according to the current goal reaching ability of the lower primitive. The pseudocode for PEAR is given in Algorithm 2. Figure 2 shows the evolution of subgoals during training in some of our experiments.
```
1:Initialize \(D_{g}=\{\}\)
2:for \(i=1\ldots N\) do
3:  if \(i\%p==0\) then
4:    Clear \(D_{g}\)
5:    Populate \(D_{g}\) via adaptive relabeling
6:  Collect experience using \(\pi^{H}\) and \(\pi^{L}\)
7:  Update lower primitive via SAC and IL regularizer using \(D\) (Eq 4 or Eq 6)
8:  Sample transitions from \(D_{g}\)
9:  Update higher policy via SAC and IL regularizer using \(D_{g}\) (Eq 3 or Eq 5)
```
**Algorithm 2** PEAR
### Joint optimization
In this section, we explain how we make use of the subgoal transition dataset \(D_{g}\) to learn an imitation learning (IL) regularizer and perform joint optimization of the hierarchical policies. We consider both behavior cloning (BC) and inverse reinforcement learning (IRL) regularization. We theoretically explain the rationale behind our choice of regularizers later in Section 3.3. Henceforth, PEAR-IRL will denote PEAR with the IRL regularizer and PEAR-BC will denote PEAR with the BC regularizer. We first consider the behavior cloning regularizer.
Let \((s^{e},s^{e}_{g},s^{e}_{next})\sim D_{g}\) be a subgoal transition from the expert trajectory, where \(s^{e}\) is the current state, \(s^{e}_{next}\) is the next state, \(g^{e}\) is the final goal and \(s^{e}_{g}\) is the subgoal supervision. Let \(s_{g}\) be the subgoal predicted by the high level policy \(\pi^{H}_{\theta}(\cdot|s^{e},g^{e})\) and let the BC parameters be \(\zeta\). The behavior cloning regularization objective is as follows: \[\min_{\zeta}\mathbb{E}_{(s^{e},s^{e}_{g},s^{e}_{next})\sim D_{g}}[s^{e}_{g}-s_{g}]^{2} \tag{1}\] We now consider the IRL objective, which is a GAIL [20] like objective implemented using LSGAN [21]. Let \(\mathbb{D}^{H}_{\epsilon}\) be the higher level discriminator with parameters \(\epsilon\). The IRL regularization objective is as follows: \[\max_{\pi^{H}_{\theta}}\min_{\epsilon}\frac{1}{2}\mathbb{E}_{(s^{e},\cdot,\cdot)\sim D_{g},s_{g}\sim\pi^{H}_{\theta}(\cdot|s^{e},g^{e})}[\mathbb{D}^{H}_{\epsilon}(\pi^{H}_{\theta}(\cdot|s^{e},g^{e}))-0]^{2}+\frac{1}{2}\mathbb{E}_{(s^{e},s^{e}_{g},\cdot)\sim D_{g}}[\mathbb{D}^{H}_{\epsilon}(s^{e}_{g})-1]^{2} \tag{2}\] Let \(J^{H}_{BC}\) and \(J^{L}_{BC}\) represent the upper and lower BC objectives, which depend on parameters \(\zeta_{H}\) and \(\zeta_{L}\) respectively. Let \(J^{H}_{D}\) and \(J^{L}_{D}\) represent the upper and lower IRL objectives, which depend on parameters \((\theta_{H},\epsilon_{H})\) and \((\theta_{L},\epsilon_{L})\) respectively. The higher level policy learns to predict efficient subgoals that maximize the sum of discounted future rewards for our task using off-policy reinforcement learning. Let this objective function be represented as \(J^{H}_{\theta_{H}}\) for the upper policy and \(J^{L}_{\theta_{L}}\) for the lower policy. Additionally, the BC and IRL regularization objectives create a natural curriculum for regularizing the higher level policy to predict subgoals that are close to the distribution of the subgoal dataset \(D_{g}\). Using BC, the higher and lower level policies are trained by optimizing Equations 3 and 4. \[\min_{\zeta_{H}}\max_{\theta_{H}}(J^{H}_{\theta_{H}}+\psi*J^{H}_{BC}(\zeta_{H})) \tag{3}\] \[\min_{\zeta_{L}}\max_{\theta_{L}}(J^{L}_{\theta_{L}}+\psi*J^{L}_{BC}(\zeta_{L})) \tag{4}\] Using IRL, the higher and lower level policies are trained by optimizing Equations 5 and 6. \[\min_{\epsilon_{H}}\max_{\theta_{H}}(J^{H}_{\theta_{H}}+\psi*J^{H}_{D}(\theta_{H},\epsilon_{H})) \tag{5}\] \[\min_{\epsilon_{L}}\max_{\theta_{L}}(J^{L}_{\theta_{L}}+\psi*J^{L}_{D}(\theta_{L},\epsilon_{L})) \tag{6}\] The lower policy is trained using the primitive expert demonstration dataset \(D\), whereas the upper level is trained using the subgoal transition dataset \(D_{g}\). \(\psi\) is the regularization weight hyper-parameter. We perform ablation analysis for choosing \(\psi\) in our experiments in Appendix Section 8.4.
Figure 1: **Adaptive Relabeling Overview**: We segment expert demonstrations by consecutively passing demonstration states as subgoals (for \(i=1\) to 7), and finding the state \(i=4\) where \(Q_{\pi^{L}}(s,s_{i},a_{i})<Q_{thresh}\). Since \(w=3\) was the last reachable subgoal, we use \(w\) to populate \(D_{g}\), and continue with \(w\) as the next start state.
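To make the joint objective concrete, here is a schematic PyTorch sketch (our own, with made-up network sizes and a placeholder for the SAC term \(J^{H}_{\theta_{H}}\)) of a single BC-regularized higher-level update corresponding to Eq. 3:

```python
import torch
import torch.nn as nn

state_dim, goal_dim, subgoal_dim, psi = 8, 3, 3, 0.005

# Higher-level policy: predicts a subgoal from (state, final goal).
pi_high = nn.Sequential(nn.Linear(state_dim + goal_dim, 64), nn.ReLU(),
                        nn.Linear(64, subgoal_dim))
opt = torch.optim.Adam(pi_high.parameters(), lr=3e-4)

# One batch of subgoal supervision sampled from D_g (random stand-ins here).
s, g = torch.randn(32, state_dim), torch.randn(32, goal_dim)
sg_expert = torch.randn(32, subgoal_dim)

sg_pred = pi_high(torch.cat([s, g], dim=-1))
bc_loss = ((sg_pred - sg_expert) ** 2).mean()  # Eq. 1 (behavior cloning)
rl_loss = torch.tensor(0.0)  # placeholder: the actual SAC objective goes here
(rl_loss + psi * bc_loss).backward()
opt.step()
opt.zero_grad()
```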
Figure 2: **Subgoal evolution**: In the maze navigation (Row 1), pick and place (Row 2), and rope manipulation (Row 3) tasks, as the lower primitive improves with training, the higher level subgoal predictions (blue spheres) also improve. This generates a curriculum of reachable subgoals for the lower primitive (red spheres represent the final goal).
### Suboptimality analysis
In this section, we analyze the suboptimality of our method and examine how the imitation learning objective affects performance. Let \(\pi^{*}\) and \(\pi^{**}\) be the unknown higher level and lower level optimal policies respectively. Let \(\pi^{H}_{\theta_{H}}\) be our high level policy and \(\pi^{L}_{\theta_{L}}\) be our lower primitive policy, where \(\theta_{H}\) and \(\theta_{L}\) are the trainable parameters of the higher and lower level policies respectively. \(D_{TV}(\pi_{1},\pi_{2})\) denotes the total variation divergence between probability distributions \(\pi_{1}\) and \(\pi_{2}\). \(s\) is the current state, \(g\) is the final episodic goal, \(s_{g}\) is the subgoal provided by the upper level policy, and \(\tau\) denotes \(c\)-length sub-trajectories. Let \(\Pi^{H}_{D}\) and \(\Pi^{L}_{D}\) be the upper and lower level probability distributions which generate datasets \(D_{H}\) and \(D_{L}\) respectively. \(\kappa\) is a distribution over states and actions, and \(G\) is the goal space. Firstly, we extend the definition from [22] to goal-conditioned policies:
**Definition 1**.: \(\pi^{*}\) is \(\phi_{D}\)-common in \(\Pi^{H}_{D}\), if \(\mathbb{E}_{s\sim\kappa,\pi^{H}_{D}\sim\Pi^{H}_{D},g\sim G}[D_{TV}(\pi^{*}(\tau|s,g)||\pi^{H}_{D}(\tau|s,g))]\leq\phi_{D}\)
We define the suboptimality of policy \(\pi\) with respect to the optimal policy \(\pi^{*}\) as: \[Subopt(\theta)=|J(\pi^{*})-J(\pi)| \tag{7}\]
**Theorem 1**.: Assuming the optimal policy \(\pi^{*}\) is \(\phi_{D}\)-common in \(\Pi^{H}_{D}\), the suboptimality of the upper policy \(\pi^{H}_{\theta_{H}}\), over \(c\)-length sub-trajectories \(\tau\) sampled from \(d^{\pi^{*}}_{c}\), can be bounded as: \[|J(\pi^{*})-J(\pi^{H}_{\theta_{H}})|\leq\lambda_{H}*\phi_{D}+\lambda_{H}*\mathbb{E}_{s\sim\kappa,\pi^{H}_{D}\sim\Pi^{H}_{D},g\sim G}[D_{TV}(\pi^{H}_{D}(\tau|s,g)||\pi^{H}_{\theta_{H}}(\tau|s,g))] \tag{8}\] where \(\lambda_{H}=\frac{2}{(1-\gamma)(1-\gamma^{c})}R_{max}\|\frac{d^{\pi^{*}}_{c}}{\kappa}\|_{\infty}\). Furthermore, the suboptimality of the lower primitive \(\pi^{L}_{\theta_{L}}\) can be bounded as: \[|J(\pi^{**})-J(\pi^{L}_{\theta_{L}})|\leq\lambda_{L}*\phi_{D}+\lambda_{L}*\mathbb{E}_{s\sim\kappa,\pi^{L}_{D}\sim\Pi^{L}_{D},s_{g}\sim\pi^{L}_{\theta_{L}}}[D_{TV}(\pi^{L}_{D}(\tau|s,s_{g})||\pi^{L}_{\theta_{L}}(\tau|s,s_{g}))] \tag{9}\] where \(\lambda_{L}=\frac{2}{(1-\gamma)^{2}}R_{max}\|\frac{d^{\pi^{**}}_{c}}{\kappa}\|_{\infty}\). The proofs for Equations 8 and 9 are provided in Appendix Section 8.1.
Equation 8 can be rearranged to yield the following form: \[J(\pi^{*})\geq J(\pi^{H}_{\theta_{H}})-\lambda_{H}*\phi_{D}-\lambda_{H}*\mathbb{E}_{s\sim\kappa,\pi^{H}_{D}\sim\Pi^{H}_{D},g\sim G}[d(\pi^{H}_{D}(\tau|s,g)||\pi^{H}_{\theta_{H}}(\tau|s,g))] \tag{10}\] where (considering \(\pi^{H}_{D}(\tau|s,g)\) as \(\pi_{A}\) and \(\pi^{H}_{\theta_{H}}(\tau|s,g)\) as \(\pi_{B}\)), \[d(\pi_{A}||\pi_{B})=D_{TV}(\pi_{A}||\pi_{B}) \tag{11}\] This can be perceived as a minorize-maximize algorithm, which intuitively means that the overall objective can be optimized by \((i)\) maximizing the objective \(J(\pi^{H}_{\theta_{H}})\) via RL, and \((ii)\) minimizing the TV divergence between \(\pi^{H}_{D}\) and \(\pi^{H}_{\theta_{H}}\). We use entropy regularized Soft Actor Critic [23] to maximize \(J(\pi^{H}_{\theta_{H}})\). In Equation 8, the suboptimality bound depends on \(\phi_{D_{g}}\), which represents how good the subgoal dataset \(D_{g}\) populated by PEAR is. A lower value of \(\phi_{D_{g}}\) implies that the optimal policy \(\pi^{*}\) is closely represented by the dataset \(D_{g}\). Since we use the lower primitive to parse expert demonstrations, as the lower primitive gets better, \(\pi_{D_{g}}\) gets closer to \(\pi^{*}\). Hence \(D_{g}\) improves and the value of the parameter \(\phi_{D}\) decreases, which implies that the suboptimality bound in Equation 8 gets tighter. The dataset \(D_{g}\) is cleared every \(p\) timesteps using adaptive relabeling, as explained in Algorithm 1. This periodic re-population generates a natural curriculum of reachable subgoals for the lower primitive. Notably, different parametrizations of \(d\) yield different imitation learning regularizers. When \(d\) is formulated as the Kullback-Leibler divergence, the imitation learning regularizer takes the form of a behavior cloning (BC) objective [24], and when \(d\) is formulated as the Jensen-Shannon divergence, the imitation learning objective takes the form of an inverse reinforcement learning (IRL) objective. Thus, the theoretical analysis provides a nice rationale for how different choices of imitation learning based regularizers lead to different approaches.
## 4 Related Work
The hierarchical reinforcement learning (HRL) framework [25; 7; 26; 27] promises the advantages of temporal abstraction and increased exploration [12]. The options architecture [7; 11; 28; 29; 30; 10] learns temporally extended macro actions and termination functions to propose an elegant hierarchical framework. However, such approaches may produce degenerate solutions in the absence of proper regularization. Some typical approaches restrict the problem search space by greedily solving for specific goals [31; 32], which has also been extended to hierarchical RL [33; 34; 35]. In goal-conditioned hierarchical feudal learning [8; 9], the higher level agent produces subgoals for the lower primitive, which in turn executes atomic actions on the environment. However, off-policy feudal HRL approaches are cursed by the non-stationarity issue. Some prior approaches [13; 14] deal with the non-stationarity by relabeling previously collected transitions for training goal-conditioned policies. In contrast, our proposed approach deals with non-stationarity by using an imitation learning based regularizer. We empirically show in section 5 that our regularization based approach outperforms relabeling based hierarchical approaches on a number of complex long horizon tasks. Prior methods [1; 24; 36] leverage expert demonstrations to improve sample efficiency and accelerate learning.
A typical line of work uses imitation learning to bootstrap learning in complex tasks [37; 38; 17; 39]. Previous approaches use fixed relabeling [15] for performing task segmentation. However, such approaches might result in an unbalanced task split between hierarchical levels. In contrast, our approach sidesteps this limitation by segmenting expert demonstration trajectories into _meaningful_ subtasks. Intuitively, this enables a balanced task split, thereby avoiding degenerate solutions.
## 5 Experiments
The experiments aim to answer the following questions: \((i)\) Does leveraging expert demonstrations enable PEAR to boost learning? \((ii)\) Does adaptive relabeling outperform fixed relabeling? \((iii)\) Is PEAR able to solve long horizon tasks? We perform experiments on four robotic Mujoco [40] environments: \((i)\) maze navigation, \((ii)\) pick and place, \((iii)\) rope manipulation, and \((iv)\) franka kitchen. For qualitative results, please refer to the video at [https://tinyurl.com/pearOverview](https://tinyurl.com/pearOverview)
**Environment details** In maze navigation, a \(7\)-DoF robotic arm navigates across randomly generated four room mazes. The closed gripper (fixed at table height) has to navigate across the maze to the goal position (shown as a red sphere in Fig 2). In the pick and place environment, the \(7\)-DoF robotic arm gripper has to navigate to the square block, pick it up and bring it to the goal position. In the rope manipulation task, a deformable soft rope is kept on the table and the \(7\)-DoF robotic arm performs pokes to nudge the rope towards the desired goal rope configuration. In the kitchen environment, the \(9\)-DoF franka robot has to perform a complex multi-stage task in order to achieve the final goal. Although many such permutations can be chosen, we formulate the following task: the robot has to first open the microwave door, then switch on the specific gas knob where the kettle is placed. The maximum task horizon \(T\) is kept at \(225\), \(50\), \(25\) and \(280\) timesteps in the maze, pick and place, rope and kitchen environments respectively. The lower primitive is allowed to execute for \(c\) timesteps, i.e. \(15\), \(7\), \(5\), and \(17\) for maze, pick and place, rope and kitchen respectively, after which the upper level policy generates the next subgoal. We use \(100\) expert demonstrations each for maze navigation, pick and place, and rope manipulation, and \(28\) expert demonstrations for the kitchen task. The experiments are run for \(4.73e5\), \(1.1e5\), \(1.58e6\), and \(5.32e5\) timesteps in maze, pick and place, rope and kitchen respectively. The expert demonstration collection procedure and other extensive environment details are provided in Appendix Sections 8.2 and 8.3 respectively.
**Implementation details** The actor, critic and discriminator networks are formulated as \(3\)-layer fully connected neural networks with \(512\) neurons in each layer. In our experiments, we use off-policy Soft Actor Critic [19] for optimizing the RL objective, using the Adam [41] optimizer. The regularization weight hyper-parameter \(\psi\) is set at \(0.001\), \(0.005\), \(0.005\), and \(0.005\) for maze, pick and place, rope and kitchen respectively. The hyper-parameter \(p\) is set to \(1.1e4\), \(2500\), \(3.9e5\), and \(1.4e4\) for maze, pick and place, rope and kitchen respectively.
When applying the threshold, we normalize the \(Q_{\pi^{L}}\) values of a trajectory before comparing them with the distance threshold hyperparameter \(D_{thresh}\): \(((Q_{\pi^{L}}(s_{0}^{e},s_{i}^{e},a_{i})-min\_value)/max\_value)*100\) for \(i=1\) to \(T-1\). The distance threshold hyperparameter \(D_{thresh}\) is set at \(10\), \(0\), \(0\), and \(0\) for maze, pick and place, rope and kitchen respectively. The hyper-parameter values are set after extensive experimentation. We provide other implementation details and ablation experiments in Appendix Sections 8.3 and 8.4.
### Evaluation and results
In Table 1, we report the success rate performance of our method and other baselines averaged over \(5\) seeds and evaluated over \(N=100\) random episodic rollouts. In our experiments, we do not perform any supervised pre-training for any of the hierarchical levels. We perform comparisons with a number of baselines and ablations to demonstrate that our method indeed boosts hierarchical learning. We present two variants of our approach as mentioned in Section 3: PEAR-IRL denotes PEAR with the inverse reinforcement learning regularizer and PEAR-BC denotes PEAR with the behavior cloning regularizer. Relay Policy Learning (RPL) [15] originally used supervised pre-training of hierarchical levels, followed by relay fine tuning. In order to ensure fair comparisons, we use an ablation of RPL which does not involve supervised pre-training. By comparing our method with RPL, we demonstrate that adaptive relabeling indeed outperforms fixed relabeling on all environments. Hierarchical actor critic (HAC) [14] deals with non-stationarity by relabeling transitions and assuming optimal transitions by the lower primitive. We perform comparisons with this baseline and show that our adaptive relabeling based approach outperforms HAC and indeed ameliorates non-stationarity. Furthermore, our approach outperforms the pure hierarchical (HIER) baseline, demonstrating the advantage of leveraging expert demonstrations via IL regularization. HIER-NEG is a hierarchical baseline where the upper level is negatively rewarded if the lower primitive fails to achieve the subgoal. We found empirically that this does not perform well, and we hypothesize that this is due to reward biasing caused by the additional negative reward signal (notably, the reward is then no longer sparse for the higher level). We perform comparisons with Discriminator Actor Critic (DAC) [42], which is a flat (single-level) approach that leverages expert demonstrations using a learned discriminator. Since our approach outperforms this baseline, this shows the importance of efficient hierarchical learning and adaptive relabeling. We also include the FLAT baseline, a single level baseline that does not use any expert demonstrations, although we found that it fails to make significant progress on any of the tasks. The training plots for the four environments are provided in Figure 3. In all experiments, PEAR exhibits faster convergence and consistently outperforms the other baselines on all tasks.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & Maze & Pick n Place & Rope & Kitchen \\ \hline PEAR-IRL(ours) & **0.83 \(\pm\) 0.04** & **0.96 \(\pm\) 0.01** & **0.24 \(\pm\) 0.03** & **0.89 \(\pm\) 0.06** \\ PEAR-BC(ours) & 0.55 \(\pm\) 0.08 & **0.45 \(\pm\) 0.28** & **0.23 \(\pm\) 0.06** & **0.41 \(\pm\) 0.31** \\ RPL & 0.58 \(\pm\) 0.09 & 0.28 \(\pm\) 0.17 & 0.13 \(\pm\) 0.07 & 0.08 \(\pm\) 0.1 \\ HAC & **0.6 \(\pm\) 0.23** & 0.0 \(\pm\) 0.0 & 0.02 \(\pm\) 0.01 & 0.0 \(\pm\) 0.0 \\ HIER-NEG & 0.01 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 & 0.01 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 \\ HIER & 0.02 \(\pm\) 0.02 & 0.0 \(\pm\) 0.0 & 0.01 \(\pm\) 0.0 & 0.0 \(\pm\) 0.0 \\ DAC & 0.02 \(\pm\) 0.02 & 0.21 \(\pm\) 0.06 & 0.03 \(\pm\) 0.01 & 0.0 \(\pm\) 0.0 \\ FLAT & 0.01 \(\pm\) 0.01 & 0.0 \(\pm\) 0.0 & 0.03 \(\pm\) 0.01 & 0.0 \(\pm\) 0.0 \\ \hline \end{tabular} \end{table} Table 1: Success rate comparison
Figure 3: The success rate plots of PEAR and other baselines in maze navigation (Col. 1), pick and place (Col. 2), rope manipulation (Col. 3), and the kitchen environment (Col. 4) vs number of timesteps.
We also perform experiments on a real world robotic manipulation task using the Dobot Magician robot (Fig 5). In this complex task, PEAR outperforms the RPL baseline, which fails to show any progress on the task.
### Ablative analysis
We perform ablative studies to analyse our design choices (plots are provided in Appendix Section 8.4). We analyse how varying the distance threshold hyperparameter \(D_{thresh}\) affects performance, and empirically find that even a low value of \(D_{thresh}\) is sufficient for selecting good subgoals. It is essential to select a good \(p\) value for effective adaptive relabeling. We empirically found that a large value of \(p\) is unable to generate a good curriculum of subgoals for learning, as a smooth increase in subgoal difficulty is desirable. Conversely, we found that a small value of \(p\) is also not desirable, and we hypothesize that this is because frequently re-populating the subgoal dataset prohibits stable learning. For the RPL baseline, we perform ablations to choose the window size hyperparameter. We empirically show that using a fixed window size is sub-optimal, and that adaptive relabeling outperforms fixed relabeling on all tasks. We also empirically evaluate the effect of changing the regularization weight \(\psi\) hyperparameter. If \(\psi\) is too small, PEAR loses the imitation regularization advantage, and if \(\psi\) is too large, the learned policy might overfit to the expert demonstrations. Finally, we perform an ablation study on the number of expert demonstrations. Indeed, we consistently find that increasing the number of expert demonstrations improves learning.
## 6 Limitations
In this work, we assume the availability of directed expert demonstrations. Unavailability of such directed demonstrations might cause challenges for the proposed approach. We have not considered undirected demonstrations in this work, but we plan to explore this avenue in the future. Additionally, our ablation analysis shows that the number of available expert demonstrations affects performance, especially in the complex rope and kitchen environments. Hence, we plan to devise approaches that are more robust to a changing number of demonstrations. \(D_{g}\) is periodically re-populated in our adaptive relabeling approach. This periodic relabeling is an additional overhead and might be a bottleneck in tasks where the relabeling cost is high.
In future work, we plan to consider such environments and devise solutions to this issue; notably, adaptive relabeling incurs negligible overhead in the environments considered in this work. ## 7 Discussion and future work We propose Primitive Enabled Adaptive Relabeling (PEAR), a hierarchical reinforcement learning and imitation learning based approach that performs adaptive relabeling on a handful of expert demonstrations to solve complex long-horizon tasks. We compare against a number of hierarchical and non-hierarchical approaches and demonstrate that PEAR consistently outperforms the baselines on our robotic task environments. In future work, we plan to address longer sequential decision-making tasks, and we hope to analyse generalization beyond the expert demonstrations. We hope that PEAR encourages future research in the area of adaptive relabeling and leads to better approaches for solving long-horizon tasks.

Figure 4: Success rate comparison.

Figure 5: Real rope manipulation.
2308.01157
LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs
We show that large language models (LLMs) are remarkably good at working with interpretable models that decompose complex outcomes into univariate graph-represented components. By adopting a hierarchical approach to reasoning, LLMs can provide comprehensive model-level summaries without ever requiring the entire model to fit in context. This approach enables LLMs to apply their extensive background knowledge to automate common tasks in data science such as detecting anomalies that contradict prior knowledge, describing potential reasons for the anomalies, and suggesting repairs that would remove the anomalies. We use multiple examples in healthcare to demonstrate the utility of these new capabilities of LLMs, with particular emphasis on Generalized Additive Models (GAMs). Finally, we present the package $\texttt{TalkToEBM}$ as an open-source LLM-GAM interface.
Benjamin J. Lengerich, Sebastian Bordt, Harsha Nori, Mark E. Nunnally, Yin Aphinyanaphongs, Manolis Kellis, Rich Caruana
2023-08-02T13:59:35Z
http://arxiv.org/abs/2308.01157v2
# LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs ###### Abstract We show that large language models (LLMs) are remarkably good at working with interpretable models that decompose complex outcomes into univariate graph-represented components. By adopting a hierarchical approach to reasoning, LLMs can provide comprehensive model-level summaries without ever requiring the entire model to fit in context. This approach enables LLMs to apply their extensive background knowledge to automate common tasks in data science such as detecting anomalies that contradict prior knowledge, describing potential reasons for the anomalies, and suggesting repairs that would remove the anomalies. We use multiple examples in healthcare to demonstrate the utility of these new capabilities of LLMs, with particular emphasis on Generalized Additive Models (GAMs). Finally, we present the package TalkToEBM as an open-source LLM-GAM interface. ## Introduction Large language models (LLMs) offer the potential to automate data science through natural language interfaces, but it is difficult to embed complex models or datasets in confined context windows. While GPT-4 has a context window size of up to 32k tokens, paying equal attention to all parts of the context remains a challenge [1] and the practicality of lengthy context windows is questionable. Machine learning models often involve billions of parameters, accentuating the need for compact, modular function representations that more easily interface with LLMs. In this paper, we show that LLMs pair remarkably well with interpretable models that are decomposable into modular components. Specifically, we show that GPT-4 is able to describe, interpret and debug univariate graphs, and by applying a form of chain-of-thought reasoning [2], GPT-4 can understand Generalized Additive Models (GAMs). GAMs [3, 4] represent complex outcomes as sums of univariate component functions (graphs); thus, by analyzing each of these component functions in turn, the LLM does not need to understand the entire model at once. After analyzing and summarizing each graph, the LLM can operate on component summaries to produce model-level analyses. This modularity simplifies the application of LLMs to data science and machine learning and enables LLM-based analyses to scale to very large datasets while staying within small context windows. Next, we show that because LLMs have large amounts of prior knowledge, LLMs can be used to provide model interpretations that are grounded in domain expertise. Specifically, we show that LLMs can automatically detect surprises and anomalies in models such as GAM graphs that appear to contradict expectations. By highlighting these surprises, the LLM can suggest problems in all aspects of the analysis, including data collection, imputation, model fitting, or model specification. This is critical for good data science because real-world datasets are universally polluted by confounders and systemic biases that are often obvious only after close examination with domain expertise (e.g. treatment effects in healthcare [5]); LLMs with extensive prior knowledge offer a system to automatically detect and report these potential problems. Depending on the application, these potential problems may require users to correct the data, correct the model, or change the underlying system. Ultimately, LLMs offer the potential to be used as tools that automate important yet repetitive aspects of the data science process.
LLMs are especially useful when paired with statistical tools, such as glass-box interpretable models, that break complex reasoning tasks into separable components. While LLMs such as GPT-4 are not yet able to directly understand large tables of tabular data, they are able to interface with glass-box GAMs trained on that data, providing a new opportunity for humans to interact and learn with their data. ### Our Approach Our approach (Figure 2) is to use model representations that provide _separable_ components. Separable components can be independently analyzed, summarized, and combined with hierarchical reasoning. Glass-box models (e.g. GAMs) that can be decomposed into univariate component graphs fit this approach because the univariate components can be separated and later combined without approximation. Thus, we use GAMs to split the complex task of reasoning about an entire model into a sequence of small tasks of reasoning about individual graphs. Concretely, we prompt the LLM with a general introductory prompt about the task, model, and dataset, then successively provide each graph as key-value lists. For each graph, we can ask the LLM to perform various tasks (for example, to summarize and answer questions about the graph). To draw conclusions about the entire model, the summaries of the individual graphs are aggregated in a new query to the LLM. This approach is available in our open-source software package TalkToEBM. Figure 2: Our approach to connect LLMs with data interpretations. By using glass-box interpretable models that can be decomposed into modular components, we enable the LLM to use hierarchical reasoning by considering each modular component in turn. This enables the LLM to analyze large models and perform complex reasoning tasks without requiring the entire model to be stored in a single context window. ### Related Work The disruptive potential of using LLMs to automate tasks in data science has inspired several related approaches. Slack et al. [6] develop a natural language interface to give practitioners conversational access to model explanations. Their work intends to provide access to generic (potentially black-box) models, and so the LLM does not have direct access to model internals. As a result, [6] cannot use the same chain-of-thought approach we explore with GAMs to enable scalability and complex model-level reasoning. On the other end of the pipeline for automated data science, Bisercic et al. [7] showed that LLMs can extract tabular datasets from unstructured text and then train interpretable models (e.g. linear regression and small decision trees) on top of this data. Similarly, recent works have explored the potential of LLMs for data wrangling and cleaning [8, 9], or traditional supervised and unsupervised learning tasks like classification or density estimation [10, 11, 12, 13]. These works usually rely on fine-tuning, suggesting that today's LLMs have only a limited ability to solve complex tasks with raw tabular data in-context. All in all, these approaches to automated data preprocessing and model estimation are complementary to the analytical system proposed in this paper; together, they suggest a path toward fully automated data science that includes automated data preprocessing, model fitting, and interpretation.
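To make the prompting pipeline concrete, here is a minimal sketch of the hierarchical procedure described above; it is an illustration under our own naming, not the actual TalkToEBM interface. The JSON encoding mirrors the x-bin/y-value format described in the next section, and `call_llm` stands in for any chat-completion API.

```python
import json

def encode_graph(bins, values):
    """Encode a piecewise-constant GAM component as x-bin / y-value pairs."""
    return json.dumps({f"({lo}, {hi})": round(v, 3)
                       for (lo, hi), v in zip(bins, values)})

def summarize_model(components, call_llm):
    """Hierarchical reasoning: summarize each component graph in its own
    context window, then aggregate the summaries in a final query."""
    intro = ("You are an expert data scientist interpreting a GAM that "
             "predicts in-hospital mortality from pneumonia. Pay special "
             "attention to values or patterns that appear abnormal.")
    summaries = []
    for name, (bins, values) in components.items():
        prompt = (f"{intro}\nFeature: {name}\n"
                  f"Graph: {encode_graph(bins, values)}\n"
                  "Summarize the role of this feature in at most 7 sentences.")
        summaries.append(f"{name}: {call_llm(prompt)}")
    aggregate = (f"{intro}\nHere are summaries of all component graphs:\n"
                 + "\n".join(summaries)
                 + "\nProvide an overall model summary and a ranked list of surprises.")
    return call_llm(aggregate)
```

Because each query contains only a single graph (or the set of short per-graph summaries), the full model never needs to fit in one context window, which is what lets the analysis scale.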
**The Importance of Iterative Investigations in Data Science.** While the predictive accuracy of machine learning tools is undoubtedly crucial, it is not sufficient as a standalone objective when employing these tools to analyze and influence real-world systems. Instead of singularly pursuing accuracy, we need to embark on a cyclical investigative process to comprehend the model's nuances, the effects it has learned, and the implications it holds for the dataset and the corresponding real-world system. Taking healthcare data as an example, we may develop a system that effectively predicts the outcomes for hospitalized patients. However, applying this retrospectively derived knowledge to future diagnostic policies can lead to flawed treatment decisions. This occurs because the patients identified as low-risk in the retrospective study are those who received effective treatment, not necessarily those who inherently bore the least risk [4]. Moreover, relying heavily on predictive accuracy can be deceptive, particularly considering the inconsistent recording of real-world healthcare data [14], which could lead to statistical endorsement of unrelated confounding factors [15]. Hence, rather than treating predictive accuracy as an end goal, we employ interpretable models to uncover unexpected effects. This mirrors the philosophy proposed by [5], which demonstrated that systemic confounding presents opportunities to enhance healthcare. This prior work relied on human experts to design explicit statistical tests for each category of surprising pattern in GAM feature graphs, providing a handle on false-discovery rates but limiting the breadth of potential applications. Our work builds on this insight that interpretable models reveal hidden confounding and proposes to automate surprise discovery with LLMs. ## Results We begin by examining the ability of LLMs to understand individual graphs. We next outline how this capability can be used to explain entire models and summarize data sets. Finally, we use the domain knowledge embedded in the LLM to find surprising effects (i.e. hidden confounding). We are particularly interested in the use of this approach to identify confounding factors and suboptimal treatment protocols in observational healthcare data; thus, in all of these experiments, we use a dataset of pneumonia patients from [16] as a running example. The approach is broadly applicable, and more examples on publicly available datasets are included in the package repository TalkToEBM. ### LLMs Understand Graphs LLMs are able to understand and reason about simple graphs. The main challenge is to encode each graph in a text description which enables straightforward reasoning. Piecewise-constant graphs, such as those estimated by tree-based GAMs, can be exactly described as a list of regions and values. By encoding piecewise-constant graphs as JSON objects of x-bins and y-values (Figure 1A), we pass an exact and efficient description of the graph to the LLM. The LLM can summarize this encoding (Figure 1B) and enter into a dialogue with the user (Figure 1C). Figure 1: (A) Graphs can be encoded as text, (B) understood by LLMs, and (C) used to enable conversational data science. For example, asked to summarize the age graph of the pneumonia model (Figure 1B), GPT-4 responds: **GPT-4:** _"[...]
The general pattern of the graph indicates that as the age increases, the log-odds contribution also increases, meaning that older age is associated with a higher risk of death due to pneumonia._ _In the age range from about 18 to 67.5 years, the contribution to the log-odds is negative, which means the model predicts a lower risk of death from pneumonia in this age group. Starting from around 69.5 years, the contribution becomes positive, indicating a higher risk of death from pneumonia as the patient's age increases._ _It's important to note that the exact turning point where the risk increases can vary, as indicated by the lower and upper bounds for each age range. These bounds represent the 95% confidence interval, which is a measure of the uncertainty around the estimate. However, the overall trend remains the same: older age is associated with higher risk of death from pneumonia._ _In conclusion, age is a significant predictor of pneumonia-related mortality according to this model, with older patients generally at higher risk."_ The LLM can then enter into a dialogue with the user (Figure 1C). Consider the following question about the influence of age on pneumonia mortality risk: **Question:** _What would be the average risk of an 82-year-old patient?_ **GPT-4:** _"The average risk for an 82-year old patient can be derived from the mean log-odds contribution for the age interval "(81.5, 85.5)", which is 0.29. Log-odds are a way of expressing probability and can be converted into a probability with the following formula: \(p\) = exp(log-odds) / (1 + exp(log-odds)). Using this formula, the average risk is approximately 57.2%. This means that, according to the model, an 82-year old patient would have a 57.2% probability of dying from pneumonia on average. However, remember this is a model-based prediction and actual risk could be different based on other patient-specific factors not included in this model."_ This answer is correct: the relevant interval in the graph is indeed (81.5, 85.5), with a mean log-odds contribution of 0.29, and the arithmetic translation from log-odds to probability is accurate. The LLM can also do simple reasoning about this graph: **Question:** _Does the risk increase monotonically with age?_ **GPT-4:** _"Yes, according to the provided data, the risk of death from pneumonia increases monotonically with age. [...]"_ **Question:** _Which patients are most at risk?_ **GPT-4:** _"According to the model's output, the patients most at risk are those in the oldest age group, specifically those aged 88.5 to 106 years. [...]"_ More complicated analysis questions begin to push at the bounds of GPT-4's reasoning capabilities: **Question:** _What is the average increase in risk due to a 5-year increase in age?_ **GPT-4:** _"To calculate the average increase in risk due to a 5-year increase in age, we need to look at the mean log-odds contributions of adjacent 5-year age groups and calculate the difference. However, the age groups provided are not all of equal length, so we cannot calculate an exact average increase in risk for a 5-year increase in age. If we wanted to estimate this, we could consider the log-odds difference between the "(77.5, 80.5)" and "(81.5, 85.5)" age groups.
The mean log-odds contribution increases from 0.216 to 0.29, a difference of 0.074 in log-odds. To convert this difference in log-odds to a difference in probability, we would use the formula: \(p\) = exp(log-odds) / (1 + exp(log-odds)). Applying this formula, we find that a 5-year increase in age from 80 to 85 years is associated with an increase in the average risk from about 53.8% to 57.2%, which is about a 3.4% increase in risk due to a 5-year increase in age. Bear in mind that this is an approximation based on the provided data, and the actual increase in risk may vary depending on the specific age range and other factors. Also, as previously mentioned, there are many other factors that can influence a patient's risk of dying from pneumonia, not just age."_ This answer from GPT-4 is factually correct but incomplete. We asked for the average increase in risk represented by the graph, but GPT-4 opted to answer only about two local averages. This example demonstrates the common need for LLMs to perform guided bottom-up analyses rather than unguided global queries, motivating our strategy to interface with the LLM via modular function representations rather than non-separable black-box models. ### LLMs Understand Interpretable Models GAMs are separable into a set of univariate component functions (Figure 3A). Starting from the univariate component graphs, we use chain-of-thought prompting [2] to summarize the effects of the different features in the model. Figure 3: LLMs can be used to reason about complicated outcomes if the outcomes are first broken down into an additive model before passing to the LLM. For example, in the pneumonia dataset, we prompt: **Question:** _"Now provide a brief executive summary of the role of the feature in the model. Assume that the reader already knows what the model is about, so you don't need to describe the outcome or feature details. Use at most 7 sentences. Avoid boilerplate sentences like 'other factors may also impact this likelihood' or 'further analysis would be needed'."_ The model then summarizes the role of 'respiration rate' as follows: **Respiration Rate (Feature Summary):** _"The model's output indicates an increasing likelihood of death as the respiration rate increases. Specifically, a respiration rate of 32 or less tends to decrease the likelihood of patient death, while a rate above 32 increases this likelihood.
However, the model exhibits some uncertainty in its predictions, as shown by overlapping confidence intervals across different respiration rate intervals. This suggests that while the respiration rate is influential, its precise impact may vary between patients."_ We then aggregate the different summaries and ask the LLM to provide an overall summary of the GAM. We can also provide the model with additional information: because GAMs provide straightforward measures of feature importance, we also pass the LLM global feature importance scores. **Model Summary (GPT-4, Pneumonia):** _"The model predicts the likelihood of death from pneumonia based on several patient features. The most influential global feature is age, with increasing age associated with a higher likelihood of death, particularly after the age of 73. Respiration rate and BUN level also have substantial influences on the model's predictions, with higher levels of both associated with an increased risk of death._ _Other significant features include the number of diseases a patient has, their systolic blood pressure, and their pO2 levels. More diseases, lower systolic blood pressure, and very low or very high pO2 levels are all associated with a higher likelihood of death._ _The model also shows that certain medical readings can increase or decrease the likelihood of death, depending on their levels. For example, both very low and very high glucose levels, pH levels, and potassium levels can increase the risk of death, while moderate levels are associated with a lower risk._ _Finally, the model shows a considerable amount of uncertainty in its predictions for certain features, such as age, heart rate, and albumin levels. This is likely due to a lack of data points in certain ranges or the complex, non-linear relationships these features may have with the outcome._ _In summary, the model provides an intricate and nuanced prediction of the likelihood of death from pneumonia based on a wide range of patient features. However, given the complexity of the relationships and the uncertainty in some predictions, the model's output should be interpreted with caution and used in conjunction with clinical judgement."_ ### LLMs Automatically Find Surprising Effects Finally, LLMs can apply their embedded domain knowledge to provide grounded interpretations of these graphs and models. We are particularly motivated to use LLMs to automatically detect surprising effects, as this is a common but difficult task in data analysis that typically requires domain expertise and iterative data exploration, model fitting, and evaluation. However, LLMs can automate this process by generating a ranked list of surprises that may indicate underlying model or data problems. To accomplish this, we include in our system prompt an instruction to pay attention to abnormal and surprising effects: **System Prompt:** _"[...] Pay special attention to values or patterns that appear abnormal. It is very important that you alert the user about these potentially surprising aspects of the graphs."_ Once the LLM has been prompted to save notes about these surprises, we can subsequently ask the LLM to summarize and rate the different surprises. Surprises can be identified in individual graphs: **Question:** _"Great, now summarize the most important surprising aspects of this graph in at most 5 sentences."_ **GPT-4 Surprises (Figure 4):** _"The graph of albumin levels versus predicted log-odds of death from pneumonia reveals several surprising aspects.
Firstly, it includes negative albumin levels, which are physiologically impossible and might represent data errors. Secondly, the graph shows an initial increase in predicted death risk with increasing albumin levels up to about 2.1, contrary to typical clinical understanding that lower albumin levels usually signify poorer health status. Lastly, there's a sharp decrease in death risk for albumin levels above 2.6, suggesting potential interactions with other factors not captured in the model. These findings highlight the need for careful data validation and consideration of the multifaceted relationships between health measures."_ _"The graph of sodium levels versus the risk of dying from pneumonia reveals three surprising aspects: 1) a sharp decrease in risk at very high sodium levels (159 and above), contrary to the expectation of higher risk with severe hypernatremia; 2) a peculiar peak in risk at sodium levels of 156 to 159, suggesting possible overfitting or unaccounted confounding factors; 3) a lower-than-expected risk at very low sodium levels (0 to 110), potentially due to aggressive treatment of severe hyponatremia. These unexpected patterns may indicate data quality issues, overfitting, or unaddressed confounders, warranting further investigation."_ _"The graph depicting the relationship between 'percentage_bands' and the predicted death from pneumonia shows a surprising nonlinear pattern. Most notably, the risk of death sharply increases when 'percentage_bands' is within the range of 42.0 to 62.0. After this peak, the risk decreases, but still remains higher than at lower 'percentage_bands' values. These patterns suggest a possible threshold effect or could indicate issues with the data. Further investigation is needed to understand these unexpected patterns."_ _"The most surprising aspect of the heart rate graph is the unexpected drop in the predicted log-odds of death from pneumonia for patients with extremely high heart rates, specifically in the 170 to 200 beats per minute range. This is contrary to medical expectations, as severe tachycardia (high heart rate) is generally associated with a higher risk of death in severe illnesses like pneumonia. Possible explanations for this counterintuitive pattern could include data quality issues, presence of confounding factors, or limited data in this heart rate range. The wide confidence intervals for this range also indicate a higher level of uncertainty in the model's predictions. This unusual pattern requires further investigation to ensure accurate interpretation and application."_ _"The graph for creatinine level shows a surprising decrease in the predicted log-odds of death from pneumonia as creatinine levels increase beyond approximately 5.0 and then again after about 10.4, which is counterintuitive given that higher creatinine levels usually indicate worse health outcomes. This could potentially be due to a skewed distribution of data with fewer patients having very high creatinine levels, causing the model to learn inaccurate patterns. Alternatively, it might reflect that patients with extremely high creatinine levels receive more aggressive or specialized care, improving their outcomes. The wide confidence intervals in these ranges also suggest a high level of uncertainty in the model's predictions. These surprising patterns warrant further investigation into the data and consultation with clinical experts."_ Figure 4: Pneumonia effects and surprises.
In each example, we show the estimated impact on patient mortality (left), with 95% confidence intervals shaded and observed training data points as red ticks along the horizontal axis, and (right) the surprises reported by GPT-4. By asking the model to summarize the entire model as a ranked list of surprises, we generate an ordered list of hypotheses to investigate for confounding or novel discoveries (Figure 4). **Finding Surprises and Abnormal Patterns (GPT-4, Pneumonia):** _"The Generalized Additive Model (GAM) predicts the likelihood of death from pneumonia based on patient factors. However, it reveals several surprising patterns._ _The most surprising factors are Heart Rate, Systolic Blood Pressure, Albumin Level, Percentage Bands, Creatinine Level, and Sodium Level._ _The Heart Rate graph shows a decrease in predicted risk of death for patients with extremely high heart rates, contrary to medical expectations._ _The Systolic Blood Pressure graph shows a counterintuitive pattern: as blood pressure increases, the risk of death decreases, which is surprising as high blood pressure usually indicates severe health risks._ _The Albumin levels graph includes negative albumin levels, which are impossible, and shows an initial increase in predicted death risk with increasing albumin levels up to about 2.1, contrary to typical understanding that lower albumin levels signify poorer health._ _The 'Percentage Bands' graph shows a surprising nonlinear pattern, with the risk of death sharply increasing when 'percentage_bands' is within the range of 42.0 to 62.0._ _The Creatinine Level graph shows a decrease in predicted risk of death as creatinine levels increase beyond approximately 5.0, which is counterintuitive as higher creatinine levels usually indicate worse health outcomes._ _The Sodium Levels graph reveals a sharp decrease in risk at very high sodium levels, a peak in risk at levels of 156 to 159, and a lower-than-expected risk at very low sodium levels, all contrary to standard medical expectations._ _These patterns may indicate overfitting, unknown confounding factors, data quality issues, or unique dataset characteristics. Further investigation is necessary for accurate clinical decision-making."_ Directly interfacing with LLMs can lead to unstructured and confusing results. For example, the second surprise listed for albumin ("an initial increase in predicted death risk with increasing albumin levels up to about 2.1", Figure 4A) can be interpreted in multiple ways.
Is the LLM referring to the change in mortality risk from 0.0 to 2.1 g/dL that is mostly attributable to the effect of missing albumin values at 0.0 g/dL, the change in mortality risk from 1.0 to 2.1 g/dL that is mostly attributable to the effect of the jump at 1.5 g/dL, or the gradual increase in mortality risk from 1.5 to 2.1 g/dL? This ambiguity motivates the development of standardized graph-LLM interfaces and methods to verify LLM interpretations prior to presenting them to the user. ## Discussion In this study, we have shown that because LLMs are remarkably good at understanding graphs, they are also able to understand and explain glass-box Generalized Additive Models (GAMs), which decompose learning into an additive sum of feature functions represented as graphs. Moreover, LLMs are able to use their extensive background knowledge to detect and describe anomalies in the learned feature functions that defy expectations. While the experiments and results reported here are preliminary, they encourage future exploration to automate data science with LLMs. **Memorization, Data Pollution, and Hallucinations: What Happens if the Data or Models Trained on the Data Are in the LLM's Train Set?** One important concern in using LLMs to automate data science is whether the LLM itself was trained on the studied data, papers written about interpreting models trained on that data, or similar datasets that may cause the LLM to hallucinate insights. See [17] for a discussion of this issue on medical datasets. In the preliminary experiments reported here, the pneumonia data we use has not been made public, so we are confident that the pneumonia data is not in the LLM's train set. However, there have been several papers that discuss interpretable models trained on the pneumonia data [4], and the LLM may have been trained on these papers. As a result, LLMs hallucinate results when prompts are too open-ended. For example, GPT-3.5 hallucinates surprises about pneumonia patients without ever being given a model: **Question:** "You are an expert statistician and data scientist. Your task is to interpret global explanations produced by a generalized additive model (GAM). GAMs produce explanations in the form of graphs that contain the effect of a specific input feature. Answer all questions to the best of your ability, taking into consideration your knowledge about the real world. Pay special attention to values or patterns that appear abnormal. It is very important that you alert the user about these potentially surprising aspects. This model represents outcomes of hospitalized patients with pneumonia. The outcome is in-hospital mortality. What surprises do you find? For each surprise, tell me the specific feature, the feature values (bins) that have surprising effects, and rank the surprise on a scale from 0-5 where 0 means unsurprising and 5 means surprising." **GPT-3.5:** "As an expert statistician and data scientist, I'll interpret the global explanations produced by the generalized additive model (GAM) for the outcomes of hospitalized patients with pneumonia. Let's discuss the surprises: 1. Surprise: Age has a surprising effect on in-hospital mortality. Specific Feature: Age Feature Values (Bins): The age bins where the effect is surprising are 18-30 and 70-85. Ranking: 5/5 Explanation: The GAM model shows that patients between the ages of 18 to 30 and 70 to 85 have a significantly higher or lower in-hospital mortality rate compared to other age groups.
This finding is unexpected as one might assume that older patients would generally have a higher mortality rate. Further investigation is warranted to understand the underlying factors contributing to this surprising effect. 2. Surprise: Length of hospital stay has a surprising effect on in-hospital mortality. Specific Feature: Length of Hospital Stay Feature Values (Bins): The effect is surprising for patients with a length of stay between 1 to 3 days. Ranking: 4/5 Explanation: The GAM model indicates that patients with a length of stay between 1 to 3 days have a notably higher mortality rate compared to those with slightly shorter or longer stays. This finding is unexpected as shorter hospital stays are often associated with better outcomes, but here, it seems to be the opposite. Further investigation is needed to understand the reasons behind this surprising trend. 3. Surprise: Blood pressure has a surprising effect on in-hospital mortality. Specific Feature: Blood Pressure Feature Values (Bins): The effect is surprising for patients with extremely high or low blood pressure values. Ranking: 5/5 Explanation: The GAM model reveals that patients with either extremely high or extremely low blood pressure readings have a significantly higher risk of in-hospital mortality compared to patients with more moderate blood pressure values. This unexpected finding contradicts the common belief that extremely high blood pressure is the primary concern.2 Further analysis is crucial to comprehend the reasons for this surprising relationship. Footnote 2: In its hallucination, the LLM invented a belief – it is well known that extremely low blood pressure is linked to in-hospital mortality [18]. 4. Surprise: Seasonal effect on in-hospital mortality. Specific Feature: Season (Time of Year) Feature Values (Bins): The effect is surprising during the winter season. Ranking: 3/5 Explanation: The GAM model suggests that in-hospital mortality rates are higher during the winter season compared to other seasons. While it is not entirely surprising to observe some seasonal variations in health outcomes, the magnitude of the effect during the winter is unexpected. This finding warrants further investigation to identify potential factors contributing to increased mortality during this time. 5. Surprise: Smoking status has a surprising effect on in-hospital mortality. Specific Feature: Smoking Status Feature Values (Bins): The effect is surprising for current smokers. Ranking: 4/5 Explanation: The GAM model indicates that current smokers have a lower mortality rate compared to non-smokers or former smokers. This finding is unexpected as smoking is generally associated with adverse health outcomes, including higher mortality rates. Further analysis is needed to understand the potential underlying factors contributing to this surprising relationship. Please note that these surprises are based on the statistical analysis of the GAM model's output and might require further investigation and validation with domain experts and additional data to draw definitive conclusions. Interpretation of statistical models should always be done cautiously, considering the complexity of real-world factors that may influence the outcomes." In this example, we never gave the LLM any model to interpret, but the LLM still readily volunteered hallucinated "surprises" and gave plausible explanations. This behavior is undesirable. 
GPT-4 correctly provides hypothetical insights in response to this malformed question: **GPT-4:** "I'm sorry for the confusion, but I'm unable to view or interpret specific graphs or data directly as I don't have access to external databases or the capability to view images or graphs. As an AI text model developed by OpenAI, I work purely on the basis of the text-based inputs and my training data up to September 2021. However, I can provide general insights on how to interpret generalized additive model (GAM) results. In GAMs, a separate smooth function is estimated for each predictor. If you look at the plot for each predictor, the y-axis shows the effect on the response after accounting for the average effects of all other predictors. It's important to examine these plots to see if there's an intuitive or medically-explainable link between the predictor and the response. For example, let's say you're looking at the age of patients as a feature. If you observe that in-hospital mortality increases sharply for very young and very old patients, but decreases for middle-aged patients, this might be surprising (rank 4-5) given that pneumonia is generally more dangerous for older people due to weakened immune systems and potential comorbidities. However, in the case of variables like the white blood cell count, you might expect a U-shaped relationship with mortality, with both very low and very high values being associated with increased risk. If you instead observe a simple linear relationship, this might be surprising (rank 4-5), as it would contradict our expectations based on medical knowledge. Please provide more detailed information about the specific features and their effects for a more detailed interpretation. Remember to also look for interactions between variables, as these can sometimes produce surprising results." These responses demonstrate the dangers of data pollution when interfacing with LLMs. Despite the serious concern for memorization and pollution from prior literature, we suspect that one of the reasons LLMs are so good at interpreting graphs is that they have read a vast literature (larger than any one human being) that includes graphs and descriptions of graphs, which has given them significant expertise in interpreting graphs. This skill makes LLMs well-matched to additive models such as GAMs that learn models consisting of graphs. ### Can LLMs Explain Black-Box Models? While we suspect it is unlikely that the current generation of LLMs could be prompted in such a way as to directly understand and explain complex black-box models such as deep neural nets, or large ensembles of boosted trees or random forests, intermediate steps of model explanation could be used to enable LLMs to explain black-box models. For example, SHAP [19] generates explanations that are additive and composed of graphs, and can be viewed as a very specific form of GAM trained via distillation on the original black-box model [20]. SHAP explanations are useful in data analysis, so LLM explanations of SHAP explanations will likely also be useful; however, it is important to keep in mind that the explanations generated by black-box interpretation methods are approximate explanations of more complex black-box models, and thus any LLM explanations will also be approximate. As a result, strange behaviors could emerge, including instability or adversarial attacks [21].
In contrast, glass-box models such as EBMs provide exactly additive learned functions, so there is no approximation needed to interpret the model. ### Dealing with Finite Context Window Length The textual description length of the entire GAM on the pneumonia dataset, after simplifying the graphs by removing small artifacts and rounding numbers to significant digits, is 43,592 GPT-4 tokens. Even this relatively simple model contains too many tokens to be directly encoded in GPT-4's 32k context window. Black-box models might be much more complex. On the other hand, univariate feature graphs are much more compact: the maximum description length of a single graph in the pneumonia GAM is 2,345 GPT-4 tokens. This easily fits within GPT-4's context window. The challenge of breaking models into compact context windows is analogous to the challenge of human interpretability of machine learning models: humans are not good at understanding multiple simultaneous effects. However, when effects can be represented as separable and hierarchical components, even complicated effects can be reliably interpreted by humans. Because additive models decompose learned functions into an additive sum of terms, both humans and LLMs can effectively reason about GAMs one term at a time. ### The Importance of Separable Components: Graph Scale and Vertical Offset Affect LLM Interpretation LLMs are sensitive to small variations in how the graphs are presented, including variations that humans would ignore. Sometimes this sensitivity is an advantage (e.g. the LLM can identify jumps in the graph that might be ignored by humans), but other times this sensitivity causes fragility in explanations. For example, in EBMs, by default each graph is centered vertically so that the mean predicted score from that graph is zero on the train set. This vertical offset is arbitrary: because GAM terms are additive, we can shift one graph up (or down) as long as we shift other graphs down (or up) by the same amount so that the additive offsets compensate for each other. Typically, EBMs set the mean of each component to zero and collect the offsets as an intercept term that reflects the baseline prediction of the model for the entire population. However, different choices of centering can be made (e.g. some experts in some domains prefer that the minimum of each graph starts at zero so that the contribution of each graph is always positive or zero). Quick experiments suggest that LLMs yield somewhat different results when asked to explain graphs that are always positive vs. graphs that are both positive and negative: when explaining graphs that have both positive and negative regions, the LLM often focuses on describing regions as adding or subtracting from risk (i.e., positive or negative values), at the expense of focusing on the relative shape of the graph, which is unchanged by shifting the graph up or down.3 Footnote 3: Interestingly, we have observed this same effect in human experts when they are new to interpreting EBM plots that are mean-centered! ### GPT-4 vs. GPT-3.5 For this project, we experimented with both GPT-4 and GPT-3.5. We found that GPT-4 is able to work with graphs at a remarkable level of accuracy, whereas GPT-3.5 often struggles to perform the basic tasks of our setup. Consider, for example, this response of GPT-3.5 when we ask the model about the average risk of an 82-year-old individual, which can be compared with GPT-4's answer reported above.
**GPT-3.5:** _"Based on the graph, the average risk of a patient aged 82 years old dying from pneumonia would be around 0.264 (or -0.264 in log-odds). This value is obtained by taking the mean value for the age interval that includes 82 years old, which is between 80.5 and 81.5 years old, and is equal to 0.238."_ Of course, there is a potential that results with GPT-3.5 and other LLMs might be improved given sufficient prompt engineering or fine-tuning. We leave this exploration for future work. ## Materials and methods ### Datasets **Pneumonia.** The 1989 MedisGroups Comparative Hospital Database (MCHD) pneumonia dataset [16] contains information on inpatients from 78 hospitals in 23 states in the US between July 1987 and December 1988. The MCHD contains over 250 pieces of clinical information that include patient demographic characteristics, history and physical examination findings, and laboratory and radiological results, from which 46 variables were selected [16] with known or postulated association with mortality in patients with community-acquired pneumonia. We used patient data that were collected during the first 48 hours of hospitalization. ### Methods **Model Estimation.** We use Explainable Boosting Machines (EBMs), a form of GAM [3] trained with boosted decision trees [22, 23], using the open-source package InterpretML [24]. GAMs are ideal for building glass-box models of patient risk because: (1) GAMs can be precisely decomposed into risk curves of single variables for interpretation, (2) the flexible component functions allow risk curves of any shape without any implicit preferences, (3) many treatment protocols and clinical decisions (which sum multiple sources of evidence) are inherently additive models (e.g. SAPS II [25], APACHE II [26]), and (4) GAMs provide the ability to edit the model [27] and reason about changes to univariable treatment protocol thresholds. We use boosted trees to train GAMs because tree-based models are scale invariant, allowing features to be represented in their original natural units (including Boolean, nominal, ordinal, or continuous-valued attributes) without biases of pre-processing. Tree-based models can estimate discontinuities that smoother models such as spline-based GAMs and neural networks miss. The EBM GAMs in InterpretML are particularly useful for healthcare because they use a round-robin fitting procedure that helps ensure hidden effects are observable in the estimated model. The only change we make to the default hyperparameters is to increase the number of rounds of bootstrap sampling to 100, following the convention suggested by the algorithm designers [4]. These GAMs also provide state-of-the-art accuracy on tabular data (benchmarked in Table S1 of [5]). ### Code availability A Python tool for the LLM-GAM interface, automated model analysis, and surprise finding is available at: github.com/interpretml/TalkToEBM.
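To make the model-estimation recipe concrete, here is a sketch of the training setup described above. It assumes a pandas DataFrame `df` with a binary `death` outcome (the MCHD data itself is not public, so the file path is hypothetical), and it takes the 100 rounds of bootstrap sampling to correspond to InterpretML's `outer_bags` parameter; exact API details may vary across InterpretML versions.

```python
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

df = pd.read_csv("pneumonia.csv")  # hypothetical path; the MCHD data is not public

# Default hyperparameters, except for 100 rounds of bootstrap sampling.
ebm = ExplainableBoostingClassifier(outer_bags=100)
ebm.fit(df.drop(columns=["death"]), df["death"])

# Each term of the fitted GAM is a univariate graph (bin edges and per-bin
# scores) that can be serialized and passed to the LLM one component at a time.
explanation = ebm.explain_global()
for i, name in enumerate(explanation.data()["names"]):
    term = explanation.data(i)
    print(name, term["names"][:3], term["scores"][:3])
```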
2303.03432
A polar prediction model for learning to represent visual transformations
All organisms make temporal predictions, and their evolutionary fitness level depends on the accuracy of these predictions. In the context of visual perception, the motions of both the observer and objects in the scene structure the dynamics of sensory signals, allowing for partial prediction of future signals based on past ones. Here, we propose a self-supervised representation-learning framework that extracts and exploits the regularities of natural videos to compute accurate predictions. We motivate the polar architecture by appealing to the Fourier shift theorem and its group-theoretic generalization, and we optimize its parameters on next-frame prediction. Through controlled experiments, we demonstrate that this approach can discover the representation of simple transformation groups acting in data. When trained on natural video datasets, our framework achieves better prediction performance than traditional motion compensation and rivals conventional deep networks, while maintaining interpretability and speed. Furthermore, the polar computations can be restructured into components resembling normalized simple and direction-selective complex cell models of primate V1 neurons. Thus, polar prediction offers a principled framework for understanding how the visual system represents sensory inputs in a form that simplifies temporal prediction.
Pierre-Étienne H. Fiquet, Eero P. Simoncelli
2023-03-06T19:00:59Z
http://arxiv.org/abs/2303.03432v2
# Polar prediction of natural videos ###### Abstract Observer motion and continuous deformations of objects and surfaces imbue natural videos with distinct temporal structures, enabling partial prediction of future frames from past ones. Conventional methods first estimate local motion, or optic flow, and then use it to predict future frames by warping or copying content. Here, we explore a more direct methodology, in which each frame is mapped into a learned representation space where the structure of temporal evolution is more readily accessible. Motivated by the geometry of the Fourier shift theorem and its group-theoretic generalization, we formulate a simple architecture that represents video frames in learned local polar coordinates. Specifically, we construct networks in which pairs of convolutional channel coefficients are treated as complex-valued, and are optimized to evolve with slowly varying amplitudes and linearly advancing phases. We train these models on next-frame prediction in natural videos, and compare their performance with that of conventional methods using optic flow as well as predictive neural networks. We find that the polar predictor achieves better performance while remaining interpretable and fast, thereby demonstrating the potential of a flow-free video processing methodology that is trained end-to-end to predict natural video content. ## 1 Introduction One way to frame the fundamental problem of vision is that of representing the signal in a form that is more useful for performing visual tasks, be they estimation, recognition, or motor action. Perhaps the most general "task" is that of temporal prediction, which has been proposed as a fundamental goal for unsupervised learning of visual representations (Foldiak, 1991). But previous research along these lines has generally focused on estimating temporal transformations rather than using them to predict: for example, extracting slow features (Wiskott & Sejnowski, 2002), or finding sparse codes that have slow amplitudes and phases (Cadieu & Olshausen, 2012). In video processing and computer vision, a common strategy for temporal prediction is to first estimate local translational motion, and to then (assuming no acceleration) use this to warp and/or copy previous content to predict the next frame. Such motion compensation is a fundamental component in video compression schemes like MPEG (Wiegand et al., 2003). These video coding standards are the result of decades of engineering efforts, and have enabled reliable and efficient digital video communication that is now commonplace. But motion estimation is a difficult nonlinear problem, and existing methods fail in regions where temporal evolution is not translational and smooth: for example, expanding or rotating motions, discontinuous motion at occlusion boundaries, or mixtures of motion arising from semi-transparent surfaces (e.g., viewing the world through a dirty pane of glass). In compression schemes, these failures of motion estimation lead to prediction errors, which must then be repaired by sending additional corrective bits. Human perception does not seem to suffer from such failures: subjectively, we can anticipate the time-evolution of visual input even in the vicinity of these commonly occurring non-translational changes. In fact, those changes are often highly informative, as they reveal object boundaries, and provide ordinal depth and other information about the visual scene.
This suggests that the human visual system uses a different strategy, perhaps bypassing altogether the estimation of local motion, to represent and predict evolving visual input. Toward this end, and inspired by recent hypotheses that primate visual representations support prediction by "straightening" the temporal trajectories of naturally-occurring input (Henaff et al., 2019), we formulate an objective for learning an image representation that facilitates prediction by linearizing the temporal trajectories of frames of natural video. To motivate the separation of instantaneous spatial representation from temporal prediction, we first consider the special case of rigidly translating video content. When expressed in the frequency domain, translation corresponds to linear phase advancement (section 2.1), and prediction of rigidly translating content reduces to angular extrapolation (section 2.2). We generalize this factorization using group representation theory (section 2.3), and describe a neural network architecture that maps individual video frames to a latent complex-valued representation. Within this latent space, coefficients can be temporally predicted by phase advancement and then mapped back to generate an estimated frame. The entire system may then be trained end-to-end to minimize next-frame prediction errors (section 3). We report training results of several such systems, and show that they produce systematic improvements in predictive performance over both conventional motion compensation methods and direct predictive neural networks (section 4). Finally, we relate this approach to previous work (section 5) and discuss its significance and implications (section 6). ## 2 Background ### Base case: the Fourier shift theorem Our approach is motivated by the well-known behavior of Fourier representations with respect to signal translation (note that this elementary example will later lead to our proposed generalization). Specifically, the complex exponentials that make up the Fourier basis are the eigenfunctions of the translation operator, and translation of inputs produces systematic phase advances of frequency coefficients. Let \(x\in\mathbb{R}^{N}\) be a discrete signal indexed by spatial location \(n\in[0,N-1]\), and let \(\widetilde{x}\in\mathbb{C}^{N}\) be its Fourier transform indexed by \(k\in[0,N-1]\). We write \(x^{v}(n)=x(n-v)\), the translation of \(x\) by \(v\) modulo \(N\) (i.e., circular shift with period \(N\)). Defining \(\phi=e^{i2\pi/N}\), the primitive N-th root of unity, and \(\mathcal{F}_{nk}=\phi^{nk}\), the \(N\times N\) Fourier matrix, we can express the Fourier shift theorem1 as \[x^{v}=\tfrac{1}{N}\,\mathcal{F}D(v)\mathcal{F}^{*}x,\] where \(\mathcal{F}^{*}\) is the conjugate transpose of the Fourier matrix and \(D(v)=\text{diag}(\phi^{0},\phi^{-v},\dots,\phi^{-(N-1)v})\) is a diagonal matrix. Footnote 1: Proof by substituting \(m=n-v\): \[\widetilde{x^{v}}(k)=\sum_{n=0}^{N-1}\phi^{-kn}x(n-v)=\sum_{m=-v}^{N-1-v}\phi^{-kv}\phi^{-km}x(m)=\phi^{-kv}\sum_{n=0}^{N-1}\phi^{-kn}x(n)=\phi^{-kv}\widetilde{x}(k).\] This relationship may be depicted in a compact diagram: \[\begin{array}{ccc}\widetilde{x}(k)&\xrightarrow{\text{advance phase}}&\phi^{-kv}\,\widetilde{x}(k)\\ \mathcal{F}^{*}\big\uparrow&&\big\downarrow\tfrac{1}{N}\mathcal{F}\\ x(n)&\xrightarrow{\text{translate}}&x(n-v)\end{array}\tag{1}\] In the context of our goals, the diagram illustrates the point that transforming to the frequency domain renders translation a "simpler" operation: a phase advance is a rotation in the two-dimensional (complex) plane.
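As a quick numerical check of diagram (1) (our illustration, not code from the paper), a circular shift of the signal is exactly a per-frequency phase advance of its DFT coefficients:

```python
import numpy as np

N, v = 16, 3
x = np.random.randn(N)

x_shifted = np.roll(x, v)  # x^v(n) = x(n - v), circular shift

k = np.arange(N)
phase_advance = np.exp(-2j * np.pi * k * v / N)  # phi^{-kv}
x_via_fft = np.fft.ifft(phase_advance * np.fft.fft(x)).real

assert np.allclose(x_shifted, x_via_fft)
```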
### Prediction via angular extrapolation Now consider observations of a signal that translates at a constant velocity over time, \(x(n,t)=y(n-vt)\). Although the temporal evolution is easy to describe, it traces a highly non-linear trajectory in the signal state space, rendering prediction difficult (specifically, linear extrapolation fails). As an example, Figure 1 shows a signal consisting of a sum of two sinusoidal components. Transforming the signal to the frequency domain simplifies the description. In particular, the translational motion now corresponds to circular motion of the two (complex-valued) Fourier coefficients associated with the constituent sinusoids. The motion is further simplified by a polar coordinate transform to extract phase and amplitude of each Fourier coefficient. Specifically, the motion is now along a straight trajectory, with both phases advancing linearly (but at different rates), and both amplitudes constant. Note that this is a geometric property that holds for any rigidly translating signal, and offers a simple means of predicting content over time. Indeed, we can use the shift property (see section 2.1) on \(x(n,t+1)=x^{v}(n,t)\) and observe that prediction is now reduced to linear extrapolation of each coefficient's phase. We have the three-step process: \[\widetilde{x}(k,t)=\sum_{n=0}^{N-1}\phi^{-kn}x(n,t),\qquad\text{(analyze)}\] \[\widetilde{x}(k,t+1)=\phi^{-kv}\,\widetilde{x}(k,t),\qquad\text{(advance phase)}\] \[x(n,t+1)=\frac{1}{N}\sum_{k=0}^{N-1}\phi^{nk}\widetilde{x}(k,t+1).\qquad\text{(synthesize)}\] Since we assumed that the motion from time \(t\) to \(t+1\) is identical to that from time \(t-1\) to \(t\) (i.e., no acceleration), the phase advance \(kv\) can be computed from the past two representations as \(kv=\angle\widetilde{x}(k,t)-\angle\widetilde{x}(k,t-1)\), where \(\angle z\) indicates the phase of the complex number \(z\). Thus, a polar coordinate transformation in the frequency domain converts translational motion into trajectories that are predictable via linear phase extrapolation. Figure 1: Translation of a 1D signal consisting of a sum of two sinusoidal components: \(x(n,t)=\sin(2\pi(n-t))+\sin(2\pi 3(n-t))/2\). Lower left: three snapshots of the signal as it translates. Lower right: in the high-dimensional space representing the signal (each axis corresponding to the signal value at one location), the temporal trajectory is highly curved; shown is the projection of the signal vector into the 3D space of the top three principal components, with three colored points indicating the three snapshots in the lower left panel. Upper left: Fourier transform of the signal, showing complex-valued coefficients as a function of frequency; in this representation the temporal trajectory corresponds to linearly increasing phase of the two sinusoidal components, each at a rate proportional to its frequency. Upper right: a polar coordinate transform to amplitude and phase of each frequency component leads to a representation that evolves along a straight line, and is thus readily predictable (phases are unwrapped for display purposes). ### Generalization: representing commutative Lie groups Natural videos are replete with rich temporal transformations, such as continuous deformations of objects and surfaces. Assuming that these transformations can be described as groups, we will aim to learn their group representation from data. To this end, we seek a parameterization that generalizes beyond translation and the frequency domain.
Remarkably, Fourier analysis can be seen as a special case of the representation theory of compact commutative Lie (ie. smooth) groups (Mackey, 1980). In harmonic analysis, the celebrated Peter-Weyl Theorem (1927) establishes the completeness of the irreducible representations of any compact continuous group (an irreducible representation is a subspace that is invariant to the group action and that cannot be further decomposed). Furthermore, it follows that every compact Lie group admits a faithful (ie. injective) representation given by an explicit complete orthogonal basis, constructed from finite-dimensional irreducible representations (Hall, 2013). Accordingly, the action of a compact Lie group can be expressed as a rotation within each irreducible representation - thereby generalizing the Fourier shift property (an example is the construction of steerable filters (Freeman et al., 1991) in the computational vision literature). In the case of compact commutative Lie groups, the irreducible representations are one-dimensional and complex-valued: they are pairs of real-valued basis functions. Therefore, the angular extrapolation mechanism described in the previous section (2.2) can be employed for prediction in a much wider setting than that of translational motion. We will rely on the parameterization suggested by the representation theory of compact commutative Lie groups to learn the harmonic basis functions of the transformations at play in natural videos.

## 3 Learning to predict with angular extrapolation

To generalize beyond translation and the Fourier transform, we aim to learn a representation of video frames that enables prediction via angular extrapolation. Specifically, we focus on next frame prediction, and optimize two parameterized mappings: one for the analysis and one for the synthesis transform. This framework is illustrated in Figure 2, which provides a generalization of the Fourier shift diagram (1).

Figure 2: Unsupervised predictive representation learning framework. Each video frame is transformed using a parametric mapping \(f_{w}\), to an internal representation consisting of pairs of coefficients arranged in spatial channels. Predictions of individual complex coefficients at time \(t+1\) are computed by advancing the phase of the current coefficients by an amount equal to the phase advance over the interval from \(t-1\) to \(t\). At each time step, one such coefficient is depicted as a vector in two dimensions, and the top arrows indicate how they are combined (each vector corresponds to the complex coefficient at a particular location within one channel pair). Predicted frames are then generated by applying the parameterized inverse mapping \(g_{w}\) on the advanced coefficients. Forward and inverse mappings are jointly trained to minimize mean squared prediction error between the predicted and actual frame at time \(t+1\).

### Architecture and objective function

When focusing on a small region in an image sequence, the transformation observed as time passes can be approximated as a _local_ translation. That is to say, in a spatial neighborhood around position \(n\), \(m\in N(n)\), we have: \(x(m,t+1)\approx x(m-v,t)\). We can use the decomposition described for global rigid translation, replacing the Fourier transform with a local convolutional operator (Fleet & Jepson, 1990), processing each spatial neighborhood of the image independently and in parallel, and applying angular extrapolation to the coefficients computed at each position.
We use the same weights for the encoding and decoding stages, that is to say, the analysis operator is the transpose of the synthesis operator (as is also true of the Fourier transform and its inverse). Sharing these weights reduces the number of parameters and simplifies interpretation of the learned solution. This "polar predictor" (hereafter, **PP**) is consistent with the general scheme described in Figure 2, where \(f_{w}\) is taken to be linear and convolutional, and \(g_{w}\) is its transpose. In practice, we assumed 64 convolutional channels with filters of size \(17\times 17\) pixels, with no additive constants. At every position in the image (spatial indices are omitted for clarity of notation), each coefficient \(y_{j}(t)\) is computed as an inner product between the input \(x(t)\) and the filter weights \(w_{j}\) of each channel \(j\in[0,63]\): \(y_{j}(t)=w_{j}^{T}x(t)\). In order to obtain phases, we combine coefficients in pairs, indexed by \(k\in[0,31]\), which can be written as a single complex coefficient: \(z_{k}(t)=y_{2k}(t)+iy_{2k+1}(t)\in\mathbb{C}\), and expressed in polar coordinates as: \(z_{k}(t)=a_{k}(t)e^{i\theta_{k}(t)}\). This polar coordinate transformation is the only non-linear step used in the PP architecture, and serves as a bivariate non-linear activation function, differing markedly from the typical (pointwise) rectification operations found in convolutional neural networks. With this notation, linear phase extrapolation reduces to \(\hat{z}_{k}(t+1)=a_{k}(t)e^{i(\theta_{k}(t)+\Delta\theta_{k}(t))}\), where the phase advance \(\Delta\theta_{k}(t)\) is equal to the phase difference over the interval from \(t-1\) to \(t\): \(\Delta\theta_{k}(t)=\theta_{k}(t)-\theta_{k}(t-1)\). The advanced coefficients can be written in a more compact form, using complex arithmetic, as:

\[\hat{z}_{k}(t+1)=\frac{z_{k}(t)^{2}\overline{z_{k}(t-1)}}{|z_{k}(t)||z_{k}(t-1)|}, \tag{2}\]

where \(\overline{z}\) and \(|z|\) respectively denote the complex conjugate and complex modulus of \(z\). This formulation in terms of complex coefficients has the benefit of handling phases implicitly, bypassing the discontinuities of phase unwrapping and the instability of angular variables (phase is unstable when amplitude is low). We find that such an indirect formulation of phase processing is necessary for the stability of training, as previously noted in the texture modeling literature (Portilla & Simoncelli, 2000). Finally, the estimated next frame is generated by applying the transposed convolution \(g_{w}\) (with the same weights as \(f_{w}\)) to the advanced coefficients.

As a more substantial generalization of polar prediction, we use deep convolutional neural networks to instantiate nonlinear mappings for both the encoder \(f_{w}\) and the decoder \(g_{w}\) (each with independent filters). Specifically, the "deep polar predictor" (**deepPP**) operates by transforming two frames of input into the encoding space, \(z(t-1)=f_{w}(x(t-1))\) and \(z(t)=f_{w}(x(t))\), applying the polar prediction of equation 2 to this encoded representation, and then decoding the next frame from this prediction, \(\hat{x}(t+1)=g_{w}(\hat{z}(t+1))\). While the PP model learns a linear representation, the deepPP model is nonlinear, with potential to enhance prediction by adapting to signal properties. In order to isolate the effects of non-linearities from those of spatial scale, we chose the number of layers and the kernel sizes of deepPP so that the effective receptive field size was matched to that of the PP model.
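The pairing and phase-advance step of equation 2 is the computational core shared by PP and deepPP. The following NumPy sketch is our own illustration (array names and the eps guard are our choices, not the released implementation):

```python
import numpy as np

def polar_advance(y_prev, y_curr, eps=1e-8):
    """Advance each coefficient pair by its most recent phase change.

    y_prev, y_curr: real arrays of shape (2K, H, W) holding the responses of
    2K convolutional channels; channels (2k, 2k+1) form one complex pair.
    Returns a real array of the same shape holding the predicted responses.
    """
    z_prev = y_prev[0::2] + 1j * y_prev[1::2]   # z_k(t-1)
    z_curr = y_curr[0::2] + 1j * y_curr[1::2]   # z_k(t)
    # Equation 2: keep the current amplitude, repeat the last phase step.
    z_pred = z_curr ** 2 * np.conj(z_prev) / (np.abs(z_curr) * np.abs(z_prev) + eps)
    y_pred = np.empty_like(y_curr)
    y_pred[0::2], y_pred[1::2] = z_pred.real, z_pred.imag
    return y_pred
```

In the full model, this step sits between the analysis mapping \(f_{w}\) and the synthesis mapping \(g_{w}\); the configuration used for the deepPP encoder and decoder is specified next.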
Specifically, both the encoder and the decoder are composed of 4 convolutional layers, each with 64 channels, and using filter kernels of size \(5\times 5\) followed by half-wave rectification (ReLU). For both PP and deepPP models, convolutional kernels \(w\) are learned by minimizing the average squared prediction error:

\[\min_{w}\mathbb{E}_{t}\|x(t+1)-\hat{x}(t+1)\|_{2}^{2}.\]

The computation of this prediction error is restricted to the center of the image because moving content that enters from outside the video frame is inherently unpredictable. Specifically, we trim a 17-pixel strip from each side. Note that we only perform valid convolutions to avoid artificial interference with prediction (zero-padding creates undesirable boundary artifacts).

### Comparison models

We compare our method to the traditional motion-compensated coding approach that forms the core of inter-picture coding in well-established compression standards such as MPEG. Block matching is an essential component of these standards, allowing the compression of video content by up to three orders of magnitude with moderate loss of information. For each block in a frame, typical coders search for the most similar spatially displaced block in the previous frame (similarity typically measured with MSE), and communicate the displacement coordinates to allow prediction of frame content by translating blocks of the (already transmitted) previous frame. We implemented a "diamond search" algorithm (Zhu & Ma, 2000) operating on blocks of \(8\times 8\) pixels, with a maximal search distance of 8 pixels, which balances accuracy of motion estimates and speed of estimation (the search step is computationally intensive). We use the estimated displacements to perform causal motion compensation (**cMC**), using displacement vectors estimated from the previous two observed frames (\(x_{t-1}\) and \(x_{t}\)) to predict the _next_ frame (\(x_{t+1}\)) rather than the current one (as in MPEG).

To isolate the effects of the polar prediction, we also implemented a predictor using _linear extrapolation_ of the responses of a deep neural network (**deepL**), with architecture identical to that of the deep polar predictor. That is to say, we replace equation 2 by: \(\hat{y}_{j}(t+1)=2y_{j}(t)-y_{j}(t-1)\), which amounts to enforcing linear dynamics in the latent space of the non-linear representation. Finally, we implemented a more direct convolutional neural network predictor (**CNN**) that maps two successive observed frames to an estimate of the next frame (Mathieu et al., 2016). This predictor jointly transforms and predicts visual signals without explicitly partitioning spatial content representation and temporal feature extrapolation. For this, we used a CNN composed of 20 stages, each consisting of 64 channels, and computed with \(3\times 3\) filters without additive constants, followed by half-wave rectification. Note that, unlike all other predictors, this model jointly processes pairs of frames to generate predictions.

### Datasets and training

To train, test and compare these models, we use the DAVIS dataset (Pont-Tuset et al., 2017), which was originally designed as a benchmark for video object segmentation. Image sequences in this dataset contain diverse motion of scenes and objects (eg., with fixed or moving camera, and objects moving at different speeds and directions), which makes next frame prediction challenging. Each clip is sampled at 25 frames per second, and is approximately 3 seconds long.
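A minimal sketch of the border-trimmed squared error described above (ours; the helper name and frame shapes are assumptions):

```python
import numpy as np

def trimmed_mse(x_pred, x_true, trim=17):
    """Squared prediction error, ignoring a border strip of `trim` pixels,
    since content entering from outside the frame is unpredictable."""
    inner = (slice(trim, -trim), slice(trim, -trim))
    diff = x_pred[inner] - x_true[inner]
    return np.mean(diff ** 2)
```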
The set is subdivided into 60 training videos (4741 frames) and 30 test videos (2591 frames). We pre-processed the data, converting all frames to monochrome luminance values, and scaling their range to the interval \([-1,1]\). Frames are cropped to a \(256\times 256\) central region, where most of the motion tends to occur, and then spatially down-sampled to \(128\times 128\) pixels. We assume the temporal evolution of natural signals to be sufficiently and appropriately diverse for training, and do not apply any additional data augmentation procedures. We train on brief temporal segments containing 11 frames, which allows for prediction of 9 frames, processing these in batches of size 4. We train each model for one hundred epochs using the Adam optimizer (Kingma & Ba, 2015) with default parameters and a learning rate of \(3\cdot 10^{-4}\). The learning rate is halved at epochs 50, 60, 70, 80, 90, 100. We use batch normalization before every half-wave rectification, rescaling by the standard deviation of channel coefficients (but with no additive terms).

Similarly, we also trained on the larger UCF-101 dataset (Soomro et al., 2012). This dataset, initially designed for action recognition, contains about 2.5 million frames, which amounts to over 27 hours of video data. Note that, unlike the DAVIS dataset, the clips are only available in compressed video formats and may contain motion artifacts (due to inter-frame coding). We used the same pre-processing procedure, except that we reduced frames by directly cropping a \(128\times 128\) central region (without any down-sampling). We employ the same training procedure, except that we only run training for 25 epochs.

## 4 Unsupervised representation learning

### Recovery analysis

To experimentally validate our approach, we first verified that the PP model can robustly recover known symmetries in small synthetic datasets consisting of translating or rotating image patches. For these experiments, we applied encoding and decoding transforms to the entire patch (i.e., non-convolutionally). When trained on translating image patches, the PP model learned approximately sinusoidal filters, shifted in phase by \(\pi/2\) - i.e., a local Fourier transform. Similarly, when trained on rotating patches, the learned filters represented circular harmonics. We also found that PP extracts meaningful representations when multiple kinds of transformations are at play (eg. mixing both translations and rotations), and when the transformations are not perfectly translational (eg. translation with open boundary conditions). Learned filters for each of these cases are provided in Figure 5 in the appendix.

### Performance on Natural Videos

We summarize the main prediction results in Table 1. First, observe that the predictive algorithms considered in this study perform significantly better than the baseline obtained by simply copying the last frame. Second, the polar predictor (PP) performs nearly as well as the convolutional neural network (CNN) in terms of test mean squared error on DAVIS.
This demonstrates the remarkable power of the polar predictor: the PP model has far fewer parameters (18,496, versus 666,496 for the CNN; see Table 1) and uses a single non-linearity, while the CNN is composed of 20 non-linear layers.

\begin{table} \begin{tabular}{l c c c c c} \hline & \multicolumn{2}{c}{DAVIS} & \multicolumn{2}{c}{UCF-101} & \\ Algo. & train & test & train & test & \# param. \\ \hline Copy & \(0.064\) & \(0.065\) & \(0.0302\) & \(0.0286\) & \\ cMC & \(0.048\) & \(0.049\) & \(--\) & \(0.0299\) & \\ deepL & \(0.034\) & \(0.037\) & \(0.0220\) & \(0.0217\) & \(665,856\) \\ CNN & \(0.031\) & \(0.035\) & **0.0210** & \(0.0215\) & \(666,496\) \\ PP & \(0.036\) & \(0.035\) & \(0.0245\) & \(0.0229\) & \(18,496\) \\ deepPP & **0.028** & **0.032** & \(0.0216\) & **0.0210** & \(665,856\) \\ \hline \end{tabular} \end{table}

Table 1: Prediction error computed on the DAVIS and UCF-101 datasets. Values indicate average Mean Squared Error.

Finally, observe that deepPP achieves the lowest mean squared error, notably outperforming the deepL model, which uses linear extrapolation in an otherwise identical architecture. It also outperforms the CNN on the UCF-101 test dataset while remaining significantly simpler. Thus, the prediction task benefits substantially from the use of a fixed nonlinear transformation to polar coordinates.

While average performance values provide a compact summary, it is also informative to examine the distribution of prediction errors on individual frames from the test set. Figure 3 shows pairwise comparisons of the predictive algorithms for each frame in the DAVIS dataset. To make the contrast more apparent, we display the performance difference on the vertical axis. Note that while the models have been optimized to reduce mean squared error, we show root mean squared error (RMSE) in order to facilitate visual inspection of the results (the concavity of the square root spreads out small differences). We see that: (a) the polar predictor systematically outperforms causal motion compensation, especially on difficult examples; (b) the polar predictor outperforms the CNN on the bulk of easy to medium cases, but this advantage is reversed for harder examples; (c) the deep polar predictor outperforms the single layer polar predictor overall, indicating that non-linearity in the representation can help; (d) the deep polar predictor clearly outperforms the deep linear predictor, revealing the strong benefit of using a polar extrapolation mechanism over a linear one.

### Learned filters

In order to better understand these results, we visualized the learned PP filters trained on the DAVIS dataset and observed that the learned filters are selective for orientation and spatial frequency, and that they tile the frequency domain. Filters in each pair have a similar frequency preference, and are related by a 90-degree phase shift (see Figures 5(a) and 5(b) in the Appendix). This relationship is analogous to that of sines and cosines and is consistent with the structure of the angular extrapolation described in equation 2.

### Examples

Consider a set of example videos, chosen to illustrate behaviors of the methods being compared. In Figure 4, we see a wall, its shadow and their sharp boundaries against a grass background as the camera moves. Both PP and deepPP generate good results, cMC produces a sharp prediction at the expense of significant blocking artifacts, and both the CNN and deepL tend toward excessive blurring.
Here, the cMC is significantly sharper than the others, but introduces substantial artifacts. Again, the PP methods produce sharper results than either the CNN or deepL methods. A few additional informative examples are displayed in the appendix (see Figures 8, 9 and 10).

## 5 Related work

Our method is conceptually related to sparse coding with complex-valued coefficients (Cadieu & Olshausen, 2012) in that it factorizes natural videos into form and motion. But it differs in a number of important ways: (1) sparse coding focuses on representing, not predicting, the signal; (2) we do not promote sparsity of either amplitude or phase components; (3) finally, the discontinuity arising from selection of sparse subsets of coefficients seems at odds with the representation of continuous group actions, while our explicit mapping into polar coefficients aims for a smooth and continuous parameterization of the transformations that occur in natural videos.

Several other studies have aimed to learn representations that decompose signal identity and attribute (ie. a _what-where_, or _invariance-equivariance_ factorization).

Figure 3: Detailed performance comparison (in Root Mean Squared Error) of predictive algorithms. Each point corresponds to a frame in the test set. Vertical axes represent the difference in performance: for points lying below the horizontal axis, the method whose performance is plotted on the horizontal axis achieves a lower RMSE than the comparison method. Black crosses indicate average RMSE. The red point corresponds to the example in Figure 4, green to Figure 8 and blue to Figure 9.

In particular, learning linearized features from video was explored using a heuristic extrapolation mechanism (Goroshin et al., 2015). The authors developed specialized "soft max-pooling" and "soft argmax-pooling" modules and tested them on the small NORB dataset. A related approach aimed at finding video representations which decompose content and pose in order to enable prediction (Hsieh et al., 2018). This work explicitly identifies spatial components that are easier to predict in the moving MNIST and bouncing balls datasets. More sophisticated architectures have been developed to tackle the challenge of natural video prediction. In particular, a recurrent instantiation of the predictive coding theory (Rao and Ballard, 1999) introduced a stacked convolutional LSTM architecture (Lotter et al., 2017). In contrast, our framework scales to prediction of natural videos while remaining simple: we rely on principles of signal processing and representation theory to employ a polar non-linearity (and we describe an effective and stable implementation), but we do not explicitly model the stochastic nature of the video prediction problem.

Our method is also related to work that adopts a Lie group formalism in representation learning. Since the seminal work that proposed learning Lie group generators from dynamic signals (Rao and Ruderman, 1998), the polar parametrization was explored in (Cohen and Welling, 2014) to identify irreducible representations in a synthetic dataset. The continuous group formalism has also been combined with sparse coding (Chen et al., 2018; Chau et al., 2020) to model natural images as points on a latent manifold. More recently, bispectral neural networks (Sanborn et al., 2022) have been shown to learn image representations invariant to a given global transformation (in particular cyclic translation and rotation of MNIST digits).
In contrast to the coding approach, our formulation relies on a prediction objective to jointly discover and exploit the symmetries implicit in data. In order to scale to natural video data, where multiple unknown and noisy transformations are at play, we developed a convolutional approach that adapts to the local structure of transformations. This formulation can represent a very large family of local symmetries (including diffeomorphisms and non-smooth fields of local translations). This generality comes at the cost of precisely identifying what groups of transformations are captured by the learned representation.

Finally, in the fluid mechanics literature, the Koopman operator approach (Mezic, 2005) has been used to lift a system from its original state-space to a higher dimensional representation space where its dynamics can be linearized - a dynamical analog of the well known kernel trick. This formalism has spurred a line of work in machine learning that relies on autoencoders to learn coordinate systems that approximately linearize dynamics (Lusch et al., 2018; Azencot et al., 2020). In this perspective, our work can also be interpreted as learning the spectral properties of an abstract Koopman operator operating on video data, specifically estimating its complex eigenvectors. Our approach makes an inertial assumption and does not require an auxiliary network to compute velocities. Moreover, it relies on a convolutional approach and is able to predict raw videos (which tend to contain richer structure than typical fluid flows).

## 6 Discussion

We have presented a simple self-supervised representation-learning framework based on next-frame prediction. It unveils the temporal structure of natural videos using local polar coordinates. Our approach jointly discovers and exploits the local symmetries present in the temporal evolution of image sequences, in particular the spatio-temporal redundancies due to local deformation of image content.

Figure 4: A typical example image sequence from the DAVIS test set. The first three frames on the top row display the unprocessed images, and the last five frames show the respective prediction for each method (with their shorthand above). The bottom row displays error maps computed as the difference between the target image \(x(t+1)\) and each predicted next frame in the corresponding position in the first row. Images, predictions and error maps are all shown on the same scale.

We assumed that spatial processing and temporal extrapolation can be partitioned into i) learned parameterized mappings: one that extracts pairs of local features from individual frames, and one that generates a frame from the coefficients; and ii) a fixed angular extrapolation mechanism that advances coefficients and embodies an inertial hypothesis (ie. evolving content will continue to evolve in the same way). Our empirical results demonstrate that these assumptions, far from being too limiting, correspond well to the structure of natural videos and provide a natural representation thereof. Specifically, we used the polar coordinate transformation as a bivariate non-linear activation function acting on pairs of coefficients in the representation. Predictions in this representation were computed by phase advancement, which was implemented implicitly (Eq. 2). Compared to linear extrapolation, angular extrapolation achieved higher prediction accuracy on natural video.
Using terminology from group theory, our polar models factorize signals into an invariant part, which is stable in time, and an equivariant part, which evolves linearly. This choice of prediction mechanism, motivated by principles of signal processing and harmonic analysis, acts as a structural prior. Although the conventional deep convolutional network (CNN) considered here could in principle have discovered this solution, it failed to do so (within the constraints of our architecture). The polar predictor, on the other hand, is well-matched to the task, and achieves a good solution using only a fraction of the number of parameters. It is optimized on a mean squared error objective, without any additional regularization, which facilitates interpretability. This exemplifies a fundamental theme in computational vision and machine learning: when possible, let the representation do the analysis.

Our approach to prediction has the advantage of being motion-informed while not relying on explicit motion estimation. Because it is not constrained to assigning a single motion vector at every location and instead represents a distribution of phases, this method bypasses known difficulties of motion estimation in handling non-translational motions, and outperforms a conventional causal motion compensated algorithm. In the era of GPU computing, it admits a very fast implementation that has potential for applications in video compression. Moreover, the polar predictor takes the form of a predictive auto-encoder that associates a latent representation vector to each frame. This representation may prove useful for other tasks like object categorization, segmentation, or estimation of heading direction for a moving observer.

Several natural extensions of the work presented here can be further explored: (i) treating the angular extrapolation prediction mechanism as a more general building block that is cascaded in a multi-layer architecture; (ii) optimizing representation layers deeper in the hierarchy to make predictions at longer timescales; (iii) measuring prediction error directly in the representation domain, while avoiding representation collapse - such a local objective function would allow a potential connection with biological neural architectures and with human visual perception; (iv) examining and interpreting what is learned in the deepPP model, especially around occlusion boundaries (occlusion is not invertible, and therefore not a group action).
2310.04265
Clique number of tournaments
We introduce the notion of clique number of a tournament and investigate its relation with the dichromatic number. In particular, it permits defining $\overrightarrow{\chi}$-bounded classes of tournaments, which is the paper's main topic.
Pierre Aboulker, Guillaume Aubian, Pierre Charbit, Raul Lopes
2023-10-06T14:11:26Z
http://arxiv.org/abs/2310.04265v1
# Clique number of tournaments

###### Abstract

We introduce the notion of clique number of a tournament and investigate its relation with the dichromatic number. In particular, it permits defining \(\overrightarrow{\chi}\)-bounded classes of tournaments, which is the paper's main topic.

###### Contents

* 1 Introduction
* 2 Definitions and Notations
* 3 \(\overrightarrow{\chi}\)-bounded classes of tournaments
  * 3.1 First properties about \(\overrightarrow{\chi}\) and \(\overrightarrow{\omega}\)
  * 3.2 Substitution and a class of tournaments with unbounded \(\overrightarrow{\omega}\)
  * 3.3 Are tournaments with bounded twin-width \(\overrightarrow{\chi}\)-bounded?
* 4 Classes of tournaments defined by forbidding a single tournament
  * 4.1 Gentlemen are the same as heroes
  * 4.2 Gyarfas-Sumner Conjecture for tournaments
  * 4.3 Relation with the Erdos-Hajnal property and the BIG-BIG conjecture
* 5 Links with domination number
* 6 Conclusion and future direction

## 1 Introduction

In this paper, we only consider _graphs_ or _directed graphs_ (_digraphs_ in short) with no loops, no parallel edges or arcs nor anti-parallel arcs (in particular our digraphs contain no cycle of length \(2\)). Given an undirected graph \(G\), we denote by \(\omega(G)\) the size of a maximum clique of \(G\) and by \(\chi(G)\) its chromatic number. Given a digraph \(G\), we denote by \(\overrightarrow{\chi}(G)\) its _dichromatic number_, that is the minimum integer \(k\) such that the set of vertices of \(G\) can be partitioned into \(k\) acyclic subdigraphs. Relations between the chromatic number and the clique number of a graph have been studied for decades in structural graph theory. The goal of this paper is to introduce a notion of clique number for digraphs that would be a lower bound for the dichromatic number, as in the undirected case, and to start investigating its relation with the dichromatic number.

Given a digraph \(D\), and a total order \(\prec\) on \(V(D)\), we denote by \(D^{\prec}\) the (undirected) graph with vertex set \(V(D)\) and edge \(uv\) if \(u\prec v\) and \(vu\in A(D)\). We call it the _backedge graph_ of \(D\) with respect to \(\prec\). It is straightforward that every independent set of \(D^{\prec}\) induces an acyclic digraph. As a consequence, we have that \(\overrightarrow{\chi}(D)\leq\chi(D^{\prec})\). Conversely, by taking an ordering built from a \(\overrightarrow{\chi}(D)\)-dicolouring, that is, taking colour classes one after the other, and ordering each colour class in a topological ordering, we get that:

\[\overrightarrow{\chi}(D)=\min\,\left\{\chi(D^{\prec}):\prec\ \text{is a total order of }V(D)\right\}\]

This gives an alternative definition for the dichromatic number, which naturally leads to the following definition of the _clique number of a digraph_1

Footnote 1: This definition was introduced during a discussion between the authors and Stephan Thomasse in Sete during the fifth ANR Digraphs meeting.

\[\overrightarrow{\omega}(D)=\min\,\left\{\omega(D^{\prec}):\prec\ \text{is a total order on }V(D)\right\}\]

We point out that although this is the first time (up to our knowledge) that this definition formally appears, the idea of looking at the clique number of ordered digraphs is not new, and in the nice survey [22], Nguyen, Scott and Seymour study (amongst other things) the clique number of backedge graphs of tournaments, so the idea was clearly in their minds. Obviously, since \(\omega(G)\leq\chi(G)\) for any graph \(G\), we also have \(\overrightarrow{\omega}(D)\leq\overrightarrow{\chi}(D)\) for any digraph \(D\).
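Both \(\overrightarrow{\omega}\) and the backedge construction are easy to explore computationally for tiny digraphs, by brute force over all \(|V|!\) orderings. The following Python sketch is our own illustration (it assumes networkx for the clique computation, and is hopeless beyond a handful of vertices):

```python
import itertools
import networkx as nx

def clique_number_of_digraph(arcs, vertices):
    """Brute-force the clique number of a digraph: minimize, over all total
    orders, the clique number of the backedge graph."""
    arcs = set(arcs)
    best = len(vertices)
    for order in itertools.permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        # Backedge graph: edge {u, v} whenever u precedes v but the arc goes v -> u.
        backedge = nx.Graph()
        backedge.add_nodes_from(vertices)
        backedge.add_edges_from((u, v) for (v, u) in arcs if pos[u] < pos[v])
        best = min(best, max(len(c) for c in nx.find_cliques(backedge)))
    return best

# The directed triangle has clique number 2: every ordering of its three
# vertices leaves exactly one backward arc.
c3 = [(0, 1), (1, 2), (2, 0)]
print(clique_number_of_digraph(c3, [0, 1, 2]))  # -> 2
```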
In the context of graphs, since there are families of graphs with clique number \(2\) but arbitrarily large chromatic number, there has been in the past decades a very important amount of work dedicated to the study of so-called \(\chi\)-bounded classes of graphs, that is, classes for which \(\chi\) is bounded above by a function of \(\omega\). See [24] for a survey on \(\chi\)-boundedness. Analogously, we say that a class of digraphs \(\mathcal{C}\) is _\(\overrightarrow{\chi}\)-bounded_ if there exists a function \(f\) such that for every digraph \(D\in\mathcal{C}\), \(\overrightarrow{\chi}(D)\leq f(\overrightarrow{\omega}(D))\). The object of the paper is to give first results and conjectures about the clique number of tournaments and \(\overrightarrow{\chi}\)-bounded classes of tournaments. We briefly discuss the clique number of general digraphs in Section 6.

Note that another definition of \(\overrightarrow{\chi}\)-boundedness is given in [1], where the clique number of a digraph \(D\) is defined as the maximum size of a transitive tournament contained in \(D\). (More precisely, it is defined as the size of a maximum clique of the underlying graph of \(D\), but since every orientation of the complete graph on \(2^{k}\) vertices contains \(TT_{k}\), if a class of oriented graphs is \(\overrightarrow{\chi}\)-bounded for one notion, it is also \(\overrightarrow{\chi}\)-bounded for the other.) Such a definition does not give a lower bound on the dichromatic number, which is the reason why we were looking for another definition.

The next section is devoted to notations and definitions used throughout the paper. Section 3 establishes some first properties of the clique number, and explores the connections between the clique number of a tournament and the clique number of its backedge graphs. We then endeavour to extend standard results on \(\chi\)-boundedness to tournaments. In subsection 3.2, we describe a simple family having arbitrarily large clique number and prove that \(\overrightarrow{\chi}\)-boundedness of tournaments is preserved by substitution (and a similar result for some classes of digraphs). We then discuss in subsection 3.3 whether, as in the undirected case, classes of tournaments of bounded twin-width are \(\overrightarrow{\chi}\)-bounded.

A fruitful direction when studying tournaments is to examine the class of tournaments not containing a given tournament \(T\), and to decide which \(T\) will ensure that this class has a given property. For example, choices of \(T\) guaranteeing a small dichromatic number [4] (such \(T\) are called heroes), a small domination number [10] or a small twin-width [15] have been studied before. Section 4 is devoted to this for the property of being \(\overrightarrow{\chi}\)-bounded. In Section 4.1 we study gentlemen, which are tournaments such that the clique number of tournaments not containing them is bounded, and prove that gentlemen are the same as heroes. In Subsection 4.2, we propose an analogue of the Gyarfas-Sumner Conjecture for tournaments, proving multiple results supporting this conjecture. We then link \(\overrightarrow{\chi}\)-binding tournaments to the famous Erdos-Hajnal property and to the \(BIG\Rightarrow BIG\) conjecture in Subsection 4.3. Eventually, in Section 5, we discuss local to global results for the clique number, trying to adapt and generalize results of Harutyunyan, Le, Thomasse and Wu in [19] about dichromatic number and domination number.
## 2 Definitions and Notations

Definitions and notations of this paper that are not explained in this section follow classical textbooks such as [3], [5] or [13]. Given two disjoint sets of vertices \(X,Y\) of a digraph \(D\), we write \(X\Rightarrow Y\) to say that for every \(x\in X\) and every \(y\in Y\), \(xy\in A(D)\), and we write \(X\to Y\) to say that every arc with one end in \(X\) and the other one in \(Y\) is oriented from \(X\) to \(Y\) (but some vertices of \(X\) might be non-adjacent to some vertices of \(Y\)). When \(X=\{x\}\) we write \(x\Rightarrow Y\) and \(x\to Y\). We also use the symbol \(\Rightarrow\) to denote a composition operation on digraphs: for two digraphs \(D_{1}\) and \(D_{2}\), \(D_{1}\Rightarrow D_{2}\) is the digraph obtained from the disjoint union of \(D_{1}\) and \(D_{2}\) by adding all arcs from \(V(D_{1})\) to \(V(D_{2})\).

A _dominating set_ of a digraph \(D\) is a set of vertices \(X\) such that \(N^{+}[X]=V(D)\). The _domination number_ \(\operatorname{dom}(D)\) of \(D\) is the size of a smallest dominating set of \(D\). A _tournament_ is an orientation of a complete graph. A _transitive tournament_ is an acyclic tournament and we denote by \(TT_{n}\) the unique acyclic tournament on \(n\) vertices. Given three tournaments \(T_{1},T_{2},T_{3}\), we denote by \(\Delta(T_{1},T_{2},T_{3})\) the tournament obtained from disjoint copies of \(T_{1},T_{2},T_{3}\) by adding arcs in such a way that \(T_{1}\Rightarrow T_{2}\), \(T_{2}\Rightarrow T_{3}\) and \(T_{3}\Rightarrow T_{1}\). If one or more of the tournaments \(T_{i}\) is a transitive tournament \(TT_{k}\), we simplify the notation by using its size \(k\) instead of writing \(TT_{k}\) in the \(\Delta\) construction: for example, \(\Delta(1,k,T)\) corresponds to \(\Delta(TT_{1},TT_{k},T)\), and \(\Delta(1,1,1)\) is simply the directed triangle, which we also denote by \(C_{3}\).

A _class_ of graphs (resp. digraphs) is a collection of graphs (resp. digraphs) that is closed under induced subgraphs, meaning that if \(G\) belongs to the collection, then any induced subgraph of \(G\) also belongs to the collection. Given a collection \(\mathcal{C}\), the _hereditary closure_ of \(\mathcal{C}\) is the class of all induced subgraphs of elements of \(\mathcal{C}\). Given a set of digraphs \(\mathcal{H}\), we say that a digraph \(G\) is \(\mathcal{H}\)_-free_ if it contains no member of \(\mathcal{H}\) as an induced subgraph, and we denote by \(Forb(\mathcal{H})\) the class of \(\mathcal{H}\)-free digraphs. We write \(Forb(F_{1},\ldots,F_{k})\) instead of \(Forb(\{F_{1},\ldots,F_{k}\})\) for simplicity.

Because of the definition of \(\overrightarrow{\omega}\), this paper is very often concerned with total orders on the vertices of a graph or a tournament. For a graph or tournament \(T\), we denote by \(\mathfrak{S}(T)\) the set of total orderings of \(V(T)\). Given a graph or tournament with a total ordering \(\prec\) of its vertex set \(V\) and two disjoint subsets \(A,B\) of \(V\), we write \(A\prec B\) to say that for every \(a\in A\) and every \(b\in B\), \(a\prec b\). For a digraph or tournament with a total ordering \(\prec\) of its vertex set \(V\), an arc \(uv\) such that \(u\prec v\) is called _forward_, and otherwise it is called _backward_. Recall that given a tournament \(T\), and a total order \(\prec\) on \(V(T)\), the backedge graph \(T^{\prec}\) of \(T\) with respect to \(\prec\) is the (undirected) graph with vertex set \(V(T)\) and edges \(uv\) if \(u\prec v\) and \(vu\in A(T)\) (i.e. \(vu\) is backward).
An ordering \(\prec\) such that \(\omega(T^{\prec})=\overrightarrow{\omega}(T)\) (resp. \(\chi(T^{\prec})=\overrightarrow{\chi}(T)\)) is called an _\(\overrightarrow{\omega}\)-ordering_ of \(T\) (resp. a _\(\overrightarrow{\chi}\)-ordering_). We denote by \(\mathfrak{S}_{\overrightarrow{\omega}}(T)\) the set of \(\overrightarrow{\omega}\)-orderings and by \(\mathfrak{S}_{\overrightarrow{\chi}}(T)\) the set of \(\overrightarrow{\chi}\)-orderings.

## 3 \(\overrightarrow{\chi}\)-bounded classes of tournaments

### First properties about \(\overrightarrow{\chi}\) and \(\overrightarrow{\omega}\)

We begin with an easy fact relating the clique number of a digraph and the clique number of its strong components.

**Property 3.1**: The clique number of a digraph is equal to the maximum clique number of its strong components.

**Proof:** Assume a digraph \(D\) has two strong components \(A\) and \(B\) such that \(A\to B\), and let us prove that \(\overrightarrow{\omega}(D)=\max(\overrightarrow{\omega}(A),\overrightarrow{\omega}(B))\). It is clear that \(\overrightarrow{\omega}(D)\geq\max(\overrightarrow{\omega}(A),\overrightarrow{\omega}(B))\). Let \(\prec_{A}\) (resp. \(\prec_{B}\)) be an \(\overrightarrow{\omega}\)-ordering of \(A\) (resp. of \(B\)). Then the ordering \(\prec\) of \(D\) obtained from \(\prec_{A}\) and \(\prec_{B}\) by setting, for every \(a\in V(A)\) and \(b\in V(B)\), \(a\prec b\) (so that all arcs from \(A\) to \(B\) are forward) satisfies \(\omega(D^{\prec})=\max(\overrightarrow{\omega}(A),\overrightarrow{\omega}(B))\). \(\blacksquare\)

As observed in the introduction, the definition of \(\overrightarrow{\omega}\) immediately implies that \(\overrightarrow{\omega}(T)\leq\overrightarrow{\chi}(T)\) for any tournament \(T\). The following proves a relation with the domination number. In Section 5 we will present some other results linking \(\operatorname{dom}\), \(\overrightarrow{\chi}\) and \(\overrightarrow{\omega}\).

**Property 3.2**: For every tournament \(T\), \(\operatorname{dom}(T)\leq\overrightarrow{\omega}(T)\leq\overrightarrow{\chi}(T)\).

**Proof:** We already know that the second inequality holds. Let \(T\) be a tournament, set \(V(T)=\{v_{1},\ldots,v_{n}\}\) and assume that \(v_{1}\prec v_{2}\prec\cdots\prec v_{n}\) is an \(\overrightarrow{\omega}\)-ordering of \(T\). Greedily construct a dominating set \(X\) of \(T\) as follows: \(v_{1}\in X\), and if \(v_{i}\) is the last vertex added to \(X\), add \(v_{j}\) to \(X\) where \(j\) is minimum such that there is an arc from \(v_{j}\) to every vertex of \(X\). Then \(X\) is a dominating set of \(T\), and it induces a clique in the backedge graph of \(T\) defined by this \(\overrightarrow{\omega}\)-ordering. So \(\operatorname{dom}(T)\leq\overrightarrow{\omega}(T)\). \(\blacksquare\)

In [22], the following fundamental inequality is proved (the second inequality is trivial by definition). We give a proof anyway, to make the paper self-contained and to familiarize the reader with the notations. Moreover, our proof is presented slightly differently than the one in [22].

**Theorem 3.3** ([22]): _For any tournament \(T\) and ordering \(\prec\) of \(V(T)\),_

\[\frac{\chi(T^{\prec})}{\omega(T^{\prec})}\leq\overrightarrow{\chi}(T)\leq\chi(T^{\prec})\]

**Proof:** Let \(T\) be a tournament and \(\prec\) an ordering of \(V(T)\). Let \(X\subseteq V(T)\) be such that \(T[X]\) is a transitive tournament. To prove that \(\chi(T^{\prec})\leq\omega(T^{\prec})\,\overrightarrow{\chi}(T)\), it suffices to prove that \(\chi(T^{\prec}[X])\leq\omega(T^{\prec})\) (indeed, the colour classes of an optimal dicolouring partition \(V(T)\) into \(\overrightarrow{\chi}(T)\) sets, each inducing a transitive tournament, and each can then be coloured with its own palette of \(\omega(T^{\prec})\) colours).
Let \(\varphi:X\to\mathbb{N}\) be such that \(\varphi(x)\) is the number of vertices of a longest \(\prec\)-decreasing path in \(T^{\prec}[X]\) finishing in \(x\). We claim that \(\varphi\) is an \(\omega(T^{\prec})\)-colouring of \(T^{\prec}[X]\). Let \(u,v\in X\) with \(u\prec v\) and \(uv\in E(T^{\prec})\). Then \(\varphi(u)\geq\varphi(v)+1\), so \(\varphi\) is a proper colouring of \(T^{\prec}[X]\). If \(x_{1},x_{2},x_{3}\in X\) with \(x_{3}\prec x_{2}\prec x_{1}\) and \(x_{1}x_{2},x_{2}x_{3}\in E(T^{\prec})\), then \(x_{1}x_{2},x_{2}x_{3}\in A(T)\) and thus, since \(T[X]\) is a transitive tournament, \(x_{1}x_{3}\in A(T)\), i.e. \(x_{1}x_{3}\in E(T^{\prec})\). This implies that the vertices of a \(\prec\)-decreasing path in \(T^{\prec}[X]\) induce a clique of \(T^{\prec}\). So for every vertex \(x\in X\), \(\varphi(x)\leq\omega(T^{\prec})\). \(\blacksquare\)

Observe that for an arbitrary order \(\prec\), \(\omega(T^{\prec})\) and \(\chi(T^{\prec})\) can be arbitrarily larger than \(\overrightarrow{\omega}(T)\) or \(\overrightarrow{\chi}(T)\). For example, there is an ordering \(\prec\) of \(TT_{n}\) such that \(\omega(TT_{n}^{\prec})=\chi(TT_{n}^{\prec})=n\), while \(\overrightarrow{\omega}(TT_{n})=\overrightarrow{\chi}(TT_{n})=1\). However, an \(\overrightarrow{\omega}\)-ordering always provides a good approximation of \(\overrightarrow{\chi}\) in the following sense:

**Property 3.4**: _For every tournament \(T\) and every \(\overrightarrow{\omega}\)-ordering \(\prec\) we have:_

\[\overrightarrow{\chi}(T)\leq\chi(T^{\prec})\leq\overrightarrow{\chi}(T)^{2}\]

**Proof:** Let \(T\) be a tournament and \(\prec\) an \(\overrightarrow{\omega}\)-ordering of \(T\). By Theorem 3.3, we have that \(\chi(T^{\prec})\leq\omega(T^{\prec})\,\overrightarrow{\chi}(T)\). But since \(\omega(T^{\prec})=\overrightarrow{\omega}(T)\) and \(\overrightarrow{\omega}(T)\leq\overrightarrow{\chi}(T)\), we get that:

\[\overrightarrow{\chi}(T)\leq\chi(T^{\prec})\leq\overrightarrow{\chi}(T)^{2}\]

\(\blacksquare\)

It is natural to ask if the following stronger form of the above property holds (we have no reason to believe it does, but we could not find a counter-example).

**Question 3.5**.: _Is it true that for every tournament \(T\), there exists \(\prec\in\mathfrak{S}(T)\) such that \(\prec\) is both an \(\overrightarrow{\omega}\)-ordering and a \(\overrightarrow{\chi}\)-ordering?_

Given a class of tournaments \(\mathcal{T}\), let us denote by \(\mathcal{T}^{\prec}\) the class of all backedge graphs of tournaments in \(\mathcal{T}\):

\[\mathcal{T}^{\prec}=\{T^{\prec}:T\in\mathcal{T},\prec\in\mathfrak{S}(T)\}\]

A natural question is whether the fact that \(\mathcal{T}\) is \(\overrightarrow{\chi}\)-bounded has to do with the fact that \(\mathcal{T}^{\prec}\) is \(\chi\)-bounded in the usual sense for undirected graphs. And one can ask the same question for the more restricted class of "optimal" backedge graphs \(\mathcal{T}^{\prec_{\overrightarrow{\omega}}}\):

\[\mathcal{T}^{\prec_{\overrightarrow{\omega}}}=\{T^{\prec}:T\in\mathcal{T},\prec\in\mathfrak{S}_{\overrightarrow{\omega}}(T)\}\]

The following theorem answers these questions.

**Theorem 3.6**: _Let \(\mathcal{T}\) be a class of tournaments. The following properties are equivalent:_

* _(i)_ \(\mathcal{T}\) _is_ \(\overrightarrow{\chi}\)_-bounded._
* _(ii)_ \(\mathcal{T}^{\prec}\) _is_ \(\chi\)_-bounded._
* _(iii)_ \(\mathcal{T}^{\prec_{\overrightarrow{\omega}}}\) _is_ \(\chi\)_-bounded._

**Proof:** \((i)\Rightarrow(ii)\): let \(f\) be a function such that any tournament \(T\in\mathcal{T}\) satisfies \(\overrightarrow{\chi}(T)\leq f(\overrightarrow{\omega}(T))\).
We may assume \(f\) to be non-decreasing. Now, for any tournament \(T\in\mathcal{T}\) and any \(\prec\in\mathfrak{S}(T)\):

\[\chi(T^{\prec})\leq\omega(T^{\prec})\,\overrightarrow{\chi}(T)\leq\omega(T^{\prec})f(\overrightarrow{\omega}(T))\leq\omega(T^{\prec})f(\omega(T^{\prec}))\]

where the first inequality holds by Theorem 3.3, the second because \(\mathcal{T}\) is \(\overrightarrow{\chi}\)-bounded by \(f\), and the third because \(\overrightarrow{\omega}(T)\leq\omega(T^{\prec})\) by definition of \(\overrightarrow{\omega}\). Hence \(\mathcal{T}^{\prec}\) is \(\chi\)-bounded by the function \(w\mapsto wf(w)\).

\((ii)\Rightarrow(iii)\): this is immediate since \(\mathcal{T}^{\prec_{\overrightarrow{\omega}}}\subseteq\mathcal{T}^{\prec}\).

\((iii)\Rightarrow(i)\): let \(h\) be a \(\chi\)-binding function for \(\mathcal{T}^{\prec_{\overrightarrow{\omega}}}\). For any tournament \(T\in\mathcal{T}\) and any \(\overrightarrow{\omega}\)-ordering \(\prec\) of \(T\), Theorem 3.3 gives \(\overrightarrow{\chi}(T)\leq\chi(T^{\prec})\leq h(\omega(T^{\prec}))=h(\overrightarrow{\omega}(T))\), so \(\mathcal{T}\) is \(\overrightarrow{\chi}\)-bounded by \(h\). \(\blacksquare\)

### Substitution and a class of tournaments with unbounded \(\overrightarrow{\omega}\)

This section is built around a family of tournaments \(\widetilde{S}_{n}\) with arbitrarily large clique number, as witnessed by the following lemma.

**Lemma 3.8**: _For every \(n\geq 1\), \(\overrightarrow{\omega}(\widetilde{S}_{n})\geq n\)._
The next result, Theorem 3.9, implies that \(\overrightarrow{\chi}(T)\leq 9^{\overrightarrow{\omega}(T)}\) for every tournament \(T\) in the hereditary closure of \(\{\widetilde{S}_{n},n\in\mathbb{N}\}\). This is due to the fact that this class can also be defined through the operation of _substitution_, defined below.

Given two digraphs \(D_{1}\) and \(D_{2}\) with disjoint vertex sets, a vertex \(u\in V(D_{1})\), and a digraph \(D\), we say that \(D\) is obtained by _substituting_ \(D_{2}\) for \(u\) in \(D_{1}\) provided that the following holds:

* \(V(D)=(V(D_{1})\setminus u)\cup V(D_{2})\),
* \(D[V(D_{1})\setminus u]=D_{1}\setminus u\),
* \(D[V(D_{2})]=D_{2}\),
* for all \(v\in V(D_{1})\setminus u\): if \(vu\in A(D_{1})\) (resp. \(uv\in A(D_{1})\), resp. \(u\) and \(v\) are non-adjacent in \(D_{1}\)), then \(v\Rightarrow V(D_{2})\) (resp. \(V(D_{2})\Rightarrow v\), resp. there are no arcs between \(v\) and \(V(D_{2})\)) in \(D\).

Given a class of digraphs \(\mathcal{C}\), we define \(\mathcal{C}^{subst}\) to be the closure of \(\mathcal{C}\) under substitution. It is a well-known, easy-to-prove fact that if a class of (undirected) graphs is \(\chi\)-bounded, so is \(\mathcal{C}^{subst}\). The first item of the following theorem proves the analogue for classes of tournaments. The second item applies to classes of digraphs (instead of tournaments), but only in the case of families of digraphs whose underlying graphs have bounded chromatic number. In that case we get a better \(\overrightarrow{\chi}\)-binding function.

**Theorem 3.9**: Let \(\mathcal{T}\) be a class of digraphs.

1. If \(\mathcal{T}\) is a class of tournaments \(\overrightarrow{\chi}\)-bounded by a function \(f\), then \(\mathcal{T}^{subst}\) is \(\overrightarrow{\chi}\)-bounded by the function \(g(w)=(3wf(w))^{w}\).
2. If there exists \(K\) such that for every digraph \(D\) in \(\mathcal{T}\) the underlying graph of \(D\) has chromatic number at most \(K\), then \(\mathcal{T}^{subst}\) is \(\overrightarrow{\chi}\)-bounded by the function \(g(w)=(3K)^{w}\).

**Proof:** We will prove the two statements simultaneously, as the difference only occurs at one particular point of the proof. We want to prove that for every \(T\in\mathcal{T}^{subst}\), \(\overrightarrow{\chi}(T)\leq g(\overrightarrow{\omega}(T))\).
Assume by contradiction that the result does not hold and let \(T\in\mathcal{T}^{subst}\) be a counter-example minimizing \(\overrightarrow{\omega}(T)+|V(T)|\). We refer to this by saying "by minimality of \(T\)". Define \(w:=\overrightarrow{\omega}(T)\) and observe that \(w\geq 2\): if \(\overrightarrow{\omega}(T)=1\), then \(T\) is acyclic, so \(\overrightarrow{\chi}(T)=1\) and the result holds. Since \(T\in\mathcal{T}^{subst}\), \(T\) can be constructed from a digraph \(X\in\mathcal{T}\), with \(|V(X)|\geq 2\), by substituting each vertex \(v\in V(X)\) by a digraph \(T_{v}\in\mathcal{T}^{subst}\). For any set of vertices \(Y\subseteq V(X)\), we define \(T_{Y}=T[\bigcup_{y\in Y}V(T_{y})]\). We call a vertex \(v\in X\) _big_ if

\[\overrightarrow{\chi}(T_{v})\geq 2g(w-1)+3\]

and _small_ otherwise. We call \(B\) the set of big vertices and \(S\) the set of small vertices (so \(V(X)=B\cup S\)). Let \(\prec\) be an \(\overrightarrow{\omega}\)-ordering of \(T\). We are going to construct, in two steps, a colouring of \(T\) using at most \(g(w)\) colours. First, we colour the vertices of \(T_{S}\) with at most \(g(w)\) colours in such a way that for every small vertex \(s\) the digraph \(T_{s}\) is properly dicoloured, and with the additional property that if \(xy\) is a backward arc of \(T\) such that there are two distinct small vertices \(s\) and \(s^{\prime}\) with \(x\in T_{s}\) and \(y\in T_{s^{\prime}}\), then \(x\) and \(y\) receive distinct colours. Secondly, we use induction to colour the vertices of \(T_{B}\) and argue that no directed cycle is monochromatic.

For the first step, we distinguish between the two cases of the theorem's statement. For both cases, and for each \(s\in S\), we set \(\phi_{s}\) to be a dicolouring of \(T_{s}\) using at most \(2g(w-1)+2\) colours (such a dicolouring exists by definition of \(S\)).

* In case \(1\), we observe that since \(X\in\mathcal{T}\) and \(\overrightarrow{\omega}(X)\leq\overrightarrow{\omega}(T)=w\), we have \(\overrightarrow{\chi}(X)\leq f(w)\) and thus there exists a colouring \(\phi\) of \(X[S]\) using at most \(f(w)\) colours. We colour the vertices of \(T_{S}\) by a Cartesian product of colourings: for each \(s\in S\), each vertex \(x\in T_{s}\) receives the colour \((\phi(s),\phi_{s}(x))\). It is clear that this yields a dicolouring of \(T_{S}\) using at most \(f(w)(2g(w-1)+2)\) distinct colours. Now, by Theorem 3.3, we have:

\[\chi(T_{S}^{\prec})\leq\omega(T_{S}^{\prec})\,\overrightarrow{\chi}(T_{S})\leq wf(w)(2g(w-1)+2)\leq 3wf(w)g(w-1)\leq g(w)\]

A colouring of \(T_{S}^{\prec}\) with at most \(g(w)\) colours gives a dicolouring of \(T_{S}\) such that no backward arc is monochromatic, which implies the property described in the previous paragraph.

* In case \(2\), we apply a similar approach. Define \(H\) to be the undirected graph with vertex set \(S\) and with an edge \(ss^{\prime}\) if \(ss^{\prime}\) is an arc of \(X\) and there exist \(x\in T_{s}\) and \(y\in T_{s^{\prime}}\) with \(y\prec x\). Observe that \(H\) is a subgraph of the underlying graph of \(X[S]\), so by assumption there exists a colouring \(\phi\) of \(H\) using at most \(K\) colours. We colour the vertices of \(T_{S}\) again by a Cartesian product of colourings: each vertex \(x\in T_{s}\) receives the colour \((\phi(s),\phi_{s}(x))\).
This colouring uses at most \(K(2g(w-1)+2)\leq g(w)\) colours and satisfies the desired property: no \(T_{s}\) contains a monochromatic directed cycle (because of \(\phi_{s}\)), and if \(xy\) is a backward arc of \(T\) such that there are two distinct small vertices \(s\) and \(s^{\prime}\) with \(x\in T_{s}\) and \(y\in T_{s^{\prime}}\), then \(ss^{\prime}\in E(H)\) and thus \(\phi(s)\neq\phi(s^{\prime})\), which implies that \(x\) and \(y\) receive distinct colours as desired.

For the second step, we first observe that for each big vertex \(b\in B\), the digraph \(T_{b}\) has strictly fewer vertices than \(T\) (because \(X\) has at least two vertices), so by minimality of \(T\) we have \(\overrightarrow{\chi}(T_{b})\leq g(\overrightarrow{\omega}(T_{b}))\leq g(w)\). Moreover, since \(b\) is big, \(\overrightarrow{\chi}(T_{b})>g(w-1)\), so by minimality of \(T\) we also get \(\overrightarrow{\omega}(T_{b})=w\). For each \(b\in B\), and each \(x\in T_{b}\), we define the two following digraphs:

\[T_{b}[\prec x]=T[\{u\in V(T_{b}):u\prec x\}]\text{ and }T_{b}[x\prec]=T[\{u\in V(T_{b}):x\prec u\}]\]

and, since \(\overrightarrow{\chi}(T_{b})\geq 2g(w-1)+3\), there is a vertex \(m_{b}\in V(T_{b})\) such that

\[\overrightarrow{\chi}(T_{b}[\prec m_{b}])\geq g(w-1)+1\text{ and }\overrightarrow{\chi}(T_{b}[m_{b}\prec])\geq g(w-1)+1\]

Hence, by minimality of \(T\), for every \(b\in B\):

\[\overrightarrow{\omega}(T_{b}[\prec m_{b}])\geq w\text{ and }\overrightarrow{\omega}(T_{b}[m_{b}\prec])\geq w\]

Let \(b\in B\). We claim that the inequalities above imply that if \(m_{b}\) is incident to a backward arc, then the other extremity of the arc is in \(T_{b}\). There are two symmetric cases (depending on whether \(m_{b}\) is the tail or the head of the backward arc), so let us assume by contradiction that \(xm_{b}\) is a backward arc with \(x\in T_{s}\) for some \(s\neq b\). Then, because of the substitution, \(xm_{b}\in A(T)\) implies that \(xy\in A(T)\) for any \(y\in T_{b}\). In particular, \(xy\) is a backward arc for any \(y\in T_{b}[\prec m_{b}]\). Since \(\overrightarrow{\omega}(T_{b}[\prec m_{b}])\geq w\), a clique of size \(w\) in \(T_{b}[\prec m_{b}]^{\prec}\) together with \(x\) forms a clique of size \(w+1\) in \(T^{\prec}\), a contradiction (recall that \(\prec\) is an \(\overrightarrow{\omega}\)-ordering, so \(\omega(T^{\prec})=w\)). The argument is exactly the same if \(m_{b}x\) is an arc with \(x\prec m_{b}\), because \(\overrightarrow{\omega}(T_{b}[m_{b}\prec])\) is also at least \(w\).

We are now ready to conclude. Consider the colouring of \(V(T_{S})\) obtained in the first step of the proof and extend it to \(V(T)\) by assigning to the vertices of each \(T_{b}\), for \(b\) big, a valid dicolouring using at most \(g(w)\) colours (remember that we showed that \(\overrightarrow{\chi}(T_{b})\leq g(w)\)). We claim that this defines a valid dicolouring of \(T\). Assume by contradiction that there exists a monochromatic directed cycle and let \(C\) be a minimal such cycle. Since for a fixed \(x\in V(X)\), any two vertices in \(V(T_{x})\) share the same adjacency relation with each vertex of \(V(T)\setminus V(T_{x})\), the minimality of \(C\) implies that \(C\) contains at most one vertex of \(V(T_{x})\) for any \(x\in V(X)\), for otherwise \(C\) would be entirely included in some \(T_{x}\), which is not possible since the colouring is a valid dicolouring on each digraph \(T_{x}\). Now define \(C^{\prime}\) to be the directed cycle obtained from \(C\) by replacing each vertex belonging to \(V(T_{b})\) for some \(b\in B\) by the vertex \(m_{b}\).
Since \(C^{\prime}\) is a directed cycle, it must contain some backward arc, i.e. some arc \(uv\) with \(v\prec u\). Since \(u\) and \(v\) do not belong to the same \(T_{x}\), and because of the property of the vertices \(m_{b}\) proven in the previous paragraph, both vertices \(u\) and \(v\) belong to \(T_{S}\). But then they are vertices of \(C\) and the arc \(uv\) is thus monochromatic and backward, which contradicts the property of the colouring of \(T_{S}\) established in the first step of the proof.

Note that the hereditary closure of \(\{\widetilde{S}_{n},n\in\mathbb{N}\}\) mentioned earlier is easily seen to be exactly \(\{TT_{1},TT_{2},\overrightarrow{C_{3}}\}^{subst}\). Therefore the first item implies \(\overrightarrow{\chi}(T)\leq 9^{\overrightarrow{\omega}(T)}\) for any \(T\) which is a subtournament of some \(\widetilde{S}_{n}\). We do not know if this class is \(\overrightarrow{\chi}\)-bounded by a polynomial function. One can prove for example that the order of magnitude of \(\overrightarrow{\chi}(\widetilde{S}_{n})\) is \((3/2)^{n}\), but it could be that the lower bound \(\overrightarrow{\omega}(\widetilde{S}_{n})\geq n\) given by Lemma 3.8 is far from tight. Chudnovsky, Penev, Scott and Trotignon [11] proved that if a class of graphs is \(\chi\)-bounded by a polynomial function, so is its closure under substitution. Could it be that the same holds for tournaments?

**Question 3.10**: _Is it true that if a class of tournaments \(\mathcal{T}\) is polynomially \(\overrightarrow{\chi}\)-bounded, then so is \(\mathcal{T}^{subst}\)?_

Before closing this section on substitutions, we mention here another sequence of tournaments belonging to \(\{TT_{1},TT_{2},\overrightarrow{C_{3}}\}^{subst}\) that will be of use in the proof of Theorem 4.2. Let \(S_{1}=TT_{1}\) and inductively, for \(n\geq 2\), let \(S_{n}=\Delta(1,S_{n-1},S_{n-1})\). It is easy to observe that \(\overrightarrow{\chi}(S_{n})=n\). Since \(S_{n}\) is obviously a subtournament of \(\widetilde{S}_{n}\), we therefore have \(\overrightarrow{\omega}(S_{n})\geq\log_{9}(n)\). Again it could be that this logarithm is not necessary. It is clear that \(\overrightarrow{\omega}(S_{1})=1\) and \(\overrightarrow{\omega}(S_{2})=2\), and it is not hard (but a bit laborious) to prove that \(\overrightarrow{\omega}(S_{3})=2\) and \(\overrightarrow{\omega}(S_{4})=3\); the smallest cases can also be checked by brute force, as in the sketch below. The clique number of \(S_{k}\) for \(k\geq 5\) is not known, but we doubt that one can compute an exact formula for it.
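These small values can be verified directly by exhaustive search. The following Python sketch (our own illustration, not part of the paper's arguments) encodes a tournament as a pair (number of vertices, arc set), builds \(S_{2}\) and \(S_{3}\) via the \(\Delta\)-substitution, and computes \(\overrightarrow{\chi}\) by trying all colourings and \(\overrightarrow{\omega}\) by minimizing the clique number of the backedge graph over all orderings; this is only feasible for very small tournaments such as \(S_{3}\) (7 vertices).

```python
from itertools import combinations, permutations, product

def delta3(T1, T2, T3):
    """Delta(T1, T2, T3): substitute T1, T2, T3 into the directed triangle;
    all arcs go block 1 -> block 2, block 2 -> block 3, block 3 -> block 1.
    A tournament is a pair (n, arcs), arcs being ordered pairs on 0..n-1."""
    (n1, a1), (n2, a2), (n3, a3) = T1, T2, T3
    o2, o3 = n1, n1 + n2
    arcs = set(a1)
    arcs |= {(u + o2, v + o2) for (u, v) in a2}
    arcs |= {(u + o3, v + o3) for (u, v) in a3}
    arcs |= {(u, v) for u in range(o2) for v in range(o2, o3)}
    arcs |= {(u, v) for u in range(o2, o3) for v in range(o3, o3 + n3)}
    arcs |= {(u, v) for u in range(o3, o3 + n3) for v in range(o2)}
    return (n1 + n2 + n3, arcs)

def dichromatic_number(T):
    """Smallest k such that V(T) splits into k acyclic (hence transitive) parts."""
    n, arcs = T
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            def transitive(S):
                # a tournament is acyclic iff its score sequence is 0,1,...,|S|-1
                scores = sorted(sum((u, v) in arcs for v in S) for u in S)
                return scores == list(range(len(S)))
            if all(transitive([v for v in range(n) if col[v] == c]) for c in range(k)):
                return k

def clique_number(T):
    """omega(T): minimum over all orderings of the clique number of the backedge graph."""
    n, arcs = T
    best = n
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        back = {frozenset(a) for a in arcs if pos[a[1]] < pos[a[0]]}  # backward arcs
        w = 1
        for k in range(2, n + 1):
            if any(all(frozenset(e) in back for e in combinations(S, 2))
                   for S in combinations(range(n), k)):
                w = k
            else:
                break
        best = min(best, w)
    return best

TT1 = (1, set())
S2 = delta3(TT1, TT1, TT1)     # the directed triangle
S3 = delta3(TT1, S2, S2)       # 7 vertices
print(dichromatic_number(S2), clique_number(S2))   # expected: 2 2
print(dichromatic_number(S3), clique_number(S3))   # expected: 3 2
```

The outputs \(\overrightarrow{\chi}(S_{3})=3\) and \(\overrightarrow{\omega}(S_{3})=2\) agree with the values quoted above; \(S_{4}\) (15 vertices) is already out of reach for the ordering enumeration.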
### Are tournaments with bounded twin-width \(\overrightarrow{\chi}\)-bounded?

_Twin-width_ is a parameter introduced in [7] measuring the complexity of a binary structure. We refer to [15] for the definitions of the twin-width of a graph, an ordered graph and a tournament. Given a graph or digraph \(G\) and a total order \(\prec\) on \(V(G)\), we denote by \(tww(G)\) the twin-width of \(G\) and by \(tww(G,\prec)\) the twin-width of the ordered graph (digraph) \((G,\prec)\). Classes of graphs of bounded twin-width have been shown to be \(\chi\)-bounded [6, 23], and even polynomially \(\chi\)-bounded [8].

**Theorem 3.11** ([8]): _For every \(k\geq 1\), the class of undirected graphs with twin-width at most \(k\) is polynomially \(\chi\)-bounded._

Observe that for every integer \(k\geq 2\), \(S_{k}\) has twin-width \(1\) and dichromatic number \(k\), so tournaments with bounded twin-width can have arbitrarily large dichromatic number.

**Conjecture 3.12**: _Let \(k\geq 1\). The class of tournaments with twin-width at most \(k\) is \(\overrightarrow{\chi}\)-bounded._

Given a tournament \(T\) and an ordering \(\prec\) of \(V(T)\), we denote by \((T,\prec)\) the ordered tournament with ordering \(\prec\), and by \((T^{\prec},\prec)\) the ordered backedge graph with ordering \(\prec\). The following conjecture implies Conjecture 3.12. **Conjecture 3.13**: _There exists a function \(f\), such that for every tournament \(T\), there exists an ordering \(\prec^{*}\) of \(V(T)\) such that:_ \[\omega(T^{\prec^{*}})\leq f(\overrightarrow{\omega}(T))\quad\text{ and }\quad tww(T,\prec^{*})\leq f(tww(T))\] **Theorem 3.14**: _Conjecture 3.13 implies Conjecture 3.12._ **Proof:** Let \(\mathcal{T}_{k}\) be the class of tournaments with twin-width at most \(k\). For each \(T\in\mathcal{T}_{k}\), we associate an ordering \(\prec^{*}_{T}\) given by Conjecture 3.13. For every \(T\in\mathcal{T}_{k}\), we have (the first two inequalities hold for every tournament and any ordering; the last one comes from the property of \(\prec^{*}_{T}\)): \[tww(T^{\prec^{*}_{T}})\leq tww(T^{\prec^{*}_{T}},\prec^{*}_{T})\leq tww(T,\prec^{*}_{T})\leq f(k) \tag{1}\] Hence, the class of undirected graphs \(\mathcal{C}_{k}=\{T^{\prec^{*}_{T}}\mid T\in\mathcal{T}_{k}\}\) has bounded twin-width, and is thus \(\chi\)-bounded by a polynomial function \(g\). Let \(T\in\mathcal{T}_{k}\). We have: \(T^{\prec^{*}_{T}}\in\mathcal{C}_{k}\), \(tww(T^{\prec^{*}_{T}})\leq f(k)\) by (1), and \(\omega(T^{\prec^{*}_{T}})\leq f(\overrightarrow{\omega}(T))\) by the choice of \(\prec^{*}_{T}\). Hence, \[\chi(T^{\prec^{*}_{T}})\leq g(f(\overrightarrow{\omega}(T)))\] and thus \(\overrightarrow{\chi}(T)\leq g(f(\overrightarrow{\omega}(T)))\). \(\blacksquare\) Geniet and Thomasse [15] introduced a particular ordering of tournaments called _BST-ordering_. Informally, a \(BST\)-ordering of a tournament \(T\) is based on a rooted binary search tree on vertex set \(V(T)\) with the property that for every vertex \(x\), the left child of \(x\) and its descendants are in \(N^{-}(x)\), and the right child of \(x\) and its descendants are in \(N^{+}(x)\); the order is the left-to-right order defined by this tree. See Section 4 of [15] for a formal definition. They prove that \(BST\)-orderings give an approximation of the twin-width in the following sense: **Theorem 3.15** ([15]): _There exists a function \(f\) such that, for every tournament \(T\) and any \(BST\)-ordering \(\prec\) of \(T\), we have:_ \[tww(T,\prec)\leq f(tww(T))\] Hence, \(BST\)-orderings are natural candidates for the ordering of Conjecture 3.13. **Conjecture 3.16**: _There exists a function \(f\) such that, for every tournament \(T\), there exists a \(BST\)-ordering \(\prec\) of \(T\) such that:_ \[\omega(T^{\prec})\leq f(\overrightarrow{\omega}(T))\]

## 4 Classes of tournaments defined by forbidding a single tournament

In this section, we investigate the classes defined by forbidding a single tournament. Our main question is to understand which tournaments \(H\) are such that \(\operatorname{Forb}(H)\) is \(\overrightarrow{\chi}\)-bounded. Such a tournament is said to be _\(\overrightarrow{\chi}\)-binding_. ### Gentlemen are the same as heroes The most trivial case of a \(\overrightarrow{\chi}\)-bounding function is a constant function. A tournament \(H\) is a _hero_ if there exists an integer \(c_{H}\) such that every \(H\)-free tournament \(T\) has dichromatic number at most \(c_{H}\).
In a seminal paper, Berger, Choromanski, Chudnovsky, Fox, Loebl, Scott, Seymour and Thomasse [4] characterized heroes: **Theorem 4.1** (_Berger, Choromanski, Chudnovsky, Fox, Loebl, Scott, Seymour and Thomasse [4]_): A tournament \(H\) is a hero if and only if: * \(H=TT_{1}\), or * \(H=H_{1}\Rightarrow H_{2}\), where \(H_{1}\) and \(H_{2}\) are heroes, or * \(H=\Delta(1,k,H_{1})\) or \(H=\Delta(1,H_{1},k)\), where \(k\geq 1\) and \(H_{1}\) is a hero. Similarly, we say that a tournament \(H\) is a _gentleman_ if there exists a number \(c_{H}\) such that every \(H\)-free tournament has clique number at most \(c_{H}\). Since \(\overrightarrow{\omega}(T)\leq\overrightarrow{\chi}(T)\) for any tournament \(T\), heroes are gentlemen. We prove that the converse is also true. In [22], Nguyen, Scott and Seymour introduce a class of tournaments called _crossing tournaments_ (it is the class \(\mathcal{T}[\mathcal{C}]\) where \(\mathcal{C}\) is the class of circle graphs). They prove that crossing tournaments are \(S_{3}\)-free and prove a result (see 12.4 in [22]) that can be translated into our language by saying that they have arbitrarily large clique number. It is a key ingredient in the proof of the following theorem. **Theorem 4.2**: Gentlemen and heroes are the same. **Proof :** For every tournament \(T\), \(\overrightarrow{\omega}(T)\leq\overrightarrow{\chi}(T)\), thus it is clear that all heroes are gentlemen. Let us now prove that all gentlemen are heroes. Suppose there exists a gentleman \(H\) that is not a hero, and let it be chosen so as to minimize \(|V(H)|\). Since all subtournaments of a gentleman are gentlemen (because tournaments not containing a subtournament of \(H\) do not contain \(H\) and thus have bounded clique number), every strict subtournament of \(H\) is a hero by minimality of \(|V(H)|\). Consider the sequence of tournaments \(S_{n}\) defined at the end of Subsection 3.2. Since this sequence has unbounded clique number and \(H\) is a gentleman, there exists an integer \(k\) such that \(H\) is a subtournament of \(S_{k}\). This implies that either \(H=A\Rightarrow B\) or \(H=\Delta(1,A,B)\) for some tournaments \(A\) and \(B\). Since \(A\) and \(B\) are two strict subtournaments of \(H\), they are heroes by minimality of \(H\). Thus \(H\neq A\Rightarrow B\), for otherwise \(H\) would be a hero by Theorem 4.1. Thus \(H=\Delta(1,A,B)\). But we know that \(S_{3}\) is not a gentleman, since crossing tournaments are \(S_{3}\)-free and can have arbitrarily large clique number. Thus \(H\) does not contain \(S_{3}=\Delta(1,\vec{C_{3}},\vec{C_{3}})\). This implies that one of \(A\) or \(B\) does not contain \(\vec{C_{3}}\), and thus either \(A\) or \(B\) is a transitive tournament, which implies that \(H\) is a hero, a contradiction. ### Gyarfas-Sumner Conjecture for tournaments We propose the following analogue of the celebrated Gyarfas-Sumner Conjecture [16, 25], which states that a graph \(F\) is \(\chi\)-binding if and only if \(F\) is a forest (where, as in the directed case, a graph \(F\) is \(\chi\)-binding if the class of graphs not containing \(F\) as an induced subgraph is \(\chi\)-bounded). **Conjecture 4.3**: _A tournament \(H\) is \(\overrightarrow{\chi}\)-binding if and only if \(H\) has a backedge graph which is a forest._ Despite the link between \(\overrightarrow{\chi}\)-bounded classes of tournaments and \(\chi\)-bounded classes of graphs given by Theorem 3.6, we were not able to prove that the Gyarfas-Sumner Conjecture implies, or is implied by, Conjecture 4.3.
We now believe that the two conjectures are independent, but we would be very happy if a bridge between them was shown. To support the conjecture, we prove that: * the "only if" part is true (Theorem 4.4), * it is enough to prove it for trees instead of forests (Proposition 4.5), * if it holds for a tournament \(T\), then it holds for the tournament obtained by reversing every arc of \(T\) (Proposition 4.7), * if it holds for two tournaments \(H_{1}\) and \(H_{2}\), then it holds for the tournament \(H_{1}\Rightarrow H_{2}\) (Theorem 4.9). A _star_ is a tree that has at most one non-leaf vertex. We also prove that heroes admit a backedge graph that is a disjoint union of stars; see Proposition 4.6. At the end of the section we also discuss the case of tournaments \(T\) that admit a backedge graph that is a matching. **Theorem 4.4**: Let \(H\) be a tournament. If \(H\) is \(\overrightarrow{\chi}\)-binding, then \(H\) admits an ordering whose backedge graph is a forest. **Proof :** Let \(H\) be a tournament that does not admit an ordering whose backedge graph is a forest. Let \(\mathcal{C}\) be the class of undirected graphs with girth at least \(|V(H)|+1\), and let \(\mathcal{T}[\mathcal{C}]\) be the class of tournaments that admit a graph of \(\mathcal{C}\) as a backedge graph. Let \(T\in\mathcal{T}[\mathcal{C}]\) and let \(X\subseteq V(T)\) with \(|X|=|V(H)|\). \(T\) admits an ordering such that the backedge graph has girth at least \(|V(H)|+1\), so \(T[X]\) admits an ordering for which the backedge graph is a forest. So \(T[X]\neq H\). This proves that tournaments in \(\mathcal{T}[\mathcal{C}]\) are \(H\)-free. Since for every \(T\in\mathcal{T}[\mathcal{C}]\), a backedge graph of \(T\) has girth at least \(|V(H)|+1\geq 4\), we have \(\overrightarrow{\omega}(T)\leq 2\). By a celebrated result of Erdos [14], for every integer \(k\), there exists \(G\in\mathcal{C}\) such that \(\chi(G)\geq k\). Let \(T\in\mathcal{T}[\mathcal{C}]\) be such that \(T\) admits an ordering \(\prec\) with \(T^{\prec}=G\). By Theorem 3.3, \(\overrightarrow{\chi}(T)\geq\chi(T^{\prec})/\omega(T^{\prec})\geq k/2\). This proves that there are \(H\)-free tournaments with clique number at most \(2\) and arbitrarily large dichromatic number, i.e. \(H\) is not \(\overrightarrow{\chi}\)-binding. Let \(T\) be a tournament admitting a forest as a backedge graph. We claim that \(T\) also admits a tree as a backedge graph. Indeed, let \(\prec\) be an ordering of \(V(T)\) such that \(T^{\prec}\) is a forest and, among such orderings, assume it minimizes the number of connected components of \(T^{\prec}\). We claim that \(T^{\prec}\) is a tree. Assume for contradiction that it is not. Let \(v\) be the smallest vertex not in the same connected component of \(T^{\prec}\) as the first vertex, and let \(u\) be the vertex preceding \(v\) in \(\prec\). Then the backedge graph resulting from switching \(u\) and \(v\) in the ordering is obtained from \(T^{\prec}\) by adding the edge \(uv\), and thus has one less connected component than \(T^{\prec}\), a contradiction. We thus have the following: **Proposition 4.5**: It is enough to prove Conjecture 4.3 for tournaments that admit a tree as a backedge graph. As mentioned before, heroes are by definition \(\overrightarrow{\chi}\)-binding tournaments, and by Theorem 4.4 they admit a backedge graph that is a forest. The following proposition shows that for heroes this backedge graph can be chosen to be a star forest.
**Proposition 4.6**: If \(H\) is a hero, then \(H\) admits a backedge graph that is a disjoint union of stars. **Proof :** We prove this using the inductive construction of heroes given by Theorem 4.1. It is true for \(TT_{1}\), so we need to maintain this property when \(H=H_{1}\Rightarrow H_{2}\) and when \(H=\Delta(1,k,H_{1})\). If \(H=H_{1}\Rightarrow H_{2}\), consider orderings \(\prec_{i}\) of \(H_{i}\) given by the induction, and simply construct the order on \(V(H)\) in which all vertices of \(H_{1}\) are placed before those of \(H_{2}\) (respecting \(\prec_{1}\) and \(\prec_{2}\)). This adds no new backward arc, so the backedge graph is the union of those of \(H_{1}\) and \(H_{2}\), and we have our result. If \(H=\Delta(1,k,H_{1})\), then consider the ordering \(\prec_{1}\) of \(H_{1}\) given by the induction, and construct the order on \(V(H)\) obtained by placing the vertices of \(TT_{k}\) first so that all arcs go forward, then the vertices of \(H_{1}\) in the order \(\prec_{1}\), and finally the vertex \(x\) corresponding to the "1" in \(\Delta(1,k,H_{1})\). The only new backward arcs are the ones from \(x\) to the vertices of \(TT_{k}\), which form a star, so we again get the desired result. Given a tournament \(T\), the reverse \(T_{r}\) of \(T\) is the tournament obtained from \(T\) by reversing the direction of every arc. **Proposition 4.7**: _If a tournament \(H\) is \(\overrightarrow{\chi}\)-binding, then so is its reverse \(H_{r}\)._

An ordered graph \((H,\prec_{H})\) is an _induced ordered subgraph_ of an ordered graph \((G,\prec_{G})\) if \(H\) is isomorphic to an induced subgraph of \(G\) via an isomorphism that preserves the orderings. Given that definition, and for a given class \(\mathcal{O}\) of ordered graphs, one can define \(\operatorname{Forb}(\mathcal{O})\) as the set of ordered graphs that do not contain any member of \(\mathcal{O}\) as an induced ordered subgraph. We say that a class of ordered graphs \(\mathcal{O}\) is \(\chi\)-bounded if the set of graphs \(G\) such that there exists \(\prec\) with \((G,\prec)\in\mathcal{O}\) is \(\chi\)-bounded (we simply ignore the orderings here). Let \(\mathcal{T}\) be a class of tournaments. Recall that \(\mathcal{T}^{\prec}\) is the set of graphs that are backedge graphs of a tournament in \(\mathcal{T}\). We now define the ordered version of it as follows: \[\mathcal{T}_{o}^{\prec}=\{(T^{\prec},\prec):T\in\mathcal{T},\prec\in\mathfrak{S}(T)\}\] Note that, given a tournament \(T\), \(\{T\}^{\prec}\) is the set of backedge graphs of \(T\) and \(\{T\}_{o}^{\prec}\) the set of ordered backedge graphs of \(T\). The following is an ordered analogue of Theorem 3.6. **Property 4.10**: Let \(T\) be a tournament. The class of tournaments \(Forb(T)\) is \(\overrightarrow{\chi}\)-bounded if and only if the class of ordered undirected graphs \(Forb(\{T\}_{o}^{\prec})\) is \(\chi\)-bounded. **Proof:** Assume first that \(Forb(T)\) is \(\overrightarrow{\chi}\)-bounded by a function \(f\). Let \((G,\prec_{G})\in Forb(\{T\}_{o}^{\prec})\). Let \(T^{\prime}\) be the tournament on the same vertex set as \(G\) such that \((T^{\prime\prec_{G}},\prec_{G})=(G,\prec_{G})\) (i.e. \(T^{\prime}\) is obtained from \((G,\prec_{G})\) by orienting the edges of \(G\) from right to left, and all non-edges from left to right). Since \((G,\prec_{G})\in Forb(\{T\}_{o}^{\prec})\), \(T^{\prime}\in Forb(T)\) and thus \(\overrightarrow{\chi}(T^{\prime})\leq f(\overrightarrow{\omega}(T^{\prime}))\leq f(\omega(G))\) (because \(\overrightarrow{\omega}(T^{\prime})\leq\omega(G)\)). By Theorem 3.3, we have \[\chi(G)=\chi(T^{\prime\prec_{G}})\leq\overrightarrow{\chi}(T^{\prime})\,\omega(T^{\prime\prec_{G}})\leq f(\omega(G))\,\omega(G)\] which proves that \(Forb(\{T\}_{o}^{\prec})\) is \(\chi\)-bounded. Assume now that \(Forb(\{T\}_{o}^{\prec})\) is \(\chi\)-bounded by a function \(f\).
Let \(T^{\prime}\in Forb(T)\) and let \(\prec\) be an \(\overrightarrow{\omega}\)-ordering of \(T^{\prime}\). Then \((T^{\prime\prec},\prec)\in Forb(\{T\}_{o}^{\prec})\). Hence: \(\overrightarrow{\chi}(T^{\prime})\leq\chi(T^{\prime\prec})\leq f(\omega(T^{\prime\prec}))=f(\overrightarrow{\omega}(T^{\prime}))\). \(\blacksquare\) Brianski, Davies and Walczak [9] studied for which ordered graphs \((G,\prec_{G})\) the class \(Forb((G,\prec_{G}))\) is \(\chi\)-bounded, and claimed in a personal communication to have proven that excluding any ordered matching yields a \(\chi\)-bounded class. **Conjecture 4.11**: Let \((M,\prec)\) be an ordered graph with maximum degree \(1\). Then the class of \((M,\prec)\)-free ordered graphs is \(\chi\)-bounded. By Property 4.10, we have: **Lemma 4.12**: _If Conjecture 4.11 holds, then any tournament that admits a backedge graph of maximum degree \(1\) is \(\overrightarrow{\chi}\)-binding._ For the usual undirected version of the Gyarfas-Sumner conjecture, one of the first non-trivial cases proven (by Gyarfas, see [17]) concerns the class of graphs that do not contain a path of fixed length as an induced subgraph. An analogue for tournaments could be the tournament \(TP_{n}\) on \(n\) vertices obtained from the transitive tournament \(TT_{n}\) by reversing the direction of each arc of the unique Hamiltonian path \(v_{1}v_{2}\ldots v_{n}\). Now consider \(\prec\) to be the ordering \(v_{2}\prec v_{1}\prec v_{4}\prec v_{3}\prec\ldots\prec v_{2p}\prec v_{2p-1}\prec\ldots\prec v_{n}\prec v_{n-1}\) (assuming for simplicity that \(n\) is even, otherwise we end with \(v_{n}\)). It is easy to observe that the backedge graph of \(TP_{n}\) with respect to \(\prec\) has maximum degree \(1\) (this can also be checked mechanically, as in the sketch below). Hence, by Lemma 4.12, we have: **Lemma 4.13**: _If Conjecture 4.11 holds, then for every \(k\geq 1\), \(TP_{k}\) is \(\overrightarrow{\chi}\)-binding._
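For concreteness, here is a small Python check (our own illustration, with \(v_{i}\) encoded as \(i-1\)) that the ordering above indeed yields a backedge graph of maximum degree \(1\):

```python
def TP(n):
    """TP_n: transitive tournament TT_n on 0..n-1 (arc i -> j for i < j)
    with the Hamiltonian path 0 1 ... n-1 reversed."""
    arcs = {(i, j) for i in range(n) for j in range(i + 1, n)}
    for i in range(n - 1):
        arcs.remove((i, i + 1))
        arcs.add((i + 1, i))
    return arcs

def max_backedge_degree(n, arcs, order):
    """Maximum degree of the backedge graph of the tournament w.r.t. order."""
    pos = {v: i for i, v in enumerate(order)}
    deg = [0] * n
    for (u, v) in arcs:
        if pos[v] < pos[u]:          # backward arc = edge of the backedge graph
            deg[u] += 1
            deg[v] += 1
    return max(deg)

n = 10                                # any even n works the same way
order = [v for p in range(n // 2) for v in (2 * p + 1, 2 * p)]   # v2 v1 v4 v3 ...
print(max_backedge_degree(n, TP(n), order))                      # prints 1
```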
### Relation with the Erdos-Hajnal property and the \(BIG\Rightarrow BIG\) conjecture

A tournament \(H\) has the _Erdos-Hajnal property_ if there exists a constant \(c>0\) such that every \(H\)-free tournament \(T\) contains a transitive tournament on \(|T|^{c}\) vertices. It was proven in [2] that the famous Erdos-Hajnal conjecture on undirected graphs is equivalent to the conjecture saying that every tournament has the Erdos-Hajnal property. **Theorem 4.14**: If \(H\) is a polynomially \(\overrightarrow{\chi}\)-binding tournament, then \(H\) has the Erdos-Hajnal property. **Proof:** Let \(H\) be a tournament and \(c\) an integer such that every \(H\)-free tournament \(T\) satisfies \(\overrightarrow{\chi}(T)\leq\overrightarrow{\omega}(T)^{c}\). Let us prove that \(H\) has the Erdos-Hajnal property (with constant \(\frac{1}{1+c}\)). Let \(T\) be an \(H\)-free tournament on \(n\) vertices. If \(\overrightarrow{\omega}(T)\geq n^{\frac{1}{1+c}}\), then \(T\) contains a transitive tournament of size \(n^{\frac{1}{1+c}}\) and we are done. So assume that \(\overrightarrow{\omega}(T)\leq n^{\frac{1}{1+c}}\). Then \(\overrightarrow{\chi}(T)\leq n^{\frac{c}{1+c}}\), and since the largest colour class of a dicolouring induces a transitive tournament, \(T\) contains a transitive tournament on \(\frac{n}{n^{\frac{c}{1+c}}}=n^{\frac{1}{1+c}}\) vertices. A tournament \(H\) has the _strong Erdos-Hajnal property_ if there exists a constant \(c>0\) such that every \(H\)-free tournament \(T\) contains two disjoint sets of vertices \(A\) and \(B\) such that \(A\Rightarrow B\) and \(|A|,|B|\geq c|T|\). It can be shown that if \(H\) has the strong Erdos-Hajnal property then it has the Erdos-Hajnal property, and that not every tournament has the strong Erdos-Hajnal property (the Paley tournament on \(7\) vertices being an example). Heroes have the strong Erdos-Hajnal property, since bounded dichromatic number implies a transitive tournament of linear size, and thus a directed cut of linear size. In [12], the authors prove that every tournament that has the strong Erdos-Hajnal property admits a backedge graph that is a forest, and conjecture that the converse is true: **Conjecture 4.15** ([12]): A tournament \(H\) has the strong Erdos-Hajnal property if and only if it has a backedge graph that is a forest. Note that, in view of Conjecture 4.3, it could be that being \(\overrightarrow{\chi}\)-binding is the same as having the strong Erdos-Hajnal property. We are not able to prove either implication yet. We say that a class of tournaments \(\mathcal{T}\) has the _\(BIG\Rightarrow BIG\) property_ if there exists a function \(f\) such that, for every \(T\in\mathcal{T}\), if \(\overrightarrow{\chi}(T)\geq f(t)\), then \(T\) contains two disjoint subtournaments \(A\) and \(B\) such that \(\overrightarrow{\chi}(A),\overrightarrow{\chi}(B)\geq t\) and \(A\Rightarrow B\). In [22], the following beautiful conjecture is proposed: **Conjecture 4.16** (\(BIG\Rightarrow BIG\) Conjecture [22]): The class of all tournaments has the \(BIG\Rightarrow BIG\) property. Nguyen, Scott and Seymour proved in [21] that the \(BIG\Rightarrow BIG\) Conjecture implies the Erdos-El-Zahar conjecture, which states that there exists a function \(f\) such that, for every integer \(c\), every graph \(G\) with \(\chi(G)\geq f(\omega(G),c)\) contains two disjoint subgraphs \(A\) and \(B\) such that \(\chi(A),\chi(B)\geq c\) and there is no edge between \(A\) and \(B\). Klingelhofer and Newman [20] recently showed the other direction, that is, that the Erdos-El-Zahar conjecture implies the \(BIG\Rightarrow BIG\) Conjecture. To prove it, they first prove the following beautiful theorem. Given an oriented graph \(G\), we denote by \(\alpha(G)\) the size of a maximum independent set of \(G\). **Theorem 4.17** ([20]): There exists a function \(\lambda\) such that, for every integer \(t\), if \(G\) is an oriented graph such that for every arc \(a\in A(G)\), \(\overrightarrow{\chi}(G[N(a)])\leq t\), then \(\overrightarrow{\chi}(G)\leq\lambda(t,\alpha(G))\). Applying the exact same method as Klingelhofer and Newman used to prove that the Erdos-El-Zahar conjecture implies the \(BIG\Rightarrow BIG\) Conjecture, we can prove the following. **Theorem 4.18**: If \(H\) is a \(\overrightarrow{\chi}\)-binding tournament, then the class of \(H\)-free tournaments has the \(BIG\Rightarrow BIG\) property. **Proof:** Let \(\ell\) be the function defined inductively as follows: \(\ell(1)=1\) and, for every \(t\geq 1\), \(\ell(t+1)=(t+1)+\binom{t+1}{2}\ell(t)\). Given a tournament \(T\), we say that a subtournament \(X\) of \(T\) is a _\(t\)-cluster_ if \(\overrightarrow{\chi}(X)\geq t\) and \(|X|\leq\ell(t)\). Let \(H\) be a \(\overrightarrow{\chi}\)-binding tournament and \(c\) be an integer. Let \(T\) be an \(H\)-free tournament such that \(T\) does not contain two disjoint subtournaments \(A\) and \(B\) such that \(A\Rightarrow B\) and \(\overrightarrow{\chi}(A),\overrightarrow{\chi}(B)\geq c\). We want to prove that the dichromatic number of \(T\) is bounded by a function of \(c\).
We first prove a weaker statement: we prove by induction on \(t\) that if \(T\) contains no \(t\)-cluster for some \(t\), and no two disjoint subtournaments \(A\) and \(B\) such that \(A\Rightarrow B\) and \(\overrightarrow{\chi}(A),\overrightarrow{\chi}(B)\geq c\), then \(\overrightarrow{\chi}(T)\) is bounded (by a function of \(c\) and \(t\)). Since a \(1\)-cluster is a vertex, the result trivially holds for \(t=1\). Now assume it holds for some \(t<2c\), and let us prove it for \(t+1\). So assume \(T\) has no \((t+1)\)-cluster, and say that an arc \(a\) is _heavy_ if \(N(a)\) contains a \(t\)-cluster, and _light_ otherwise (we recall that if \(xy\) is an arc, \(N(xy)\) denotes the set of vertices \(z\) such that \(yz\in A(T)\) and \(zx\in A(T)\)). Let \(T_{h}\) be the oriented graph induced by the heavy arcs, and \(T_{\ell}\) the oriented graph induced by the light arcs. We first claim that the underlying graph of \(T_{h}\) has clique number at most \(t\). Assume by contradiction that there exists a set \(K\) of size \(t+1\) inducing a tournament in \(T_{h}\). For every arc \(a\) with both endvertices in \(K\), \(a\) is heavy, so there exists \(C_{a}\), a \(t\)-cluster included in \(N(a)\). Let \(X\) be the subtournament of \(T\) induced by the union of \(K\) and all such sets \(C_{a}\). The number of its vertices is at most \((t+1)+\binom{t+1}{2}\ell(t)=\ell(t+1)\). If \(X\) admits a dicolouring with at most \(t\) colours, then there must be two vertices \(x,y\) in \(K\) that get the same colour (because \(K\) has size \(t+1\)), but then this colour cannot appear in \(C_{xy}\) (for it would create a monochromatic \(\vec{C}_{3}\)), which contradicts the fact that \(C_{xy}\) has dichromatic number at least \(t\). Hence \(\overrightarrow{\chi}(X)\geq t+1\), and so \(X\) is a \((t+1)\)-cluster, which contradicts our hypothesis. We have thus proven our claim, which can be restated as \(\alpha(T_{\ell})\leq t\). By induction, since for every light arc \(a\), \(N(a)\) contains no \(t\)-cluster, \(\overrightarrow{\chi}(T[N(a)])\) is bounded for every light arc \(a\). Now, by Theorem 4.17, \(\overrightarrow{\chi}(T_{\ell})\) is also bounded, say by \(k\). Let \((S_{1},\ldots,S_{k})\) be a dicolouring of \(T_{\ell}\), i.e. \(T_{\ell}[S_{i}]\) is acyclic for \(i=1,\ldots,k\). Then, for \(i=1,\ldots,k\), there is an ordering \(\prec_{i}\) of \(S_{i}\) such that all backward arcs of \((T[S_{i}],\prec_{i})\) are heavy. Hence \(\overrightarrow{\omega}(T[S_{i}])\leq t\), and since \(T[S_{i}]\) is \(H\)-free and \(H\) is \(\overrightarrow{\chi}\)-binding, \(\overrightarrow{\chi}(T[S_{i}])\) is bounded, which implies that \(\overrightarrow{\chi}(T)\) is also bounded. We can now conclude. Either \(T\) contains no \(2c\)-cluster and we win by what precedes, or \(T\) contains a \(2c\)-cluster \(X\). Partition \(V(T)\setminus V(X)\) with respect to adjacency to \(X\). This gives a partition of \(V(T)\setminus V(X)\) into at most \(2^{|X|}\leq 2^{\ell(2c)}\) parts. Assume by contradiction that one of these parts, call it \(A\), has dichromatic number at least \(c\). Call \(B^{+}\) (resp. \(B^{-}\)) the subset of \(X\) such that \(A\Rightarrow B^{+}\) (resp. \(B^{-}\Rightarrow A\)). Since \(\overrightarrow{\chi}(X)\geq 2c\), one of \(B^{+}\) or \(B^{-}\) has dichromatic number at least \(c\), and we get a contradiction with the assumption on \(T\). So every such part \(A\) has dichromatic number at most \(c\), and hence \(\overrightarrow{\chi}(T)\) is bounded by \(\ell(2c)+2^{\ell(2c)}c\). \(\blacksquare\)
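The cluster-size bound \(\ell\) grows extremely fast, and so does the resulting bound \(\ell(2c)+2^{\ell(2c)}c\) on the dichromatic number. The following short sketch (a simple numerical illustration of the recursion above, nothing more) makes this explicit:

```python
from math import comb

def ell(t):
    """ell(1) = 1 and ell(t+1) = (t+1) + C(t+1, 2) * ell(t), as in the proof
    of Theorem 4.18."""
    val = 1
    for s in range(1, t):
        val = (s + 1) + comb(s + 1, 2) * val
    return val

print([ell(t) for t in range(1, 7)])      # [1, 3, 12, 76, 765, 11481]
c = 2
print(ell(2 * c) + 2 ** ell(2 * c) * c)   # already astronomically large for c = 2
```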
## 5 Local to Global - Links with domination number Informally, given a digraph parameter \(\gamma\), a \(\gamma\)-cluster of a tournament \(T\) is a subtournament \(X\) of \(T\) of bounded size with large \(\gamma\). In this section, we investigate for which parameters \(\gamma_{1}\) and \(\gamma_{2}\) we have that every tournament \(T\) with sufficiently large \(\gamma_{1}\) has a \(\gamma_{2}\)-cluster. We say that _large \(\gamma_{1}\) implies a \(\gamma_{2}\)-cluster_ if there exist two functions \(f\) and \(\ell\) such that for every integer \(k\), if \(\gamma_{1}(T)\geq f(k)\), then \(T\) contains a subtournament \(X\) such that \(\gamma_{2}(X)\geq k\) and \(|X|\leq\ell(k)\). We review what is known on this topic and propose some new conjectures. Such a property was first studied by Thomasse, Le, Harutyunyan and Wu in [18]: **Theorem 5.1** (_Large dom implies a \(\overrightarrow{\chi}\)-cluster, [18]_): There exist two functions \(f\) and \(\ell\) such that, for every integer \(k\), every tournament \(T\) with \(\operatorname{dom}(T)\geq f(k)\) contains a subtournament \(X\) with \(|X|\leq\ell(k)\) and \(\overrightarrow{\chi}(X)\geq k\). In the same paper, Thomasse, Le, Harutyunyan and Wu conjectured the following: **Conjecture 5.2** (_Large dom implies a dom-cluster, [18]_): There exist two functions \(f\) and \(\ell\) such that, for every integer \(k\), every tournament \(T\) with \(\operatorname{dom}(T)\geq f(k)\) contains a subtournament \(X\) with \(|X|\leq\ell(k)\) and \(\operatorname{dom}(X)\geq k\). Since \(\operatorname{dom}(T)\leq\overrightarrow{\omega}(T)\leq\overrightarrow{\chi}(T)\), the following is stronger than Theorem 5.1 but weaker than Conjecture 5.2. **Conjecture 5.3** (_Large dom implies an \(\overrightarrow{\omega}\)-cluster_): There exist two functions \(f\) and \(\ell\) such that, for every integer \(k\), every tournament \(T\) with \(\operatorname{dom}(T)\geq f(k)\) contains a subtournament \(X\) with \(|X|\leq\ell(k)\) and \(\overrightarrow{\omega}(X)\geq k\). A tournament \(R\) is a _rebel_ if tournaments not containing \(R\) have bounded domination number. A tournament is a _poset tournament_ if it admits a backedge graph that is a comparability graph. Chudnovsky, Kim, Liu, Seymour and Thomasse proved in [10] that every rebel is a poset tournament, but the converse remains open. **Conjecture 5.4** (_Chudnovsky, Kim, Liu, Seymour and Thomasse_): Every poset tournament is a rebel. In particular, the \(S_{k}\) defined in Section 3.2 are poset tournaments, and have arbitrarily large clique number (see Section 3.2). Hence, if one can prove that, for every integer \(k\), \(S_{k}\) is a rebel, then Conjecture 5.3 holds. More generally: **Theorem 5.5**: Conjecture 5.4 implies Conjecture 5.3. Thomasse, Le, Harutyunyan and Wu applied Theorem 5.1 to prove Theorem 4.8, which can be seen as a local to global theorem about dichromatic number. The analogue of Theorem 4.8 for clique number is the following conjecture. **Conjecture 5.6**: There exists a function \(g\) such that, for every integer \(t\), if \(T\) is a tournament such that for every \(v\in V(T)\), \(\overrightarrow{\omega}(N^{+}(v))\leq t\), then \(\overrightarrow{\omega}(T)\leq g(t)\). The analogue of Theorem 5.1 for clique number is Conjecture 5.3, and indeed we have the following implication. **Theorem 5.7**: Conjecture 5.3 implies Conjecture 5.6. **Proof :** Let \(T\) be a tournament and \(t\in\mathbb{N}\) such that for every vertex \(v\in V(T)\), \(\overrightarrow{\omega}(N^{+}(v))\leq t\).
Let \(f\) and \(\ell\) be the functions given by Conjecture 5.3. We will prove that \(\overrightarrow{\omega}(T)\leq\max(tf(t+1),t\ell(t+1))\). If \(\operatorname{dom}(T)<f(t+1)\), then, since \(\overrightarrow{\omega}(N^{+}(v))\leq t\) for every \(v\in V(T)\), we have \(\overrightarrow{\omega}(T)<tf(t+1)\). So we may assume that \(\operatorname{dom}(T)\geq f(t+1)\), and thus \(T\) has a subtournament \(X\) such that \(|X|\leq\ell(t+1)\) and \(\overrightarrow{\omega}(X)\geq t+1\). Hence, since for every \(v\in V(T)\), \(\overrightarrow{\omega}(N^{+}(v))\leq t\), we have \(X\not\subseteq N^{+}(v)\) for every \(v\in V(T)\). So \(X\) is a dominating set of \(T\), and thus \(\overrightarrow{\omega}(T)\leq t|X|\leq t\ell(t+1)\). \(\blacksquare\) Since \(\operatorname{dom}(T)\leq\overrightarrow{\omega}(T)\) for every tournament \(T\), the following conjecture implies Conjecture 5.3 and would give a natural property of the clique number of tournaments. **Conjecture 5.8** (_Large \(\overrightarrow{\omega}\) implies an \(\overrightarrow{\omega}\)-cluster_): There exist two functions \(f\) and \(\ell\) such that, for every integer \(k\), every tournament \(T\) with \(\overrightarrow{\omega}(T)\geq f(k)\) contains a subtournament \(X\) with \(|X|\leq\ell(k)\) and \(\overrightarrow{\omega}(X)\geq k\). We believe (or maybe only hope) that the above conjecture is true, and actually we were not even able to disprove the following stronger form of it, where \(f\) is taken to be the identity: **Question 5.9**: _Is there a function \(\ell\) such that, for every integer \(k\) and every tournament \(T\) with \(\overrightarrow{\omega}(T)\geq k\), \(T\) has a subtournament \(A\) such that \(|A|\leq\ell(k)\) and \(\overrightarrow{\omega}(A)\geq k\)?_ Let us say that a tournament \(T\) is _\(k\)-\(\overrightarrow{\omega}\)-critical_ if \(\overrightarrow{\omega}(T)=k\) and for every \(v\in V(T)\), \(\overrightarrow{\omega}(T-v)=k-1\). Observe that the only \(1\)-\(\overrightarrow{\omega}\)-critical tournament is the one-vertex tournament, and the only \(2\)-\(\overrightarrow{\omega}\)-critical tournament is \(\vec{C}_{3}\). **Conjecture 5.10**: For every integer \(k\geq 3\), there are infinitely many \(k\)-\(\overrightarrow{\omega}\)-critical tournaments. Observe that if Conjecture 5.10 is true (resp. false), then it answers Question 5.9 in the negative (resp. in the positive). It is also open whether large \(\overrightarrow{\omega}\) implies a \(\overrightarrow{\chi}\)-cluster (resp. a \(\operatorname{dom}\)-cluster). On the negative side, Thomasse, Le, Harutyunyan and Wu proved the following: **Theorem 5.11** (_Large \(\overrightarrow{\chi}\) does not imply a \(\overrightarrow{\chi}\)-cluster_, [18]): For all integers \(K,\ell\), there exists a tournament \(T\) such that \(\overrightarrow{\chi}(T)\geq K\), and all subtournaments \(X\) of \(T\) on at most \(\ell\) vertices are \(2\)-dicolourable. ## 6 Conclusion and future direction We have not yet given much thought to the clique number of digraphs. On this matter, it is to be noted that Theorem 3.3 does not hold for digraphs, which implies that many of the results that we proved for tournaments cannot be proved as easily for digraphs. Actually, even if at first glance our definition of clique number makes as much sense applied to digraphs as to tournaments, we strongly believe that the notion will be very fruitful for tournaments, while for digraphs it is less clear for the moment. On the positive side, we think that Theorem 3.9 can be generalised to classes of digraphs.
**Conjecture 6.1**: If a class of digraphs \(\mathcal{C}\) is \(\overrightarrow{\chi}\)-bounded, then so is its closure under substitution. Note that, while Theorem 3.3 does not hold for digraphs, a similar weaker bound exists for digraphs of bounded independence number. We denote by \(\alpha(D)\) the size of a maximum independent set in \(D\) and by \(R(i,j)\) the smallest integer such that every graph on \(R(i,j)\) vertices contains either a clique of size \(i\) or an independent set of size \(j\). \(R(i,j)\) exists for all integers \(i,j\) by Ramsey's Theorem. **Theorem 6.2** ([22]): For any digraph \(D\) and ordering \(\prec\) of \(V(D)\), we have: \[\frac{\chi(D^{\prec})}{R(\omega(D^{\prec})+1,\alpha(D)+1)}\leq\overrightarrow{\chi}(D)\leq\chi(D^{\prec})\] **Proof :** Let \(D\) be a digraph and \(\prec\) an ordering of \(V(D)\). Set \(\omega=\omega(D^{\prec})\) and \(\alpha=\alpha(D)\). Let \(X\subseteq V(D)\) such that \(D[X]\) is acyclic. To prove that \(\chi(D^{\prec})\leq R(\omega+1,\alpha+1)\ \overrightarrow{\chi}(D)\), it suffices to prove that \(\chi(D^{\prec}[X])\leq R(\omega+1,\alpha+1)\). Let \(\varphi:X\to\mathbb{N}\) be such that \(\varphi(x)\) is the number of vertices of a longest \(\prec\)-decreasing path in \(D^{\prec}[X]\) finishing in \(x\). We claim that \(\varphi\) is an \(R(\omega+1,\alpha+1)\)-colouring of \(D^{\prec}[X]\). Let \(u,v\in X\) with \(u\prec v\) and \(uv\in E(D^{\prec})\). Then \(\varphi(u)\geq\varphi(v)+1\), so \(\varphi\) is a colouring of \(D^{\prec}[X]\). Suppose for contradiction that \(\varphi\) uses more than \(R(\omega+1,\alpha+1)\) colours. Then there is a \(\prec\)-decreasing path \(P\) on at least \(R(\omega+1,\alpha+1)+1\) vertices. Note that since \(D[V(P)]\) is acyclic, every arc of \(D[V(P)]\) corresponds to an edge of \(D^{\prec}[V(P)]\) (i.e. for every \(x,y\in V(P)\), if \(xy\in A(D)\), then \(y\prec x\) and thus \(xy\in E(D^{\prec})\)). Hence, \(\alpha(D^{\prec}[V(P)])=\alpha(D[V(P)])\leq\alpha\). Now, since \(|V(P)|>R(\omega+1,\alpha+1)\) and \(\alpha(D^{\prec}[V(P)])\leq\alpha\), we get that \(\omega(D^{\prec}[V(P)])\geq\omega+1\), a contradiction. Using the above inequality, most results of the paper can be adapted to classes of digraphs with bounded independence number. An obvious topic that we did not investigate is the complexity of computing the clique number. Nguyen, Scott and Seymour ask in Section 9 of [22] whether deciding if a tournament has bounded clique number is in co-NP. **Acknowledgement** This research was partially supported by the ANR project DAGDigDec (JCJC) ANR-21-CE48-0012 and by the group Casino/ENS Chair on Algorithmics and Machine Learning.
2308.06638
Angle-Resolved Pair Photoemission Theory for Correlated Electrons
In this paper we consider the possibility and conditions for pair photoemission whereby two incident photons emit pairs of electrons from a candidate material as a novel method to measure and visualize electronic correlations. As opposed to double photoemission - where a single photon precipitates the ejection of a pair of electrons via a subsequent electron energy loss scattering process - we show that pair photoemission need not be limited to interference between initial photoelectrons and valence electrons, and moreover, can occur without the energy penalty of two work functions. This enables detection of pairs of electrons at high energy resolution that may be correlated in the same quantum many-body states.
Thomas P. Devereaux, Martin Claassen, Xu-Xin Huang, Michael Zaletel, Joel E. Moore, Dirk Morr, Fahad Mahmood, Peter Abbamonte, Zhi-Xun Shen
2023-08-12T19:59:40Z
http://arxiv.org/abs/2308.06638v1
# Angle-Resolved Pair Photoemission Theory for Correlated Electrons ###### Abstract In this paper we consider the possibility and conditions for pair photoemission whereby two incident photons emit pairs of electrons from a candidate material as a novel method to measure and visualize electronic correlations. As opposed to "double photoemission" - where a single photon precipitates the ejection of a pair of electrons via a subsequent electron energy loss scattering process - we show that pair photoemission need not be limited to interference between initial photoelectrons and valence electrons, and moreover, can occur without the energy penalty of two work functions. This enables detection of pairs of electrons at high energy resolution that may be correlated in the same quantum many-body states. ## I Introduction Over the past decades, angle-resolved photo-emission spectroscopy (ARPES) has emerged as a paradigmatic experimental probe of electronic structure and correlations, band topology and surface states, and unconventional superconductivity and the enigmatic pseudogap phase, granting insight into electronic behavior in new quantum materials. By measuring the kinetic energy and angular dependence of photo-emitted electrons, ARPES supplies information on the energy and momentum dependence of valence electrons in a material, and is widely understood to reflect, to a good approximation, the behavior of the single-particle spectral function [1]. Higher order photoemission processes have been utilized to further obtain information beyond the single-particle density of states. In "double photoemission" for example, a highly energetic photon causes the emission of an electron, which may in turn cause a second electron to be photoemitted via the Coulomb interaction if enough energy is imparted for the second electron to escape to a detector [2]. For example, a photoemitted core electron may be accompanied by Auger electron emission, whereby the energy released by Auger decay of the core hole is utilized to cause another electron to be emitted [3]. Such "shake-off" or "secondaries" spectra contain both photoemitted core and Auger electrons. The fact that the energies of the two electrons are individually continuous yet sum to a value fixed by energy conservation shows that the electrons are correlated, and a comparison with single-particle photoemission can be utilized to determine the so-called "exchange-correlation" hole energy [4]. In analogy to photon- or electron-based coincidence spectroscopies, recently an interesting proposal suggested extending ARPES to use energy- and angle-resolved coincidence detection to account for two-photon two-electron photo-emission events and extract two-particle Bethe-Salpeter wave functions [6] of valence electrons of the material. Here, in contrast to double photoemission due to Coulomb drag, the coincidence signal derives from two-photon absorption at lower photon energies. Measurement of the angle dependence of two-photon coincidence events at the detector hence importantly permits resolving the momenta and energies of the ejected electron pairs, without the need to "disentangle" highly-complex "Coulomb drag" processes that complicate the study of important low energy effects.
While the possibility to extract electronic correlations from coincidence counts in ARPES immediately suggests a variety of applications, such as elucidating unconventional pairing mechanisms in high-temperature superconductors or heavy-fermion compounds, a key question concerns understanding exactly what this new probe actually measures in a correlated electron system, and how to interpret its results in terms of more intuitive quantities such as pair correlation functions or the superconducting gap. Indeed, it is straightforward to see that the coincidence signal does not map directly onto more readily interpretable superconducting pair correlation functions, since the pairs of detected electrons that comprise a coincidence signal are not necessarily ejected from the sample at the same time. On the other hand, a more microscopic description of the pair ARPES cross section is necessary to permit a formal accounting of final-state effects and the detector geometry, and to differentiate from double photoemission. In this work, we address these questions by developing a generic theoretical description of angle-resolved pair photoemission and studying its behavior in light of superconducting instabilities in the attractive and repulsive Hubbard model on small clusters. We show that pair photoemission need not be limited to interference between initial photoelectrons and valence electrons, and moreover, can occur without the energy penalty of two work functions. This enables detection of pairs of electrons at high energy resolution that may be correlated in the same quantum many-body states. Fig. 1 depicts a schematic of the pair photoemission process for a two-dimensional sample or surface state. Two photons with energy \(\hbar\omega_{\rm ph}\) eject two electrons from the sample, which are subsequently observed at the detector at the same time \(t\) with both angle and energy resolution. For simplicity, but without loss of generality, we ignore bulk effects, and henceforth denote three-dimensional positions and momenta using bold notation \({\bf r}\), whereas their two-dimensional components in the sample plane are denoted by \(\bar{\bf r}\). Suppose that sample and emitted electrons are described by fields \(\hat{\Phi}(\bar{\bf r})\) and \(\hat{\Psi}({\bf r})\), respectively (we suppress implicit spin indices, for conciseness), and are governed by a generic Hamiltonian \(\hat{H}_{0}\) \[\hat{H}_{0}=\hat{H}_{\rm valence}(\hat{\Phi})+\hat{H}_{\rm emitted}(\hat{\Psi})+\hat{H}_{\rm v-e}(\hat{\Phi},\hat{\Psi}) \tag{1}\] such that emitted electrons \(\hat{\Psi}\) behave as freely-propagating waves at long distances from the sample, while appropriately encapsulating final-state effects (inverse LEED) as well as possible back action \(\hat{H}_{\rm v-e}(\hat{\Phi},\hat{\Psi})\), which would be important for Coulomb-drag mediated double photoemission.
The photoemission process \(\hat{H}_{\rm el\text{-}ph}\) now takes a sample electron \(\hat{\Phi}(\bar{\bf r})\) to a propagating final state \(\hat{\Psi}({\bf r})\) \[\hat{H}_{\rm el\text{-}ph}(t)=s(t)\int d^{3}{\bf r}\ g({\bf r})\ \hat{\Psi}^{\dagger}({\bf r})\hat{\Phi}(\bar{\bf r})\ e^{-i\omega_{\rm ph}t}+{\rm h.c.} \tag{2}\] where \(g({\bf r})\) is the dipole matrix element and \(s(t)\) describes a Gaussian probe pulse envelope with \[s(t)=e^{-(t-t_{0})^{2}/2\sigma_{\rm pr}^{2}} \tag{3}\] Subsequently, the photo-electron detector measures the mean momentum \({\bf k}\) of propagating electron wave packets, described by a photo-current \[\langle\hat{J}_{\bf k}\rangle=\frac{{\bf k}}{e}\iint d{\bf x}d{\bf x}^{\prime}\ \phi_{\bf k}^{*}({\bf x})\phi_{\bf k}({\bf x}^{\prime})\ \langle\hat{\Psi}^{\dagger}({\bf x})\hat{\Psi}({\bf x}^{\prime})\rangle \tag{4}\] where \(\phi_{\bf k}({\bf r})\) denotes a wave packet centered at the detector location. ## II Formalism ### Single-Electron ARPES The conventional "single-electron" ARPES signal now follows straightforwardly [5] from a perturbative expansion in \(\hat{H}_{\rm el\text{-}ph}\) of the measured photocurrent \[I_{\bf k} =\int_{-\infty}^{t}d\tau d\tau^{\prime}s(\tau)s(\tau^{\prime})e^{i\omega_{\rm ph}(\tau-\tau^{\prime})}\] \[\times\int d{\bf x}d{\bf x}^{\prime}\phi_{\bf k}^{*}({\bf x})\phi_{\bf k}({\bf x}^{\prime})\int d{\bf r}d{\bf r}^{\prime}g^{*}({\bf r})g({\bf r}^{\prime})\] \[\times\langle\hat{\Phi}^{\dagger}(\bar{\bf r},\tau)\hat{\Psi}({\bf r},\tau)\hat{\Psi}^{\dagger}({\bf x},t)\hat{\Psi}({\bf x}^{\prime},t)\hat{\Psi}^{\dagger}({\bf r}^{\prime},\tau^{\prime})\hat{\Phi}(\bar{\bf r}^{\prime},\tau^{\prime})\rangle \tag{5}\] where \(\langle\cdot\rangle={\rm tr}\{\cdot\ e^{-\beta\hat{H}_{0}}\}/Z\) denotes thermal expectation values with respect to \(\hat{H}_{0}\). If the back action \(\hat{H}_{\rm v-e}\) between emitted and valence electrons can be neglected, this expression simplifies drastically, as \(\langle\hat{\Phi}^{\dagger}(\bar{\bf r},\tau)\hat{\Psi}({\bf r},\tau)\hat{\Psi}^{\dagger}({\bf x},t)\hat{\Psi}({\bf x}^{\prime},t)\hat{\Psi}^{\dagger}({\bf r}^{\prime},\tau^{\prime})\hat{\Phi}(\bar{\bf r}^{\prime},\tau^{\prime})\rangle=\langle\hat{\Phi}^{\dagger}(\bar{\bf r},\tau)\hat{\Phi}(\bar{\bf r}^{\prime},\tau^{\prime})\rangle\langle\hat{\Psi}({\bf r},\tau)\hat{\Psi}^{\dagger}({\bf x},t)\rangle\langle\hat{\Psi}({\bf x}^{\prime},t)\hat{\Psi}^{\dagger}({\bf r}^{\prime},\tau^{\prime})\rangle\).
Furthermore, assuming a single electronic valence band \(\hat{\Phi}(\bar{\bf r})=\sum_{\bar{\bf k}}u_{\bf k}(\bar{\bf r})e^{i\bar{\bf k}\bar{\bf r}}c_{\bar{\bf k}}\) with Bloch function \(u_{\bf k}(\bar{\bf r})\), and neglecting the detector wave packet shape functions \(\phi_{\bf k}({\bf x})\to e^{i{\bf k}{\bf x}}\) (thereby discarding time-of-flight information), one arrives at (with \(\bar{\bf k}\) the in-plane component of \({\bf k}\)): \[I_{\bf k}=-i\int_{-\infty}^{t}d\tau d\tau^{\prime}e^{i\omega_{\rm ph}(\tau-\tau^{\prime})}s(\tau)s(\tau^{\prime})\left|M_{\bf k}\right|^{2}\times\] \[\times\ G_{\bar{\bf k}}^{<}(\tau,\tau^{\prime})\mathcal{G}_{\bf k}^{*}(\tau,t)\mathcal{G}_{\bf k}(t,\tau^{\prime}) \tag{6}\] where \(M_{\bf k}\) is a matrix element evaluated from \(g({\bf r})\) and the Bloch function of the single valence band, \(G_{\bar{\bf k}}^{<}(\tau,\tau^{\prime})\) is the lesser sample Green's function \[G_{\bar{\bf k}}^{<}(\tau,\tau^{\prime})=i\langle\hat{c}_{\bar{\bf k}}^{\dagger}(\tau)\hat{c}_{\bar{\bf k}}(\tau^{\prime})\rangle \tag{7}\] and \[\mathcal{G}_{\bf k}(t,t^{\prime})=-i\langle\hat{\mathcal{T}}\hat{c}_{\bf k}(t)\hat{c}_{\bf k}^{\dagger}(t^{\prime})\rangle \tag{8}\] is the propagating electron Green's function for an inverse LEED state. Finally, a drastically simplified expression can be obtained if \(\mathcal{G}_{\bf k}\) is approximated by a free electron Green's function with dispersion \(\epsilon_{\bf k}={\bf k}^{2}/2m_{0}\). Then, defining the energy \(\omega\) observed at the detector as \[\omega\equiv\omega_{\rm ph}-\frac{{\bf k}^{2}}{2m_{0}}-W \tag{9}\] where \(W\) is the work function of the sample, and taking \(t\rightarrow\infty\), one finally arrives at \[I_{\bf k}=i\int_{-\infty}^{\infty}d\tau d\tau^{\prime}e^{i\omega(\tau-\tau^{\prime})}s(\tau)s(\tau^{\prime})\left|M_{\bf k}\right|^{2}G_{\bar{\bf k}}^{<}(\tau,\tau^{\prime}) \tag{10}\] This is the usual expression for single-particle ARPES in terms of convolutions of shape functions, matrix elements and the lesser Green's function [5].
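As a quick illustration of Eq. (10), consider a single filled level at energy \(\epsilon\) with \(G^{<}(\tau,\tau^{\prime})=ie^{i\epsilon(\tau-\tau^{\prime})}\) (unit occupation and \(|M_{\bf k}|^{2}=1\), our toy convention); the double time integral then collapses to \(|\tilde{s}(\omega+\epsilon)|^{2}\), a Gaussian whose width \(\sim 1/\sigma_{\rm pr}\) sets the energy resolution. The following numerical sketch (our own check, with \(\hbar=1\) and arbitrary units) verifies this:

```python
import numpy as np

sigma_pr = 5.0                         # probe pulse duration
eps = -1.0                             # energy of the filled level
tau = np.linspace(-40.0, 40.0, 4001)
dtau = tau[1] - tau[0]
s = np.exp(-tau**2 / (2.0 * sigma_pr**2))

omegas = np.linspace(-3.0, 3.0, 121)
# the double integral in Eq. (10) factorizes into |s~(omega + eps)|^2
I = np.array([abs(np.sum(s * np.exp(1j * (w + eps) * tau)) * dtau) ** 2
              for w in omegas])

w_peak = omegas[np.argmax(I)]
above_half = omegas[I > I.max() / 2]
print(w_peak)                                 # peak at omega = -eps = 1.0
print(above_half.max() - above_half.min())    # FWHM ~ 2*sqrt(ln 2)/sigma_pr ~ 0.33
```

The peak position tracks the level energy, while the linewidth is set entirely by the pulse envelope, i.e. by the energy-time uncertainty of the probe.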
### Angle-resolved Pair Photoemission Similarly, a coincidence measurement signal can be defined as \[\langle\hat{J}_{{\bf k}_{1}{\bf k}_{2}}\rangle=\frac{{\bf k}_{1}{\bf k}_{2}}{e^{2}}\int d{\bf x}_{1}d{\bf x}_{1}^{\prime}d{\bf x}_{2}d{\bf x}_{2}^{\prime}\ \phi_{{\bf k}_{1}}^{*}({\bf x}_{1})\phi_{{\bf k}_{2}}^{*}({\bf x}_{1}^{\prime})\phi_{{\bf k}_{2}}({\bf x}_{2}^{\prime})\phi_{{\bf k}_{1}}({\bf x}_{2})\sum_{\nu\nu^{\prime}}\langle\hat{\Psi}_{\nu}^{\dagger}({\bf x}_{1},t)\hat{\Psi}_{\nu^{\prime}}^{\dagger}({\bf x}_{1}^{\prime},t)\hat{\Psi}_{\nu^{\prime}}({\bf x}_{2}^{\prime},t)\hat{\Psi}_{\nu}({\bf x}_{2},t)\rangle \tag{11}\] In complete analogy to single-electron ARPES, the photo-detection rate can now be evaluated from a perturbative expansion in \(\hat{H}_{\rm el\text{-}ph}\). To first order the response involves a single photoemission vertex and vanishes. We note that this contribution is essential for Coulomb-mediated double photoemission at high photon energies: there, a perturbative expansion in \(\hat{H}_{\rm v-e}(\hat{\Phi},\hat{\Psi})\) additionally accounts for the Coulomb-interaction-mediated back action of the photoemitted electron, imparting enough energy on a second sample electron to eject it, rendering the coincidence signal non-zero. As discussed above, we are primarily interested in two-photon two-electron pair ARPES processes at lower photon energy; in this regime, the double emission contribution is negligible for energetic reasons. To second order in \(\hat{H}_{\rm el\text{-}ph}\), the two-photon two-electron coincidence photo-detection signal formally reads \[D_{{\bf k}_{1}{\bf k}_{2}}=\sum_{\sigma_{1}\sigma_{1}^{\prime}\sigma_{2}\sigma_{2}^{\prime}\nu\nu^{\prime}}\int\limits_{-\infty}^{t}d\tau_{1}d\tau_{2}\int\limits_{-\infty}^{\tau_{1}}d\tau_{1}^{\prime}\int\limits_{-\infty}^{\tau_{2}}d\tau_{2}^{\prime}\ e^{i\omega_{\rm ph}(\tau_{1}+\tau_{1}^{\prime}-\tau_{2}-\tau_{2}^{\prime})}s(\tau_{1})s(\tau_{1}^{\prime})s(\tau_{2})s(\tau_{2}^{\prime})\int d{\bf r}_{1}d{\bf r}_{1}^{\prime}d{\bf r}_{2}d{\bf r}_{2}^{\prime}g^{*}({\bf r}_{1})g^{*}({\bf r}_{1}^{\prime})g({\bf r}_{2}^{\prime})g({\bf r}_{2})\int d{\bf x}_{1}d{\bf x}_{1}^{\prime}d{\bf x}_{2}d{\bf x}_{2}^{\prime}\ \phi_{{\bf k}_{1}}^{*}({\bf x}_{1})\phi_{{\bf k}_{2}}^{*}({\bf x}_{1}^{\prime})\phi_{{\bf k}_{2}}({\bf x}_{2}^{\prime})\phi_{{\bf k}_{1}}({\bf x}_{2})\langle\hat{\Phi}_{\sigma_{1}}^{\dagger}(\bar{\bf r}_{1},\tau_{1})\hat{\Psi}_{\sigma_{1}}({\bf r}_{1},\tau_{1})\hat{\Phi}_{\sigma_{1}^{\prime}}^{\dagger}(\bar{\bf r}_{1}^{\prime},\tau_{1}^{\prime})\hat{\Psi}_{\sigma_{1}^{\prime}}({\bf r}_{1}^{\prime},\tau_{1}^{\prime})\hat{\Psi}_{\nu}^{\dagger}({\bf x}_{1},t)\hat{\Psi}_{\nu^{\prime}}^{\dagger}({\bf x}_{1}^{\prime},t)\hat{\Psi}_{\nu^{\prime}}({\bf x}_{2}^{\prime},t)\hat{\Psi}_{\nu}({\bf x}_{2},t)\hat{\Psi}_{\sigma_{2}^{\prime}}^{\dagger}({\bf r}_{2}^{\prime},\tau_{2}^{\prime})\hat{\Phi}_{\sigma_{2}^{\prime}}(\bar{\bf r}_{2}^{\prime},\tau_{2}^{\prime})\hat{\Psi}_{\sigma_{2}}^{\dagger}({\bf r}_{2},\tau_{2})\hat{\Phi}_{\sigma_{2}}(\bar{\bf r}_{2},\tau_{2})\rangle \tag{12}\] Assuming negligible back action or Coulomb interactions between photo-emitted electrons and low-energy sample electrons, this daunting multi-point correlation function can be decomposed in analogy to conventional ARPES.
The coincidence detection rate can be written as \[D_{{\bf k}_{1}{\bf k}_{2}}=\int\limits_{-\infty}^{t}d\tau_{1}d\tau_{2}\int\limits_{-\infty}^{\tau_{1}}d\tau_{1}^{\prime}\int\limits_{-\infty}^{\tau_{2}}d\tau_{2}^{\prime}\ s(\tau_{1})s(\tau_{1}^{\prime})s(\tau_{2})s(\tau_{2}^{\prime})\times\int d{\bf k}d{\bf k}^{\prime}d{\bf q}\sum_{\sigma_{1}\sigma_{1}^{\prime}\sigma_{2}\sigma_{2}^{\prime}\nu\nu^{\prime}}G_{\bar{\bf k}\bar{\bf k}^{\prime}\bar{\bf q}}^{\sigma_{1}\sigma_{1}^{\prime}\sigma_{2}^{\prime}\sigma_{2}}(\tau_{1},\tau_{1}^{\prime},\tau_{2}^{\prime},\tau_{2})e^{i\omega_{\rm ph}(\tau_{1}+\tau_{1}^{\prime}-\tau_{2}-\tau_{2}^{\prime})}\left[F_{{\bf k}{\bf q}}^{\sigma_{1}\sigma_{1}^{\prime}\nu\nu^{\prime}}(\tau_{1},\tau_{1}^{\prime})\right]^{*}F_{{\bf k}^{\prime}{\bf q}}^{\sigma_{2}\sigma_{2}^{\prime}\nu\nu^{\prime}}(\tau_{2},\tau_{2}^{\prime}) \tag{13}\] where \[G_{\bar{\bf k}\bar{\bf k}^{\prime}\bar{\bf q}}^{\sigma\sigma^{\prime}\nu^{\prime}\nu}(\tau_{1},\tau_{1}^{\prime},\tau_{2}^{\prime},\tau_{2})=\langle\hat{c}_{\bar{\bf k}\sigma}^{\dagger}(\tau_{1})\hat{c}_{\bar{\bf q}-\bar{\bf k}\sigma^{\prime}}^{\dagger}(\tau_{1}^{\prime})\hat{c}_{\bar{\bf q}-\bar{\bf k}^{\prime}\nu^{\prime}}(\tau_{2}^{\prime})\hat{c}_{\bar{\bf k}^{\prime}\nu}(\tau_{2})\rangle \tag{14}\] is the two-particle Green's function for the sample, and \[F_{{\bf k}{\bf q}}^{\sigma\sigma^{\prime}\nu\nu^{\prime}}(\tau,\tau^{\prime})=\int d{\bf p}\ \phi_{{\bf k}_{1}}({\bf p})\phi_{{\bf k}_{2}}({\bf q}-{\bf p})\ M_{{\bf q}-{\bf k}}M_{\bf k}\ \langle 0|\hat{\Psi}_{{\bf p}\nu^{\prime}}(t)\hat{\Psi}_{{\bf q}-{\bf p}\nu}(t)\hat{\Psi}_{{\bf q}-{\bf k}\sigma^{\prime}}^{\dagger}(\tau^{\prime})\hat{\Psi}_{{\bf k}\sigma}^{\dagger}(\tau)|0\rangle \tag{15}\] is a four-point function for the photo-emitted electrons, which includes the Fourier-transformed detector shape functions \(\phi_{\bf k}(\cdot)\) and is evaluated with respect to the vacuum state. Furthermore, \(M_{\bf k}\) denote photo-excitation matrix elements defined in terms of the dipole matrix element and valence electron Bloch functions introduced above. \(F_{{\bf k}{\bf q}}(\tau,\tau^{\prime})\) encodes both propagation and time-of-flight information, as well as interactions between the two photo-emitted electrons. A drastic simplification follows from treating emitted electrons as free fermions. In this case, \(F_{{\bf k}{\bf q}}(\tau,\tau^{\prime})\) factorizes into free-electron propagators, where \(\epsilon_{\bf k}={\bf k}^{2}/2m_{0}\) is the dispersion of the photo-emitted electrons. In analogy to the theory for conventional ARPES [5], we can now make the assumption that the detector wave packet momentum width can be neglected, \(\phi_{\bf k}({\bf p})\rightarrow\delta({\bf p}-{\bf k})\), again discarding time-of-flight information and dependence on the detector position.
In terms of the energies observed at the two detectors, define \[\omega_{1,2}\equiv\omega_{\rm ph}-\frac{{\bf k}_{1,2}^{2}}{2m_{0}}-W \tag{17}\] with \(W\) the work function of the sample. Taking \(t\rightarrow\infty\), the coincidence detection rate can be written as \[D_{{\bf k}_{1}{\bf k}_{2}}^{(0)}=\int\limits_{-\infty}^{\infty}d\tau_{1}d\tau_{1}^{\prime}d\tau_{2}d\tau_{2}^{\prime}\ s(\tau_{1})s(\tau_{1}^{\prime})s(\tau_{2})s(\tau_{2}^{\prime})\sum_{\sigma\sigma^{\prime}}\langle\hat{\mathcal{T}}\hat{c}_{{\bf k}_{1}\sigma}^{\dagger}(\tau_{1})\hat{c}_{{\bf k}_{2}\sigma^{\prime}}^{\dagger}(\tau_{1}^{\prime})\hat{\mathcal{T}}\hat{c}_{{\bf k}_{2}\sigma^{\prime}}(\tau_{2}^{\prime})\hat{c}_{{\bf k}_{1}\sigma}(\tau_{2})\rangle e^{i\left[\omega_{1}(\tau_{1}-\tau_{2})+\omega_{2}(\tau_{1}^{\prime}-\tau_{2}^{\prime})\right]} \tag{18}\] where \(\hat{\mathcal{T}}\) denotes time ordering, and we additionally omitted the photo-excitation matrix elements \(M_{\bf k}\) for conciseness. ### Fermi's Golden Rule This expression can be recast in a more familiar Fermi's Golden Rule form by inserting complete sets of states for the \(N\), \(N-1\), and \(N-2\) particle sectors. Also, if we neglect the time dependence of the shape functions, so that we concentrate only on frequency resolution, the time integrals can be performed and the following expression is obtained: \[D_{{\bf k}_{1},{\bf k}_{2}}^{(0)}(\omega_{1},\omega_{2})=\sum_{n}\mid M_{0,n}({\bf k}_{1},{\bf k}_{2},\omega_{1},\omega_{2})\mid^{2}\delta(E_{n}-E_{0}+\omega_{1}+\omega_{2}) \tag{19}\] with \(E_{0}\) denoting the \(N\) particle ground state energy and \(E_{n}\) the eigenenergies of the \(N-2\) particle sector - viz., the expression is simply a matrix element squared times a term that enforces energy conservation. The matrix element reads \[M_{0,n}({\bf k}_{1},{\bf k}_{2},\omega_{1},\omega_{2})=\sum_{m,\sigma_{1},\sigma_{2}}\Bigg\{\frac{\langle n|\,\hat{c}_{{\bf k}_{2}\sigma_{2}}\,|m\rangle\,\langle m|\,\hat{c}_{{\bf k}_{1}\sigma_{1}}\,|0\rangle}{E_{m}-E_{0}+\omega_{1}-i\eta}-\frac{\langle n|\,\hat{c}_{{\bf k}_{1}\sigma_{1}}\,|m\rangle\,\langle m|\,\hat{c}_{{\bf k}_{2}\sigma_{2}}\,|0\rangle}{E_{m}-E_{0}+\omega_{2}-i\eta}\Bigg\} \tag{20}\] with \(E_{m}\) the \(N-1\) particle sector eigenvalues. Note that this expression bears a strong resemblance to the Kramers-Heisenberg expression for resonant inelastic x-ray scattering (RIXS) [7], in which the manifold of \(N-1\) states \(\{|m\rangle\langle m|\}\) plays the role played in RIXS by the intermediate \(N+1\) core hole states created when a core electron is photoexcited into the valence band. While in RIXS the final states have the same number of electrons \(N\) as the initial state, as the core hole is refilled via photo-deexcitation, the pair photoemission final states have two fewer electrons, \(N-2\). Despite their apparent differences, the functional form of Eq. (20) indicates that we would expect resonant pair photoemission whenever one or both of the frequencies \(\omega_{1,2}\) correspond to the \(N-1\) removal state energies observed in photoemission, rather than the core-valence transition energies as in RIXS.
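Equations (19) and (20) are straightforward to evaluate by exact diagonalization on small systems. The following Python sketch (our own minimal illustration, not the cluster calculation used below) does so for a two-site Hubbard "dimer" at half filling, encoding fermions via a Jordan-Wigner construction, fixing the spin indices rather than summing over them, and broadening the energy-conserving delta function into a Lorentzian of width \(\eta\); the hopping \(t\), interaction \(U\) and \(\eta\) are arbitrary choices.

```python
import numpy as np
from functools import reduce

def annihilators(n_orb):
    """Fermionic annihilation operators on the 2^n_orb Fock space (Jordan-Wigner)."""
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])     # |0><1|
    z = np.diag([1.0, -1.0])                    # string operator
    I2 = np.eye(2)
    return [reduce(np.kron, [z] * i + [sm] + [I2] * (n_orb - i - 1))
            for i in range(n_orb)]

n_orb = 4                                       # orbital index = 2*site + spin
c = annihilators(n_orb)
num = [ci.T @ ci for ci in c]
I16 = np.eye(2 ** n_orb)

t_hop, U = 1.0, 4.0
H = np.zeros_like(I16)
for s in (0, 1):                                # hopping between sites 0 and 1
    H -= t_hop * (c[s].T @ c[2 + s] + c[2 + s].T @ c[s])
for site in (0, 1):                             # particle-hole symmetric Hubbard U
    H += U * (num[2 * site] - 0.5 * I16) @ (num[2 * site + 1] - 0.5 * I16)

def sector(N):
    """Eigenvalues/eigenvectors restricted to the N-particle sector."""
    idx = [b for b in range(2 ** n_orb) if bin(b).count("1") == N]
    P = I16[:, idx]
    E, V = np.linalg.eigh(P.T @ H @ P)
    return E, P @ V

E2, V2 = sector(2)                              # N = 2: initial (half-filled) sector
E1, V1 = sector(1)                              # N - 1 intermediate states
E0, V0 = sector(0)                              # N - 2 final states
gs, Eg = V2[:, 0], E2[0]                        # ground state and its energy

def c_k(k, s):
    """Momentum annihilation operator for the two-site 'chain', k in {0, pi}."""
    return (c[s] + np.exp(1j * k) * c[2 + s]) / np.sqrt(2)

def D0(k1, s1, k2, s2, w1, w2, eta=0.05):
    """Eqs. (19)-(20) with the delta function Lorentzian-broadened."""
    ck1, ck2 = c_k(k1, s1), c_k(k2, s2)
    total = 0.0
    for n in range(V0.shape[1]):
        M = 0j
        for m in range(V1.shape[1]):
            M += (V0[:, n].conj() @ ck2 @ V1[:, m]) * (V1[:, m].conj() @ ck1 @ gs) \
                 / (E1[m] - Eg + w1 - 1j * eta)
            M -= (V0[:, n].conj() @ ck1 @ V1[:, m]) * (V1[:, m].conj() @ ck2 @ gs) \
                 / (E1[m] - Eg + w2 - 1j * eta)
        total += abs(M) ** 2 * (eta / np.pi) / ((E0[n] - Eg + w1 + w2) ** 2 + eta ** 2)
    return total

print(D0(0.0, 0, 0.0, 1, 1.0, 1.0))             # opposite-spin pair, k1 = k2 = 0
```

Scanning \(\omega_{1}+\omega_{2}\) traces out the \(N-2\) removal energies \(E_{n}-E_{0}\), while the resonance denominators enhance the signal whenever \(\omega_{1,2}\) hit the single-particle removal energies \(E_{m}-E_{0}\), as discussed above.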
To illustrate the differences between pair photoemission and uncorrelated single-particle photoemission, and how information can be obtained from both, we begin by recalling that the "pairing energy" \(\Delta({\bf k}_{1},{\bf k}_{2})\) for two momentum states \({\bf k}_{1,2}\) is \(\Delta({\bf k}_{1},{\bf k}_{2})=E_{2}({\bf k}_{1},{\bf k}_{2})-E_{1}({\bf k}_{1})-E_{1}({\bf k}_{2})+E_{0}\), where \(E_{N}\) denotes the energies of the \(N\)-particle removal states; viz., single-particle photoemission yields \(E_{1}-E_{0}\) and pair photoemission yields \(E_{2}-E_{0}\). By inspection of Eqs. (19) and (20), we can see that \(E_{2}-E_{0}\) is determined by the overall energy conservation through \(\omega_{1}+\omega_{2}\). In other words, this is given by the slope of the line connecting \(\omega_{1}\) and \(\omega_{2}\) for pair photoemission when plotted as a function of both frequencies. The resonance denominator of Eq. (20) shows that the intensity on this line is modulated when \(\omega_{1,2}\) coincide with the single-particle energies \(E_{1}-E_{0}\) observed in single-particle photoemission. 

### Retarded pairing correlator

Suppose that the measured pair emission signal is obtained as a function of sum and difference frequencies \[\omega=\omega_{1}+\omega_{2}\,,\quad\Delta\omega=\omega_{1}-\omega_{2}. \tag{21}\] By inspection of Eq. (18) one can see that the difference frequency \(\Delta\omega\) parameterizes the "retardation" of the pair emission process from the sample, i.e. the time delay between emission of the first and second electron of an observed pair. Integration over the difference frequency \(\Delta\omega\) then yields \[D_{{\bf k}_{1}{\bf k}_{2}}^{(0)}(\omega)=\int\limits_{-\infty}^{\infty}d\Delta\omega\ D_{{\bf k}_{1}{\bf k}_{2}}^{(0)}\left(\frac{\omega+\Delta\omega}{2},\frac{\omega-\Delta\omega}{2}\right)\] \[=\int\limits_{-\infty}^{\infty}dt\ e^{-i\omega t}\int\limits_{-\infty}^{\infty}d\tau\langle\hat{c}_{{\bf k}_{1}}^{\dagger}(t)\hat{c}_{{\bf k}_{2}}^{\dagger}(t+\tau)\hat{c}_{{\bf k}_{2}}(\tau)\hat{c}_{{\bf k}_{1}}(0)\rangle \tag{22}\] which can be straightforwardly expressed as a spectral decomposition \[D_{{\bf k}_{1}{\bf k}_{2}}^{(0)}(\omega)= \sum_{n,m}\ \ \delta(\omega+E_{0}-E_{n})\left|\langle n|\,\hat{c}_{{\bf k}_{1}}\,|m\rangle\,\langle m|\,\hat{c}_{{\bf k}_{2}}\,|0\rangle\right|^{2} \tag{23}\] Thus coincidence pair ARPES can yield the dynamic superconducting pairing susceptibility. In a BCS superconductor, the resulting response has a peak at finite \(\omega\) that corresponds to the momentum-dependent superconducting gap. Spin-resolved pairing, as well as pairing that can occur at finite momenta corresponding to a pair density wave, was recently examined in Ref. [8]. Importantly, coincidence pair ARPES can also provide measurements of the dynamic pair susceptibility in materials at temperatures above the ordered phase, or for systems that may be highly frustrated or condense into a different, non-superconducting pair state. While dynamic pairing correlations have been computed via different numerical methods, such as determinant quantum Monte Carlo [9; 10], experimental susceptibility measurements have been lacking.
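Continuing the sketch above (same hypothetical array conventions), the \(\Delta\omega\)-integrated response of Eq. (23) requires no resonance denominators at all:

```python
import numpy as np

def pair_susceptibility(E0, En, c1_nm, c2_m0, w_grid, sigma=0.1):
    """Delta-omega-integrated pair response of Eq. (23), Gaussian-broadened."""
    # |<n|c_{k1}|m>|^2 |<m|c_{k2}|0>|^2, summed over intermediate states m
    w_n = (np.abs(c1_nm) ** 2 * (np.abs(c2_m0) ** 2)[None, :]).sum(axis=1)
    return np.array([(w_n * np.exp(-(w + E0 - En) ** 2 / (2 * sigma ** 2))).sum()
                     for w in w_grid])
```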
## III Applications

### Free electrons

If the valence electrons within the sample are free, the four-point function factorizes to \[G^{(2)}_{\mathbf{k}_{1},\mathbf{k}_{2}}(\tau_{1},\tau_{1}^{\prime},\tau_{2}^{\prime},\tau_{2})\to G^{<}_{\mathbf{k}_{1}}(\tau_{1}-\tau_{2})\ G^{<}_{\mathbf{k}_{2}}(\tau_{1}^{\prime}-\tau_{2}^{\prime}) \tag{24}\] for \(\mathbf{k}_{1}\neq\mathbf{k}_{2}\), and the coincidence detection rate becomes a product of single-particle ARPES detection rates \[D^{(0)}_{\mathbf{k}_{1}\mathbf{k}_{2}}(\omega_{1},\omega_{2})\to P_{\mathbf{k}_{1}}(\omega_{1})P_{\mathbf{k}_{2}}(\omega_{2}) \tag{25}\] which only contributes if the quantum numbers of the photodetected electrons are not identical, due to Pauli exclusion. This is a useful check to determine the overall magnitude of the pair ARPES signal compared to single-particle ARPES, and can help to assess the spectral intensities of two-particle collective modes separately from the single-particle continuum. 

### BCS theory

If the system of interest is well-described by a BCS mean-field ansatz, the valence band is again composed of free Bogoliubov fermions. In this case, the four-point function factorizes, and the coincidence pair photoemission signal additionally includes a pairing term: \[D^{(0)}_{\mathbf{k}_{1}\mathbf{k}_{2}}(\omega_{1},\omega_{2})=P_{\mathbf{k}_{1}}(\omega_{1})P_{\mathbf{k}_{2}}(\omega_{2})+\left|P^{\mathrm{pair}}_{\mathbf{k}_{1}\mathbf{k}_{2}}(\omega_{1},\omega_{2})\right|^{2} \tag{26}\] where \[P^{\mathrm{pair}}_{\mathbf{k}_{1}\mathbf{k}_{2}}(\omega_{1,2})=\int d\tau d\tau^{\prime}s(\tau)s(\tau^{\prime})e^{i(\omega_{1}\tau+\omega_{2}\tau^{\prime})}\langle\mathcal{T}\hat{c}^{\dagger}_{\mathbf{k}_{1}}(\tau)\hat{c}^{\dagger}_{\mathbf{k}_{2}}(\tau^{\prime})\rangle \tag{27}\] is a weighted time average of the time-ordered anomalous Green's function. For the case where the shape functions \(s(t)=1\), the BCS singlet pair wavefunction gives the value \[P^{\mathrm{pair}}_{\mathbf{k}_{1}\mathbf{k}_{2}}(\omega_{1,2}) =\delta(\mathbf{k}_{1}+\mathbf{k}_{2})\delta(\sigma_{1}+\sigma_{2})\delta(\omega_{1}+\omega_{2})\times\frac{\Delta_{\mathbf{k}_{1}}}{(\omega_{1}-i\eta)^{2}-E_{\mathbf{k}_{1}}^{2}} \tag{28}\] with the Bogoliubov energy given by \(E_{\mathbf{k}}^{2}=\epsilon_{\mathbf{k}}^{2}+\Delta_{\mathbf{k}}^{2}\) for free-particle dispersion \(\epsilon_{\mathbf{k}}\), and \(\eta\) a small real quantity [6]. We note that Eq. (28) yields a sharp peak at the Fermi level (\(\omega_{1,2}=0\)) when the delta functions are satisfied, indicating that pair ARPES can be used to detect the underlying Cooper pair structure in terms of center-of-mass spin (i.e., singlet versus triplet) and momentum (i.e., Fulde-Ferrell or pair density-wave state), as has been noted previously [6; 8]. Moreover, the fermion momentum dependence of the energy gap \(\Delta(\mathbf{k})\) can be scanned and directly measured. 

### Hubbard models for correlated electrons

The single-band Hubbard model may provide a simple way to characterize the behavior of pair photoemission for correlated electrons in systems without superconducting long-range order. Specifically we will utilize eigenstates of the particle-hole symmetric Hubbard model \[H=-t\sum_{\langle i,j\rangle,\sigma}c^{\dagger}_{i,\sigma}c_{j,\sigma}+U\sum_{i}(n_{i,\uparrow}-\frac{1}{2})(n_{i,\downarrow}-\frac{1}{2}) \tag{29}\] on an 8A (diamond) Betts cluster [12] to construct pair ARPES [Fig. 2(a)].
Here \(c_{i,\sigma}\) and \(c^{\dagger}_{i,\sigma}\) respectively remove and add a particle at site \(i\) with spin \(\sigma\), \(n_{i,\sigma}\) is the particle density per spin at site \(i\), \(t\) denotes the hybridization between nearest-neighbor sites \(i\) and \(j\), and \(U\) is a measure of the local interaction between opposite spins. Throughout we assume units where \(\hbar=1\). While much work has been performed, via density matrix renormalization group (DMRG) techniques for example, to ascertain whether the Hubbard model in the thermodynamic limit harbors superconductivity, our goal is more modest. By examining the eigenstates and constructing pair ARPES on finite clusters, which cannot have a bona fide phase transition, we may be able to highlight how coincidence spectroscopy can be used to quantitatively measure pair-field susceptibilities in systems where \(U(1)\) gauge symmetry is not broken but fluctuating order may be inferred. Pairing has long been investigated in exact diagonalization studies of the Hubbard model on small clusters [13; 14; 15]. The pair binding energy \(\Delta\) is defined as the sum of the ground state energies of the \(N\)- and \(N-2\)-particle systems minus twice the energy of the \(N-1\)-particle system: \[\Delta=E_{N}+E_{N-2}-2E_{N-1} \tag{30}\] A negative \(\Delta\) indicates an effective electron pair attraction. The pair binding energy \(\Delta\) obtained for the repulsive and attractive Hubbard model at half filling, \(N_{electrons}=8=N\), is shown in Fig. 2(b). For repulsive \(U\), \(\Delta\) is negative for \(U/t\lesssim 8\) and becomes positive for larger values. The ground state of the attractive Hubbard model (\(U<0\)) can be well modeled as a BCS superconducting paired state [15], and possesses a pair binding energy that increases with \(\mid U\mid\). The ARPES spectra are identical for repulsive or attractive \(U\) via particle-hole symmetry. Fig. 2(c) depicts the spectral functions as a function of \(U/t\), with peaks corresponding to the three unique momenta on the 8A Betts cluster, \((0,0),(\pi/2,\pi/2)\) (6-fold degenerate), and \((\pi,\pi)\), shown in blue, green and red. As noted previously [11], spectral peaks move to deeper binding energies as \(\mid U\mid\) is increased, and the development of the lower Hubbard band can be more clearly observed. While a pairing gap is clearly observable for attractive \(U\), a superconducting gap cannot be distinguished from a Mott gap in single-electron ARPES. Indeed, the spectra for positive and negative \(U\) are identical by virtue of particle-hole symmetry. This further motivates investigating pair photoemission, which intrinsically discriminates between pair and density excitations. By inspection of the denominators in Eq. (20), one can expect that for pair ARPES the largest intensity for a given \(\mathbf{k_{1}},\mathbf{k_{2}}\) will be obtained when the energies \(\omega_{1},\omega_{2}\) are tuned to the respective energy positions of the ARPES removal spectra, giving roughly a pattern similar to that obtained by simply multiplying the two independent ARPES spectral functions for photoemitted electrons with opposite spin. We focus only on momentum states lying closest to the chemical potential and consider two-particle removal ARPES spectra for opposite spins and momenta \(\mathbf{k_{1,2}}\) drawn from the six degenerate momentum points \((\pm\pi/2,\pm\pi/2),(\pi,0)\), and \((0,\pi)\).
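As a concrete illustration of this setup, here is a self-contained numpy sketch of the exact-diagonalization step behind Eq. (30), built from Jordan-Wigner fermion operators. For brevity it uses a 4-site ring rather than the 8A Betts cluster of the paper, so the numbers it produces are illustrative only:

```python
import numpy as np
from functools import reduce

L, t, U = 4, 1.0, -3.0          # sites on a ring; hopping; attractive Hubbard U
n_modes = 2 * L                 # spin-up modes 0..L-1, spin-down modes L..2L-1
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])           # single-mode annihilation

def c_op(j):
    """Jordan-Wigner annihilation operator for fermionic mode j."""
    return reduce(np.kron, [Z] * j + [a] + [I2] * (n_modes - j - 1))

c = [c_op(j) for j in range(n_modes)]
cd = [op.T for op in c]                          # real matrices: dagger = transpose
n = [cd[j] @ c[j] for j in range(n_modes)]
Id = np.eye(2 ** n_modes)

H = np.zeros_like(Id)
for s in (0, L):                                 # both spin species
    for i in range(L):
        j = (i + 1) % L                          # nearest neighbours on the ring
        H -= t * (cd[s + i] @ c[s + j] + cd[s + j] @ c[s + i])
for i in range(L):                               # particle-hole symmetric U term
    H += U * (n[i] - 0.5 * Id) @ (n[L + i] - 0.5 * Id)

Ntot = np.diag(sum(n))                           # number operator is diagonal here
def E_ground(num):
    sector = np.isclose(Ntot, num)
    return np.linalg.eigvalsh(H[np.ix_(sector, sector)])[0]

# Eq. (30): pair binding energy at half filling (N = L electrons)
print("Delta =", E_ground(L) + E_ground(L - 2) - 2 * E_ground(L - 1))
```

Replacing the ring geometry with the Betts cluster connectivity and adding momentum-resolved operators would reproduce the cluster calculation described in the text.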
Obtaining the eigenstates for the sectors containing \(N_{electrons}=8,7,6\) allows for the construction of Fermi-golden-rule pair ARPES spectral functions \(D_{\mathbf{k_{1},k_{2}}}(\omega_{1},\omega_{2})\) via Eqs. (19) and (20), or equivalently via Eq. (22) upon inclusion of the probe shape functions. We focus on spin-resolved pair photoemission spectra; the spin-agnostic response follows via summing equal- and opposite-spin contributions. The resulting pair photoemission spectra are shown in Fig. 2(d) for \(\mathbf{k}_{1}=-\mathbf{k}_{2}=(\pi/2,\pi/2)\) and opposite spins \(\sigma_{1}=\uparrow,\sigma_{2}=\downarrow\), for both repulsive and attractive interactions. While both cases show a primary peak at equal pair photoemission energies \(\omega_{1}=\omega_{2}\), corresponding to the particle-hole-symmetric Hubbard gap, a key new feature is the emergence of a pair of additional peaks for the attractive Hubbard model, with \(\omega_{1}+\omega_{2}=0\). These directly probe the pair-breaking intermediate state and can be understood as a two-step process: (1) first, a photon breaks a Cooper pair to photoemit an electron, while leaving an unpaired electron with pair-breaking energy \(2\Delta\) in the sample; (2) subsequently, the second photon photoemits this unpaired electron while removing the extra intermediate-state energy from the sample. As the final state with two electrons removed from a fully-paired superconductor has the same energy as the initial state, the total energy \(\omega_{1}+\omega_{2}\) left in the sample by the photoemission process must equal zero; the intermediate pair-breaking state remains encoded in the energy difference \(\omega_{1}-\omega_{2}\).

Figure 2: **Pair ARPES for the attractive and repulsive Hubbard model.** (a) Schematics of the eight-site Betts cluster. (b) Pair binding energy as a function of repulsive and attractive Hubbard interactions at half filling. (c) Single-particle spectrum \(A(\omega)\) as a function of interactions and momentum (blue, green, red correspond to \(\mathbf{k}=0,(\pi/2,\pi/2),(\pi,\pi)\), respectively). (d) Top and bottom row panels show opposite-spin \(D_{\mathbf{k},-\mathbf{k}}(\omega_{1},\omega_{2})\) for repulsive and attractive interactions, respectively, from \(\left|U\right|=0.5\) (left) to \(\left|U\right|=3\) (right). Dashed lines (center and difference frequencies) are guides to the eye. All energies are quoted in units of \(t=1\). While the main photoemission peak with \(\omega_{1}=\omega_{2}\) identically tracks attractive and repulsive Coulomb interactions, for \(U<0\) pair ARPES reveals the pair-breaking intermediate state via the frequency difference spectrum \(\omega_{1}-\omega_{2}\) for \(\omega_{1}+\omega_{2}=0\) (bottom row).

These observations can be confirmed by comparing the pair photoemission response to uncorrelated pairs of single photoemission processes, depicted in Fig. 3. The dependence on each of the fermion momenta, as well as on the net total momentum \(\mathbf{q}=\mathbf{k_{1}}+\mathbf{k_{2}}\) and net spin \(\sigma=\sigma_{1}+\sigma_{2}\), can reveal further information about the pair wave function. Fig. 4 plots pair ARPES for different combinations of photoemitted wavevectors and spins, for \(|U|/t=3\), as a function of center \(\omega_{1}+\omega_{2}\) and relative \(\omega_{1}-\omega_{2}\) frequencies. As expected for singlet pairing in the attractive Hubbard model, one immediately finds that equal-spin photoemitted electrons [Fig.
4(b)] lack the pair-breaking peaks at zero center frequency of the opposite-spin response in Fig. 2(d). In contrast, observations of equal-spin pair-breaking peaks in the correlated pair ARPES response would be suggestive of triplet pairing instabilities. A similar argument follows for photoemitted pairs of electrons with equal momentum \(\mathbf{k}_{1}=\mathbf{k}_{2}\), as shown in Fig. 4(c) and (d); here, an observation of zero-frequency side peaks would be indicative of finite-momentum pairing [16]. Pair photoemission for other momenta remains strongly suppressed [Fig. 4(e)-(h)] for attractive interactions. It is expected that these results will be affected by the finite size and geometry of the small cluster, as well as by symmetry-breaking terms, such as \(t^{\prime}\), that can break momentum degeneracies. For example, the pair-field correlator obtained for the same Hamiltonian on a \(4\times 2\) cluster that breaks \(C_{4}\) symmetry, increasing the number of non-degenerate momentum points from \(3\) in the 8A cluster to \(6\), gives quantitatively the same results for attractive and repulsive \(\mid U\mid=4t\). The largest low-frequency contribution is for pair momenta \(\mathbf{q}=(\pi,0)\) and \((\pi/2,\pi)\). By including a negative next-nearest hopping \(t^{\prime}=-0.25t\), the low-energy pair-field correlations are largest for \(\mathbf{q}=(\pi,0)\) and \((0,\pi)\) for \(U=4t\), while for \(U=-4t\), \(\mathbf{q}=(0,0)\) is still largest. These effects should be further addressed on larger clusters and with different geometries. Lastly, here we have restricted consideration to zero-temperature pair ARPES. One key application of pair ARPES could be to approach ordered phases from high temperature to measure how pair-field correlations develop, either towards a true superconducting transition, or averted by the onset of another competing order, such as charge and/or spin density waves. As these phases all appear to have nearly the same ground state energies in simulations of the Hubbard model, an experimental investigation may provide finer insight into which terms may be missing from the Hubbard model and could establish closer contact with materials such as the high-temperature superconductors. In summary, we have presented a theory for pair ARPES whereby two photons produce two photoelectrons detected in coincidence, resolved in both energy and momentum. The corresponding two-particle removal spectra can thus be exploited to determine the effects of electron correlations in a direct way. The calculated pair response for the attractive and repulsive Hubbard model at half filling on an 8-site Betts cluster shows spectroscopically how prominent ordering tendencies towards superconductivity, as well as the net pair momentum and spin, can be inferred directly from experiments. Experimentally, to make the data as close to the superconducting pair correlation function as possible, one should try to eject the electron pair from the sample at the same time. In such an "instantaneous event", two photons eject two electrons in an "interacting volume", for example a Cooper pair in a superconductor, or a pair in a Mott insulator that is sufficiently entangled. In a Cooper pair, this means two electrons within the superconducting coherence volume. In a Mott insulator, assuming that the Hubbard model is a reasonable starting point, this means two electrons not far from each other, so that a cascade of local interactions can entangle the electrons.
Figure 3: **Comparing Single and Pair Photoemission.** (a) and (b) depict pair ARPES \(D_{\mathbf{k},-\mathbf{k}}(\omega_{1},\omega_{2})\) and uncorrelated pairs of single photoemission events \(P_{\mathbf{k}}(\omega_{1})\times P_{-\mathbf{k}}(\omega_{2})\), respectively, with line cuts for center and difference frequencies shown in (c). Depicted responses are computed for attractive interactions \(U/t=-3\) with shape function broadening \(\sigma=4/t\).

Our theoretical calculation was carried out in a small cluster such that the entanglement is naturally strong. Such pair photoemission is an experiment with many technical challenges. However, several recent technological advances make it realistic. The first is the emergence of much improved and suitable light sources, such as UV lasers, high harmonic generation, free electron lasers, and photon focusing schemes. Photons within a very short pulse, such as tens of femtoseconds, can be considered as identical and instantaneous within time-of-flight spectrometers having picosecond resolution. The second is the time-of-flight (TOF) based three-dimensional ARPES platform, such as the momentum microscope and its spin-filtered variant. The third is the development of two-dimensional ultrafast multichannel detectors. With time, and through an integration of these important new technologies, enhanced by timing, energy, and momentum discrimination schemes and machine learning algorithms to improve the signal-to-noise ratio, this new spectroscopy may be developed in the near future. 

###### Acknowledgements.

The authors would like to thank Frank Marsiglio and Joseph Orenstein for insightful discussions. MZ, JEM, DM, FM, PA, ZXS and TPD acknowledge support for the work from the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, and through Contract No. DE-AC02-05-CH11231 via the Quantum Materials program (KC2202) (JEM).
2303.13621
A Novel Approach to the Behavioral Aspects of Cybersecurity
The Internet and cyberspace are inseparable aspects of everyone's life. Cyberspace is a concept that describes widespread, interconnected, and online digital technology. Cyberspace refers to the online world that is separate from everyday reality. Since the internet is a recent advance in human lives, there are many unknown and unpredictable aspects to it that sometimes can be catastrophic to users in financial aspects, high-tech industry, and healthcare. Cybersecurity failures are usually caused by human errors or their lack of knowledge. According to the International Business Machines Corporation (IBM) X-Force Threat Intelligence Index in 2020, around 8.5 billion records were compromised in 2019 due to failures of insiders, which is an increase of more than 200 percent compared to the compromised records in 2018. In another survey performed by the Ernst and Young Global Information Security during 2018-2019, it is reported that 34% of the organizations stated that employees who are inattentive or do not have the necessary knowledge are the principal vulnerabilities of cybersecurity, and 22% of the organizations indicated that phishing is the main threat to them. Inattentive users are one of the reasons for data breaches and cyberattacks. The National Cyber Security Centre (NCSC) in the United Kingdom observed that 23.2 million users who were victims of cybersecurity attacks used a carelessly selected password, which is 123456, as their account password. The Annual Cybersecurity Report published by Cisco in 2018 announced that phishing and spear phishing emails are the root causes of many cybersecurity attacks in recent years. Hence, enhancing the cybersecurity behaviors of both personal users and organizations can protect vulnerable users from cyber threats. Both human factors and technological aspects of cybersecurity should be addressed in organizations for a safer environment.
Sarah Sharifi
2023-01-22T18:58:40Z
http://arxiv.org/abs/2303.13621v1
# A Novel Approach to the Behavioral Aspects of Cybersecurity

###### Abstract

Cybersecurity, Behavioral Cybersecurity, Cyberattack, Behavioral Cyber, Information Security, Human Factors.

## I A Novel Approach to the Behavioral Aspects of Cybersecurity

The Internet and cyberspace are inseparable aspects of everyone's life. Cyberspace is a concept that describes widespread, interconnected, and online digital technology. Cyberspace refers to the online world that is separate from everyday reality. Since the internet and the virtual world are very recent advances in human lives, there are many unknown and unpredictable aspects to them that can sometimes be catastrophic to users in financial aspects [1, 2, 3, 4, 5, 6], the high-tech industry [7, 8, 9, 10, 11, 12, 13], and even life-threatening aspects in healthcare [14, 15, 16, 17, 18, 19, 20]. Cybersecurity failures are usually caused by human errors or a lack of knowledge. According to the International Business Machines Corporation (IBM) X-Force Threat Intelligence Index in 2020, around \(8.5\) billion records were compromised in 2019 due to failures of insiders, which is an increase of more than \(200\) percent compared to the number of records that were compromised in 2018. In another survey performed by Ernst & Young Global Information Security during 2018-2019, it is reported that \(34\%\) of the organizations stated that employees who are inattentive or do not have the necessary knowledge are the principal vulnerabilities of cybersecurity, and \(22\%\) of the organizations indicated that phishing is the main threat to them [21]. As stated earlier, inattentive users are one of the reasons for data breaches and cyberattacks. In fact, the National Cyber Security Centre (NCSC) in the United Kingdom observed that \(23.2\) million users who were victims of cybersecurity attacks used a carelessly selected password, "123456", as their account password. On the other hand, the Annual Cybersecurity Report published by Cisco in 2018 announced that phishing and spear phishing emails are the root causes of a large number of cybersecurity attacks in recent years. Given the examples above, enhancing the cybersecurity behaviors of both personal users and organizations can protect vulnerable users from cyber threats. In fact, both human factors and technological aspects of cybersecurity should be addressed in organizations for a safer environment. There are multiple environmental influences studied as factors that contribute to the cybersecurity behaviors that emerge from the interaction of an individual operating in a cyber environment, which are listed below: 

1. **Organizational Culture for Information Security Management** [22]: Chang and Lin mentioned that the information security technologies utilized in organizations are insufficient for information security management. Hence, although such strategies are important and necessary to implement, whether at the intra-organizational level or across inter-organizational partners, organizations should also adopt a combination of information security and organizational culture aspects. In other words, organizations should monitor both the "outside" patterns and threats and the "inside" human nature, such as relationships and activities among employees, which are mostly hidden and unconscious.
The organizational culture for information security can be categorized as follows: 

* **Cooperativeness:** The cooperativeness trait refers to the internal culture of cooperation, trust, teamwork, empowerment, and information sharing inside the organization. Such behavioral traits create a friendly environment among peers and colleagues and prepare a platform for all members to share information and trust each other, like an extended family. Internal cyberattacks in such a friendly environment are less likely. 
* **Innovation:** Although this trait may not be directly connected to information security, an organization that promotes creativity, entrepreneurship, and adaptability will benefit from the creativity of employees in all aspects, including the cybersecurity domain, to protect against cyberattacks. 
* **Consistency:** The consistency trait emphasizes order, rules and regulations, uniformity, and efficiency. A consistent organization is usually a formalized and regularized company in which cyberattacks and information breaches are less likely. 
* **Effectiveness:** The effectiveness trait emphasizes production, goal achievement, and benefit-oriented measures. A company that empowers such traits in its employees would have a lower risk of cybersecurity attacks and information leaks. 

The four aforementioned constructs of an organization are measured using \(26\) items that are adapted from instruments measuring the culture of a company [23, 24, 25, 26]. 

2. **Policies, Participation in the Security Education, Training, and Awareness Program** [27]: Han et al. mention that most organizations utilize security technologies to induce employees to comply with Information Security Policies (ISP). However, the state-of-the-art literature on ISP compliance has found that, in addition to technological advances in information security, behavioral and social approaches should also be employed to avoid information breaches. The research results by Han et al. suggest that psychological contract fulfillment helps mitigate the adverse effects of costs on the information security compliance intention in supervisor teams. Furthermore, employees who are aware of policies through participating in security education, training, and awareness programs are anticipated to act in accordance with the information security policies, since they are aware of the benefits of complying with them. This study considers a bilateral perspective that compares supervisor and supervisee groups. The reason for separating the supervisor groups from the supervisee groups in this study is that these two groups usually have different employment qualities, are usually in different age groups, and usually differ in their social status and work experience. As a result, the effects of psychological contracts on their behavior in the organizations may differ. The results in the study by Han et al. demonstrate that the impact of psychological contract fulfillment on information security policy compliance is more considerable for managers than for their supervisees. Although the work by Han et al. focuses on the psychological contract effects on information security policy compliance for supervisor groups only, there are many opportunities in studying other personalized features of employees, such as job position or tenure, age, work experience, and social status, to elaborate other factors that influence information security policy compliance.
In a related cross-culture study by Hovav and D'Arcy [28], they showed that social status, age, and gender influence the intention to misuse information security among Korean users. In particular, higher-ranking members of organizations, who have access to more sensitive and crucial information, are more likely to breach the information security policies. Such misuses by senior members of organizations can lead to catastrophic consequences for the company. As a result, it is more important to enforce the perception of psychological contract fulfillment in the supervisor groups rather than the supervisee groups to reinforce the information security policy compliance intention. 

3. **Organizational Structure** [29]: Among the factors that instigate information security breaches are behavioral and psychological characteristics. There are numerous studies in the literature that address the influencing factors of information security policy compliance behavior in companies. However, the study by Hong and Furnell considers the influence of organizational structures. The authors integrate the theory of planned behavior and perceived organizational formalization to study the procedure used to form information security policy compliance behavioral intentions. The authors use data from a survey of organization employees, in which \(261\) people took part, and analyze it using structural equation modeling, with the following results. The empirical results suggest that the behavioral habits and cognitive processes theorized by the theory of planned behavior are notably affected by perceived organizational formalization. This research study proposes that, in an effort to enhance employee information security policy compliance intentions and behavioral habits, organizations should plan a formalized set of procedures, rules, communications, and policies in general. Additionally, this study has the following results. i) Subjective norms can positively influence attitude, perceived behavioral control, and deterrence certainty, and attitude and perceived behavioral control showed strong positive influences on behavioral intention. ii) The decision-making process for information security policy compliance has been demonstrated to be significantly influenced by organizational formalization. Organizational formalization, in particular, had a favorable impact on attitudes toward information security behavior, perceived behavioral control, and subjective norms. iii) Deterrence certainty and behavioral habit can both benefit from organizational formalization. iv) Perceived behavioral control can encourage behavior, and deterrence certainty can encourage information security policy compliance. v) Although subjective norms do not have a direct impact on behavioral intention to comply with information security policies, they do influence it through the mediating effects of attitudes, deterrence certainty, and perceived behavioral control. 

4. **Managerial Participation and Leadership** [30, 31]: An important aspect of behavioral cybersecurity in organizations is the influence of management leadership on the information security behavior of employees. However, the importance of the leadership role in the context of cybersecurity has not been explored extensively. This gap in the literature regarding the influence of leadership on the information security behavior of employees is addressed in the work by Guhr et al. [30]. The research by Guhr et al.
utilizes an interactional psychology approach to link the elements of the full-range leadership model to employees' security compliance and participation intentions. Guhr et al. evaluate a multitheoretical model on a proprietary data set that includes \(322\) people from more than 14 branches across the globe. This research adds to the body of knowledge in the fields of information security and cybersecurity by investigating the way that different leadership approaches improve the intended information security behavior of employees. The empirical results by Guhr et al. highlight the significance of transformational management, as it can have an impact on the behavior of employees at both extra-role and in-role levels regarding information security. In a related work by Hu et al. [31], an individual behavioral model is developed that combines the theory of planned behavior with the role of management and the culture of the company, in order to capture the effects of management on employees' security compliance behavior. Hu et al. find that employee attitudes, perceived behavioral control, and subjective norms regarding compliance with information security regulations are notably influenced by top management participation in information security activities. They also discover that top management involvement has a remarkable impact on corporate culture, which in turn has an impact on the attitudes of employees and the perceived behavioral control over information security regulations. Additionally, Hu et al. discover that the impacts of management involvement and organizational culture on employee behavioral intentions are mediated by employee cognitive beliefs regarding information security policy compliance. In addition to the deterrence-oriented treatments offered in the literature, their findings enhance state-of-the-art information security research by showing how management can play a proactive role in molding the compliance behavior of employees. It has been demonstrated that organizational support for employees in line with information security, as a crucial environmental factor, can play an important role in enhancing the productive performance of employees with regard to cyberattacks and information breaches [32]. In a related research article, Warkentin et al. indicated that employees who have the necessary access to situational support, such as help from managers and colleagues, practicing behaviors, and interpersonal help, tend to find it conducive to their self-efficacy regarding their information security behaviors. An extensive study on the behavioral factors in cybersecurity is undertaken in the work by Hong and Furnell [21] to support the hypotheses that a) "Behavioral comprehensiveness will have a positive impact on habits", b) "Self-efficacy will have a positive impact on habits", c) "Response efficacy will have a positive impact on habits", d) "Self-efficacy will have a positive impact on behavioral comprehensiveness", e) "Response efficacy will have a positive impact on behavioral comprehensiveness", f) "Situational support will have a positive impact on self-efficacy", and g) "Situational support will have a positive impact on response efficacy". Hong and Furnell use a cross-sectional survey to evaluate their research model. According to the 41\({}^{st}\) Statistical Report on Internet Development in China, \(25.4\%\) of the \(772\) million internet users are students, which is the highest proportion among internet user groups.
Since students use the internet for numerous purposes, such as attending online classes and workshops, shopping online, interacting with social media, and sending and receiving emails, they are very vulnerable to cybersecurity attacks. As a result, college students are used as the participants in Hong and Furnell's study. A total of \(432\) students took part in the survey, of which \(393\) questionnaires were included in the statistical analysis. The demographics of the respondents are as follows: \(200\) male (\(51\%\)) and \(193\) female (\(49\%\)) participants responded to the questionnaires, of whom \(39\) were freshmen (first-year students, \(10\%\)), \(108\) were sophomores (second-year students, \(27\%\)), \(169\) were juniors (third-year students, \(43\%\)), and \(77\) were seniors (fourth- and final-year students, \(20\%\)). The measurements that are used in this study are explained below. 

1. **Situational support (SS)**: The SS factor is measured by adopting the approach utilized by Warkentin et al. [33]. The Cronbach's alpha of this scale is \(0.906\). 
2. **Self-efficacy (SE)**: The SE factor is measured by adopting the approach utilized by Bulgurcu et al. [34] and Hu et al. [31]. The Cronbach's alpha of this scale is \(0.928\). 
3. **Response efficacy (RE)**: The RE factor is measured by adopting the approach utilized by Johnston & Warkentin [35]. The Cronbach's alpha of this scale is \(0.915\). 
4. **Habit (HA)**: The HA factor is measured by adopting the approach utilized by Tsai et al. [36]. The Cronbach's alpha of this scale is \(0.925\). 
5. **Behavioral comprehensiveness (BC)**: The BC factor is measured by adopting the approach utilized by Limayem et al. [37]. 
6. **Control variables**: Control variables such as gender, college major, grade, and other scenarios can influence the information security compliance intention. Hence, such factors are taken into account in the data analysis of this study. 

As shown in Figure 1, the loadings of all the items are greater than \(0.5\), all the composite reliability (CR) values are greater than \(0.7\), and all the average variance extracted (AVE) values are greater than \(0.5\). As a result, the assessment is considered to have good convergent validity. Furthermore, the correlations between constructs that are presented in Figure 2 provide support for the hypotheses of this work. In addition, the Bootstrap analysis of the significance test on the serial multiple mediation effects of self-efficacy and behavioral comprehensiveness is presented in Figure 3. The data analyses support the hypotheses that both self-efficacy and response efficacy have a positive impact on behavioral comprehensiveness; situational support has a positive impact on both self-efficacy and response efficacy; and situational support can promote cybersecurity behavioral habits through the serial multiple mediating effects of self-efficacy and behavioral comprehensiveness. There is no doubt that data breaches from cyberattacks can be damaging. As an example, the cyberattack on the Ukraine power grid in 2015 resulted in \(225,000\) people experiencing a power outage [38]. It is observed that the average ransom in attacks jumped from \(\$373\) and \(\$294\) in 2014 and 2015, respectively, to \(\$1077\) in 2016 [38]. Hence, it is crucial to establish a strong curriculum in both high schools and undergraduate schools to familiarize internet users with the consequences of cyberattacks and information theft [39, 40].
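For reference, the Cronbach's alpha values quoted above follow the standard internal-consistency formula \(\alpha=\frac{k}{k-1}\left(1-\sum_{i}\sigma_{i}^{2}/\sigma_{\mathrm{total}}^{2}\right)\); a minimal sketch of the computation (not code from the study) is:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```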
Fig. 1: Factor Loading of Items for SS, SE, RE, and HA [21]. 

Fig. 2: The correlations between constructs [21]. 

Fig. 3: Bootstrap analysis of significance test on serial multiple mediation effects of self-efficacy and behavioral comprehensiveness. 

## II Future Work

In the context of cybersecurity and information security, one of the common themes is to approach employees of large organizations via phishing emails. In such emails, employees are incentivized with immediate rewards and tempting offers to click on links or download documents, as a result of which an information breach occurs. The phishing emails are usually sent to victims at the end of working hours or when they are fatigued. As a solution to avoid information breaches resulting from employees' fatigue, two approaches are presented below: 

1. **Creating identifiers**: Most employees of large companies and organizations communicate via email internally. As a result, emails that are sent from external email addresses can be labelled with an identifier so that employees pay more attention to them (a minimal sketch of such labelling is given at the end of this section). As an example, the employees, including the students, professors, and staff of the University of Texas at Dallas, are usually in contact with each other and send emails internally. Hence, those emails that are sent from addresses external to the University of Texas at Dallas can be labelled for recipients. 
2. **Artificial Intelligence**: As mentioned earlier, employees are usually the victims of cyberattacks and phishing emails when they are fatigued. As a result, artificial intelligence tools such as image processing and video processing can be used to identify the level of fatigue of employees and warn them when the classification algorithms identify that the employees are fatigued. Machine learning methods for classification and learning can be used for fatigue detection of employees [41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]. Relevant methods considering the behavioral aspects of users are used in the literature [52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63]. This approach is certainly more controversial than the first one, since employees may not like to be monitored by image/video processing tools continuously during their work hours, even if the data is deleted and not used elsewhere. Human nature is such that people prefer not to be monitored, but in any case, those employees who are willing to use such a tool can benefit from the fatigue recognition aspect at the expense of being monitored.
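As a sketch of the first approach, a mail gateway can prepend a label to messages whose sender domain is not on the organization's list; the domain and label below are illustrative assumptions, not a prescribed implementation:

```python
INTERNAL_DOMAINS = {"utdallas.edu"}   # hypothetical organizational domain list

def tag_if_external(sender: str, subject: str) -> str:
    """Prefix the subject of mail originating outside the organization."""
    domain = sender.rsplit("@", 1)[-1].lower()
    internal = any(domain == d or domain.endswith("." + d)
                   for d in INTERNAL_DOMAINS)
    return subject if internal else "[EXTERNAL] " + subject
```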
2304.09806
A New $\pm$ Iwasawa Theory and Converse of Gross-Zagier and Kolyvagin Theorem (with an Appendix by Yangyu Fan)
Let $p>3$ be a prime. In this paper we develop a new kind of anticyclotomic local $\pm$-Iwasawa theory at $p$ for Hecke characters of quadratic imaginary fields which is valid for all ramification types of $p$ (split, inert and ramified). As an application we deduce the converse of the Gross-Zagier-Kolyvagin theorem for these CM forms, which states that Selmer rank one implies analytic rank one. To carry out the Iwasawa theory argument we employ a recent construction of a new type of $p$-adic $L$-function by Andreatta-Iovita, generalized by Yangyu Fan to Shimura curves in the Appendix, and a ``virtual Heegner family'' made via a limiting procedure from a Heegner family along the Coleman-Mazur eigencurve constructed by Jetchev-Loeffler-Zerbes.
Xin Wan
2023-04-19T16:45:36Z
http://arxiv.org/abs/2304.09806v1
# A New \(\pm\)-Iwasawa theory and Converse of Gross-Zagier and Kolyvagin Theorem

###### Abstract

Let \(p>3\) be a prime. In this paper we develop a new kind of anticyclotomic local \(\pm\)-Iwasawa theory at \(p\) for Hecke characters of quadratic imaginary fields which is valid for all ramification types of \(p\) (split, inert and ramified). As an application we deduce the converse of the Gross-Zagier-Kolyvagin theorem for these CM forms, which states that Selmer rank one implies analytic rank one. To carry out the Iwasawa theory argument we employ a recent construction of a new type of \(p\)-adic \(L\)-function by Andreatta-Iovita, generalized by Yangyu Fan to Shimura curves in the Appendix, and a "virtual Heegner family" made via a limiting procedure from a Heegner family along the Coleman-Mazur eigencurve constructed by Jetchev-Loeffler-Zerbes.

###### Contents

* 1 Introduction
* 2 Families of Heegner Cycles
* 2.1 Shimura Curves and Heegner Cycles
* 2.2 Construction of Jetchev-Loeffler-Zerbes
* 3 Local Theory
* 3.1 Preliminary
* 3.2 \(\pm\)-theory
* 3.3 Explicit Reciprocity Law for Elliptic Units
* 3.4 Choice of Global Characters
* 3.5 Main Local Definition
* 4 Iwasawa Theory
* 4.1 Rubin's Main Conjecture
* 4.2 Selmer groups and global duality argument
* 4.3 Construction of the Virtual Heegner cycle
* A Appendix: Rational Shimura curve and BDP type \(p\)-adic special value formula at non-split primes (by Yangyu Fan)
* A.1 Nearly overconvergent quaternionic modular forms
* A.2 Iteration of Gauss-Manin connection
* A.3 \(p\)-adic Waldspurger formula for non-split \(p\)
* A.3.1 Explicit Waldspurger formulae
* A.3.2 The \(p\)-adic L-function and the \(p\)-adic Waldspurger formulae

## 1 Introduction

The deep relation between special values of \(L\)-functions and arithmetic objects is a central problem in number theory. Let \(p\) be a prime number. The famous Bloch-Kato conjecture [6] predicts that for a motive whose \(p\)-adic realization is \(\rho\), the rank of its \(p\)-adic Selmer group is equal to the vanishing order of the \(L\)-function \(L(\rho^{\vee}(1),s)\) at \(s=0\). This conjecture is still wide open in general. When \(\rho\) is the motive associated to an elliptic curve \(E\) over \(\mathbb{Q}\), the converse of the Gross-Zagier and Kolyvagin theorem is an important special case predicted by the Bloch-Kato conjecture, and states that if the rank of the Selmer group is \(1\), then the vanishing order of its \(L\)-function is exactly \(1\). In the case when \(E\) has no complex multiplication, the result was proved by Skinner [36] under the assumption of finiteness of the \(p\)-part of the Shafarevich-Tate group \(\Sha_{E}\), and also by Zhang [39] when \(E\) is ordinary at \(p\). These assumptions were later removed by the work of the author [37] and Castella-Wan [13] using anticyclotomic Iwasawa theory. This converse theorem has important arithmetic implications, including results on the average analytic rank of elliptic curves and the result of Bhargava-Skinner-Zhang that at least \(66\) percent of elliptic curves satisfy the rank part of the BSD conjecture. In the CM case, Rubin proved this converse theorem in the \(p\)-ordinary case (i.e. \(p\) is split in the quadratic imaginary field \(\mathcal{K}\)) under the assumption that the \(p\)-part of \(\Sha_{f}\) is finite. Recently Burungale-Tian [10] removed the assumption on the Shafarevich-Tate group.
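In symbols, and suppressing the precise hypotheses of each of the results above, the converse direction at issue is the implication \[\operatorname{corank}_{\mathbb{Z}_{p}}\operatorname{Sel}_{p^{\infty}}(E/\mathbb{Q})=1\ \Longrightarrow\ \operatorname{ord}_{s=1}L(E,s)=1,\] complementing the Gross-Zagier and Kolyvagin direction, which deduces Mordell-Weil rank one and finiteness of \(\Sha\) from analytic rank one.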
In the case when \(E\) has good supersingular reduction (here \(p\) is inert in \(\mathcal{K}\)), Rubin set up a general local Iwasawa theory at \(p\). More precisely, Rubin defined the \(\pm\)-subspaces \(H^{1}_{+}\) (\(H^{1}_{-}\)) of the rank two (over the anticyclotomic Iwasawa algebra \(\Lambda^{-}\)) module \(H^{1}_{\mathrm{Iw}}(\mathcal{K}^{-}_{\infty,p},\psi)\) to be the elements specializing to elements of \(H^{1}_{f}\) (the finite part) at arithmetic points \(\phi\) corresponding to characters of \(\Gamma^{-}\) of order an odd (even) power of \(p\), respectively. Rubin also made a fundamental conjecture stating that \[H^{1}_{\mathrm{Iw}}(\mathcal{K}^{-}_{\infty},\psi)=H^{1}_{+}\oplus H^{1}_{-}.\] On the other hand, Rubin constructed a \(\pm\)-Heegner family \(\kappa_{\pm}\) from cycles over \(\mathcal{K}^{-}_{n}\) of level prime to \(p\) which satisfy the norm relation \[\mathrm{tr}(\kappa_{p^{n+2}})=-\kappa_{p^{n}},\] modified by the \(\pm\) \(p\)-adic logarithm functions. _Remark 1.1_.: The existence of the \(\pm\)-local Iwasawa theory and the \(\pm\)-Heegner family is a special feature of the case that \(p\) is inert. As far as we know this should be the first time in the literature that a \(\pm\)-theory appears, well before the analogous theory for the cyclotomic setting found by Pollack (analytic) and Kobayashi (arithmetic). This conjecture was proved recently by Burungale-Kobayashi-Ota [9], and from it they also deduced an anticyclotomic Iwasawa main conjecture involving the \(\pm\) \(p\)-adic \(L\)-functions, and the converse of the Gross-Zagier and Kolyvagin theorem in this case from Rubin's \(\pm\)-Heegner point. An interesting feature of this theory is that global tools play an essential role in proving local results. The main purpose of this paper is to develop a new kind of \(\pm\)-local Iwasawa theory which applies to all ramification types of \(p\) in \(\mathcal{K}\) (split, inert or ramified) and apply it to show the converse of the Gross-Zagier and Kolyvagin theorem for CM characters of \(\mathcal{K}\). We first fix notation. Let \(p\geq 3\) be an odd prime. Let \(\mathcal{K}\) be a quadratic imaginary field over \(\mathbb{Q}\) such that \(p\) is nonsplit. Let \(\mathfrak{p}\) be the prime of \(\mathcal{K}\) above \(p\). Let \(\psi\) be a Hecke character of \(\mathcal{K}\) with Archimedean type \((1,0)\) whose restriction to \(\mathbb{A}_{\mathbb{Q}}^{\times}\) is \(|\cdot|\chi_{\mathcal{K}/\mathbb{Q}}\), where \(\chi_{\mathcal{K}/\mathbb{Q}}\) is the quadratic character corresponding to \(\mathcal{K}/\mathbb{Q}\). Let \(\mathcal{K}_{\infty}^{-}\) be the anticyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathcal{K}\). Let \(\mathcal{K}_{n}^{-}\) be the sub-extension of \(\mathcal{K}_{\infty}^{-}\) of degree \(p^{n}\) over \(\mathcal{K}\). We have \(\Gamma^{-}:=\operatorname{Gal}(\mathcal{K}^{-}_{\infty}/\mathcal{K})\simeq\mathbb{Z}_{p}\) and let \(\Lambda^{-}:=\mathcal{O}_{L}[[\Gamma^{-}]]\) be the anticyclotomic Iwasawa algebra. Let \(\Psi_{\mathcal{K}}\) be the natural character \(G_{\mathcal{K}}\to\Gamma_{\mathcal{K}}\hookrightarrow\mathcal{O}_{L}[[\Gamma_{\mathcal{K}}]]\) and \(\Psi_{\mathcal{K}}^{-}\) be the character \(G_{\mathcal{K}}\to\Gamma_{\mathcal{K}}^{-}\hookrightarrow\mathcal{O}_{L}[[\Gamma_{\mathcal{K}}^{-}]]\).
Let \(\chi\) be a Hecke character of \(\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}\); we write \(\boldsymbol{\chi}\) for the \(\Lambda^{-}\)-valued character given by the natural Galois action \[\Psi_{\mathcal{K}}^{-}:G_{\mathcal{K}}\to\Gamma_{\mathcal{K}}^{-}\hookrightarrow\Lambda^{-}\] of \(G_{\mathcal{K}}\) multiplied by \(\chi\). We often write \(\phi_{0}\) for the origin point of \(\operatorname{Spec}\Lambda^{-}\). Our main result is the following. **Theorem 1.2**.: _Let \(\psi\) be as above. Suppose \(p>3\) and the \(p\)-part of the conductor of \(\psi\) is \(p^{n}\) for \(n\geq 2\). Suppose the rank of the Selmer group for \(\psi\) is \(1\); then the vanishing order of \(L(\psi,s)\) at \(s=1\) is also \(1\)._ These assumptions are imposed due to the use of results of Andreatta-Iovita [1] in constructing various \(p\)-adic \(L\)-functions for nonsplit primes. (The same \(p\)-adic \(L\)-function was constructed earlier by Daniel Kriz. Andreatta-Iovita's results have explicit interpolation formulas which are more convenient for our applications.) In this paper we do not try to optimize the main result; instead we set up the main theoretical framework, leaving improvements to future work. For example, it seems the approach of Molina's thesis may enable one to improve the convergence radius appearing in the work of Andreatta-Iovita. This plausibly works even in some cases when \(p=2\). Our method is different from the literature: we develop a new kind of \(\pm\)-local Iwasawa theory, which is valid for all ramification types of \(p\) (split, inert and ramified cases). Our theory is analogous to Rubin's \(\pm\)-theory in format but quite different in nature: we divide the arithmetic points in \(\operatorname{Spec}\Lambda^{-}\) into two parts \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\) depending on the Archimedean types instead of the conductors at \(p\). We define the \(+\)-part (\(-\)-part) submodules of rank \(1\) of the local Iwasawa-theoretic Galois cohomology at \(p\) over some "shrunken" weight space (see below) to be the subspace which specializes into \(H^{1}_{f}\) (the kernel of the dual exponential map \(\exp^{*}\)) at arithmetic points in \(\mathcal{X}_{1}\) (\(\mathcal{X}_{2}\)), respectively. The existence and ranks of these submodules are studied by looking at various modules of elliptic units. A crucial advantage of our theory over the existing theory is its flexibility: in the argument proving the converse theorem, we can freely shrink the weight space (this is because we are only focusing on the local analytic behavior near the origin point). This means we take some large number \(m\) and define \(\Lambda^{-,\prime}\) to be \(\mathcal{O}_{L}[[p^{-m}X]]\otimes\mathbb{Q}_{p}\), which corresponds to points on a disc of radius \(p^{-m}\); then the analogous version of Rubin's conjecture in [33] becomes much easier to prove, i.e. we can easily prove that \[H^{1}(\mathcal{K}_{p},\psi\otimes\Lambda^{-,\prime})=H^{1}_{+}\oplus H^{1}_{-}\] after shrinking the weight space. The key idea of the proof is to use the work of Aflalo-Nekovář [2] (which generalizes earlier work of Cornut-Vatsal) on Mazur's conjecture on the generic non-torsionness of certain Heegner points, together with the \(p\)-adic Gross-Zagier formula proved by Andreatta-Iovita [1]. _Remark 1.3_.: The existence of this kind of \(\pm\)-local theory at \(p\) comes from the root number being determined by the Archimedean weights.
It is interesting to find a purely local \(p\)-adic explanation of this. 

By the Gross-Zagier formula, to show the analytic rank is \(1\), it is enough to (see [38, Introduction]): 

* produce a weight \(2\) CM eigenform \(f\) and a Hecke character \(\chi\) such that the central character of \(f\) is the inverse of the restriction of \(\chi\) to \(\mathbb{A}_{\mathbb{Q}}^{\times}\), and such that the \(L\)-function \(L_{\mathcal{K}}(f,\chi,s)\) splits up into two \(L\)-functions \(L(\psi,s)\) and \(L(\psi^{\prime},s)\) with \(L(\psi^{\prime},1)\neq 0\); 
* show that the Heegner point \(\kappa_{f,\chi}\) constructed from \((f,\chi)\) is not torsion. 

To carry out the argument, we need the existence of various anticyclotomic \(p\)-adic \(L\)-functions when \(p\) is nonsplit, together with analogues of the Bertolini-Darmon-Prasanna formula. Our main tool for this is the recent work of Andreatta-Iovita ([1]) in the modular curve case, using a technique of interpolating powers of the Gauss-Manin connection, and the appendix of this paper by Fan, using techniques developed in his thesis. On the other hand, by a slightly more general version of the explicit reciprocity law for elliptic units proved by Kato, we can deduce an Iwasawa main conjecture for the CM \(p\)-adic \(L\)-function of Andreatta-Iovita, using the main conjecture for elliptic units proved in [22], generalizing results of Rubin [34]. As in [37], it is necessary to consider a certain Heegner family (see Remark 1.4). In this paper we first define a so-called "virtual Heegner family" \(\kappa_{f,\boldsymbol{\chi}}^{\text{virt}}\), which means an element of the global cohomology group \(H^{1}_{-}(\mathcal{K},\boldsymbol{\psi})\) (a rank \(1\) module) whose localization at \(p\) satisfies the explicit reciprocity law expected for the family of Heegner cycles, and whose specialization at \(\phi_{0}\) is \(\kappa_{f,\chi}\), where \(\phi_{0}\) is the origin point in the weight space where \(\boldsymbol{\chi}\) specializes to \(\chi\). The name "virtual" is due to the fact that it is constructed through a limiting procedure instead of from an actual Heegner family for \((f,\chi)\) (see later). As in [23], using the Iwasawa main conjecture for the CM \(p\)-adic \(L\)-function of Andreatta-Iovita (which in turn follows from Rubin's main conjecture for elliptic units), we use the Poitou-Tate exact sequence to show that the specialization of this virtual Heegner family at \(\phi_{0}\) is non-torsion. It is thus enough to construct such a virtual Heegner family, from which we can conclude that \(\kappa_{f,\chi}\) is non-torsion. Note that Rubin's construction of the \(\pm\)-Heegner family crucially uses that the prime \(p\) is unramified in \(\mathcal{K}\) and that \(E\) has prime-to-\(p\) conductor. To produce a virtual Heegner family, we use a different idea: we consider a Coleman family \(\mathcal{F}\) passing through \(f\) (note that a generic member of \(\mathcal{F}\) is not CM), and bring in the construction of Jetchev-Loeffler-Zerbes [23] for the two-variable Heegner families over the Coleman family \(\mathcal{F}\). (Note that the CM form associated to \(\psi\) is supercuspidal at \(p\). So in order to make this work, our idea is to take a form corresponding to a CM character unramified at \(p\), so that the CM form has finite slope, and move the ramification of \(\psi\) at \(p\) into the CM anticyclotomic character \(\chi\).)
This is not entirely natural, as the subfamily obtained by specializing \(\mathcal{F}\) to \(f\) should be regarded as a \(p\)-adic limit of Heegner cycles for forms of general weight rather than a Heegner family for \(f\); in fact we do not have enough arithmetic points on the \(1\)-dimensional subspace over the specialization of \(\mathcal{F}\) at \(f\) where we know the information for \(\kappa_{\mathcal{F},\chi}\). Thus the actual argument is more convoluted. We use some auxiliary anticyclotomic twist \(\chi^{\prime}\) of \(\chi\) and show that the corresponding \(p\)-adic \(L\)-function for \((f,\chi^{\prime})\) takes a nonzero value at the origin point. By further shrinking the weight space, we may assume that \(\kappa_{f,\boldsymbol{\chi}^{\prime}}/\mathcal{L}_{p,f,\chi^{\prime}}\) is a generator of \(H^{1}_{+}\) at \(p\), which we denote by \(v_{+}\). Here \(\mathcal{L}_{p,f,\chi^{\prime}}\) is the Andreatta-Iovita type \(p\)-adic \(L\)-function we use. By comparing with the explicit reciprocity law for Heegner cycles for a generic member of \(\mathcal{F}\), we find that \(\kappa_{\mathcal{F},\chi}\) is just \(\mathcal{L}_{p,\mathcal{F},\boldsymbol{\chi}}\cdot v_{+}\), which specializes to say that \(\kappa_{f,\chi}=\mathcal{L}_{p,f,\chi}\cdot v_{+}\) for the basis \(v_{+}\). This provides the virtual Heegner family, which is enough for the Iwasawa theory argument. _Remark 1.4_.: In fact, even in the (simplest) ordinary case (say, the work [37] of the author), we are unable to prove the converse of the Gross-Zagier-Kolyvagin theorem without using the family of Heegner points. (See the work of Skinner [36], which assumes the finiteness of the \(p\)-part of the Shafarevich-Tate group.) More concretely, in _loc.cit._ we do not get a priori the non-vanishing of the specialization of the BDP \(p\)-adic \(L\)-function \(\mathcal{L}^{\mathrm{BDP}}\) at \(\phi_{0}\). Instead we first use Iwasawa theory of the Heegner family to deduce the non-torsionness of the Heegner point. The non-vanishing of \(\phi_{0}(\mathcal{L}^{\mathrm{BDP}})\) is then a consequence of this non-torsionness in the weight \(2\) case (this uses that for Abelian varieties \(A\), the localization map \(A(\mathbb{Q})\to A(\mathbb{Q}_{p})\) is injective). In fact the same argument in [37] proves the non-torsionness of the Heegner cycle if the Selmer rank is \(1\) for ordinary modular forms of general weight \(k\), but it is not known if the corresponding (non-classical) specialization of the BDP \(p\)-adic \(L\)-function at \(\phi_{0}\) is nonzero if \(k\neq 2\). _Remark 1.5_.: An interesting point is that the fact that the specialization of \(\kappa_{\mathcal{F},\boldsymbol{\chi}}\) to \(f\) lies in \(H^{1}_{+}\) does not seem to be visible in a purely local way (say, from triangulations of \((\varphi,\Gamma)\)-modules). Instead it comes from the fact that it lies in the (rank one) global Iwasawa cohomology group. A different approach (introduced by Burungale and Tian) to attack this converse theorem is to choose an auxiliary quadratic imaginary field \(L/\mathbb{Q}\) where \(p\) is split and consider base change to \(L\). Then one may try to generalize the work of Hsieh [20] on the CM main conjecture via Eisenstein congruences on \(\mathrm{U}(2,1)\) to the case allowing ramification at \(p\). However, at this moment it still seems hard to make the Eisenstein congruence approach work when \(p=2\). Finally we mention that Daniel Kriz earlier gave a proof of the converse of the Gross-Zagier-Kolyvagin theorem at nonsplit primes [29].
Our method, which is based on the newly developed \(\pm\)-theory and the construction of Andreatta-Iovita, is more in the style of classical Iwasawa theory and seems much simpler.

Acknowledgement: We thank Christophe Cornut, Ming-Lun Hsieh, David Loeffler, David Rohrlich and Ye Tian for useful communications.

## 2 Families of Heegner Cycles

### Shimura Curves and Heegner Cycles

Let \(B\) be a quaternion algebra which is split at \(p\) and \(\infty\). A false elliptic curve over a base scheme \(S\) is a relative Abelian surface \(A/S\) together with an embedding \(\iota:\mathcal{O}_{B}\hookrightarrow\mathrm{End}_{S}(A)\), denoted as a pair \((A,\iota)\). A false isogeny of false elliptic curves is an isogeny commuting with the \(\mathcal{O}_{B}\)-action. We follow [30] closely for Shimura curves associated to \(B\), putting them in the context of PEL type Shimura varieties. We fix an isomorphism \(B(\mathbb{R})\simeq M_{2}(\mathbb{R})\). Let \(*\) be a fixed positive anti-involution on \(B\). Let \(L\) be \(\mathcal{O}_{B}\) regarded as a rank \(4\) module over \(\mathbb{Z}\), equipped with a symplectic pairing \(\langle\cdot,\cdot\rangle:L\times L\to\mathbb{Z}\). Let the \(\mathbb{R}\)-linear ring homomorphism

\[h:\mathbb{C}\to\mathrm{End}_{\mathbb{R}}(L\otimes\mathbb{R})\]

be such that \(h(i)\) is the endomorphism sending the identity in \(B(\mathbb{R})\) to \(\begin{pmatrix}&1\\ -1&\end{pmatrix}\in\mathrm{GL}_{2}(\mathbb{R})\simeq B(\mathbb{R})\). Using this \(h\) we can decompose \(L\otimes\mathbb{C}=V_{0}\oplus V_{0}^{c}\), where \(h(z)\) acts by \(1\otimes z\) on \(V_{0}\) and by \(1\otimes z^{c}\) on \(V_{0}^{c}\).

Suppose \(A\) is an Abelian surface over \(S\) with a polarization \(\lambda:A\to A^{\vee}\). An \(\mathcal{O}_{B}\)-equivariant symplectic isomorphism \((\alpha_{n},\nu_{n})\) from \((L/nL)_{S}\) to \(A[n]\) consists of:

* An \(\mathcal{O}_{B}\)-equivariant isomorphism \[\alpha_{n}:(L/nL)_{S}\simeq A[n]\] of group schemes over \(S\);
* An isomorphism \(\nu_{n}:(\mathbb{Z}/n\mathbb{Z}(1))_{S}\simeq\mu_{n,S}\) (here \(\mathbb{Z}/n\mathbb{Z}(1)=2\pi i(\mathbb{Z}/n\mathbb{Z})\)) of group schemes such that \[\nu_{n}\circ\langle\cdot,\cdot\rangle=e^{\lambda}\circ(\alpha_{n}\times\alpha_{n})\] as maps from \((L/nL)_{S}\times(L/nL)_{S}\) to \(\mu_{n,S}\). Here \(e^{\lambda}\) is the Weil pairing induced by \(\lambda\).

We define a PEL moduli problem over a base scheme \(S\) to be

1. An Abelian surface \(A\) over \(S\);
2. An \(\mathcal{O}_{B}\)-structure \(i:\mathcal{O}_{B}\hookrightarrow\operatorname{End}_{S}(A)\) of \((A,\lambda)\) such that \(H_{1}(A,\mathbb{Z})\) is isomorphic to \(\mathcal{O}_{B}\) as a left \(\mathcal{O}_{B}\)-module;
3. A principal polarization \(\lambda:A\to A^{\vee}\) of \(A\) such that the associated Rosati involution \(\operatorname{End}^{0}(A)\to\operatorname{End}^{0}(A)\) restricts to \(*\) on \(i(\mathcal{O}_{B})\) (Rosati condition);
4. The \(\mathcal{O}_{B}\otimes\mathbb{Z}_{\Sigma}\)-module structure on \(\operatorname{Lie}_{A/S}\) given by \(i\) satisfies the Lie algebra condition, which says that for each \(b\in\mathcal{O}_{B}\), the determinant of \(b\) acting on \(\operatorname{Lie}_{A/S}\) is equal to the determinant of \(b\) acting on \(V_{0}\);
5. A level structure (full level \(n\) structure), namely a symplectic isomorphism \(\alpha_{n}:L/nL\simeq A[n]\) satisfying the symplectic-liftable condition of [30, Definition 1.3.6.2].

Let \(U\) be the open compact subgroup of \(B(\mathbb{A}_{f})\) corresponding to the full level \(n\) structure.
If \(B(\mathbb{Q})\cap U\) is neat, then Kottwitz ([28]) constructed the arithmetic model \(\mathcal{C}\) for the above moduli problem over its reflex field. The complex points of the Shimura curve are then identified with

\[B(\mathbb{Q})\backslash(\mathcal{H}\times B(\mathbb{A}_{f}))/U\]

where \(\mathcal{H}\) is the Poincare upper half plane. It is also possible to define the level structure for a general open compact subgroup \(U\) of \(B(\mathbb{A}_{f})\) and the corresponding Shimura curve. We omit this here and refer to [30, Definition 1.3.7.6, 1.4.1.4] for details. We write \(\mathcal{A}\) for the universal Abelian surface (i.e. false elliptic curve) over \(\mathcal{C}\).

We briefly discuss CM points on \(\mathcal{C}\). Recall \(\mathcal{O}_{\mathcal{K}}=\mathbb{Z}\oplus\mathbb{Z}\delta\). Let \(\mathfrak{a}\) be an integral ideal of \(\mathcal{O}_{\mathcal{K}}\). Then there is a left ideal of \(\mathcal{O}_{B}\) in \(B\) given by

\[\mathfrak{a}_{B}=\mathcal{O}_{B}(\iota_{\tau}(\mathfrak{a})).\]

As \(B\) is an indefinite rational quaternion algebra, it has class number one and \(\mathfrak{a}_{B}\) is principal, generated by some \(\alpha\in B\). Then right multiplication by \(\alpha\) gives a false isogeny

\[\varphi_{\mathfrak{a}}:A_{\tau}\to A_{\alpha^{-1}\tau}\]

with kernel \(A_{\tau}[\mathfrak{a}]\). We write \(A_{\mathfrak{a}^{-1}\ast\tau}\) for \(A_{\alpha^{-1}\tau}\). There is an embedding \(\iota:\mathcal{K}\hookrightarrow B\) sending \(\delta\) to an element \(\mu\in\mathcal{O}_{B}\) whose minimal polynomial has discriminant \(dc^{2}\). We may further assume that \(\iota(\mathcal{K})\cap\mathcal{O}_{B}=\mathcal{O}_{c}\). Let \(z_{c}\) be the fixed point of \(\iota(\mu)\) in \(\mathcal{H}\); we call \(z_{c}\) a CM point of conductor \(c\), whose associated false elliptic curve is denoted \(A_{c}\) and is defined over the ring class field \(H_{c}\) of \(\mathcal{K}\) of conductor \(c\). Shimura's reciprocity law describes the action of \(\operatorname{Gal}(H_{c}/\mathcal{K})\) by

\[A_{z_{c}}^{(\mathfrak{a}^{-1},H_{c}/\mathcal{K})}=A_{\mathfrak{a}\ast z_{c}}.\]

Let \(A:=A_{1}\), and fix throughout an isogeny \(\varphi_{0}\) from \(A\) to \(A_{c}\).

We define the Heegner cycle as follows. Let \(r\) be such that \(k=2r+2\). Let \(W_{r}=\mathcal{A}_{r}\times A^{r}\), where \(\mathcal{A}_{r}\) is the \(r\)-fold fiber product of \(\mathcal{A}\) over \(\mathcal{C}\). Let \(\epsilon\) be the projector constructed after [8, Proposition 6.4],

\[\epsilon:\mathcal{H}^{*}(\mathcal{A}_{r}/\mathcal{C})\otimes H_{\mathrm{dR}}^{*}(A^{r})\to\operatorname{Sym}^{2r}e\mathcal{H}^{1}\otimes\operatorname{Sym}^{2r}eH_{\mathrm{dR}}^{1}(A),\]

where \(e\) is the projector constructed in [8, Section 2.1]. For any false isogeny \(\varphi:A\to A^{\prime}\) of false elliptic curves, let \(\Gamma_{\varphi}\) be its graph in \(A\times A^{\prime}\). Let \(P\) be a point in \(\mathcal{C}\) defined over an extension \(H_{c}^{\prime}\) of \(H_{c}\) whose associated false elliptic curve is \(A^{\prime}\). Let \(\Upsilon_{\varphi}\) be the \(r\)-th power of \(\Gamma_{\varphi}\), regarded as a subvariety of the fiber of \(W_{r}\) over \(P\). Let \(\Delta_{\varphi,P}:=\epsilon\Upsilon_{\varphi}\). We refer to [8, Section 6] for the notion of the \(p\)-adic and etale Abel-Jacobi maps.
We define

\[\kappa_{f,\chi}:=\sum_{g\in\operatorname{Gal}(H_{c}^{\prime}/\mathcal{K})}\chi^{-1}(g)\mathrm{AJ}_{et}(\Delta_{\varphi_{0},P}^{g})\]

where we naturally regard the character \(\chi\) as a character of \(\operatorname{Gal}(H_{c}^{\prime}/\mathcal{K})\). Note that \(\Delta_{\varphi_{0},P}^{g}=\Delta_{\varphi_{\mathfrak{a}_{g}}\varphi_{0},P^{g}}\), where \(\mathfrak{a}_{g}\in\operatorname{Pic}(\mathcal{O}_{c})\) is the image of \(g\) under the Artin map.

### Construction of Jetchev-Loeffler-Zerbes

We recall the main result of Jetchev-Loeffler-Zerbes constructing the \(2\)-variable family of Heegner cycles over Coleman families. In [23] the construction is made for modular curves, but in [25] we explained that the construction can also be done for general Shimura curves. We now recall the setup of the construction, paying special attention to the choice of test vectors in \(\pi_{f}\).

Let \(f\) be the stabilized CM form associated to the Hecke character \(\xi\), which has Archimedean weight \((-1,0)\) and is unramified at \(p\). If \(p\) is ramified in \(\mathcal{K}\) then it takes the new vector at \(p\). If \(p\) is inert in \(\mathcal{K}\) then it is a stabilization of the (unramified) new vector at \(p\). In any case \(f\) has finite and non-critical slope (slope \(\frac{1}{2}\)), and thus can be deformed into a Coleman family \(\mathcal{F}\). More precisely, following [23], we let the weight space \(\mathcal{W}\) be the \(1\)-dimensional rigid analytic group variety parameterizing continuous characters of \(\mathbb{Z}_{p}^{\times}\), in which we call the point in \(\mathcal{W}(L)\) corresponding to the character \(x\mapsto x^{k}\) (\(k\) a positive integer) the weight \(k\) point. Let \(B\) be the indefinite quaternion algebra over \(\mathbb{Q}\) associated to the Gross-Zagier formula for the pair \((f,\chi)\) we choose in Section 3.4; that is to say, it is ramified exactly at the finite places where the local root number for \(\pi_{f,\mathcal{K}}\otimes\chi\) is \(-1\). Let \(Y_{U}(p^{n})\) be the Shimura curve for \(B\) with prime-to-\(p\) level given by \(U\) and level \(K_{1}(p^{n})\) at \(p\). The theory of overconvergent families of modular forms on Shimura curves has been developed for example by Brasca in [7]. An \(\mathcal{O}_{\mathcal{A}}\)-valued Coleman family of slope \(\frac{1}{2}\) consists of

* an affinoid \(\mathcal{A}\) over the weight space \(\mathcal{W}\) (see below);
* an \(\mathcal{O}_{\mathcal{A}}\)-coefficient Serre-Tate expansion (see [15, Section 2.4] for details) \[\mathcal{F}_{\mathrm{new}}\in\mathcal{O}_{\mathcal{A}}[[t_{ij}]]\] such that for a Zariski dense set of points \(\phi\in\mathcal{A}(L)\) above the weight \(k_{\phi}\) point in \(\mathcal{W}\), the specialization \(\phi(\mathcal{F}_{\mathrm{new}})\) is the Serre-Tate expansion of a weight \(k_{\phi}+2\) eigenform on the Shimura curve \(Y_{U}(p^{n})\) of slope \(\frac{1}{2}\) and new outside \(p\).

In fact, in view of the classicality result of [26, Theorem 5.2], \(f\) corresponds to a noble point on the eigencurve in the sense of [23]. Thus we can find a Coleman family \(\mathcal{F}\) whose specialization at an arithmetic point \(\phi_{0}^{\prime}\) is \(f\). Coleman families on modular curves are usually defined using \(q\)-expansions at cusps. Here we use Serre-Tate expansions, as our Shimura curve is compact and has no cusps.
The existence of such a Coleman family over the modular curve is explained in [23]; one then obtains its existence on the Shimura curve from the Jacquet-Langlands correspondence, in view of the local automorphic representations of a generic member of the Coleman family on the modular curve.

Let \(\mathcal{W}_{n}\subset\mathcal{W}\) be the \(n\)-analytic weight space defined in [23, Section 4.2], parameterizing \(n\)-analytic characters. Let \(\mathcal{U}\) be an affinoid disc in \(\mathcal{W}_{n}\), and let \(D_{\mathcal{U},n}\) be the sheaf of \(n\)-analytic distributions on \(Y_{U}(p^{n})\) as constructed in [23, Section 4.1, 4.2]. As in [23, Section 5.2], consider

\[H^{1}_{\mathrm{et}}(Y_{U}(p)_{/\bar{\mathbb{Q}}},D_{\mathcal{U},n}(1))\hat{\otimes}_{\Lambda_{\mathcal{U}}[1/p]}\mathcal{O}(\mathcal{A}).\]

Since \(f\) is CM, the local representation is either supercuspidal (only at nonsplit places) or principal series (in particular, not a twist of the special representation). By considering the Galois representation \(\rho_{\mathcal{F}}\) associated to \(\mathcal{F}\), we see that in the supercuspidal case, the local representation for \(\mathcal{F}\) is a given supercuspidal representation twisted by a family of unramified characters; in the principal series case, the local representation is a family of principal series representations \(\pi(\xi_{1},\xi_{2})\), where each \(\xi_{i}\) is a given character twisted by a family of unramified characters over \(\mathcal{O}(\mathcal{A})\). We denote the local representation by \(\pi_{v,\mathcal{A}}\) and its dual by \(\tilde{\pi}_{v,\mathcal{A}}\).

Recall that we have picked a test vector (up to a scalar) \(w_{v}\in\pi_{f,v}\) for the toric integral. We now make some local choices to specify the vector \(\mathcal{F}\) such that for each arithmetic specialization \(\mathcal{F}_{\phi}\), at each place \(v\in\Sigma\), it is equal to the test vector \(w_{v}\) in Section A.3.1 for the toric integral up to a nonzero scalar. In the above, for each prime \(v\), the \(w_{v}\) is obtained by applying a fixed group algebra element \(x_{g,v}\in L[B(\mathbb{Q}_{v})]\) to the new vector in the case when \(B(\mathbb{Q}_{v})\simeq M_{2}(\mathbb{Q}_{v})\). At places \(v\) where \(B(\mathbb{Q}_{v})\) is ramified, we fix the finite dimensional representation of \(B(\mathbb{Q}_{v})\) associated by the local Jacquet-Langlands correspondence to \(\pi_{f,v}\) (which has to be supercuspidal). This way we may construct our Coleman family \(\mathcal{F}\) as desired above. We also consider the dual Coleman family \(\mathcal{F}^{*}\) obtained by twisting \(\mathcal{F}\) by the inverse of its central character.

It is not hard to show, using the classicality result of [26, Theorem 5.2], that there is a free rank two \(\mathcal{O}(\mathcal{A})\)-submodule \(M_{\mathcal{A}}(\mathcal{F})\) of

\[H^{1}_{\mathrm{et}}(Y_{U}(p)_{/\bar{\mathbb{Q}}},D_{\mathcal{U},1}(1))\hat{\otimes}_{\Lambda_{\mathcal{U}}[1/p]}\mathcal{O}(\mathcal{A})\]

in the eigen-component of \(\mathcal{F}\) such that for each vector in \(M_{\mathcal{A}}(\mathcal{F})\), its \(v\)-component is in the \(w_{v}\)-isotypical part for each \(v\in\Sigma\) outside \(p\).
We fix for each \(v\in\Sigma\backslash\{p\}\) the natural pairing

\[\pi_{v,\mathcal{A}}\times\tilde{\pi}_{v,\mathcal{A}}\to\mathcal{O}_{\mathcal{A}}.\]

We consider \(M_{\mathcal{A}}(\mathcal{F})\subset\rho_{\mathcal{F}}\otimes(\prod_{v\in\Sigma,v\nmid p}\pi_{v,\mathcal{A}})^{U}\) and the product pairing

\[(\rho_{\mathcal{F}}\otimes(\prod_{v\in\Sigma,v\nmid p}\pi_{v,\mathcal{A}})^{U})\times(\rho_{\mathcal{F}^{*}}\otimes(\prod_{v\in\Sigma,v\nmid p}\tilde{\pi}_{v,\mathcal{A}})^{U})\to\mathcal{O}_{\mathcal{A}}. \tag{2.1}\]

Note that it is important to exclude the prime \(p\) in the pairing, since the local vectors at \(p\) corresponding to \(\mathcal{F}\) and \(\mathcal{F}^{*}\) pair to zero. Now we consider the \(\mathcal{F}^{*}\)-eigen component of \(H^{1}_{\mathrm{et}}(Y_{U}(p)_{/\bar{\mathbb{Q}}},D_{\mathcal{U},1}(1))\hat{\otimes}_{\Lambda_{\mathcal{U}}[1/p]}\mathcal{O}(\mathcal{A})\) whose \(p\)-part is the vector corresponding to \(\mathcal{F}^{*}\) (up to scalar). This module is isomorphic to \(\rho_{\mathcal{F}^{*}}\otimes(\prod_{v\in\Sigma,v\nmid p}\tilde{\pi}_{v,\mathcal{A}})^{U^{p}}\). We quotient out the subspace which pairs to \(0\) with \(\mathcal{F}\). This quotient is easily seen to be free of rank two over \(\mathcal{O}(\mathcal{A})\), and we denote it by \(M_{\mathcal{A}}(\mathcal{F})^{*}\). We can thus define the two-variable family

\[\kappa_{\mathcal{F},\chi}\in H^{1}(\mathcal{K},M_{\mathcal{A}}(\mathcal{F})^{*}\otimes\Lambda^{-\prime})\]

of Heegner cycles as in [23, Proposition 5.3.1, Theorem 5.4.1], interpolating the \(\kappa_{f_{\phi},\chi_{\phi}}\)'s at arithmetic points \(\phi\).

## 3 Local Theory

### Preliminaries

Let \(K\) be a finite extension of \(\mathbb{Q}_{p}\) and let \(V\) be a continuous representation of \(G_{K}\) on a \(\mathbb{Q}_{p}\)-vector space of dimension \(d\). Let \(\mathbb{B}_{\mathrm{dR}}\) be the deRham period ring of Fontaine and let \(D_{K,\mathrm{dR}}(V):=(V\otimes\mathbb{B}_{\mathrm{dR}})^{G_{K}}\), with its filtration \(\mathrm{Fil}^{\bullet}\). We call \(V\) a deRham representation if \(\dim_{\mathbb{Q}_{p}}D_{\mathrm{dR}}(V)=d\). Let \(\mathbb{B}_{\mathrm{cris}}\subset\mathbb{B}_{\mathrm{dR}}\) be the crystalline period ring of Fontaine with the Frobenius action \(\varphi\). Denote \(D_{K,\mathrm{cris}}(V):=(V\otimes\mathbb{B}_{\mathrm{cris}})^{G_{K}}\). Let

\[\exp:\frac{D_{\mathrm{dR}}(V)}{\mathrm{Fil}^{0}D_{\mathrm{dR}}(V)+D_{\mathrm{cris}}(V)^{\varphi=1}}\hookrightarrow H^{1}(K,V)\]

be the Bloch-Kato exponential map, which is the boundary map in the cohomology exact sequence of

\[0\to V\to V\otimes\mathbb{B}_{\mathrm{cris}}^{\varphi=1}\oplus V\otimes\mathbb{B}_{\mathrm{dR}}^{+}\to V\otimes\mathbb{B}_{\mathrm{dR}}\to 0\]

obtained from the fundamental exact sequence of \(p\)-adic Hodge theory

\[0\to\mathbb{Q}_{p}\to\mathbb{B}_{\mathrm{cris}}^{\varphi=1}\oplus\mathbb{B}_{\mathrm{dR}}^{+}\to\mathbb{B}_{\mathrm{dR}}\to 0.\]

We also define the dual exponential map

\[\exp^{*}=\exp_{K,V}^{*}:H^{1}(K,V^{*}(1))\to\mathrm{Fil}^{0}D_{\mathrm{dR}}(V^{*}(1))\]

as the dual of the exponential map for \(V\) under Tate local duality. We define two spaces

\[H^{1}_{e}=\ker\{H^{1}(K,V)\to H^{1}(K,\mathbb{B}_{\mathrm{cris}}^{\varphi=1}\otimes V)\}\]

and

\[H^{1}_{f}=\ker\{H^{1}(K,V)\to H^{1}(K,\mathbb{B}_{\mathrm{cris}}\otimes V)\}.\]

In other words, \(H^{1}_{e}\) is the image of the exponential map. The following lemma is useful.
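For the reader's convenience, we record part of the dimension count behind the lemma below; this is only a recollection of standard facts from [3, Proposition 2.8] (see also [6]), not a new claim:

\[\dim_{\mathbb{Q}_{p}}H^{1}_{f}(K,V)=\dim_{\mathbb{Q}_{p}}\frac{D_{\mathrm{dR}}(V)}{\mathrm{Fil}^{0}D_{\mathrm{dR}}(V)}+\dim_{\mathbb{Q}_{p}}H^{0}(K,V).\]

With the conventions of the lemma (one non-positive and one positive Hodge-Tate weight), the first term on the right equals \(1\), and the second vanishes since \(V^{G_{\mathbb{Q}_{p}}}=0\), so the right-hand side is \(1+0=1\).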
**Lemma 3.1**.: _Suppose \(V\) is a \(2\)-dimensional deRham representation of \(G_{\mathbb{Q}_{p}}\) with Hodge-Tate weights \((a_{1},a_{2})\) for \(a_{1}\leq 0\) and \(a_{2}>0\), and such that \(V^{G_{\mathbb{Q}_{p}}}=0\). Then \(H^{1}_{f}(\mathbb{Q}_{p},V)=H^{1}_{e}(\mathbb{Q}_{p},V)\) and both spaces have dimension \(1\)._

Proof.: This is clear from the dimension formula, say in [3, Proposition 2.8].

### \(\pm\)-theory

CM Hecke characters: We consider the embedding

\[\sigma:\mathbb{Z}_{p}^{2}\simeq\mathcal{O}_{\mathcal{K}_{p},\text{free}}^{\times}=\frac{\mathcal{O}_{\mathcal{K}_{p}}^{\times}}{\mathcal{O}_{\mathcal{K}_{p},\text{tor}}^{\times}}\hookrightarrow\Gamma_{\mathcal{K}}\]

induced by the Artin map. Let \(t_{\mathcal{K},p}\) be the least positive integer killing \(\mathcal{O}_{\mathcal{K}_{p},\text{tor}}^{\times}\) (clearly an even number), multiplied by the index of the above embedding. We consider the set \(\mathcal{X}\) of arithmetic points \(\phi\) corresponding to anticyclotomic characters \(\chi_{\phi}\) of \(\Gamma_{\mathcal{K}}\) (in the sense that they factor through \(\Gamma_{\mathcal{K}}^{-}\)) such that for some integer \(\kappa_{\phi}\) divisible by \(t_{\mathcal{K},p}\), and each \(x\in\mathcal{O}_{\mathcal{K}_{p}}^{\times}\), we have

\[\chi_{\phi}(\sigma(x))=x^{\kappa_{\phi}}\bar{x}^{-\kappa_{\phi}}.\]

(It is clear that such a character exists for each \(\kappa_{\phi}\) divisible by \(t_{\mathcal{K},p}\). The reason why we require \(t_{\mathcal{K},p}\,|\,\kappa_{\phi}\) is similar to the cyclotomic case: if the weight \(k\) is divisible by \(p-1\), then the \(k\)-th power of the cyclotomic character corresponds to Hecke characters unramified at \(p\).)

**Definition 3.2**.: Let \(\mathcal{X}_{1}\) be the set of arithmetic points \(\phi\) as above such that the corresponding \(\kappa_{\phi}\geq 1\). Let \(\mathcal{X}_{2}\) be the set of arithmetic points \(\phi\) as above such that the corresponding \(\kappa_{\phi}\leq-2\).

**Lemma 3.3**.: _For each \(\phi\in\mathcal{X}_{1}\) the character \(\psi\chi_{\phi}\) has the same root number as \(\psi\); for each \(\phi\in\mathcal{X}_{2}\) the root number of the character \(\psi\chi_{\phi}\) is \(-1\) times the root number of \(\psi\)._

Proof.: We consider the local root numbers of the specializations. The Archimedean root numbers for the Hecke characters \(\psi\chi_{\phi}\) with \(\phi\in\mathcal{X}_{1}\) are the same as that for \(\psi\), while the Archimedean root numbers for the Hecke characters \(\psi\chi_{\phi}\) with \(\phi\in\mathcal{X}_{2}\) become that for \(\psi\) multiplied by \(-1\) (say using the formula in [32, Lecture 2, 5.1]). At non-Archimedean split primes, the local root number is always \(+1\). At nonsplit places, as \(\phi\) varies in \(\mathcal{X}\) (including at \(p\)), the local representation varies by multiplication by a family of unramified characters taking values \(\alpha\) satisfying \(|\alpha-1|_{p}<1\). Taking the product of the local root numbers over all places, we see the conclusion of the lemma.

We also have the following result on local Galois cohomology.

**Lemma 3.4**.: _The local cohomology group \(H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\) is free of rank two over \(\Lambda^{-}\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\)._

Proof.: The torsion-freeness is easy to see (note that for the ring \(\Lambda^{-}\otimes\mathbb{Q}_{p}\), torsion-freeness is equivalent to freeness).
To show the lemma it is enough to show that at a generic set of arithmetic points \(\phi\in\operatorname{Spec}\Lambda^{-}\), the rank of \(H^{1}(\mathcal{K}_{p},\psi\chi_{\phi})\) is \(2\). This follows from considering reductions of the characters \(\psi\chi_{\phi}\) modulo \(p^{n}\) for each \(n\) and using the Euler-Poincare characteristic formula [14, Theorem 2.17].

### Explicit Reciprocity Law for Elliptic Units

We recall some results about elliptic units following Kato in [27, Chapter 15]. We take an elliptic curve \(E\) with CM by \(\mathcal{O}_{\mathcal{K}}\), with associated character \(\psi_{E}\) of infinity type \((1,0)\). Let \(\mathfrak{f}\) be an ideal of \(\mathcal{O}_{\mathcal{K}}\) contained in the conductor of \(\psi_{E}\). Let \(\psi\) be a Hecke character of \(\mathcal{K}^{\times}\) of infinity type \((-r,0)\) with conductor dividing \(\mathfrak{f}\). We fix a number field \(M\) containing the values of \(\psi\), and let \(L\) be a finite extension of \(\mathbb{Q}_{p}\) containing \(M\). Following _loc.cit._ let \(V_{M}(\psi)=H^{1}(E(\mathbb{C}),\mathbb{Q})^{\otimes r}\otimes M\). We define the etale realization \(V_{L}(\psi):=H^{1}_{\rm{et}}(E,\mathbb{Q}_{p})^{\otimes r}\otimes L\), but with the Galois action given by \(\psi\) (i.e. twisted by the character \(\psi\cdot\psi_{E}^{-r}\), which is of finite order and conductor dividing \(\mathfrak{f}\)). We define the \(1\)-dimensional \(M\)-vector space \(S(\psi)\) to be \({\rm coLie}(E)^{\otimes r}\otimes M\). We also define a map

\[{\rm per}_{\psi}:S(\psi)\to V_{\mathbb{C}}(\psi):=V_{M}(\psi)\otimes_{M}\mathbb{C}\]

as the map induced by the period map

\[{\rm coLie}(E)\to H^{1}(E(\mathbb{C}),\mathbb{C}).\]

Let \(g_{\psi}\) be the CM form associated to \(\psi\) as in [27, 15.10]. We define

\[V_{L}^{\sim}(\psi):=V_{L}(\psi)\oplus\iota V_{L}(\psi)\]

where \(\iota\in{\rm Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\) is complex conjugation and the action of \({\rm Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\) on \(\iota V_{L}(\psi)\) is the Galois action on \(V_{L}(\psi)\) composed with conjugation by \(\iota\). Let \(S(g_{\psi})\) be the \(1\)-dimensional \(L\)-vector space generated by the normalized eigenform \(g_{\psi}\). As in [27, 15.11], we fix an isomorphism of \(1\)-dimensional \(L\)-vector spaces

\[S(\psi)\simeq S(g_{\psi}),\]

which determines a unique isomorphism

\[V_{L}^{\sim}(\psi)\simeq V_{L}(g_{\psi})\]

as in [27, Lemma 15.11 (1)] (here \(V_{L}(g_{\psi})\) is the Galois representation associated to \(g_{\psi}\)).

More generally, for Hecke characters \(\psi\) of general Archimedean type \((-r_{1},r_{2})\) with \(r_{i}\geq 0\), the twist \(\psi\cdot|\cdot|^{r_{1}}\) is of Archimedean type \((0,r_{1}+r_{2})\). We then define \(V(\psi)\) and \(S(\psi)\) to be those of \(\psi\cdot|\cdot|^{r_{1}}\) as above, twisted by \((\zeta_{p^{n}})_{n}^{\otimes r_{1}}\) (the \(r_{1}\)-th power of the Tate motive).

Let \(\mathcal{K}_{p^{\infty}\mathfrak{f}}\) be the ray class field of \(\mathcal{K}\) of conductor \(p^{\infty}\mathfrak{f}\). We define for any \(\mathbb{Z}_{p}\) representation \(T\) of \(G_{\mathcal{K}}\) the Iwasawa cohomology

\[H^{1}_{\rm Iw}(\mathcal{K}^{\Sigma}_{p^{\infty}\mathfrak{f}},T):=\varprojlim_{\mathcal{K}^{\prime}}H^{1}(\mathcal{K}^{\prime,\Sigma},T)\]

for \(\mathcal{K}^{\prime}\) running through all finite sub-extensions of \(\mathcal{K}_{p^{\infty}\mathfrak{f}}\).
In [27, Section 15.6] the family of elliptic units \(z_{p^{\infty}\mathfrak{f},\psi}\) is defined as an element of \(H^{1}_{\rm Iw}(\mathcal{K}^{\Sigma}_{p^{\infty}\mathfrak{f}},\mathbb{Z}_{p}(1))\). (In fact, in the terminology of [22, Section 5.1] one needs to multiply it by an element of the ideal \(\mathcal{J}_{\Lambda}\) (in the notation of _loc.cit._) to make it land in \(H^{1}_{\rm Iw}(\mathcal{K}^{\Sigma}_{p^{\infty}\mathfrak{f}},\mathbb{Z}_{p}(1))\), but this \(\mathcal{J}_{\Lambda}\) is the unit ideal in our component.) We have the following slight generalization of [27, Proposition 15.9].

**Proposition 3.5**.: _Let \(r\geq 1\) and let \(\psi\) be a Hecke character of \(\mathcal{K}\) of type \((-r_{1},r_{2})\) with \(r_{i}\geq 0\). Let \(\gamma\in V_{M}(\psi)\). Consider the map_

\[H^{1}_{\rm Iw}(\mathcal{K}^{\Sigma}_{p^{\infty}\mathfrak{f}},\mathbb{Z}_{p}(1))\to H^{1}_{\rm Iw}(\mathcal{K}^{\Sigma}_{p^{\infty}\mathfrak{f}},\mathbb{Z}_{p}(1))\otimes V_{L}(\psi)\to H^{1}_{\rm Iw}(\mathcal{K}^{\Sigma}_{p^{\infty}\mathfrak{f}},V_{L}(\psi)(1))\]

_where the first map is multiplication by \(\gamma\) and the second is the natural identification map (note that \(\psi\) factorizes through \(\mathcal{K}_{p^{\infty}\mathfrak{f}}\)). Let \(z_{p^{\infty}\mathfrak{f},\psi,\gamma}\) be the image of \(z_{p^{\infty}\mathfrak{f},\psi}\) under the above map._

_Then for each character \(\chi\) of \({\rm Gal}(\mathcal{K}^{\prime}/\mathcal{K})\), where \(\mathcal{K}^{\prime}\) is a finite extension of \(\mathcal{K}\) contained in \(\mathcal{K}_{p^{\infty}\mathfrak{f}}\), the image of \(z_{p^{\infty}\mathfrak{f},\psi,\gamma}\in H^{1}_{\rm Iw}(\mathcal{K}^{\Sigma}_{p^{\infty}\mathfrak{f}},V_{L}(\psi)(1))\) under the \(\exp^{*}\) map is an element of \(S(\psi)\otimes\mathcal{K}^{\prime}\) whose image under the map_

\[\Sigma_{\sigma\in{\rm Gal}(\mathcal{K}^{\prime}/\mathcal{K})}\chi(\sigma){\rm per}_{\psi}\circ\sigma:S(\psi)\otimes\mathcal{K}^{\prime}\to V(\psi)\]

_is \(L(\bar{\psi},\chi,r)\cdot\gamma\)._

Proof.: 1 If the infinity type of \(\psi\) is \((-r,0)\), then this is [27, Theorem 15.9]. For the general infinity type case, we use the identification along the cyclotomic family of \(z_{p^{\infty}\mathfrak{f},\psi}\) with the zeta element for the CM modular form \(g_{\psi}\) as in [27, (15.16.1)] (under the above identification of \(S(\psi)\) with \(S(g_{\psi})\) and \(V_{L}^{\sim}(\psi)\) with \(V_{L}(g_{\psi})\)), using the density of arithmetic points of Archimedean types \((-r,0)\). The proposition then follows from the explicit reciprocity law for zeta elements as in [27, Theorem 12.5].

Footnote 1: We thank D. Loeffler for showing us the following argument.

**Definition 3.6**.: We define \(z_{p^{\infty},\psi}\in H^{1}_{\mathrm{Iw}}(\mathcal{K}_{p^{\infty}}^{\Sigma},T):=\varprojlim_{\mathcal{K}^{\prime}}H^{1}(\mathcal{K}^{\prime,\Sigma},T)\) (\(\mathcal{K}^{\prime}\) runs through all finite sub-extensions of \(\mathcal{K}_{p^{\infty}}\)) as the image of \(z_{p^{\infty}\mathfrak{f},\psi}\) under the natural projection.

We see immediately from the above proposition and Lemma 3.3 that the specializations of \(z_{p^{\infty},\psi}\) at arithmetic points in \(\mathcal{X}_{1}\) live in \(H^{1}_{f}\) (i.e. the kernel of the \(\exp^{*}\) map). Now comes the key definition of our local \(\pm\) theory.
Let \(\psi^{\prime}\) be a CM character of \(\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}\) of Archimedean type \((-1,0)\) such that \(\psi^{\prime}|_{\mathbb{A}_{\mathbb{Q}}^{\times}}\) is the quadratic character \(\chi_{\mathcal{K}/\mathbb{Q}}\) times \(|\cdot|\). We first show that the family of elliptic units is nonzero along the anticyclotomic line.

**Proposition 3.7**.: _The element \(z_{p^{\infty},\psi^{\prime}}\) is nonzero in \(H^{1}_{\mathrm{Iw}}(\mathcal{K}_{\infty},\boldsymbol{\psi}^{\prime})\otimes\mathbb{Q}_{p}\)._

Proof.: If the global root number of \(\psi^{\prime}\) is \(+1\), this is a direct consequence of the explicit reciprocity law for elliptic units we proved above, together with Greenberg's result [17, Theorem 1] on non-vanishing of \(L\)-values. If the global root number is \(-1\), we can show the non-triviality of the elliptic unit family by switching the roles played by \(\iota\) and \(c\circ\iota\) and looking at the image under \(\exp^{*}\) at arithmetic points in \(\mathcal{X}_{2}\) (where the global root numbers of the specializations are \(+1\)).

### Choice of Global Characters

Recall that we assume for simplicity that \(p\) is nonsplit in \(\mathcal{K}/\mathbb{Q}\) (as in the split case the theory is already rather complete in the literature). Let \(\chi_{3}\) be a Hecke character of \(\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}\) of Archimedean type \((0,0)\) such that \(\chi_{1}:=\psi\chi_{3}^{-1}\) is unramified at \(p\) (thus of Archimedean type \((0,1)\)). We require that \(\chi_{3}\) is unramified outside \(p\) and another odd inert prime \(\ell_{0}\). Let \(\ell_{1}\) and \(\ell_{2}\) be two odd inert primes away from \(p\) and \(\ell_{0}\) where \(\psi\) and \(\chi_{3}\) are unramified. We choose a Hecke character \(\chi_{\mathrm{aux}}\) of finite order, unramified outside \(\ell_{1}\) and \(\ell_{2}\), such that \(\chi_{1}\chi_{3}^{c}(\chi_{\mathrm{aux}}\chi_{\mathrm{aux}}^{-c})\) has global sign \(+1\) and such that \(L(\chi_{1}\chi_{3}^{c}(\chi_{\mathrm{aux}}\chi_{\mathrm{aux}}^{-c}),1)\neq 0\). This is clearly possible: we first take a character of conductor and order a power of \(\ell_{1}\) and use the argument in [16, Page 247] for \(\ell_{1}\) to meet the requirement on the root number, and then further twist by a character of conductor and order an even power of \(\ell_{2}\), applying [16, Page 247] again (note that this does not change the global root number) and [19, Theorem A] for \(P_{1}=\ell_{2}\) to ensure non-vanishing of the central \(L\)-value (as in the proof of Proposition 3.7). Note that \(\chi_{1}\chi_{\mathrm{aux}}\cdot\chi_{3}\chi_{\mathrm{aux}}^{-1}=\psi\) and \(\chi_{1}\chi_{\mathrm{aux}}\cdot(\chi_{3}\chi_{\mathrm{aux}}^{-1})^{c}=\chi_{1}\chi_{3}^{c}(\chi_{\mathrm{aux}}\chi_{\mathrm{aux}}^{-c})\). We denote \(\chi_{1}\chi_{3}^{c}(\chi_{\mathrm{aux}}\chi_{\mathrm{aux}}^{-c})\) by \(\psi^{\prime}\). Let \(\xi:=\chi_{1}\chi_{\mathrm{aux}}\) and let \(f:=f_{\xi}\) be the CM form corresponding to \(\xi\). Let \(\chi:=\chi_{3}\chi_{\mathrm{aux}}^{-1}\). We will consider the \(p\)-adic Gross-Zagier formula for the pair \((f,\chi)\).
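To see how these choices match the strategy of the introduction, note that since \(f=f_{\xi}\) is CM, the standard factorization of the Rankin-Selberg \(L\)-function for a CM form (recorded here only for orientation; nothing in it is new) reads

\[L_{\mathcal{K}}(f,\chi,s)=L(\xi\chi,s)\,L(\xi\chi^{c},s)=L(\psi,s)\,L(\psi^{\prime},s),\]

using the identities \(\xi\chi=\psi\) and \(\xi\chi^{c}=\psi^{\prime}\) just established (together with the general fact that \(L(\lambda^{c},s)=L(\lambda,s)\) for a Hecke character \(\lambda\) of \(\mathcal{K}\)). Since \(L(\psi^{\prime},1)\neq 0\) by our choice of \(\chi_{\mathrm{aux}}\), this is exactly the splitting required in the first bullet of the introduction.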
**Definition 3.8**.: We write \(\mathcal{L}_{p,\psi}\) for the Andreatta-Iovita \(p\)-adic \(L\)-function (see [1, Proposition 5.9, 6.6] in the Eisenstein case for details; see also Proposition 2.3 there) interpolating the algebraic parts of the \(L\)-values of \(\psi\chi_{\phi}\) for \(\chi_{\phi}\) varying over characters of \(\Gamma^{-}\) in \(\mathcal{X}_{1}\). We write \(\mathcal{L}_{p,f,\chi}\) for the \(p\)-adic \(L\)-function constructed in the appendix, given by

\[\chi_{\phi}\mapsto L_{p}(f,\chi\chi_{\phi})\]

for \(\chi_{\phi}\) varying over characters of \(\Gamma^{-}\).

### Main Local Definition

Let \(\mathcal{F}\) be a Coleman family over some affinoid algebra \(\mathcal{A}\) such that there is an arithmetic point \(\phi_{0}^{\prime}\in\operatorname{Sp}\!\mathcal{A}\) where \(\mathcal{F}\) specializes to \(f\). Let \(\kappa_{\mathcal{F},\boldsymbol{\chi}}\) be the associated two-variable family of Heegner cycles. By abuse of notation we also write \(\phi_{0}\) for the point in the (\(3\)-dimensional) weight space \(\operatorname{Spec}(\mathcal{A}\hat{\otimes}\mathbb{Z}_{p}[[\Gamma_{\mathcal{K}}]])\) where \(\mathcal{F}\) specializes to \(f\) and elements of \(\Gamma_{\mathcal{K}}\) are sent to \(1\). Just as in [1, Section 5.3, 6.3], and similarly to \(\mathcal{L}_{p,f,\chi}\), there is a two-variable \(p\)-adic \(L\)-function \(\mathcal{L}_{p,\mathcal{F},\chi}\).

**Definition 3.9**.: Let \(\chi^{\prime}\) be a finite odd order anticyclotomic character such that \(L(\psi\chi^{\prime},1)\) is nonzero. Such a character can be found as follows: first take two odd primes \(\ell_{1}\), \(\ell_{2}\) prime to \(p\) such that \(\ell_{1}\) is inert and \(\ell_{2}\) is split in \(\mathcal{K}\), respectively. We first take a twist of \(\psi\) by a finite order anticyclotomic character of conductor some power of \(\ell_{1}\) so that the global root number becomes \(+1\) (clearly possible as in [16, Page 247]), and then take another twist by a finite order anticyclotomic character of conductor some power of \(\ell_{2}\) to ensure the non-vanishing of the central \(L\)-values, using [19, Theorem A]. (Note that, as mentioned in _loc.cit._, the assumption (C) there is removed later. As we only need non-vanishing, we can take the \(p\) there to be a sufficiently large prime.)

Now we take \(m\gg 0\) such that \(z_{p^{\infty},\psi\chi^{\prime}}\) and \(\mathcal{L}_{p,\psi\chi^{\prime}}\) have no zeros inside the disc of radius \(p^{-m}\) centered at the origin (clearly such an \(m\) exists). Let \(\Lambda^{-,\prime}:=\mathcal{O}_{L}[[p^{-m}X]]\). Let \(H^{1}_{-}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\subset H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\) be the \(\Lambda^{-,\prime}\)-module generated by the image of \(v_{-}:=z_{p^{\infty},\psi\chi^{\prime}}/\mathcal{L}_{p,\psi\chi^{\prime}}\) (recall that here \(\boldsymbol{\psi}\) is a \(\Lambda^{-,\prime}\)-coefficient character). We sometimes just write it as \(H^{1}_{-}\) for short.

Next we choose another auxiliary anticyclotomic character \(\chi^{\prime\prime}\) of finite order and conductor a power of \(\ell_{1}\), for some odd inert prime \(\ell_{1}\) prime to \(p\), so that \(L(\psi\chi^{\prime\prime},s)\cdot L(\psi^{\prime}{\chi^{\prime\prime}}^{c},s)\) has central vanishing order \(1\). This can be ensured by applying [2, Theorem 4.3.1] (take \(P_{1}=\ell_{1}\) and \(\chi_{0}\) there to be the trivial character).
Moreover we require that the conductor of \(\chi^{\prime\prime}\) is an even power of \(\ell_{1}\), so that the global root number for \(\psi\chi^{\prime\prime}\) is \(-1\). As \(f\) has weight two, we know that the image of \(\kappa_{f,\chi^{\prime\prime}\chi}\) in the local Galois cohomology at \(p\) is also nonzero. We get from Andreatta-Iovita's \(p\)-adic Gross-Zagier formula [1, Theorem 7.1] that \(\phi_{0}(\mathcal{L}_{p,\mathcal{F},\chi\chi^{\prime\prime}})\neq 0\). We define \(v_{\mathcal{F},\boldsymbol{\chi}}:=\frac{\kappa_{\mathcal{F},\chi^{\prime\prime}\boldsymbol{\chi}}}{\mathcal{L}_{p,\mathcal{F},\chi\chi^{\prime\prime}}}\). Recall we have

\[T_{f}|_{G_{\mathcal{K}}}\otimes\chi\chi^{\prime\prime}=\psi\chi^{\prime\prime}\oplus\psi^{\prime}{\chi^{\prime\prime}}^{c}.\]

As the Heegner points lie in \(H^{1}_{f}\), we see that the projection to \(H^{1}(\mathcal{K},\psi^{\prime}{\chi^{\prime\prime}}^{c})\) is zero, since \(L(\psi^{\prime}{\chi^{\prime\prime}}^{c},1)\neq 0\) by our choice. Thus the image of \(\phi_{0}(\kappa_{\mathcal{F},\chi^{\prime\prime}\boldsymbol{\chi}})\) in \(H^{1}(\mathcal{K}_{p},\psi\chi^{\prime\prime})\) is also nonzero. We shrink the weight space so as to avoid the zero locus of the image of \(\kappa_{\mathcal{F},\chi^{\prime\prime}\boldsymbol{\chi}}\) in \(H^{1}(\mathcal{K}_{p},-)\), and of \(\mathcal{L}_{p,\mathcal{F},\chi\chi^{\prime\prime}}\). (It is actually not hard to see from Andreatta-Iovita's interpolation formula [1, Theorem 7.1] that \(v_{\mathcal{F},\boldsymbol{\chi}}\), and thus its image in \(H^{1}(\mathcal{K}_{p},\rho_{\mathcal{F}}(\chi^{\prime\prime}\boldsymbol{\chi}))=H^{1}(\mathcal{K}_{p},\rho_{\mathcal{F}}(\boldsymbol{\chi}))\) (note that \(\chi^{\prime\prime}\) restricts to the trivial character of \(G_{\mathcal{K}_{p}}\)), is independent of the choice of \(\chi^{\prime\prime}\), which justifies the notation.)

Let \(H^{1}_{+}(\mathcal{K}_{p},\boldsymbol{\psi})\) be the \(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}\)-module generated by the natural image \(v_{+}\) of \(v_{\mathcal{F},\boldsymbol{\chi}}\). We show that it specializes to \(H^{1}_{f}\) at all but finitely many arithmetic points in \(\mathcal{X}_{1}\). We consider the family of elliptic units \(z_{p^{\infty},\psi\chi^{\prime\prime}}\). By the explicit reciprocity law for elliptic units, at arithmetic points in \(\mathcal{X}_{1}\) the image of the elliptic units under the \(\exp^{*}\) map is related to a central \(L\)-value with global root number \(-1\), so it has to be zero. This means the image at these arithmetic points lies in \(H^{1}_{f}\subset H^{1}\). On the other hand, we see that the central \(L\)-values for \(\psi\chi^{\prime\prime}\chi_{\phi}\) with \(\chi_{\phi}\in\mathcal{X}_{2}\) have global root number \(+1\) and, by [17, Theorem 1], take nonzero values at all but finitely many points. Switching the roles played by \(\iota\) and \(\iota\circ c\), we see that at arithmetic points in \(\mathcal{X}_{2}\) the image of \(z_{p^{\infty},\psi\chi^{\prime\prime}}\) under \(\exp^{*}\) is nonzero. This implies \(z_{p^{\infty},\psi\chi^{\prime\prime}}\) is not identically zero. Note that \(H^{1}(\mathcal{K},\boldsymbol{\psi}\chi^{\prime\prime})\otimes\mathbb{Q}_{p}\) has rank one over \(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}\).
So \(z_{p^{\infty},\psi\chi^{\prime\prime}}\) is obtained from the image of \(v_{\mathcal{F},\boldsymbol{\chi}}\) multiplied by a nonzero element of \(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}\). Recalling that the specializations of \(z_{p^{\infty},\psi\chi^{\prime\prime}}\) at \(\mathcal{X}_{1}\) all have local images in \(H^{1}_{f}\), we see that the specializations of \(v_{+}\) at all but finitely many points in \(\mathcal{X}_{1}\) have local images in \(H^{1}_{f}\). It is clear that by enlarging \(m\) we can ensure

\[H^{1}=H^{1}_{+}\oplus H^{1}_{-} \tag{3.1}\]

over \(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}\) (since the specializations of \(H^{1}_{\pm}\) to \(\phi_{0}\) generate the specialization of \(H^{1}\) to \(\phi_{0}\)). This is the analogue of Rubin's conjecture in our context, which is much easier to see.

_Remark 3.10_.: We will see later on that, in fact, at least by further enlarging \(m\), we can ensure that the specializations of \(H^{1}_{+}\) in \(\mathcal{X}_{1}\) all localize to \(H^{1}_{f}\), using Rubin's Iwasawa main conjecture.

## 4 Iwasawa Theory

### Rubin's Main Conjecture

Let \(\psi^{\prime\prime}\) be a Hecke character of \(\mathbb{A}^{\times}_{\mathcal{K}}\) whose Archimedean type is \((0,1)\) and whose restriction to \(\mathbb{A}^{\times}_{\mathbb{Q}}\) is \(\chi_{\mathcal{K}/\mathbb{Q}}\cdot|\cdot|\). As before, by possibly shrinking the weight space, we let \(\Lambda^{-,\prime}:=\mathcal{O}_{L}[[p^{-m}X]]\) be the weight ring for some \(m\gg 0\). We define various Iwasawa modules over \(\Lambda^{-,\prime}\). Let \(H^{1}_{\pm}(\mathcal{K}_{p},(\Lambda^{-,\prime}(\boldsymbol{\psi}))^{*})\) be the orthogonal complement of \(H^{1}_{\pm}(\mathcal{K}_{p},\boldsymbol{\psi})\) under the local Tate pairing.

**Definition 4.1**.: Let

\[H^{1}_{\pm}(\mathcal{K}^{\Sigma},\boldsymbol{\psi}):=\ker\{H^{1}(\mathcal{K}^{\Sigma},\boldsymbol{\psi})\to\frac{H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})}{H^{1}_{\pm}(\mathcal{K}_{p},\boldsymbol{\psi})}\}\]

and

\[H^{1}_{\mathrm{str}}(\mathcal{K}^{\Sigma},\boldsymbol{\psi}):=\ker\{H^{1}(\mathcal{K}^{\Sigma},\boldsymbol{\psi})\to H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})\}.\]

Let

\[\mathrm{Sel}^{\pm}(\mathcal{K},\boldsymbol{\psi}^{*}):=\ker\{H^{1}(\mathcal{K},\boldsymbol{\psi}^{*})\to\prod_{v\nmid p}H^{1}(\mathcal{K}_{v},\boldsymbol{\psi}^{*})\times\frac{H^{1}(\mathcal{K}_{p},\boldsymbol{\psi}^{*})}{H^{1}_{\pm}(\mathcal{K}_{p},\boldsymbol{\psi}^{*})}\}.\]

Put

\[X^{\pm}_{\psi}=\mathrm{Sel}^{\pm}(\mathcal{K},\boldsymbol{\psi}^{*})^{*}\]

(\(*\) means Pontryagin dual). We also define the strict Selmer group

\[\operatorname{Sel}^{\operatorname{str}}(\mathcal{K},\boldsymbol{\psi}^{*}):=\ker\{H^{1}(\mathcal{K},\boldsymbol{\psi}^{*})\to\prod_{v\nmid p}H^{1}(\mathcal{K}_{v},\boldsymbol{\psi}^{*})\times H^{1}(\mathcal{K}_{p},\boldsymbol{\psi}^{*})\}\]

and define \(X^{\operatorname{str}}\) similarly.

**Lemma 4.2**.: _The module \(X^{\operatorname{str}}_{\psi^{\prime\prime}}\) is torsion over \(\Lambda^{-}\). Moreover the strict Selmer group \(H^{1}_{\operatorname{str}}(\mathcal{K}^{\Sigma},\psi^{\prime\prime}\otimes\Lambda^{-})\) has rank zero over \(\Lambda^{-}\)._

Proof.: The first statement follows from the Euler system argument and the non-triviality of the elliptic unit Euler system over \(\Lambda^{-}\), as we have proved.
For the second statement, it is enough to show that at some arithmetic point \(\phi\) the group \(H^{1}_{\operatorname{str}}(\mathcal{K}^{\Sigma},\psi^{\prime\prime}\chi_{\phi})\) is zero. This again follows from the Euler system argument and the non-triviality of the elliptic unit Euler system along the anticyclotomic line.

**Corollary 4.3**.: _The relaxed Selmer group \(H^{1}_{\operatorname{rel}}(\mathcal{K}^{\Sigma},\psi^{\prime\prime}\otimes\Lambda^{-})\) has rank one over \(\Lambda^{-}\)._

Proof.: To show the corollary it is enough to show that for a generic arithmetic point \(\phi\), the rank of \(H^{1}_{\operatorname{rel}}(\mathcal{K}^{\Sigma},\psi^{\prime\prime}\chi_{\phi})\) is one. We look at the Poitou-Tate exact sequence (we write \(\Lambda^{-,*}\) for the Pontryagin dual of \(\Lambda^{-,\prime}\)):

\[0\to H^{1}_{\operatorname{str}}(\mathcal{K}^{\Sigma},\boldsymbol{\psi^{\prime\prime}})\to H^{1}_{\operatorname{rel}}(\mathcal{K}^{\Sigma},\boldsymbol{\psi^{\prime\prime}})\to H^{1}(\mathcal{K}_{p},\boldsymbol{\psi^{\prime\prime}})\to H^{1}_{\operatorname{rel}}(\mathcal{K}^{\Sigma},\boldsymbol{\psi^{\prime\prime}}^{\vee}(1)\otimes\Lambda^{-,*})^{*}\to H^{1}_{\operatorname{str}}(\mathcal{K}^{\Sigma},\boldsymbol{\psi^{\prime\prime}}^{\vee}(1)\otimes\Lambda^{-,*})^{*}\to 0.\]

From Lemma 4.2 the first and last terms have rank \(0\). From the non-triviality of the elliptic units we know the second and fourth terms have rank at least one. By Lemma 3.4 the third term has rank \(2\). So we conclude.

**Theorem 4.4**.: _Suppose \(p\) is odd and the global root number of \(\psi^{\prime\prime}\) is \(+1\). We have_

\[\operatorname{char}_{\Lambda^{-,\prime}}(X^{\operatorname{str}}_{\psi^{\prime\prime}})=\operatorname{char}_{\Lambda^{-,\prime}}(\frac{H^{1}_{\mathrm{Iw}}(\mathcal{K}_{\infty},\boldsymbol{\psi}^{\prime\prime})}{\Lambda_{\mathcal{K}}z_{p^{\infty},\psi^{\prime\prime}}}).\]

Proof.: The two-variable Iwasawa main conjecture over \(\mathcal{K}_{\infty}/\mathcal{K}\) is proved in [22, Theorem 5.2], which generalizes earlier results of Rubin ([35, Theorem 2 (ii)], [34, Theorem 4.1 (ii)]) to allow the order of the character to be divisible by \(p\). In _loc.cit._ the result is formulated in terms of the Det functor (see [22, 1.3]) and the fundamental line. Our one-variable result follows from the non-vanishing of the elliptic unit Euler system along the anticyclotomic line, and the fact that the fundamental line commutes with arbitrary base change. To relate that formulation to the formulation here we use the following.

* Let \(T\) be the \(p\)-adic representation of \(G_{\mathcal{K}}\) corresponding to \(\psi^{\prime\prime}\). Global duality compares the two formulations over the intermediate fields \(\mathcal{K}_{n}\) between \(\mathcal{K}\) and \(\mathcal{K}_{\infty}^{-}\), where \(v\) runs over all primes outside \(p\) in \(\Sigma\) and \(k(v)\) is the corresponding residue field.

When comparing the two formulations, the error term comes from the term \(H^{0}(k(v),H^{1}(I_{n,v},T))\). Note that each prime \(q\neq p\) split in \(\mathcal{K}\) is finitely decomposed in \(\mathcal{K}_{\infty}^{-}\), and \(\varprojlim_{n}H^{0}(k(v),H^{1}(I_{n,v},T))=0\). Each prime \(q\neq p\) nonsplit in \(\mathcal{K}\) splits completely in \(\mathcal{K}_{\infty}^{-}\), and thus \(\varprojlim_{n}H^{0}(k(v),H^{1}(I_{n,v},T))\) is killed by a power of \(p\) (actually this gives the local Tamagawa number). In each case the above error term vanishes after inverting \(p\).

Now we explain Remark 3.10, as promised earlier.
For a CM character \(\psi\) of Archimedean type \((1,0)\) whose restriction to \(\mathbb{A}_{\mathbb{Q}}^{\times}\) is \(\chi_{\mathcal{K}/\mathbb{Q}}|\cdot|\), we write \(\mathcal{L}_{p,\psi}^{(2)}\) for the \(p\)-adic \(L\)-function of Andreatta-Iovita interpolating the algebraic parts of the \(L\)-values of \(\psi\chi_{\phi}\) for \(\phi\) in \(\mathcal{X}_{2}\). We keep the notation as in Definition 3.9. From \(\phi_{0}(\mathcal{L}_{p,\mathcal{F},\chi\chi^{\prime\prime}})\neq 0\) above and Proposition 4.5 below on the factorization of \(p\)-adic \(L\)-functions, we see that \(\phi_{0}(\mathcal{L}_{\psi\chi^{\prime\prime}}^{(2)})\neq 0\). Moreover the non-torsionness of the Heegner point above also implies that the rank of the Bloch-Kato Selmer group for \(\psi\chi^{\prime\prime}\) is \(1\), and that the localization map from it to \(H^{1}_{f}(\mathcal{K}_{p},\psi\chi^{\prime\prime})\) is an isomorphism. So the rank of its strict Selmer group is \(0\). This implies that \(\phi_{0}\) is not in the support of the characteristic ideal of \(X_{\psi\chi^{\prime\prime}}^{\mathrm{str}}\otimes\mathbb{Q}_{p}\) as a module over \(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}\). But by Rubin's Iwasawa main conjecture this implies that \(\phi_{0}\) is not in the support of the characteristic ideal of \(\frac{H^{1}(\mathcal{K}^{\Sigma},\psi\chi^{\prime\prime})}{(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p})\cdot z_{p^{\infty},\psi\chi^{\prime\prime}}}\), which in turn implies that \(\phi_{0}(z_{p^{\infty},\psi\chi^{\prime\prime}})\) is nonzero. But we have seen that this is an element of the Bloch-Kato Selmer group for \(\psi\chi^{\prime\prime}\), so the localization image of \(\phi_{0}(z_{p^{\infty},\psi\chi^{\prime\prime}})\) in \(H^{1}_{f}(\mathcal{K}_{p},\psi\chi^{\prime\prime})\) is also nonzero. So, at least after further enlarging \(m\), we can ensure that the specializations of \(H^{1}_{+}\) at \(\mathcal{X}_{1}\) all localize to \(H^{1}_{f}\).

**Proposition 4.5**.: _Recall that \(f\) is the eigenform associated to the Hecke character \(\xi\). There is a nonzero constant \(C_{\xi,\chi}\) and a unit element \(\mathcal{U}_{\xi,\chi}\) in \(\Lambda_{\mathcal{K}}^{-}\) such that_

\[\mathcal{L}_{p,f,\chi}^{2}=C_{\xi,\chi}\mathcal{U}_{\xi,\chi}\mathcal{L}_{\xi\chi}^{(2)}\mathcal{L}_{\xi\chi^{c}}.\]

Proof.: The proposition follows from comparing the interpolation formulas (in [1, Proposition 2.3] and the Appendix) for the various \(p\)-adic \(L\)-functions appearing. Note that in this family the \(k\) in the interpolation formula is fixed and the \(j\) there is varying. The only non-trivial factor is the ratio \((\frac{\mathrm{Vol}(\mathcal{O}_{p^{n}})}{\mathrm{Im}(\tau)\Lambda_{r}^{2}})^{j}\), with the notation in the appendix. However, we claim that the ratio \(\frac{\mathrm{Vol}(\mathcal{O}_{p^{n}})}{\mathrm{Im}(\tau)\Lambda_{r}^{2}}\) is a \(p\)-adic unit, since only in this case can \((\frac{\mathrm{Vol}(\mathcal{O}_{p^{n}})}{\mathrm{Im}(\tau)\Lambda_{r}^{2}})^{j}\) (\(j\) varying) be interpolated in a \(p\)-adic analytic family. Then the proposition is clear.

Here we raise the question: is there a relation between \(\phi_{0}(\mathcal{L}_{p,\psi}^{(2)})\) and the \(p\)-adic logarithm \(\log_{p}\) of the specialization at \(\phi_{0}\) of \(z_{p^{\infty},\psi}\) (we have seen that, for root number reasons, it does lie in \(H^{1}_{f}\))?

We have the following corollary.

**Corollary 4.6**.: _Assumptions are as in the above theorem._
_Then, by possibly shrinking the weight space,_

\[\mathrm{char}_{\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}}(X_{\psi}^{-})\subseteq(\mathcal{L}_{p,\psi}^{(2)}).\]

Proof.: We first take a split odd prime \(\ell_{1}\) and an auxiliary twist \(\psi^{\prime\prime\prime}\) of \(\psi\) by an anticyclotomic character of order and conductor a power of \(\ell_{1}\) such that the specialization of \(\mathcal{L}_{p,\psi^{\prime\prime\prime}}^{(2)}\) at \(\phi_{0}\) is nonzero. This can be done by the same argument as in Definition 3.9 when choosing \(\chi^{\prime\prime}\), together with Proposition 4.5 on the factorization of \(p\)-adic \(L\)-functions. Then we consider \(w:=z_{p^{\infty},\psi^{\prime\prime\prime}}/\mathcal{L}^{(2)}_{p,\psi^{\prime\prime\prime}}\). As is seen from the argument right before this corollary, by shrinking the weight space we may avoid the zero locus of \(\mathcal{L}^{(2)}_{p,\psi^{\prime\prime\prime}}\) and assume that \(w\) is a basis of \(H^{1}_{+}(\mathcal{K}_{p},\boldsymbol{\psi})\) over \(\Lambda^{-,\prime}\) (note that \(\psi\) and \(\psi^{\prime\prime\prime}\) restrict to the same character of \(G_{\mathcal{K}_{p}}\)). By comparing the explicit reciprocity law for elliptic units, as recalled above, at the dense set of arithmetic points in \(\mathcal{X}_{2}\), we see that as elements of \(H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\),

\[z_{p^{\infty},\psi}\equiv\mathcal{L}^{(2)}_{p,\psi}\cdot w\ (\text{mod }H^{1}_{-}(\mathcal{K}_{p},\boldsymbol{\psi})).\]

Now the corollary follows from the analogue (3.1) of Rubin's conjecture,

\[H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}=H^{1}_{+}\oplus H^{1}_{-}\]

over \(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}\), the above theorem, and the Poitou-Tate exact sequence

\[0\to H^{1}(\mathcal{K}^{\Sigma},\boldsymbol{\psi})\to\frac{H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})}{H^{1}_{-}(\mathcal{K}_{p},\boldsymbol{\psi})}\to X^{-}_{\psi}\to X^{\text{str}}_{\psi}\to 0.\]

### Selmer Groups and Global Duality Argument

As \(H^{1}(\mathcal{K}^{\Sigma},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\) is mapped to \(H^{1}_{+}\), we have

\[H^{1}_{+}(\mathcal{K}^{\Sigma},\boldsymbol{\psi})=H^{1}(\mathcal{K}^{\Sigma},\boldsymbol{\psi}). \tag{4.1}\]

We have, by the Poitou-Tate exact sequence, the following:

\[0\to H^{1}_{+}(\mathcal{K}^{\Sigma},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\to H^{1}_{+}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\to X^{\text{rel}}_{\psi}\otimes\mathbb{Q}_{p}\to X^{+}_{\psi}\otimes\mathbb{Q}_{p}\to 0 \tag{4.2}\]

and

\[0\to H^{1}(\mathcal{K}^{\Sigma},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\to\frac{H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}}{H^{1}_{-}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}}\to X^{-}_{\psi}\otimes\mathbb{Q}_{p}\to X^{\text{str}}_{\psi}\otimes\mathbb{Q}_{p}\to 0. \tag{4.3}\]

Moreover the argument in [13, Lemma 6.7] shows that for any height one prime \(P\) of \(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}\), we have

\[\text{char}_{P}(X^{\text{rel}}_{\psi,\text{tor}})=\text{char}_{P}(X^{\text{str}}_{\psi}). \tag{4.4}\]

Here the subscript tor means the \(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}\)-torsion part. Consider the \(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}\)-localization map

\[H^{1}(\mathcal{K}^{\Sigma},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\to H^{1}_{+}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}. \tag{4.5}\]
For any height one prime \(P\) of \(\Lambda^{-,\prime}\), write \(\text{locind}_{P}\) for the length at \(P\) of the cokernel of the map (4.5). Using (4.2), (4.3) and (4.4) we can see that

\[\text{leng}_{P}X^{+}_{\psi,\text{tor}}\otimes\mathbb{Q}_{p}+2\,\text{locind}_{P}=\text{leng}_{P}X^{-}_{\psi}\otimes\mathbb{Q}_{p}. \tag{4.6}\]

(Note that the rank of \(X^{-}_{\psi}\) is \(1\).) On the other hand we have proved the main conjecture that

\[\text{char}_{\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}}(X^{-}_{\psi})=(\mathcal{L}^{(2)}_{p,\psi}). \tag{4.7}\]

**Definition 4.7**.: We define a virtual Heegner family to be an element \(\kappa_{f,\boldsymbol{\chi}}^{\mathrm{virt}}\) in \(H^{1}(\mathcal{K},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\) whose image in \(H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\) satisfies

\[\mathrm{char}(\frac{H^{1}_{+}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}}{(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p})\kappa^{\mathrm{virt}}})^{2}=(\mathcal{L}^{(2)}_{p,\xi\chi}\cdot\mathcal{L}_{p,\xi\chi^{c}}). \tag{4.8}\]

(The existence of a virtual Heegner family is established in the next subsection.) Recall that \(\xi\chi=\psi\), and \(L(\xi\chi^{c},1)\neq 0\) by our choice.

**Proposition 4.8**.: _If the Selmer rank of the CM elliptic curve \(E\) associated to \(\psi\) is \(1\), then the specialization \(\kappa_{f,\chi}\) of the virtual Heegner family to \(\phi_{0}\) is non-torsion._

Proof.: For each height one prime \(P\),

\[\mathrm{ord}_{P}(\mathrm{char}(\frac{H^{1}_{+}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}}{(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p})\kappa^{\mathrm{virt}}_{f,\boldsymbol{\chi}}}))=\mathrm{locind}_{P}+\mathrm{ord}_{P}(\mathrm{char}(\frac{H^{1}_{+}(\mathcal{K},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}}{(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p})\kappa^{\mathrm{virt}}_{f,\boldsymbol{\chi}}})),\]

so by (4.8), (4.7) and (4.6),

\[(\mathrm{char}(\frac{H^{1}_{+}(\mathcal{K},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}}{(\Lambda^{-,\prime}\otimes\mathbb{Q}_{p})\kappa^{\mathrm{virt}}_{f,\boldsymbol{\chi}}}))^{2}=\mathrm{char}_{\Lambda^{-,\prime}\otimes\mathbb{Q}_{p}}(X^{+}_{\psi,\mathrm{tor}}\otimes\mathbb{Q}_{p})(\mathcal{L}_{p,\xi\chi^{c}})=\mathrm{char}(X^{+}_{\psi,\mathrm{tor}}\otimes\mathbb{Q}_{p})\mathrm{char}(X^{+}_{\xi\chi^{c}}\otimes\mathbb{Q}_{p}).\]

Here the last step is nothing but Rubin's Iwasawa main conjecture for the Andreatta-Iovita \(p\)-adic \(L\)-function, which can be proved as in Corollary 4.6 (in fact the proof is more direct). We note that the specialization of the Selmer condition \(H^{1}_{+}(\mathcal{K}_{p},\boldsymbol{\psi})\) to \(\phi_{0}\) is nothing but \(H^{1}_{f}(\mathcal{K}_{p},\psi)\). So by the self-duality of \(H^{1}_{f}\), \(H^{1}_{+}(\mathcal{K}_{p},\Lambda^{-,\prime}(\boldsymbol{\psi})^{*})^{\gamma^{-}=1}=H^{1}_{f}(\mathcal{K}_{p},\psi)\). Now, under the assumption that the Selmer rank for \(E\) is \(1\), we see that \(X\) does not divide \(\mathrm{char}_{\Lambda^{-,\prime}}(X^{+}_{\psi,\mathrm{tor}})\), and thus the specialization \(\kappa_{f,\chi}\) of \(\kappa^{\mathrm{virt}}_{f,\boldsymbol{\chi}}\) must be non-torsion.

### Construction of the Virtual Heegner Cycle

**Proposition 4.9**.: _The specialization of \(\kappa_{\mathcal{F},\boldsymbol{\chi}}\) to \(f\) is a virtual Heegner family as defined before._

Proof.: We show that the image in \(H^{1}(\mathcal{K}_{p},\boldsymbol{\psi})\otimes\mathbb{Q}_{p}\) of the specialization \(\kappa_{f,\boldsymbol{\chi}}\) of the Heegner family \(\kappa_{\mathcal{F},\boldsymbol{\chi}}\) to \(f\) does give us a virtual Heegner family.
By the formula [1, Theorem 7.1] again, we have that the image in the local Galois cohomology at \(p\) of \(\kappa_{\mathcal{F},\boldsymbol{\chi}}\) coincides with \(\mathcal{L}_{p,\mathcal{F},\chi}\cdot v_{\mathcal{F},\boldsymbol{\chi}}\), by looking at the specializations at a Zariski dense set of points. But the restriction of \(\mathcal{L}_{p,\mathcal{F},\chi}\) to \(f\) is clearly \(\mathcal{L}_{p,f,\chi}\). So we see that the specialization of \(\kappa_{\mathcal{F},\boldsymbol{\chi}}\) to \(f\) does provide a virtual Heegner family whose specialization to \(\phi_{0}\) is \(\kappa_{f,\chi}\).

Proof of the main theorem: Now that we have constructed a virtual Heegner family, \(\kappa_{f,\chi}\) is indeed nonzero. This proves that the analytic rank of the pair \((f,\chi)\) is also \(1\), concluding the proof of the main theorem.

## References

* [1] Andreatta, F. and Iovita, A., Katz type \(p\)-adic \(L\)-functions for primes \(p\) non-split in the CM field, arXiv:1905.00792.
* [2] Aflalo, E. and Nekovar, J., Non-triviality of CM points in ring class field towers, Israel J. Math. 175 (2010), 225-284.
* [3] Bellaiche, J., An introduction to Bloch and Kato's conjecture, lecture notes for the CMI summer school, preprint, 2009.
* [4] Bertolini, M., Darmon, H. and Prasanna, K., Generalized Heegner cycles and \(p\)-adic Rankin \(L\)-series, Duke Math. J. 162 (2013), no. 6, 1033-1148.
* [5] Bhargava, M., Skinner, C. and Zhang, W., A majority of elliptic curves over \(\mathbb{Q}\) satisfy the Birch and Swinnerton-Dyer conjecture, arXiv:1407.1826.
* [6] Bloch, S. and Kato, K., \(L\)-functions and Tamagawa numbers of motives. The Grothendieck Festschrift, Vol. I, 333-400, Progr. Math., 86, Birkhauser Boston, Boston, MA, 1990.
* [7] Brasca, R., \(p\)-adic families of modular forms over Shimura curves. PhD thesis, Universita degli Studi di Milano, 2012.
* [8] Brooks, H., Shimura curves and special values of \(p\)-adic \(L\)-functions, to appear in Intl. Math. Res. Notices (2014).
* [9] Burungale, A., Kobayashi, S. and Ota, K., Rubin's conjecture on local units in the anticyclotomic tower at inert primes, Ann. Math. 194 (2021), Issue 3, 943-966.
* [10] Burungale, A. and Tian, Y., p-converse to a theorem of Gross-Zagier, Kolyvagin and Rubin. Invent. Math. 220 (2020), no. 1, 211-253.
* [11] Chida, M. and Hsieh, M.-L., On the anticyclotomic Iwasawa main conjecture for modular forms, Compositio Mathematica, 151 (2015), no. 5, 863-897.
* [12] Cai, L., Shu, J. and Tian, Y., Explicit Gross-Zagier and Waldspurger formulae, Algebra Number Theory 8 (2014), no. 10, 2523-2572.
* [13] Castella, F. and Wan, X., Perrin-Riou's main conjecture for elliptic curves at supersingular primes, https://web.math.ucsb.edu/castella/Perrin-Riou.pdf.
* [14] Darmon, H., Diamond, F. and Taylor, R., Fermat's last theorem. Elliptic curves, modular forms and Fermat's last theorem (Hong Kong, 1993), 2-140, Int. Press, Cambridge, MA, 1997.
* [15] Fan, Y., Local expansion in Serre-Tate coordinates and p-adic iteration of Gauss-Manin connections, thesis.
* [16] Greenberg, R., On the Birch and Swinnerton-Dyer Conjecture, Inventiones mathematicae 72 (1983), 241-265.
* [17] Greenberg, R., On the critical values of Hecke L-functions for imaginary quadratic fields, Inventiones mathematicae 79 (1985), 79-94.
* [18] Hida, H. and Tilouine, J., Anti-cyclotomic Katz \(p\)-adic \(L\)-functions and congruence modules, Ann. Sci. Ecole Norm. Sup. (4) 26 (1993), no. 2, 189-259.
* [19] Hsieh, M-L., On the non-vanishing of Hecke L-values modulo p, American Journal of Mathematics, 134 (2012), no. 6, 1503-1539. * [20] Hsieh, M-L., Eisenstein congruence on unitary groups and Iwasawa main conjectures for CM fields, J. Amer. Math. Soc. 27 (2014), no. 3, 753-862. * [21] Hsieh, M-L., Special values of anticyclotomic Rankin-Selberg L-functions, Documenta Mathematica, 19 (2014), 709-767. * [22] Jennifer Johnson-Leung, Guido Kings, On the equivariant and the non-equivariant main conjecture for imaginary quadratic fields, J. Reine Angew. Math., 653 (2011), 75-114. * [23] D. Jetchev, D. Loeffler and S.L. Zerbes, Heegner points in Coleman families, Proc. London Math. Soc. 122 (2021), no. 1, 124-152. * [24] D. Jetchev, C. Skinner, X. Wan, The Birch-Swinnerton-Dyer Formula For Elliptic Curves of Analytic Rank One, Cambridge Journal of Mathematics, Volume 5, Number 3, 369-434, 2017. * [25] D. Jetchev, C. Skinner, X. Wan, in preparation. * [26] P. Kassaei, Overconvergence and classicality, the case of curves, J. reine angew. Math. 631 (2009), 109-139. * [27] Kato, K., \(p\)-adic Hodge theory and values of zeta functions of modular forms, Cohomologies \(p\)-adiques et applications arithmetiques. III. * [28] R. Kottwitz, Points on some Shimura varieties over finite fields, Journal of AMS 5 (1992), no. 2, 373-443. * [29] D. Kriz, Supersingular main conjectures, Sylvester's conjecture and Goldfeld's conjecture, preprint. * [30] Lan, K-W., Arithmetic compactifications of PEL-type Shimura varieties, London Mathematical Society Monographs Series, 36, Princeton University Press, Princeton, NJ, 2013, xxvi+561 pp. * [31] Rohrlich, D., Root numbers of Jacobi-sum Hecke characters, Illinois J. Math. 36 (1992), no. 1, 155-176. * [32] Rohrlich, D., Root Numbers, course notes. * [33] K. Rubin, Local units, elliptic units, Heegner points and elliptic curves, Invent. Math. 88 (1987), 405-422. * [34] K. Rubin, The "main conjectures" of Iwasawa theory for imaginary quadratic fields, Invent. Math. 103 (1991), 25-68. * [35] K. Rubin, More "main conjectures" for imaginary quadratic fields, CRM Proceedings and Lecture Notes, Volume 4, 1994. * [36] C. Skinner, A converse of a theorem of Gross, Zagier and Kolyvagin, Annals of Mathematics, Vol. 191, Issue 2, 329-354, 2020. * [37] X. Wan, Heegner Point Kolyvagin System and Iwasawa Main Conjecture, Acta Mathematica Sinica, English Series, 37 (2021), no. 1, 104-120. * [38] Yuan, X., Zhang, S. and Zhang, W., The Gross-Zagier Formula on Shimura Curves, Annals of Mathematics Studies, Princeton University Press, 2013. * [39] W. Zhang, Selmer groups and the indivisibility of Heegner points, Cambridge Journal of Mathematics, Volume 2, Number 2, 191-253, 2014. ## Appendix A. Rational Shimura curve and BDP type \(p\)-adic special value formula at non-split primes (by Yangyu Fan) ### Nearly overconvergent quaternionic modular forms Let \(p\) be an odd prime. Let \(B\) be an indefinite quaternion algebra over \(\mathbb{Q}\) together with a maximal lattice \(\mathcal{O}_{B}\subset B\). Assume that \(B\neq M_{2}(\mathbb{Q})\) and \(B\) is split at \(p\). Fix an isomorphism \(B_{p}\cong M_{2}(\mathbb{Q}_{p})\). For any (small enough) open compact subgroup \(U^{p}\subset\mathbb{B}^{p\infty\times}\), the complex Shimura curve \(B^{\times}\backslash(\mathbb{C}-\mathbb{R})\times\mathbb{B}^{\infty\times}/U^{p}\mathrm{GL}_{2}(\mathbb{Z}_{p})\) admits a canonical model \(X\) over \(\mathbb{Q}\).
According to [7], \(X\) has a canonical proper smooth model \(\mathfrak{X}\) over \(\mathbb{Z}_{(p)}\) which solves the following moduli problem: for any scheme \(S\) over \(\mathbb{Z}_{(p)}\), the set \(\mathfrak{X}(S)\) parameterizes isomorphism classes of triples \((A,i,\bar{\eta})\) where * \((A,i)\) is a false elliptic curve over \(S\), i.e. \(A\) is an abelian scheme over \(S\) of relative dimension \(2\) and \(i:\ \mathcal{O}_{B}\hookrightarrow\mathrm{End}_{S}(A)\) is an injective homomorphism; * \(\bar{\eta}\) is the \(U^{p}\)-orbit of an \(\mathcal{O}_{B}\)-linear isomorphism \(\eta:\ \mathcal{O}_{B}\otimes_{\mathbb{Z}}\hat{\mathbb{Z}}^{p}\to T^{p}(A):=\prod_{\ell\neq p}T_{\ell}(A)\) (locally in the etale topology). Let \(\pi:\ \mathcal{A}\to\mathfrak{X}\) be the universal abelian surface. Let \(x\mapsto\bar{x}\) be the main involution on \(B\) and fix \(s\in\mathcal{O}_{B}\) such that \(s^{2}=-\mathrm{disc}(B)\). Then (see [6, Page 592]) there exists a unique principal polarization \[\lambda_{\mathcal{A}}:\ \mathcal{A}\to\mathcal{A}^{\vee}\] such that for any geometric point \(\bar{s}\in S\), the Rosati involution on \(\mathrm{End}(\mathcal{A}_{\bar{s}})\) is compatible with \[-^{\dagger}:\ B\to B,\quad x\mapsto s^{-1}\bar{x}s.\] Let \(e\in\mathcal{O}_{B,p}\) be a non-trivial idempotent such that \(e^{\dagger}=e\). Then (see [15, Sect. 2.4]) \[\underline{\omega}:=e\pi_{*}\Omega^{1}_{\mathcal{A}/\mathfrak{X}},\quad\mathbb{H}:=eR^{1}\pi_{*}\Omega^{\bullet}_{\mathcal{A}/\mathfrak{X}}\] are locally free \(\mathcal{O}_{\mathfrak{X}}\)-modules of rank one and two respectively. Moreover, * \(\mathbb{H}\) is equipped with the Gauss-Manin connection \(\nabla:\ \mathbb{H}\to\mathbb{H}\otimes_{\mathcal{O}_{\mathfrak{X}}}\Omega^{1}_{\mathfrak{X}/\mathbb{Z}_{p}}\); * there is an exact sequence of \(\mathcal{O}_{\mathfrak{X}}\)-modules \[0\to\underline{\omega}\to\mathbb{H}\to\underline{\omega}^{-1}\to 0;\] * the Kodaira-Spencer map induces an isomorphism \[\operatorname{KS}:\ \underline{\omega}^{\otimes 2}\cong\Omega^{1}_{\mathfrak{X}/\mathbb{Z}_{p}}.\] Let \(\mathcal{G}\) be the \(p\)-divisible group of dimension \(1\) and height \(2\) cut out from \(\mathcal{A}[p^{\infty}]\) by \(1-e\). Let \(\operatorname{Ha}\) be the Hasse invariant of the reduction of \(\mathcal{G}\). Over the \(p\)-adic completion \(\hat{\mathfrak{X}}\) of \(\mathfrak{X}\), \(\underline{\omega}\) (resp. \(\mathbb{H}\)) coincides with the sheaf of invariant differentials of \(\mathcal{G}\) (resp. of the universal vector extension of \(\mathcal{G}\)). The _Hodge ideal_ \(\operatorname{Hdg}\subset\mathcal{O}_{\hat{\mathfrak{X}}}\) of \(\mathcal{G}\) is the inverse image of \(\underline{\omega}^{\otimes(1-p)}\operatorname{Ha}\subset\mathcal{O}_{\hat{\mathfrak{X}}}\), such that (see [1, Lem. A.1]) * Zariski locally, the ideal \(\operatorname{Hdg}\) is generated by two elements; * if \(p\in\operatorname{Hdg}^{2}\), then \(\operatorname{Hdg}\) is invertible. For \(r\geq 2\), let \(\mathfrak{X}_{r}\) be the open formal subscheme in the formal admissible blow-up of \(\hat{\mathfrak{X}}\) with respect to the sheaf of ideals \((p,\operatorname{Hdg}^{r})\) cut out by the condition \(p\in\operatorname{Hdg}^{r}\). Then over \(\mathfrak{X}_{r}\), the \(p\)-divisible group \(\mathcal{G}\) admits the level-\(n\) canonical subgroup \(\mathcal{C}_{n}\) for \(n=1\) when \(r\geq 2\) and for \(n=2\) when \(r\geq p+2\) (see [1, Corollary A.2]).
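For orientation, we record a standard reformulation (our addition, not part of the original text): at a rank-one point \(x\) of the generic fiber, with the valuation normalized by \(v(p)=1\), the condition \(p\in\operatorname{Hdg}^{r}\) reads \[v(\operatorname{Hdg}(x))\leq\frac{1}{r},\] so \(\mathcal{X}_{r}\) is a strict neighbourhood of the ordinary locus \(\{v(\operatorname{Hdg})=0\}\) that shrinks as \(r\) grows; this is the overconvergence radius within which the theory of canonical subgroups applies.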
Moreover, the Cartier dual \(\mathcal{C}^{D}_{n}\) is etale and locally isomorphic to \(\mathbb{Z}/p^{n}\mathbb{Z}\) on the adic generic fiber \(\mathcal{X}_{r}\) of \(\mathfrak{X}_{r}\). Let \(\mathcal{IG}_{n,r}\) be the adic space over \(\mathcal{X}_{r}\) parameterizing all trivializations \(\mathcal{C}^{D}_{n}\cong\mathbb{Z}/p^{n}\mathbb{Z}\). Then \(\mathcal{IG}_{n,r}\) is Galois over \(\mathcal{X}_{r}\) with Galois group isomorphic to \((\mathbb{Z}/p^{n}\mathbb{Z})^{\times}\). Similarly to [1, Lem. 3.2], the normalization \(\mathfrak{IG}_{n,r}\) of \(\mathcal{IG}_{n,r}\) over \(\mathfrak{X}_{r}\) is well-defined, as \(\mathfrak{X}_{r}\) is normal. Let \(\gamma\in\mathcal{C}^{D}_{1}(\mathfrak{IG}_{1,r})\) be the universal section and let \(\underline{\Omega}\subset\underline{\omega}\) be the \(\mathcal{O}_{\mathfrak{IG}_{1,r}}\)-submodule generated by any lift of \(s=\operatorname{HT}(\gamma)\). Here \(\operatorname{HT}\) is the Hodge-Tate map \[\operatorname{HT}:\ \mathcal{C}^{D}_{1}(\mathfrak{IG}_{1,r})\to\frac{\underline{\omega}}{p\underline{\omega}}.\] Then (see [13, Prop. 1.5.7]) \(\underline{\Omega}\) is locally free of rank \(1\) and the ideal \(\underline{\delta}:=[\underline{\Omega}:\underline{\omega}]\) is defined over \(\mathfrak{IG}_{1,r}\) with \(\underline{\delta}^{p-1}=\operatorname{Hdg}\). Let \(\mathbb{H}^{\sharp}:=\underline{\Omega}+\underline{\mathcal{G}}^{p}\mathbb{H}\subset\mathbb{H}\). Then \(\mathbb{H}^{\sharp}\) is free of rank \(2\) and sits in the exact sequence \[0\to\underline{\Omega}\to\mathbb{H}^{\sharp}\to\operatorname{Hdg}\underline{\Omega}^{-1}\to 0.\] By the machinery of vector bundles with marked sections (see [2, Sect. 2]) applied to the vector bundle \(\mathcal{F}=\underline{\Omega}\) or \(\mathbb{H}^{\sharp}\) and the marked section \(s\) modulo \(\mathcal{I}:=p\underline{\delta}^{-p}\subset\mathcal{O}_{\mathfrak{IG}_{1,r}}\) over \(\mathfrak{IG}_{1,r}\), one obtains the formal scheme \(\mathcal{V}_{0}(\mathcal{F},s)\) over \(\mathfrak{IG}_{1,r}\) representing the functor \[t:\ T\to\mathfrak{IG}_{1,r}\mapsto\{v\in H^{0}(T,t^{*}(\mathcal{F})^{\vee})\mid\bar{v}(t^{*}(s))\equiv 1\ \text{mod}\ \mathcal{I}\},\] and (see [13, Sect. 2.3]) for any \(\mathfrak{X}_{r}\)-scheme \(T\), \[\mathcal{V}_{0}(\mathcal{F},s)(T)=\{(\rho,v)\in\mathfrak{IG}_{1,r}(T)\times H^{0}(T,\rho^{*}(\mathcal{F})^{\vee})\ |\ \bar{v}(\rho^{*}(s))\equiv 1\ \text{mod}\ \mathcal{I}\}.\] The extended torus \(\mathcal{T}^{\text{ext}}:=\mathbb{Z}_{p}^{\times}(1+\mathcal{I}\mathbb{G}_{a})\) acts on \(\mathcal{V}_{0}(\mathcal{F},s)\) by the following rule: take any \((\rho,v)\in\mathcal{V}_{0}(\mathcal{F},s)(T)\), * for any \(\lambda\in 1+\mathcal{I}\mathbb{G}_{a}\), \(\lambda*(\rho,v):=(\rho,\lambda^{-1}v)\); * for any \(\lambda\in\mathbb{Z}_{p}^{\times}\), \(\lambda*(\rho,v):=(\bar{\lambda}\circ\rho,\lambda^{-1}v\circ\gamma_{\lambda}^{-1})\), where \(\gamma_{\lambda}:\ \mathcal{F}\cong\bar{\lambda}^{*}\mathcal{F}\) is characterized by \[\gamma_{\lambda}(s)\equiv\lambda^{-1}s\ \text{mod}\ \mathcal{I}.\] Let \(\mathcal{T}^{\mathrm{ext}}\) act on functions via pull-back, i.e. \((t*f)(\rho,v)=f(t*(\rho,v))\). Then by [2, Lem. 2.6], \(\mathcal{O}_{\mathcal{V}_{0}(\mathbb{H}^{\sharp},s)}\) is equipped with a canonical filtration \(\mathrm{Fil}_{\bullet}\) preserved by the \(\mathcal{T}^{\mathrm{ext}}\)-action such that \(\mathrm{Fil}_{0}=\mathcal{O}_{\mathcal{V}_{0}(\underline{\Omega},s)}\). Assume moreover \(r\geq 2\) for \(p\geq 5\) and \(r\geq 4\) if \(p=3\). **Definition A.1**.: Let \(R\) be a \(p\)-adically complete and separated ring.
For any _analytic_ character \(\nu:\ \mathbb{Z}_{p}^{\times}\to R^{\times}\), i.e. one with \(\nu(t)=\exp(u\log(t))\) for all \(t\in 1+p\mathbb{Z}_{p}\) and some \(u\in R\), set \(\nu^{f}:=\nu|_{(\mathbb{Z}/p\mathbb{Z})^{\times}}\) and \(\nu^{0}:=\nu(\nu^{f})^{-1}\). Note that \(\nu\) extends to a character on \(\mathcal{T}^{\mathrm{ext}}\). Set \(\mathfrak{m}^{\nu}:=\mathfrak{m}^{\nu^{0}}\otimes_{\mathcal{O}_{\mathfrak{X}_{r}}\otimes_{\mathbb{Z}_{p}}R}\mathfrak{m}^{\nu^{f}}\) where \[\mathfrak{m}^{\nu^{0}}:=\mathcal{O}_{\mathcal{V}_{0}(\underline{\Omega},s)}\otimes_{\mathbb{Z}_{p}}R[(\nu^{0})^{-1}],\quad\mathfrak{m}^{\nu^{f}}:=\mathcal{O}_{\mathfrak{IG}_{1,r}}\otimes_{\mathbb{Z}_{p}}R[(\nu^{f})^{-1}].\] Set \[\mathbb{W}^{\nu}:=\mathbb{W}^{\nu^{0}}\otimes_{\mathcal{O}_{\mathfrak{X}_{r}}\otimes_{\mathbb{Z}_{p}}R}\mathfrak{m}^{\nu^{f}},\quad\mathbb{W}^{\nu^{0}}:=\mathcal{O}_{\mathcal{V}_{0}(\mathbb{H}^{\sharp},s)}\otimes_{\mathbb{Z}_{p}}R[(\nu^{0})^{-1}].\] For any \(h\in\mathbb{N}\), set \[\mathrm{Fil}_{h}\mathbb{W}^{\nu}:=\mathrm{Fil}_{h}\mathbb{W}^{\nu^{0}}\otimes_{\mathcal{O}_{\mathfrak{X}_{r}}\otimes_{\mathbb{Z}_{p}}R}\mathfrak{m}^{\nu^{f}},\quad\mathrm{Fil}_{h}\mathbb{W}^{\nu^{0}}:=\mathrm{Fil}_{h}(\mathcal{O}_{\mathcal{V}_{0}(\mathbb{H}^{\sharp},s)}\otimes_{\mathbb{Z}_{p}}R)[(\nu^{0})^{-1}].\] Here \(-[\chi]\) means taking the \(\chi\)-isotypic component. Combining the argument in [2, Prop. 4.7] with [13, Thm 2.3.2], one can show **Proposition A.2**.: _The sheaf \(\mathbb{W}^{\nu}\) and its filtration are functorial in varying \((R,\nu)\). Moreover,_ * \(\mathrm{Fil}_{h}\mathbb{W}^{\nu^{0}}\) _is a locally free_ \(\mathcal{O}_{\mathfrak{X}_{r}}\otimes_{\mathbb{Z}_{p}}R\)_-module of rank_ \(h+1\) _and_ \[\mathrm{Fil}_{0}\mathbb{W}^{\nu^{(0)}}=\mathfrak{m}^{\nu^{(0)}};\quad\mathrm{Gr}_{h}\mathbb{W}^{\nu^{(0)}}\cong\mathfrak{m}^{\nu^{(0)}}\otimes_{\mathcal{O}_{\mathfrak{X}_{r}}\otimes_{\mathbb{Z}_{p}}R}\mathrm{Hdg}^{h}\underline{\omega}^{-2h};\] * \(\mathbb{W}^{\nu^{(0)}}\) _is the_ \(p\)_-adic completion of_ \(\varinjlim_{h}\mathrm{Fil}_{h}\mathbb{W}^{\nu^{(0)}}\)_;_ * _if the_ \(k\)_-th power character is a specialization of_ \(\nu\) _via_ \([k]:\ \mathrm{Spf}\left(\mathbb{Z}_{p}\right)\to\mathrm{Spf}\left(R\right)\)_, then, as filtered modules,_ \[[k]^{*}(\mathrm{Fil}_{k}\mathbb{W}^{\nu})\mid_{\mathcal{X}_{r}}=\mathrm{Sym}^{k}\mathbb{H}\mid_{\mathcal{X}_{r}},\] _where on the right hand side we consider the Hodge-de Rham filtration._ **Definition A.3**.: The space of \(R\)-families of nearly overconvergent quaternionic modular forms with depth \(r\) (resp. and of order \(h\)), tame level \(U^{p}\) and weight \(\nu\) is \[N^{\dagger,r,\nu}(U^{p},R):=H^{0}(\mathfrak{X}_{r}\hat{\otimes}_{\mathbb{Z}_{p}}R,\mathbb{W}^{\nu}),\quad N^{\dagger,r,\nu}_{h}(U^{p},R):=H^{0}(\mathfrak{X}_{r}\hat{\otimes}_{\mathbb{Z}_{p}}R,\mathrm{Fil}_{h}\mathbb{W}^{\nu}).\] Note that \(N^{\dagger,r,\nu}_{0}(U^{p},R)\) is just the space of overconvergent quaternionic modular forms with depth \(r\), tame level \(U^{p}\) and weight \(\nu\), \[M^{\dagger,r,\nu}(U^{p},R):=H^{0}(\mathfrak{X}_{r}\hat{\otimes}_{\mathbb{Z}_{p}}R,\mathfrak{m}^{\nu}).\] On \(M^{\dagger,r,\nu}(U^{p},R)\), there are Hecke operators \(T_{\ell}\) for all places \(\ell\neq p\) such that \(U^{p}\) is maximal at \(\ell\) and \(B\) is split at \(\ell\), as well as the \(U_{p}\)- and \(V_{p}\)-operators. In [13], it is shown that \(T_{\ell}\) and \(U_{p}\) extend to filtration-preserving operators on \(N^{\dagger,r,\nu}(U^{p},R)\).
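To make Definition A.1 concrete, here is a small numerical sketch (our illustration, not part of the text): it realizes an analytic character \(\nu=\nu^{f}\nu^{0}\) on \(\mathbb{Z}_{p}^{\times}\) modulo \(p^{M}\), with \(\nu^{f}\) a power of the Teichmuller character and \(\nu^{0}(t)=\exp(u\log\langle t\rangle)\), and checks multiplicativity numerically. All parameters and helper names (\(p=5\), \(M=8\), \(a=2\), \(u=7\), the function names) are our illustrative choices.

```python
from math import factorial

p, M = 5, 8                     # odd prime and working precision p^M (illustrative)
pM = p ** M

def vp(n):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n, v = n // p, v + 1
    return v

def div_mod(a, b):
    """(a/b) mod p^M, assuming v_p(a) >= v_p(b)."""
    v = vp(b)
    return (a // p ** v) * pow(b // p ** v, -1, pM) % pM

def log1(x):
    """Truncated p-adic logarithm of x = 1 mod p; exact mod p^M."""
    y, total = x - 1, 0
    for i in range(1, 3 * M):   # later terms vanish mod p^M
        total = (total + (-1) ** (i + 1) * div_mod(y ** i, i)) % pM
    return total

def exp0(y):
    """Truncated p-adic exponential of y = 0 mod p; exact mod p^M."""
    return sum(div_mod(y ** i, factorial(i)) for i in range(3 * M)) % pM

def teichmuller(t):
    """Teichmuller lift: the (p-1)-st root of unity congruent to t mod p."""
    w = t % pM
    for _ in range(M):
        w = pow(w, p, pM)
    return w

a, u = 2, 7                     # nu^f = omega^a, nu^0 = exp(u * log<.>)

def nu(t):
    """nu(t) = omega(t)^a * exp(u * log<t>) mod p^M, for t prime to p."""
    w = teichmuller(t)
    one_unit = t * pow(w, -1, pM) % pM       # <t> = t/omega(t) = 1 mod p
    return pow(w, a, pM) * exp0(u * log1(one_unit) % pM) % pM

t1, t2 = 7, 13
assert nu(t1 * t2) == nu(t1) * nu(t2) % pM   # nu is multiplicative mod p^M
print(nu(t1), nu(t2), nu(t1 * t2))
```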
Moreover for any finite order character \(\chi:\ \mathbb{Z}_{p}^{\times}\to R^{\times}\), one can define the twist-by-\(\chi\) operator \[\theta^{\chi}:\ N^{\dagger,r,\nu}(R)[1/p]\to N^{\dagger,r,\nu+2\chi}(R)[1/p]\] which preserves the filtration and satisfies \(U_{p}\circ\theta^{\chi}=0\); moreover, if \(U_{p}(f)=0\), then \(\theta^{\chi^{-1}}\theta^{\chi}(f)=f\). ### Iteration of the Gauss-Manin connection Let \(\mathcal{IG}^{\prime}_{1,r}\) be the finite etale Galois cover over \(\mathcal{X}_{r}\) parameterizing compatible trivializations \((\mathbb{Z}/p\mathbb{Z})^{2}\cong\mathcal{G}^{D}[p]\) and \(\mathbb{Z}/p\mathbb{Z}\cong\mathcal{C}^{D}\) over \(\mathcal{X}_{r}\). Since \(\mathfrak{X}_{r}\) is normal, the normalization \(\mathfrak{IG}^{\prime}_{1,r}\) of \(\mathfrak{X}_{r}\) in \(\mathcal{IG}^{\prime}_{1,r}\) is well-defined and finite over \(\mathfrak{X}_{r}\). By [2, Prop 6.3], the Gauss-Manin connection \(\nabla\) on \(\mathbb{H}\) induces a connection \[\nabla^{\sharp}:\ \mathbb{H}^{\sharp}\to\mathbb{H}^{\sharp}\otimes\Omega^{1}_{\mathfrak{IG}^{\prime}_{1,r}/\mathbb{Z}_{p}}\] such that \(\nabla^{\sharp}\mid_{\underline{\Omega}}\equiv 0\ \mathrm{mod}\ \mathcal{I}\). Define \(\mathcal{V}_{0}(\mathbb{H}^{\sharp},s)\) over \(\mathfrak{IG}^{\prime}_{1,r}\) accordingly. Then by [2, Lem 2.9] and [13, Prop 2.2.19], \(\nabla^{\sharp}\) induces a connection \[\nabla^{0}:\ \mathbb{W}^{\nu^{0}}\to\mathbb{W}^{\nu^{0}}\hat{\otimes}_{\mathcal{O}_{\mathfrak{X}_{r}}}\frac{1}{\mathrm{Hdg}^{2}}\Omega^{1}_{\mathfrak{X}_{r}/\mathbb{Z}_{p}}\] which satisfies Griffiths transversality (after inverting \(p\)), and the induced \(\mathcal{O}_{\mathfrak{X}_{r}}\hat{\otimes}_{\mathbb{Z}_{p}}R\)-linear map \[\mathrm{Gr}_{h}(\nabla^{0}):\mathrm{Gr}_{h}(\mathbb{W}^{\nu^{0}})[1/p]\to\mathrm{Gr}_{h+1}(\mathbb{W}^{\nu^{0}})\otimes_{\mathcal{O}_{\mathfrak{X}_{r}}}\Omega^{1}_{\mathfrak{X}_{r}/\mathbb{Z}_{p}}[1/p]\] is an isomorphism composed with multiplication by \(\nu-h\). Here, for an analytic character \(\nu:\ \mathbb{Z}_{p}^{\times}\to R^{\times}\), we also denote \(\lim_{x\to 1}\frac{\log(\nu(x))}{\log(x)}\) by \(\nu\). Moreover by [13, Lem 2.3.11], the derivation \[d:\ \mathcal{O}_{\mathfrak{IG}_{1,r}}\to\Omega^{1}_{\mathfrak{IG}_{1,r}/\mathbb{Z}_{p}}\] induces a connection \[\nabla^{f}:\ \mathfrak{m}^{\nu^{f}}\to\mathfrak{m}^{\nu^{f}}\otimes_{\mathcal{O}_{\mathfrak{X}_{r}}}\Omega^{1}_{\mathfrak{X}_{r}/\mathbb{Z}_{p}}[1/p].\] Thus the connection \[\nabla:=\nabla^{0}\otimes\nabla^{f}:\quad\mathbb{W}^{\nu}\to\mathbb{W}^{\nu}\otimes_{\mathcal{O}_{\mathfrak{X}_{r}}}\Omega^{1}_{\mathfrak{X}_{r}/\mathbb{Z}_{p}}[1/p]\] satisfies Griffiths transversality, and the induced \(\mathcal{O}_{\mathfrak{X}_{r}}\hat{\otimes}_{\mathbb{Z}_{p}}R\)-linear map \[\mathrm{Gr}_{h}(\nabla):\ \mathrm{Gr}_{h}(\mathbb{W}^{\nu})[1/p]\to\mathrm{Gr}_{h+1}(\mathbb{W}^{\nu})\hat{\otimes}_{\mathcal{O}_{\mathfrak{X}_{r}}}\Omega^{1}_{\mathfrak{X}_{r}/\mathbb{Z}_{p}}[1/p]\] is an isomorphism composed with multiplication by \(\nu-h\). By the Kodaira-Spencer isomorphism, \(\mathrm{KS}(\underline{\Omega}^{\otimes 2})\subset\mathrm{Hdg}\,\Omega^{1}_{\mathfrak{IG}_{1,r}/\mathbb{Z}_{p}}\). Composing \(\nabla\) with \(\mathrm{KS}^{-1}\), we obtain a morphism \(\nabla_{\nu}:\ \mathbb{W}^{\nu}\to\frac{1}{\mathrm{Hdg}^{3}}\mathbb{W}^{\nu+2}\) which induces the \(R\)-linear map \[\nabla_{\nu}:\ N^{\dagger,r,\nu}(R)\to N^{\dagger,r,\nu+2}(R)[1/p].\] When no confusion arises, we simply write \(\nabla\) for \(\nabla_{\nu}\). **Proposition A.4**.: _The following properties of \(\nabla\) hold:_ 1.
_For any_ \(h\in\mathbb{N}\)_,_ \(\nabla(N^{\dagger,r,\nu}_{h}(R))\subset N^{\dagger,r,\nu+2}_{h+1}(R)[1/p]\)_;_ 2. _When_ \(T_{\ell}\) _is defined,_ \(T_{\ell}\circ\nabla=\ell\nabla\circ T_{\ell}\)_;_ 3. \(U_{p}\circ\nabla=p\nabla\circ U_{p}\)_;_ 4. _For any finite order character_ \(\chi:\ \mathbb{Z}_{p}^{\times}\to R^{\times}\)_,_ \(\nabla_{\nu+2\chi}\circ\theta^{\chi}=\theta^{\chi}\circ\nabla_{\nu}\)_._
_Moreover, if the \(k\)-th power character is a specialization of \(\nu\), then the isomorphism_ \[[k]^{*}(\operatorname{Fil}_{k}\mathbb{W}^{\nu})\mid_{\mathcal{X}_{r}}\cong\operatorname{Sym}^{k}\mathbb{H}\mid_{\mathcal{X}_{r}}\] _identifies \([k]^{*}(\nabla)\) with the composition_ \[\operatorname{Sym}^{k}\mathbb{H}\xrightarrow{\nabla}\operatorname{Sym}^{k}\mathbb{H}\otimes\Omega_{\mathfrak{X}_{r}}\xrightarrow{\operatorname{KS}^{-1}}\operatorname{Sym}^{k}\mathbb{H}\otimes\underline{\omega}^{\otimes 2}\hookrightarrow\operatorname{Sym}^{k+2}\mathbb{H}.\] Proof.: Note that \(\nabla\) satisfies Griffiths transversality, so Item \((i)\) holds. Item \((ii)\) follows from the functoriality of the Gauss-Manin connection and the properties of the Kodaira-Spencer map. Item \((iii)\) and Item \((iv)\) can be proven using the Serre-Tate local coordinates; we refer to [13, Prop 2.4.15(ii)] (note the typos in _loc.cit_) and [13, Prop 2.4.17] for details. One can verify the identification locally, where it follows from the local description of the connection \(\nabla^{\sharp}\). Let \(\mu,\nu:\mathbb{Z}_{p}^{\times}\to R^{\times}\) be analytic characters and take any \(F\in M^{\dagger,r,\nu}(R)\) such that \(U_{p}(F)=0\). We remark that for any \(G\in M^{\dagger,r,\nu}(R)\), the \(p\)-depletion \(G^{[p]}:=G-V_{p}\circ U_{p}G\) satisfies \(U_{p}G^{[p]}=0\). **Proposition A.5**.: _Assume \(r\geq p+2\) and_ * _there exist_ \(a\in\mathbb{Z}\)_,_ \(\chi:\ (\mathbb{Z}/p\mathbb{Z})^{\times}\to R^{\times}\) _and_ \(v\in pR\) _such that_ \[\nu(t)=\chi^{2}(t)\exp((a+v)\log(t)),\quad\forall t\in\mathbb{Z}_{p}^{\times};\] * _there exist_ \(b\in\mathbb{Z}\)_,_ \(\epsilon:\ (\mathbb{Z}/p\mathbb{Z})^{\times}\to R^{\times}\) _and_ \(u\in p^{2}R\) _such that_ \[\mu(t)=\epsilon(t)\exp((b+u)\log(t)),\quad\forall t\in\mathbb{Z}_{p}^{\times}.\] _When \((1-1/2p)b(p,r)>3(p-1)p+r\), there exists an explicit element \(\nabla_{\nu}^{\mu}(F)\in N^{\dagger,b(p,r),\nu+2\mu}(R)[1/p]\) such that for any classical specializations \(k\) and \(\ell\) of \(\nu\) and \(\mu\),_ \[[k+2\ell]^{*}\nabla_{\nu}^{\mu}(F)=\nabla_{k}^{2\ell}([k]^{*}F).\] Proof.: The element \(\nabla_{\nu}^{\mu}(F)\) is constructed in [13, Theorem 2.5.1] by iteration of \(\nabla\), and the interpolation property is shown in [13, Prop 2.5.2].
By [13, Prop 2.2.19, 2.5.3], one finds \[\operatorname{Hdg}^{3(p-1)pN+rN}(\nabla^{p-1}-\operatorname{Id})^{Np}(F)\in p^{N}H^{0}(\mathfrak{X}_{r}\hat{\otimes}_{\mathbb{Z}_{p}}R,\sum_{i=0}^{(p-1)pN}\mathbb{W}^{\nu+2i}).\] Then one can apply the argument in the proof of [2, Thm 4.8] to show \(\nabla_{\nu}^{\mu}(F)\in N^{\dagger,b(p,r),\nu+2\mu}(R)[1/p]\), since for any positive integer \(h\) and positive integers \(j_{i}\), \(i=1,\cdots,h\), \[2h+\frac{N}{p}-\sum_{i}v_{p}(j_{i})-\frac{h}{p-1}\geq z\frac{N}{p},\quad z=1-\frac{1}{2p},\quad N=\sum_{i}j_{i}.\] Finally, we briefly explain how to extend these results to the levels \[\Gamma_{0}(p):=\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{GL}_{2}(\mathbb{Z}_{p})\mid c\equiv 0\text{ mod }p\};\] \[\Gamma_{1}(p):=\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\Gamma_{0}(p)\mid d\equiv 1\text{ mod }p\};\] \[\Gamma_{1}^{1}(p):=\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\Gamma_{1}(p)\mid a\equiv 1\text{ mod }p\}.\] For any open compact subgroup \(U_{p}\subset\operatorname{GL}_{2}(\mathbb{Z}_{p})\), let \(X(U_{p})\) be the Shimura curve of level \(U^{p}U_{p}\) associated to \(B\). For the \(\Gamma_{0}(p)\)-level, note that \[\mathcal{X}(\Gamma_{0}(p))_{r}=\mathcal{X}(\Gamma_{0}(p))_{r,c}\sqcup\mathcal{X}(\Gamma_{0}(p))_{r,a}\] where \(c\) (resp. \(a\)) means the given \(\Gamma_{0}(p)\)-level structure \(D\) coincides with the canonical subgroup (resp. intersects the canonical subgroup trivially). The map \(\mathrm{pr}:\ (A,D)\mapsto A\) induces an isomorphism \(\mathcal{X}(\Gamma_{0}(p))_{r,c}\cong\mathcal{X}_{r}\) and there exists an integral model \(\mathfrak{X}(\Gamma_{0}(p))_{r,c}\) of \(\mathcal{X}(\Gamma_{0}(p))_{r,c}\) so that \(\mathrm{pr}^{*}\underline{\omega}\) and \(\mathrm{pr}^{*}\mathbb{H}\) coincide with the sheaves defined by \(\mathcal{G}\) on \(\mathfrak{X}(\Gamma_{0}(p))_{r,c}\). Thus one can extend the results on \(U_{p}\), \(V_{p}\) and \(\nabla\) on \(\mathcal{X}_{r}\) to \(\mathfrak{X}(\Gamma_{0}(p))_{r,c}\). For \(\Gamma=\Gamma_{1}(p)\) or \(\Gamma_{1}^{1}(p)\) and \(*=a\) or \(c\), let \(\mathcal{X}(\Gamma)_{r,*}\) be the pre-image of \(\mathcal{X}(\Gamma_{0}(p))_{r,*}\) via the forgetful map. Then, up to enlarging the base field, \(\mathcal{X}(\Gamma_{1}^{1}(p))_{r,*}=\bigsqcup_{\mathbb{F}_{p}^{\times}}\mathcal{X}(\Gamma_{1}(p))_{r,*}\) and \[H^{0}(\mathcal{X}(\Gamma_{1}(p))_{r,*}\otimes R,\mathbb{W}^{\nu})=\bigoplus_{\chi\in\widehat{\mathbb{F}_{p}^{\times}}}H^{0}(\mathcal{X}(\Gamma_{0}(p))_{r,*}\otimes R,\mathbb{W}^{\nu\chi}).\] Thus all results extend to \(\mathcal{X}(\Gamma_{1}(p))_{r,c}\) and \(\mathcal{X}(\Gamma_{1}^{1}(p))_{r,c}\). ### \(p\)-adic Waldspurger formula for non-split \(p\) Fix an isomorphism \(\mathbb{C}\cong\mathbb{C}_{p}\), which determines an embedding \(\bar{\mathbb{Q}}(\subset\mathbb{C})\hookrightarrow\mathbb{C}_{p}\). Let \(\mathcal{K}(\subset\mathbb{C})\) be an imaginary quadratic field such that \(p\) is non-split in \(\mathcal{K}\). Denote its associated quadratic character by \(\chi_{\mathcal{K}}\). Let \(\theta=\frac{D^{\prime}+\delta}{2}\) where \(\delta=\sqrt{-D_{\mathcal{K}}}\) and \(D^{\prime}=\begin{cases}D_{\mathcal{K}},&2\nmid D_{\mathcal{K}}\\ \frac{D_{\mathcal{K}}}{2},&2\mid D_{\mathcal{K}}\end{cases}\).
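For concreteness, here are two worked instances of the above (our addition), under the convention suggested by \(\delta=\sqrt{-D_{\mathcal{K}}}\) that \(D_{\mathcal{K}}>0\) denotes the absolute value of the discriminant: \[\mathcal{K}=\mathbb{Q}(\sqrt{-7}):\ D_{\mathcal{K}}=7,\ D^{\prime}=7,\ \theta=\tfrac{7+\sqrt{-7}}{2};\qquad\mathcal{K}=\mathbb{Q}(\sqrt{-2}):\ D_{\mathcal{K}}=8,\ D^{\prime}=4,\ \theta=\tfrac{4+\sqrt{-8}}{2}=2+\sqrt{-2}.\] In both cases \(\theta\) is an algebraic integer with \(\mathcal{O}_{\mathcal{K}}=\mathbb{Z}+\mathbb{Z}\theta\).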
Let \(N_{\mathcal{K}}\) be the norm character \[N_{\mathcal{K}}:\;\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}\xrightarrow{N_{\mathcal{K}/\mathbb{Q}}}\mathbb{Q}^{\times}\backslash\mathbb{A}^{\times}\xrightarrow{|\cdot|}\mathbb{C}^{\times}.\] Let \(\pi(f_{\operatorname{GL}_{2}})\) be the unitary automorphic representation attached to an eigenform \(f_{\operatorname{GL}_{2}}\in S_{k}(\Gamma_{1}(Np))\) with \(p\nmid N\) and \(2\mid k\). Denote by \(\epsilon\) the central character of \(\pi(f_{\operatorname{GL}_{2}})\). Decompose \(N=N^{+}N^{-}\) such that \(N^{+}\) (resp. \(N^{-}\)) is only divisible by split (resp. non-split) primes. #### a.3.1 Explicit Waldspurger formulae Take a Hecke character \(\mu:\;\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}\to\mathbb{C}^{\times}\) of infinite type \((-k-j,j)\), \(j\geq 0\), such that * \(\mu_{-k/2}|_{\mathbb{A}^{\times}}\epsilon=1\), where for any \(m\in\mathbb{Z}\), \(\mu_{m}:=\mu N_{\mathcal{K}}^{-m}\); * the global root number \(\epsilon(1/2,\pi(f_{\operatorname{GL}_{2}}),\mu_{-k/2})=+1\). Let \(B\) be the quaternion algebra over \(\mathbb{Q}\) which ramifies exactly at the places \(v\) such that \[\epsilon(1/2,\pi(f_{\mathrm{GL}_{2}})_{v},\mu_{-k/2,v})=-\epsilon_{v}(-1)\chi_{\mathcal{K},v}(-1).\] Note that \(B\) is split at \(v\mid N^{+}\infty\). Assume moreover that \(\mu\) ramifies only at non-split primes and that \(B\) is split at \(p\). Choose an element \(J\in B\) such that * \(B=\mathcal{K}\oplus\mathcal{K}J\), * \(Jt=\bar{t}J\) for all \(t\in\mathcal{K}\), * \(J^{2}=\beta\in\mathbb{Q}^{\times}\) with \(\beta>0\) and \(\beta\in(\mathbb{Z}_{q}^{\times})^{2}\) for all \(q\mid pN^{+}\). For \(q\mid pN^{+}\), fix an isomorphism \(i_{q}:\ B_{q}\cong M_{2}(\mathbb{Q}_{q})\) such that * for \(q\mid N^{+}\), \[i_{q}(\theta)=\begin{pmatrix}T(\theta)&-N(\theta)\\ 1&0\end{pmatrix},\quad i_{q}(J)=\sqrt{\beta}\begin{pmatrix}-1&T(\theta)\\ 0&1\end{pmatrix};\] * for \(q=p\), \[i_{q}(\theta)=\begin{pmatrix}T(\theta)&-p^{-n}\\ p^{n}N(\theta)&0\end{pmatrix},\quad i_{q}(J)=\sqrt{\beta}\begin{pmatrix}-1&p^{-n}N(\theta)^{-1}T(\theta)\\ 0&1\end{pmatrix},\] where \(n:=\mathrm{Cond}(\mu_{p})\) is the conductor of \(\mu_{p}\). Fix a maximal order \(\mathcal{O}_{B}\subset B\) such that \(i_{q}(\mathcal{O}_{B}\otimes\mathbb{Z}_{q})=M_{2}(\mathbb{Z}_{q})\) for each \(q\mid pN^{+}\). Fix an idempotent \(e\) as in [15, P. 780] (or, more concretely, [5, P. 4183]). Note that there exists a prime-to-\(p\) integer \(d\in\mathbb{N}\) such that \[\mathcal{O}_{c}:=\mathbb{Z}+c\mathcal{O}_{K}=\mathcal{O}_{B}\cap K,\quad c=p^{n}d.\] The elliptic curve \(E:=\mathbb{C}/\mathcal{O}_{c}\) has CM by \(\mathcal{O}_{c}\) and is defined over a large enough number field \(F\). Without loss of generality, we assume \(E\) has good reduction at all places above \(p\) and extends to an elliptic curve over \(\mathcal{O}_{F,(p)}\). Let \(\omega_{E,\mathbb{C}}=dz\) be the standard invariant differential and \(\omega_{E}\) a fixed generator of \(\Omega^{1}_{E/\mathcal{O}_{F,(p)}}\). The _complex period_ is the scalar \(\Omega_{\infty}\in\mathbb{C}^{\times}\) such that \(\omega_{E}=\Omega_{\infty}2\pi i\,\omega_{E,\mathbb{C}}\). For any \(t\in\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}/\mathcal{K}_{\infty}^{\times}\), set \(E_{t}:=\mathbb{C}/\mathcal{O}_{c}t^{-1}\) and let \(\omega_{E_{t}}\in\Omega^{1}_{E_{t}/\mathcal{O}_{F,(p)}}\) be the form which pulls back to \(\omega_{E}\).
By Serre's tensor product construction ([9, Prop 1.7.4.5]), \(A_{t}:=\mathcal{O}_{B}\otimes_{\mathcal{O}_{c}}E_{t}\) is a false elliptic curve over \(F\) with CM by \(\mathcal{O}_{c}\). As explained in [14, Sect 4.5], \(\omega_{A_{t},\mathbb{C}}=e\otimes\omega_{E_{t},\mathbb{C}}\) is a basis of \(e\Omega^{1}_{A/\mathbb{C}}\) and \(\omega_{A_{t}}:=e\otimes\omega_{E}=\Omega_{\infty}2\pi i\,\omega_{A_{t},\mathbb{C}}\). Moreover, by [2, Lem 3.1 & 3.2], \(v_{p}(\mathrm{Hdg}(A_{t}))=\frac{1}{p^{n-1}(p+1)}\) (resp. \(\frac{1}{2p^{n}}\)) if \(p\) is inert (resp. ramified). Take any \(\tau\in\mathcal{H}\cap\mathcal{K}\) such that \(A\cong\frac{B\otimes\mathbb{R}}{\mathcal{O}_{B}\left[\begin{smallmatrix}\tau\\ 1\end{smallmatrix}\right]}\) and fix an isomorphism \(i_{\infty}:\ \mathbb{B}_{\infty}\cong M_{2}(\mathbb{R})\) such that \[i_{\infty}(\tau)=\begin{pmatrix}T(\tau)&-N(\tau)\\ 1&0\end{pmatrix},\quad i_{\infty}(J)=\sqrt{\beta}\begin{pmatrix}-1&T(\tau)\\ 0&1\end{pmatrix}.\] Fix a (small enough) open compact subgroup \(U^{p}\subset\mathbb{B}^{p\infty\times}\). Let \((A,i,\eta)\) represent the point \([\tau,1]\in X(\Gamma^{1}_{1}(p))(\bar{\mathbb{Q}})\). For any \(t\in\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}/\mathcal{K}_{\infty}^{\times}\), consider the quadruple \((A_{t},i,\eta_{t},\omega_{A_{t}})\), where \(\eta_{t}\) is a compatible level structure. Then by CM theory, \(\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}/\mathcal{K}_{\infty}^{\times}\) acts on the set of quadruples as above by the rule \[t\cdot(A,i,\eta,\omega_{A})=(A_{t},i,\eta_{t},\omega_{A_{t}}).\] Note that if \((A,i,\eta)\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,*}\) for \(r\in\mathbb{N}\) and \(*=a\) or \(c\), then so does \((A_{t},i,\eta_{t})\) for all \(t\in\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}/\mathcal{K}_{\infty}^{\times}\). The translation by \(\kappa_{p}:=\begin{pmatrix}0&1\\ p&0\end{pmatrix}\in\mathrm{GL}_{2}(\mathbb{Q}_{p})\) induces an isomorphism \(\mathcal{X}(\Gamma^{1}_{1}(p))_{r,a}\cong\mathcal{X}(\Gamma^{1}_{1}(p))_{pr,c}\). When \((A_{t},i,\eta_{t})\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,a}\), denote by \((A^{\prime}_{t}:=A_{t}/D_{t},i,\eta^{\prime}_{t})\in\mathcal{X}(\Gamma^{1}_{1}(p))_{pr,c}\) its image under the translation, and take \(\omega_{A^{\prime}_{t}}\in e\Omega^{1}_{A^{\prime}_{t}/\mathbb{C}}\) which pulls back to \(\omega_{A_{t}}\). Here \(D_{t}\subset A_{t}[p]\) is the \(\Gamma_{0}(p)\)-level structure of \((A_{t},i,\eta_{t})\). Let \(\pi\) be the (unitary) automorphic \(B_{\mathbb{A}}^{\times}\)-representation whose Jacquet-Langlands correspondence is \(\pi(f_{\mathrm{GL}_{2}})\). Let \(\Sigma\) consist of the places dividing \(Np\infty\) and the ramified places of \(\mu\). Take \(\varsigma=\prod_{q|N^{+}}\varsigma_{q}\) where, for \(q=\mathfrak{q}\bar{\mathfrak{q}}\), \(\varsigma_{q}=\delta^{-1}\begin{pmatrix}\theta&\bar{\theta}\\ 1&1\end{pmatrix}\in\mathrm{GL}_{2}(K_{\mathfrak{q}})=\mathrm{GL}_{2}(\mathbb{Q}_{q})\). Set \(\varphi=\otimes_{v}\varphi_{v}\in\pi\) such that * \(\varphi_{v}\) is the \(\mu_{-k/2,v}^{-1}\)-eigenvector at non-split \(v\in\Sigma\), \(v\neq p\); * \(\varphi_{v}\) is the new vector at the other places. By [15, Lemma 2.2.4], the vector bundles \(\underline{\omega}\) and \(\mathbb{H}\) are all defined over \(X(\Gamma^{1}_{1}(p))\) (up to base change to a certain real quadratic field in which \(p\) splits).
Let \(\delta_{k}^{j}=\mathrm{Sp}\circ\nabla_{k}^{j}\), where \(\mathrm{Sp}:\ \mathrm{Sym}^{k+2j}\mathbb{H}\to\underline{\omega}^{k+2j}\) is the Hodge splitting of real analytic sheaves. Then there exists \(f\in H^{0}(X(\Gamma_{1}(p)),\underline{\omega}^{k})\) such that \(\pi(\varsigma)\varphi\) is the automorphic form attached to \(\delta_{k}^{j}f\) via the dictionary between modular forms and automorphic forms with respect to the base point \(\tau\) (see [15, Sect. 2.4] or [5, Sect. 3]). **Definition A.6**.: For any open compact subgroup \(V\subset\hat{\mathcal{O}}_{K}^{\times}\), set \(H_{V}:=\mathcal{K}^{\times}\mathbb{A}^{\times}V\backslash\mathbb{A}_{\mathcal{K}}^{\times}\). Define \(L_{\mathrm{alg}}(f,\mu)\) to be \[\frac{1}{\sharp H_{V}}\sum_{t\in H_{V}}\mu_{j}(t)\begin{cases}(\delta_{k}^{j}f)(t\cdot(A,i,\eta,\omega_{A}))&\text{ if }\pi_{p}\text{ is spherical or }(A_{t},i,\eta_{t})\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,c};\\ (\delta_{k}^{j}f)(t\cdot(A^{\prime},i,\eta^{\prime},\omega_{A^{\prime}}))&\text{ if }\pi_{p}\text{ is ramified and }(A_{t},i,\eta_{t})\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,a}.\end{cases}\] Now we relate \(L_{\mathrm{alg}}(f,\mu)\) to the Waldspurger period integral \[P_{\mu}(\phi)=\int_{\mathcal{K}^{\times}\mathbb{A}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}}\phi(t)\mu_{-k/2}(t)dt,\quad\phi\in\pi,\] where the Haar measure has total volume \(\mathrm{Vol}(\mathcal{K}^{\times}\mathbb{A}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times})=2L(1,\chi_{\mathcal{K}})\). Assume \([\tau,1]\in X(\Gamma^{1}_{1}(p))(\bar{\mathbb{Q}})\) is also represented by \((\frac{B\otimes\mathbb{R}}{\mathcal{O}_{B}\left[\begin{smallmatrix}\theta\\ 1\end{smallmatrix}\right]},i,\tilde{\eta})\), where \(\tilde{\eta}\) is a certain level structure. Then there exists a unique scalar \(\Lambda_{\tau}\in K^{\times}\) such that the scalar multiplication by \(\Lambda_{\tau}\) induces an isomorphism \(\frac{B\otimes\mathbb{R}}{\mathcal{O}_{B}\left[\begin{smallmatrix}\theta\\ 1\end{smallmatrix}\right]}\overset{\cdot\Lambda_{\tau}}{\longrightarrow}A\) and identifies \(\tilde{\eta}\) with \(\eta\). Then by the argument in [4, Prop. 4.13], one has **Lemma A.7**.: _Let \(\phi=\pi(\varsigma)\varphi\) (resp. \(\pi(\kappa_{p})\pi(\varsigma)\varphi\)) if \(\pi_{p}\) is spherical or \((A,i,\eta)\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,c}\) (resp. if \(\pi_{p}\) is ramified and \((A,i,\eta)\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,a}\))._
_Then_ \[L_{\mathrm{alg}}(f,\mu)=\frac{(2\pi i)^{k+2j}}{2L(1,\chi_{\mathcal{K}})\Omega^{k+2j}_{\infty}\Lambda^{k+2j}_{\tau}}P_{\mu}(\phi).\] For any \(\varphi_{1,v},\varphi_{2,v}\in\pi_{v}\), set \[P(\varphi_{1,v},\varphi_{2,v},\mu_{v})=\frac{L(1,\chi_{\mathcal{K},v})L(1,\pi_{v},\mathrm{Ad})}{L(1/2,\pi_{v},\mu_{-k/2,v})\zeta_{v}(2)}\int_{\mathbb{Q}_{v}^{\times}\backslash\mathcal{K}_{v}^{\times}}(\pi_{v}(t)\varphi_{1,v},\pi_{v}(J)\varphi_{2,v})\mu_{-k/2,v}(t)dt.\] Here \((-,-)=(-,-\otimes\epsilon^{-1})_{\mathrm{Pet}}=\prod_{v}(-,-)_{v}\) is the Petersson inner product \[(-,-)_{\mathrm{Pet}}:\pi\times\tilde{\pi}\to\mathbb{C},\quad(\varphi,\phi)\mapsto\int_{B^{\times}\mathbb{A}^{\times}\backslash B_{\mathbb{A}}^{\times}}\varphi(g)\phi(g)dg\] with respect to the measure with total volume \(\mathrm{Vol}(B^{\times}\mathbb{A}^{\times}\backslash B_{\mathbb{A}}^{\times})=2\), composed with the obvious map \[\pi\to\tilde{\pi}=\pi\otimes\epsilon^{-1},\quad\varphi\mapsto\varphi\otimes\epsilon^{-1}.\] Set \(\kappa=\prod_{v}\kappa_{v}\) where * \(\kappa_{v}=J\) for non-split \(v\in\Sigma\), \(v\neq p\); * \(\kappa_{v}=\begin{pmatrix}0&1\\ q_{v}^{\mathrm{Cond}(\pi_{v})}&0\end{pmatrix}\) for the other places. Note that if \(\varphi_{v}\) is new (resp. \(\mu_{-k/2,v}^{-1}\)-eigen), then \(\pi(\kappa_{v})\varphi_{v}\otimes\epsilon_{v}^{-1}\) is also new (resp. \(\mu_{-k/2,v}\)-eigen). Then for the chosen \(\varphi\in\pi\), \((\varphi,\pi(\kappa)\varphi)\neq 0\), as there exists a constant \(C\in\mathbb{C}^{\times}\) such that \[\pi(\kappa)\varphi\otimes\epsilon^{-1}=C\bar{\varphi}.\] For \(\phi\in\pi\) as in Lemma A.7, one has * for \(q=\mathfrak{q}\bar{\mathfrak{q}}\mid N^{+}\), computations similar to those in [10, Sect 3.6] imply that \[C_{q}(\pi,\mu):=\frac{P(\phi_{q},\phi_{q},\mu_{q})}{(\varphi_{q},\pi(\kappa_{q})\varphi_{q})}\] is \(\mu_{q}(\mathfrak{q})\) times a non-zero constant independent of \(\mu_{q}\); * for finite places outside \(\Sigma\), \(C_{v}(\pi,\mu)=1\); * for non-split \(v\in\Sigma\), \(v\neq p\), \(C_{v}(\pi,\mu)=\mathrm{Vol}(\mathcal{K}_{v}^{\times}/\mathbb{Q}_{v}^{\times})\frac{L(1,\chi_{\mathcal{K},v})L(1,\pi_{v},\mathrm{Ad})}{L(1/2,\pi_{v},\mu_{-k/2,v})\zeta_{v}(2)}\); * for \(v=p\), the \(\chi_{p}^{-1}\)-component of \(\phi_{p}\) is non-zero by [11, Prop 3.7] and hence \(C_{p}(\pi,\mu)\neq 0\). Take \(f_{0}\in H^{0}(X,\underline{\omega}^{k})\) such that \(\varphi\) is the automorphic form attached to \(\delta_{k}^{j}f_{0}\) via the dictionary between modular forms and automorphic forms with respect to the base point \(\tau\), and let \(\varphi_{0}\) be the automorphic form attached to \(f_{0}\) with base point \(i\).
Then by [15, Lem 3.4.6], \[(\varphi,\pi(\kappa)\varphi)=\frac{\Gamma(j+1)\Gamma(k+j)}{\Gamma(k)(4\pi)^{2j}\mathrm{Im}(\tau)^{k+2j}}(\varphi_{0},C\bar{\varphi}_{0})_{\mathrm{Pet}}.\] (A.1) Combining all these discussions, one has the following: **Proposition A.8**.: _Let \(\ell:=k+2j\) and set_ \[C(f,\mu):=2^{k-1}\pi^{\ell-1}\Gamma(j+1)\Gamma(k+j)C\prod_{v<\infty}C_{v}(\pi,\mu).\] _Then_ \[L^{2}_{\mathrm{alg}}(f,\mu)=\frac{\zeta(2)(\varphi_{0},\bar{\varphi}_{0})_{\mathrm{Pet}}}{L^{2}(1,\chi_{\mathcal{K}})L(1,\pi,\mathrm{Ad})}\frac{C(f,\mu)L(0,f,\mu)}{\Omega_{\infty}^{2\ell}(\Lambda_{\tau}^{2}\mathrm{Im}(\tau))^{\ell}}.\] Proof.: Note that \[P_{\mu}(\phi)=P_{\mu^{-1}}(\pi(J)\phi\otimes\epsilon^{-1}):=\int_{\mathcal{K}^{\times}\mathbb{A}^{\times}\backslash\mathbb{A}^{\times}_{\mathcal{K}}}\pi(J)\phi(t)\epsilon^{-1}\circ N_{\mathcal{K}}(t)\mu_{-k/2}^{-1}(t)dt.\] Then by the Waldspurger formula, \[P_{\mu}^{2}(\phi)=\frac{\zeta(2)L(1/2,\pi,\mu_{-k/2})}{2L(1,\pi,\mathrm{Ad})}\prod_{v}P(\phi_{v},\phi_{v},\mu_{v}),\quad\phi=\otimes_{v}^{\prime}\phi_{v}.\] Now the desired result follows from Lemma A.7 and Equation (A.1). Note that \(\mathrm{Im}(\tau)|\Lambda_{\tau}|^{2}=\mathrm{Vol}(\mathcal{O}_{c})\). #### a.3.2 The \(p\)-adic L-function and the \(p\)-adic Waldspurger formulae Denote by \(\Sigma^{(2)}(\mu)\) (resp. \(\Sigma^{(1)}(\mu)\)) the set of \(p\)-adic avatars of Hecke characters \(\chi\) on \(\mathcal{K}\) such that * \(\chi\) has infinite type \((-k-j,j)\) with \(j\geq 0\) (resp. \(-k+1\leq j\leq-1\)); * \(\chi_{-k/2}|_{\mathbb{A}^{\times}}\epsilon=1\) and \(\chi|_{\hat{\mathcal{O}}^{\times}_{\mathcal{K}}}=\mu|_{\hat{\mathcal{O}}^{\times}_{\mathcal{K}}}\). Let \(\mathbb{A}^{\infty,\times,\prime}_{\mathcal{K}}\subset\mathbb{A}^{\infty,\times}_{\mathcal{K}}\) be the subgroup of prime-to-\(p\) finite ideles and let \(\mathcal{F}(\mathbb{A}^{\infty,\times,\prime}_{\mathcal{K}},\mathcal{O}_{\mathbb{C}_{p}})\) be the set of \(\mathcal{O}_{\mathbb{C}_{p}}\)-valued functions on \(\mathbb{A}^{\infty,\times,\prime}_{\mathcal{K}}\). By restriction to \(\mathbb{A}^{\infty,\times,\prime}_{\mathcal{K}}\), one has \(\Sigma^{(1)}(\mu)\sqcup\Sigma^{(2)}(\mu)\subset\mathcal{F}(\mathbb{A}^{\infty,\times,\prime}_{\mathcal{K}},\mathcal{O}_{\mathbb{C}_{p}})\). Equip \(\mathcal{F}(\mathbb{A}^{\infty,\times,\prime}_{\mathcal{K}},\mathcal{O}_{\mathbb{C}_{p}})\) with the compact open topology and let \(\hat{\Sigma}(\mu)\) be the completion of \(\Sigma^{(2)}(\mu)\) with respect to this topology. Note that \(\Sigma^{(1)}(\mu)\subset\hat{\Sigma}(\mu)\) (see [4, P. 1137]). Denote by \(w\) the composition of * the map \(\Sigma^{(2)}(\mu)\to\mathbb{Z}\) which sends a character \(\chi\) of infinity type \((-k-j,j)\) to \(j\), and * the embedding \[\mathbb{Z}\hookrightarrow\varprojlim_{n}\mathbb{Z}/(p-1)p^{n}\mathbb{Z}\cong\mathbb{Z}/(p-1)\mathbb{Z}\times\mathbb{Z}_{p}.\] **Lemma A.9**.: _The map \(w\) extends continuously to a local homeomorphism_ \[w:\ \hat{\Sigma}(\mu)\to\mathbb{Z}/(p-1)\mathbb{Z}\times\mathbb{Z}_{p}.\] _Consequently, one can uniquely lift \(\hat{\Sigma}(\mu)\) to an analytic space so that \(w\) is locally analytic._
Proof.: For any \(\chi\in\Sigma^{(2)}(\mu)\), let \[U(\chi,M)=\{g\in\mathcal{F}(\mathbb{A}_{\mathcal{K}}^{\infty,\times,\prime},\mathcal{O}_{\mathbb{C}_{p}})\mid\forall h\in\hat{\mathcal{O}}_{\mathcal{K}}^{\times},\ g(h)\equiv\chi(h)\ \mathrm{mod}\ p^{M}\}.\] Then for \(M\geq 1\), \(\chi^{\prime}\in U(\chi,M)\) if and only if \(w(\chi^{\prime})\equiv w(\chi)\ \mathrm{mod}\ (p-1)p^{M-1}\). This implies that \(w\) extends continuously to \(\hat{\Sigma}(\mu)\). Let \(h\) be the class number of \(K\). For any integer \(m\) divisible by \(2h\), consider the Hecke character \(\psi_{m}=\psi_{m}^{\prime}((-)^{h}):\ \mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\infty,\times}\to\mathbb{C}_{p}^{\times}\) where \[\psi_{m}^{\prime}:\ \mathcal{K}^{\times}\backslash\mathcal{K}^{\times}\hat{\mathcal{O}}_{\mathcal{K}}^{\times}\to\mathbb{C}_{p}^{\times};\quad u\in\hat{\mathcal{O}}_{\mathcal{K}}^{\times}\mapsto u_{p}^{m/h}\bar{u}_{p}^{-m/h}.\] Then the map \(m\in 2h\mathbb{Z}\mapsto\chi\psi_{m}\) extends to a local section of \(w\), and hence \(w\) is a local homeomorphism. From now on, assume \(\pi(f_{\mathrm{GL}_{2}})_{p}\) is a _principal series representation_ and \(n\geq 3\). Let \(\hat{\Sigma}(\mu,B)\) be the connected component of \(\hat{\Sigma}(\mu)\) passing through \(\mu\). By the local constancy of epsilon factors away from \(p\infty\) (see [12, Cor. 5.3.3]), \(\epsilon(1/2,\pi(f_{\mathrm{GL}_{2}}),\chi_{-k/2})=(-1)^{i}\) for any \(\chi\in\hat{\Sigma}(\mu,B)\cap\Sigma^{(i)}(\mu)\), \(i=1,2\), and the quaternion algebra determined by the pair \((\pi(f_{\mathrm{GL}_{2}}),\chi)\) for all \(\chi\in\hat{\Sigma}(\mu,B)\cap\Sigma^{(2)}(\mu)\) is \(B\). Let \(L/\mathbb{Q}_{p}\) be a large enough finite field extension. To construct the \(p\)-adic \(L\)-function, fix * an elliptic curve \(\tilde{E}\) over \(\mathcal{O}_{L}\) with CM by \(\mathcal{O}_{d}\); * a subgroup \(H\subset\tilde{E}[p^{n}]\) which is generically cyclic of order \(p^{n}\) such that \(E=\tilde{E}/H\) and, when \(p\) is ramified in \(\mathcal{K}\), say \(p\mathcal{O}_{\mathcal{K}}=\mathfrak{p}^{2}\), \(H\) intersects the canonical subgroup \(\tilde{E}[\mathfrak{p}]\) trivially. Let \(\tilde{\tilde{E}}:=\tilde{E}/H[p]\) and denote the quotient isogeny \(\tilde{\tilde{E}}\to E\) by \(\lambda_{n}\). Choose an invariant differential form \(\omega_{E,p}\in\Omega^{1}_{E/\mathcal{O}_{L}}\) such that \(\tilde{\tilde{\omega}}:=\lambda_{n}^{*}(\omega_{E,p})\) generates \(\underline{\Omega}_{\tilde{\tilde{E}}/\mathcal{O}_{L}}\). The _\(p\)-adic period_ \(\Omega_{p}\in\mathbb{C}_{p}^{\times}\) is the scalar such that \(\omega_{E,p}=\Omega_{p}\omega_{E}\). By [2, Prop. 5.9 & 6.6], \[v_{p}(\Omega_{p})=\begin{cases}\frac{1}{p^{n-1}(p^{2}-1)}&\text{$p$ inert}\\ \frac{1}{2p^{n}(p-1)}&\text{$p$ ramified}\end{cases}\] Let \(\tilde{A}:=\mathcal{O}_{B}\otimes_{\mathcal{O}_{c}}\tilde{E}\) and \(D:=\mathcal{O}_{B}\otimes_{\mathcal{O}_{c}}H\) be the objects obtained by Serre's tensor product construction over \(L\). Note that \(A=\tilde{A}/D\). Set \(\tilde{\tilde{A}}:=\tilde{A}/D[p]\) and denote the quotient false isogeny \(\tilde{\tilde{A}}\to A\) again by \(\lambda_{n}\). Assume \(L\) is large enough such that \(\tilde{A}\), \(\tilde{\tilde{A}}\) and \(A\) all extend to \(\mathcal{O}_{L}\). Then \(\omega_{A,p}:=e\otimes\omega_{E,p}\) is a generator of \(e\underline{\Omega}_{A/\mathcal{O}_{L}}\). By the discussions in [2, Sect.
5.2 & 6.2], for any character \(\nu:\ \mathbb{Z}_{p}^{\times}\to\mathbb{Q}_{p}^{\times}\), * \(e\otimes\tilde{\tilde{\omega}}\) induces an isomorphism \(\nu_{\tilde{\omega}}:\ \mathfrak{m}_{\tilde{A}/\mathcal{O}_{L}}^{k+2\nu}\cong\mathcal{O}_{L}\); * there is a canonical \(\mathcal{O}_{dp}\)-equivariant projection \[\Psi_{\tilde{A}}\circ(\lambda_{n})^{*}:\ \mathbb{W}_{k+2\nu,\tilde{\tilde{A}}/\mathcal{O}_{L}}\to\mathfrak{m}_{\tilde{\tilde{A}}/\mathcal{O}_{L}}^{k+2\nu}.\] When \((A,i,\eta)\in X(\Gamma^{1}_{1}(p))_{r,c}\), \(\delta_{k}^{\nu}f^{[p]}\) can be evaluated at \((A,i,\eta,\omega_{A,p})\) by the formula \[\delta_{p,k}^{\nu}(f^{[p]})(A,i,\eta,\omega_{A,p}):=\nu_{\tilde{\omega}}\circ\Psi_{\tilde{A}}\circ\lambda_{n}^{*}(\nabla_{k}^{\nu}f^{[p]})(A,i,\eta).\] When \((A,i,\eta)\in X(\Gamma^{1}_{1}(p))_{r,a}\), take \(\omega_{A^{\prime},p}\in e\underline{\Omega}_{A^{\prime}/\mathcal{O}_{L}}\) which pulls back to \(\omega_{A,p}\), and one can define \(\delta^{\nu}_{p,k}(f^{[p]})(A^{\prime},i,\eta^{\prime},\omega_{A^{\prime},p})\) similarly. Moreover, for each \(t\in H_{V}\), one can define \(\delta^{\nu}_{p,k}(f^{[p]})(t\cdot(A,i,\eta,\omega_{A,p}))\) and \(\delta^{\nu}_{p,k}(f^{[p]})(t\cdot(A^{\prime},i,\eta^{\prime},\omega_{A^{\prime},p}))\). **Definition A.10**.: For any \(\chi\in\hat{\Sigma}(\mu,B)\cap\Sigma^{(2)}(\mu)\) with \(w(\chi)=\nu\), set \(L_{p}(f,\chi)\) to be \[\frac{1}{\sharp H_{V}}\sum_{t\in H_{V}}\chi_{\nu}(t)\begin{cases}(\delta^{\nu}_{p,k}f^{[p]})(t\cdot(A,i,\eta,\omega_{A,p})),&\text{ if }\pi_{p}\text{ is spherical or }(A_{t},i,\eta_{t})\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,c};\\ (\delta^{\nu}_{p,k}f^{[p]})(t\cdot(A^{\prime},i,\eta^{\prime},\omega_{A^{\prime},p})),&\text{ if }\pi_{p}\text{ is ramified and }(A_{t},i,\eta_{t})\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,a}.\end{cases}\] Carrying over the argument in [2, Prop 5.6], one deduces **Proposition A.11**.: _The map \(\chi\mapsto L_{p}(f,\chi)\) extends to a locally analytic function on \(\hat{\Sigma}(\mu,B)\)._ **Proposition A.12**.: _For \(\chi\in\hat{\Sigma}(\mu,B)\cap\Sigma^{(2)}(\mu)\) with \(w(\chi)=j\),_ \[L_{p}(f,\chi)=\frac{L_{\mathrm{alg}}(f,\chi)}{\Omega_{p}^{k+2j}}.\] Proof.: The form \(f^{[p]}\) is actually a classical quaternionic modular form of \(p\)-level \(\Gamma^{1}_{1}(p^{2})\). Since the Hodge splitting of a CM false elliptic curve coincides with the CM splitting, one deduces that \(L_{p}(f,\chi)\) equals \[\frac{1}{\Omega_{p}^{k+2j}\sharp H_{V}}\sum_{t\in H_{V}}\chi_{\nu}(t)\begin{cases}(\delta^{\nu}_{p,k}f^{[p]})(t\cdot(A,i,\eta,\omega_{A})),&\text{ if }\pi_{p}\text{ is spherical or }(A,i,\eta)\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,c};\\ (\delta^{\nu}_{p,k}f^{[p]})(t\cdot(A^{\prime},i,\eta^{\prime},\omega_{A^{\prime}})),&\text{ if }\pi_{p}\text{ is ramified and }(A,i,\eta)\in\mathcal{X}(\Gamma^{1}_{1}(p))_{r,a}.\end{cases}\] As \(f\) is new at \(p\), * \(f^{[p]}=(1-a_{p}V+p^{k-1}\epsilon(p)V^{2})f\) if \(\pi_{p}\) is spherical with \(T_{p}f=a_{p}f\); * \(f^{[p]}=(1-a_{p}V)f\) if \(\pi_{p}\) is ramified with \(U_{p}f=a_{p}f\). Thus by the Waldspurger formula and Lemma A.7, it suffices to show \[P(\pi(\begin{pmatrix}p^{-i}&0\\ 0&1\end{pmatrix})\phi_{p},\phi_{p},\chi_{p})=0,\quad 0<i\leq n.\] This is straightforward, as \(\phi_{p}\) is fixed by \((\mathcal{O}_{dp^{n-i}}\otimes\mathbb{Z}_{p})^{\times}/\mathbb{Z}_{p}^{\times}\). To state the \(p\)-adic Waldspurger formula, we now introduce more notation.
Let \(E_{0}\) be an elliptic curve with full CM by \(\mathcal{O}_{K}\) together with a cyclic isogeny \(\varphi_{0}:\ E_{0}\to E\) of degree \(c\). Let \(\varphi^{\prime}_{0}\) be the composition of \(\varphi_{0}\) with the natural isogeny \(E\to E^{\prime}\). For any \(t\in\mathcal{K}^{\times}\backslash\mathbb{A}_{\mathcal{K}}^{\times}/\mathcal{K}_{\infty}^{\times}\), let \(\varphi_{t}:\ E_{0}\to E_{t}\) be the composition of \(\varphi_{0}\) with the natural isogeny \(E\to E_{t}\), and define \(\varphi^{\prime}_{t}:\ E_{0}\to E^{\prime}_{t}\) similarly. Let \(A_{0}:=\mathcal{O}_{B}\otimes_{\mathcal{O}_{c}}E_{0}\) and let \(\varphi_{t}:\ A_{0}\to A_{t}\) (resp. \(\varphi^{\prime}_{t}:\ A_{0}\to A^{\prime}_{t}\)) be the induced isogeny of false elliptic curves. Let \(\Gamma^{t}_{t}\) (resp. \(\Gamma^{\prime,t}_{t}\)) be the transpose of the graph of \(\varphi_{t}\) (resp. \(\varphi^{\prime}_{t}\)). Let \(\omega_{A_{0}}=\varphi_{0}^{*}\omega_{A}\) and let \(\eta_{A_{0}}\in\mathbb{H}_{A_{0}}\) be such that \(\langle\omega_{A_{0}},\eta_{A_{0}}\rangle=1\) via the Poincare pairing. For any \(r>0\), let \(W_{r}\) be the Kuga-Sato variety over \(C:=X(\Gamma^{1}_{1}(p))\) obtained as the canonical desingularization of the \(r\)-fold self-product of the universal false elliptic curve, and let \(X_{r}=W_{r}\times A^{r}_{0}\). By [5], there is a projector \(\epsilon\in\mathrm{Corr}_{C}(W_{r},W_{r})\) in the ring of algebraic correspondences on \(W_{r}\) fibred over \(C\) such that \[\epsilon H^{*}_{\mathrm{dR}}(W_{r})=H^{1}(C,\mathcal{L}_{2r,2r},\nabla)\subset H^{4r+2}_{\mathrm{dR}}(W_{r}),\quad\mathcal{L}_{2r,2r}=\mathrm{Sym}^{2r}\mathbb{H}\otimes\mathrm{Sym}^{2r}\mathbb{H}_{A_{0}}.\] The codimension-\((2r+1)\) cycle \(\Delta_{t}:=\epsilon\Upsilon_{t}\) (resp. \(\Delta_{t}^{\prime}:=\epsilon\Upsilon_{t}^{\prime}\)), where \(\Upsilon_{t}:=(\Gamma_{t}^{t})^{r}\) (resp. \(\Upsilon_{t}^{\prime}:=(\Gamma_{t}^{\prime,t})^{r}\)) in \(X_{r}\), is cohomologically trivial. Let \(\mathrm{AJ}_{p}\) be the \(p\)-adic Abel-Jacobi map [5, Sect. 7]. For \(r=0\), let \(\iota:\ C\to J:=J_{C}\) be the quasi-embedding from \(C\) to its Jacobian \(J\) induced by the Hodge class (see [17, Sect 3]). Any differential form \(\omega\in H^{0}(C,\Omega_{C}^{1})\) can be seen as an invariant differential on \(J\) via \(\iota\). For any \(x\in C\), set \[\log_{\omega}(x):=\langle\log_{J}\iota(x),\omega\rangle.\] Let \(\omega_{f}\) be the differential form attached to \(f\). **Proposition A.13**.: _Assume \(k=2r+2\) and take \(\chi\in\Sigma^{(1)}(\mu)\cap\hat{\Sigma}(\mu,B)\) of infinite type \((-k+1+j,-j-1)\), \(0\leq j\leq 2r\). Then_ * _if_ \(\pi_{p}\) _is spherical or_ \((A,i,\eta)\in\mathcal{X}(\Gamma_{1}^{1}(p))_{r,c}\)_,_ \[L_{p}(f,\chi)=\frac{\Omega_{p}^{2r-2j}c^{-j}}{j!\sharp H_{V}}\sum_{t\in H_{V}}\chi_{-1-j}(t)\begin{cases}\log_{\omega_{f}}(t\cdot(A,i,\eta)),&r=0;\\ \mathrm{AJ}_{p}(\Delta_{t})(\omega_{f}\wedge\omega_{A_{0}}^{j}\eta_{A_{0}}^{2r-j}),&r>0;\end{cases}\] (A.2) * _if_ \(\pi_{p}\) _is ramified and_ \((A,i,\eta)\in\mathcal{X}(\Gamma_{1}^{1}(p))_{r,a}\)_,_ \[L_{p}(f,\chi)=\frac{\Omega_{p}^{2r-2j}(cp)^{-j}}{j!\sharp H_{V}}\sum_{t\in H_{V}}\chi_{-1-j}(t)\begin{cases}\log_{\omega_{f}}(t\cdot(A^{\prime},i,\eta^{\prime})),&r=0;\\ \mathrm{AJ}_{p}(\Delta_{t}^{\prime})(\omega_{f}\wedge\omega_{A_{0}}^{j}\eta_{A_{0}}^{2r-j}),&r>0.\end{cases}\] (A.3) Proof.: As for the interpolation formula, one finds that the right hand sides of (A.2) and (A.3) are unchanged if \(\omega_{f}\) is replaced by \(\omega_{f^{[p]}}\).
Note that over \(\mathbb{Q}(\mu_{p})\), \(C=X(\Gamma_{1}^{1}(p))\) is a disjoint union of \(\varphi(p)\) copies of \(X(\Gamma_{1}(p))\). By the results on integral models of \(X(\Gamma_{1}(p))\) in [6, Sect. 4], the Serre-Tate expansion computation in [13, Sect 2.4.3] and [5, Thm 7.3], one can adapt the arguments in [2, Prop 7.6, 7.8] to show that * the \(p\)-depletion \(G^{[p]}\) of the Coleman primitive \(G\) of \(f\) is rigid analytic on \(\mathcal{X}(\Gamma_{1}^{1}(p))_{r,c}\); * defining \(G^{[p]}_{j}(t\cdot(A,i,\eta,\omega_{A}))\) by the CM decomposition \[G^{[p]}(t\cdot(A,i,\eta,\omega_{A}))=\sum_{j=0}^{2r}(-1)^{j}G^{[p]}_{j}(t\cdot(A,i,\eta,\omega_{A}))\omega_{A_{t}}^{2r-j}\eta_{A_{t}}^{j},\] one has \[\delta_{p,k}^{-1-j}(f^{[p]})(t\cdot(A,i,\eta,\omega_{A}))=\frac{\Omega_{p}^{2r-2j}}{j!}G_{j}^{[p]}(t\cdot(A,i,\eta,\omega_{A})).\] Similar results hold in the ramified case. When \(r=0\), \(\log_{\omega_{f}}\) is the Coleman primitive of \(f\) by [3] and, as in [2, Prop 7.6(ii)], one has \(\nabla\log_{\omega_{f^{[p]}}}=f^{[p]}\). On the other hand, adapting the argument in [8, Sect 2], one finds \[\mathrm{AJ}_{p}(\Delta_{t})(\omega_{f}\wedge\omega_{A_{0}}^{j}\eta_{A_{0}}^{2r-j})=c^{j}N(t)^{-j}G_{j}(t\cdot(A,i,\eta,\omega_{A}));\] \[\mathrm{AJ}_{p}(\Delta_{t}^{\prime})(\omega_{f}\wedge\omega_{A_{0}}^{j}\eta_{A_{0}}^{2r-j})=(cp)^{j}N(t)^{-j}G_{j}(t\cdot(A^{\prime},i,\eta^{\prime},\omega_{A^{\prime}})).\] Combining all these ingredients, the desired results follow.
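As a closing illustration (ours, not part of the text): the \(p\)-depletion and the \(p\)-adic interpolation of differential operators used above (cf. Propositions A.5 and A.12) have a classical elliptic-modular toy model. On \(q\)-expansions, the Atkin-Serre operator \(\theta=q\frac{d}{dq}\) sends \(a_{n}\) to \(n\,a_{n}\), the \(p\)-depletion kills the coefficients \(a_{n}\) with \(p\mid n\), and on a depleted expansion the powers \(\theta^{k}\) interpolate \(p\)-adically in \(k\). The sketch below iterates \(\theta\) for integer exponents converging \(p\)-adically to \(s=-1\) and watches the coefficients stabilize; all numerical choices are illustrative.

```python
p, M = 5, 6
pM = p ** M
N = 30                                   # truncation of the q-expansion

a = {n: n + 1 for n in range(1, N + 1)}  # toy coefficients a_1..a_N (illustrative)

def deplete(coeffs):
    """p-depletion: remove the coefficients a_n with p | n."""
    return {n: c for n, c in coeffs.items() if n % p != 0}

def theta_power(coeffs, k):
    """theta^k on q-expansions: a_n -> n^k a_n, computed mod p^M."""
    return {n: pow(n, k, pM) * c % pM for n, c in coeffs.items()}

ad = deplete(a)

# the integers s_m = (p-1)p^m - 1 converge p-adically to s = -1
for m in (1, 2, 3, 4, 5):
    b = theta_power(ad, (p - 1) * p ** m - 1)
    print(m, b[1], b[2], b[3])           # the columns stabilize p-adically

# in the limit, theta^{-1} multiplies a_n by n^{-1}, possible since p does not divide n
b = theta_power(ad, (p - 1) * p ** 6 - 1)
assert b[2] == pow(2, -1, pM) * ad[2] % pM
```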
2302.04074
Fine Polyhedral Adjunction Theory
Originally introduced by Fine and Reid in the study of plurigenera of toric hypersurfaces, the Fine interior of a lattice polytope has recently come into the focus of research. It has been used for constructing canonical models in the sense of Mori Theory [arXiv:2008.05814]. Based on the Fine interior, we propose here a modification of the original adjoint polytopes as defined in [arXiv:1105.2415], by defining the Fine adjoint polytope $P^{F(s)}$ of $P$ as consisting of the points in $P$ that have lattice distance at least $s$ to all valid inequalities for $P$. We obtain a Fine Polyhedral Adjunction Theory that is, in many respects, better behaved than its original analogue. Many existing results in Polyhedral Adjunction Theory carry over, some with stronger conclusions, such as decomposing polytopes into Cayley sums, and most with simpler, more natural proofs, as in the case of the finiteness of the Fine spectrum.
Sofía Garzón Mora, Christian Haase
2023-02-08T14:21:53Z
http://arxiv.org/abs/2302.04074v1
# Fine Polyhedral Adjunction Theory

###### Abstract.

Originally introduced by Fine and Reid in the study of plurigenera of toric hypersurfaces, the Fine interior of a lattice polytope has recently come into the focus of research. It has been used for constructing canonical models in the sense of Mori Theory [arXiv:2008.05814]. Based on the Fine interior, we propose here a modification of the original adjoint polytopes as defined in [arXiv:1105.2415], by defining the Fine adjoint polytope \(P^{F(s)}\) of \(P\) as consisting of the points in \(P\) that have lattice distance at least \(s\) to all valid inequalities for \(P\). We obtain a Fine Polyhedral Adjunction Theory that is, in many respects, better behaved than its original analogue. Many existing results in Polyhedral Adjunction Theory carry over, some with stronger conclusions, such as decomposing polytopes into Cayley sums, and most with simpler, more natural proofs, as in the case of the finiteness of the Fine spectrum.

## 1. Introduction
One of the main results of Polyhedral Adjunction Theory is the decomposition theorem (cf. [10, 11, 12]). Here, we refer to a _Cayley sum_ \(P_{0}\star\cdots\star P_{t}\) of \(t+1\) polytopes as the polytope obtained by placing the \(t+1\) polytopes along the vertices of a \(t\)-dimensional standard simplex and taking the convex hull (as in Definition 4.1). A conjecture posed by Dickenstein and Nill [12, Conj. 1.2] about the Cayley decomposition of \(n\)-dimensional polytopes with codegree bounded below by \(\frac{n+3}{2}\) has been disproven by Higashitani [10]. Instead, a weaker version was proposed [10, Conj. 1.3], which states the following.

**Conjecture 1.1**.: _If an \(n\)-dimensional lattice polytope \(P\) satisfies \(\mu(P)>\frac{n+1}{2}\), then \(P\) decomposes as a Cayley sum of lattice polytopes of dimension at most \(\lfloor 2(n+1-\mu(P))\rfloor\)._

This conjecture is still open, but a slightly weaker version was proven in [10, Thm. 3.4], in which the hypothesis \(\mu(P)>\frac{n+1}{2}\) is replaced by \(\mu(P)\geq\frac{n+2}{2}\). Because of the way the Fine adjoint polytopes are defined, the \(\mathds{Q}\)-codegree and the Fine \(\mathds{Q}\)-codegree \(\mu^{F}(P)\) of a rational polytope \(P\) satisfy

\[\mu(P)\leq\mu^{F}(P). \tag{1}\]

It is due to this that in Theorem 4.3 we prove a Fine version of this decomposition theorem, where essentially the same proof yields a stronger result.

Our second main result is related to Fujita's Spectrum Conjecture.

**Conjecture 1.2** (Spectrum Conjecture, Fujita [10]).: _For any \(n\in\mathds{Z}_{\geq 1}\), let \(S_{n}\) be the set of unnormalized spectral values of a smooth polarized \(n\)-fold. Then, for any \(\varepsilon>0\), the set \(\{\mu\in S_{n}\,|\,\mu>\varepsilon\}\) is a finite set of rational numbers._

A polyhedral version of Fujita's conjecture was proven by Paffenholz [11, Thm 3.1], even allowing certain, \(\alpha\)-canonical, singularities (cf. Theorem 5.1 below). In this paper, we show that the analogous set \(\{\mu^{F}\in S_{n}^{F}\,|\,\mu^{F}>\varepsilon\}\) of Fine spectral values is finite without any assumption on the singularities (cf. Theorem 5.4). As a result, our proof is simpler than Paffenholz's, and it should allow for classification results in the future.

## Acknowledgments

The authors would like to thank Benjamin Nill for his insightful comments, helpful reviews and dedication to this project. The first author was supported by the Deutsche Forschungsgemeinschaft (DFG), Graduiertenkolleg "Facets of Complexity" (GRK 2434).

## 2. Redefining Polyhedral Adjunction Theory

In what follows, unless stated otherwise, we consider \(P\subseteq\mathds{R}^{n}\) to be an \(n\)-dimensional rational polytope, described in a unique minimal way by inequalities as \(P=\{x\in\mathds{R}^{n}\,|\,\langle a_{i},x\rangle\geq b_{i},i=1,...,m\}\), where \(b\in\mathds{Q}^{m}\) and the \(a_{i}\in(\mathds{Z}^{n})^{*}\) are the primitive rows of a matrix \(A\), i.e., none is a non-trivial multiple of another lattice vector. We say that \(P\) is a _rational polytope_ if its vertices lie in \(\mathds{Q}^{n}\) and that \(P\) is a _lattice polytope_ if its vertices lie in \(\mathds{Z}^{n}\). We introduce our first definitions.

**Definition 2.1**.: Let \(f\) be the affine functional \(f(x)=\langle a,x\rangle-b\) for some \(b\in\mathds{Q}\) and \(a\in(\mathds{Z}^{n})^{*}\).
Such a functional is said to be _valid_ for a polytope \(P\) if for the halfspace \(\mathcal{H}_{+}:=\{x\in\mathds{R}^{n}\,|\,f(x)\geq 0\}\) we have \(P\subseteq\mathcal{H}_{+}\). Moreover, if there is some \(p\in P\) with \(f(p)=0\), i.e., at least one point of \(P\) lies in \(\mathcal{H}\), the hyperplane generated by \(f\), we say that \(f\) is a _tight_ valid inequality for \(P\). Since \(P\) is a polyhedron, note that it can be described by a finite subset of the tight valid inequalities for \(P\), of which there are infinitely many, namely at least one for each primitive \(a\in(\mathds{Z}^{n})^{*}\setminus\{0\}\).

**Definition 2.2**.: We define the _distance_ function associated with \(P\) as

\[d_{P}^{F}:(\mathds{R}^{n})^{*}\to\mathds{R},\quad\alpha\mapsto\min_{x\in P}\langle\alpha,x\rangle.\]

In terms of this function, for some real number \(s>0\), we may define the _Fine adjoint polytope_, which is a rational polytope, as

\[P^{F(s)}:=\{x\in\mathds{R}^{n}\mid\langle a,x\rangle\geq d_{P}^{F}(a)+s,\text{ for all }a\in(\mathds{Z}^{n})^{*}\setminus\{0\}\}.\]

We will refer to the study of such Fine adjoint polytopes as _Fine polyhedral adjunction theory_.

As previously mentioned, the Fine adjoint polytopes we have introduced are a variant of the adjoint polytopes as defined in [14, Definition 1.1]. In order to compare these definitions, we recall the original one here.

**Definition 2.3**.: Let \(P\) be a rational polytope of dimension \(n\) given by the inequalities \(\langle a_{i},\cdot\rangle\geq b_{i}\) for \(i=1,...,m\) that define facets \(F_{1},...,F_{m}\) in a minimal way. Then for \(x\in\mathds{R}^{n}\), the _lattice distance_ from the facet \(F_{i}\) is given by

\[d_{F_{i}}(x):=\langle a_{i},x\rangle-b_{i}\]

and the _lattice distance with respect to the boundary \(\partial P\)_ of \(P\) is

\[d_{P}(x):=\min_{i=1,...,m}d_{F_{i}}(x).\]

For \(s>0\), the _adjoint polytope_ is defined as

\[P^{(s)}:=\{x\in\mathds{R}^{n}\mid d_{P}(x)\geq s\}.\]

**Remark 2.4**.: In some cases, taking Fine adjoint polytopes of a polytope \(P\) is equivalent to considering the original adjoint polytopes, as is the case for the rightmost and leftmost examples in Figure 1.

In what follows, we will prove a crucial result, namely that only finitely many tight valid inequalities \(f_{1},...,f_{t}\) are relevant when computing the Fine adjoint polytopes. Moreover, from its proof we will obtain a characterization of exactly when an inequality is relevant for computing the Fine adjoint polytopes. We make this notion of a relevant inequality more precise.

**Definition 2.5**.: Let \(\mathcal{F}\) be the set of all valid inequalities for \(P\), where an element \(g\in\mathcal{F}\) is of the form \(\langle a_{g},x\rangle\geq b_{g}\). A valid inequality \(f\in\mathcal{F}\) is said to be _relevant_ for \(P\) if for some \(s>0\) it holds that

\[\{x\in\mathds{R}^{n}\mid\langle a_{g},x\rangle\geq d_{P}^{F}(a_{g})+s\ \text{for all }g\in\mathcal{F}\}\neq\{x\in\mathds{R}^{n}\mid\langle a_{g},x\rangle\geq d_{P}^{F}(a_{g})+s\ \text{for all }g\in\mathcal{F}\setminus\{f\}\}.\]

The valid inequality \(f\) is said to be _irrelevant_ if it is not relevant.

The following proposition will be very useful for our results and computations below.

**Proposition 2.6** ([1, Proposition 3.11]).: _Let \(P\) be a rational polytope of dimension \(n\). Then there exists a finite set \(\mathcal{S}\subset\mathcal{F}\) of valid inequalities for \(P\) such that \(\mathcal{S}\) contains all relevant valid inequalities for \(P\)._
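Since a linear functional attains its minimum over a polytope at a vertex, the quantities introduced above can be evaluated directly from a vertex description of \(P\). The following minimal Python sketch (an illustration of our own; the function names are not from the text) computes \(d_{P}^{F}\) and tests validity and tightness of an inequality:

```python
from fractions import Fraction as F

def pairing(a, x):
    """Evaluate <a, x> for a functional a and a point x."""
    return sum(ai * xi for ai, xi in zip(a, x))

def fine_distance(vertices, a):
    """d_P^F(a) = min_{x in P} <a, x>; the minimum of a linear
    functional over a polytope is attained at a vertex."""
    return min(pairing(a, v) for v in vertices)

def is_valid(vertices, a, b):
    """<a, x> >= b is valid for P iff it holds at every vertex."""
    return all(pairing(a, v) >= b for v in vertices)

def is_tight(vertices, a, b):
    """A valid inequality is tight iff some point of P attains equality."""
    return is_valid(vertices, a, b) and any(pairing(a, v) == b for v in vertices)

# Example: the triangle P = conv{(0,0), (2,0), (0,1)}.
P = [(F(0), F(0)), (F(2), F(0)), (F(0), F(1))]
a = (0, -1)                    # the functional -x_2
print(fine_distance(P, a))     # -1, i.e. x_2 <= 1 is valid for P
print(is_tight(P, a, F(-1)))   # True: equality is attained at the vertex (0,1)
```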
From Proposition 2.6 we obtain a useful description of the relevant valid inequalities, which we state as a corollary.

**Corollary 2.7**.: _Let \(P\) be a rational polytope of dimension \(n\). The relevant valid inequalities for \(P\) of the form \(\langle a,\cdot\rangle\geq d_{P}^{F}(a)\) correspond to the \(a\in(\mathds{Z}^{n})^{*}\) such that \(a\in\operatorname{conv}(a_{1},...,a_{m})\), where the \(a_{i}\) for \(1\leq i\leq m\) are the primitive inward pointing facet normals of \(P\)._

We will now consider the polytope \(P\) to be defined as

\[P=\{x\in\mathds{R}^{n}\mid\langle a_{i},x\rangle\geq b_{i},i=1,...,m\}, \tag{2}\]

where \(b_{i}\in\mathds{Q}\) and \(a_{i}\in(\mathds{Z}^{n})^{*}\) are the primitive rows of a matrix \(A\) including all relevant valid inequalities for \(P\).

**Remark 2.8**.: Note that taking Fine adjoint polytopes satisfies monotonicity with respect to inclusion of polytopes, i.e., if \(P\) and \(Q\) are two polytopes such that \(P\subseteq Q\), then for any \(s\geq 0\) we have \(P^{F(s)}\subseteq Q^{F(s)}\). This holds since for any \(a\in(\mathds{Z}^{n})^{*}\) it follows that \(d_{P}^{F}(a)\geq d_{Q}^{F}(a)\); this does not necessarily hold in the original polyhedral adjunction case.

Now, using the Fine adjoint polytopes, we may reformulate the concept of the \(\mathds{Q}\)-codegree.

**Definition 2.9**.: The _Fine \(\mathds{Q}\)-codegree_ of a rational polytope \(P\) is

\[\mu^{F}(P):=(\sup\{s>0\:|\:P^{F(s)}\neq\emptyset\})^{-1},\]

and the _Fine core_ of \(P\) is

\[\operatorname{core}^{F}(P):=P^{F(1/\mu^{F}(P))}.\]

**Example 2.10**.: In general, the core and the Fine core of a given polytope can differ and may even have different dimensions, as in the case of the polytopes in Figure 2.

**Example 2.11**.: Consider the polytope given as in [2, Figure 6] for the case \(h=10\) by

\[P=\operatorname{conv}\begin{bmatrix}0&2&0&2&0&0\\ 0&0&4&2&0&4\\ 0&0&0&0&10&10\end{bmatrix}.\]

We may compute the core of \(P\) and the Fine core of \(P\) to be

\[\operatorname{core}(P)=\operatorname{conv}\begin{bmatrix}4/3&4/3\\ 4/3&4/3\\ 4/3&2\end{bmatrix},\qquad\operatorname{core}^{F}(P)=\operatorname{conv}\begin{bmatrix}1&1&1&1\\ 1&1&2&2\\ 1&3&1&3\end{bmatrix}.\]

Thus, in this case, as seen in Figure 3, we have that \(\operatorname{core}^{F}(P)\nsubseteq\operatorname{core}(P)\). The core and the Fine core are even disjoint for this example.

Figure 2. Examples of polytopes whose Fine and original cores differ.

Figure 3. Polytope whose Fine core is not contained in its classical core.

Let us now denote the _normal fan_ of a polytope \(P\) by \(\mathcal{N}(P)\). We will use this notion to define a second invariant for the Fine adjoint polytopes.

**Definition 2.12**.: The _Fine nef value_ of a rational polytope \(P\) is

\[\tau^{F}(P):=(\sup\{s>0\mid\mathcal{N}(P^{F(s)})=\mathcal{N}(P)\})^{-1}\in\mathds{R}_{>0}\cup\{\infty\}.\]

As opposed to the case of the Fine \(\mathds{Q}\)-codegree, here the supremum need not be attained.

**Definition 2.13**.: Let \(\sigma\subset(\mathds{R}^{n})^{*}\) be an \(n\)-dimensional rational polyhedral cone with primitive generators \(a_{1},...,a_{k}\).
The cone \(\sigma\) is called \(\mathds{Q}\)-_Gorenstein of index_ \(r_{\sigma}\) if there is a primitive vector \(u_{\sigma}\) such that \(\langle a_{i},u_{\sigma}\rangle=r_{\sigma}\) for \(1\leq i\leq k\). If \(r_{\sigma}=1\), the cone \(\sigma\) is called _Gorenstein_. We say that a complete rational polyhedral fan \(\Sigma\) is \(\mathds{Q}\)-_Gorenstein of index \(r\)_, resp. _Gorenstein_, if the maximal cones \(\sigma\in\Sigma\) are \(\mathds{Q}\)-Gorenstein of index \(r_{\sigma}\) and \(r=\operatorname{lcm}(r_{\sigma}\mid\sigma\in\Sigma)\), resp. Gorenstein.

**Definition 2.14**.: If we consider an element \(y\) of a \(k\)-dimensional rational polyhedral cone \(\sigma\) with generators \(a_{1},...,a_{k}\), then the _height function_ associated with the cone \(\sigma\) is the piecewise linear function given by

\[\operatorname{height}_{\sigma}(y):=\max\bigg{\{}\sum_{i=1}^{k}\lambda_{i}\ \bigg{|}\ y=\sum_{i=1}^{k}\lambda_{i}a_{i},\text{ and }\lambda_{i}\geq 0\text{ for }1\leq i\leq k\bigg{\}}.\]

The cone \(\sigma\) is \(\alpha\)_-canonical_ for some \(\alpha>0\) if \(\operatorname{height}_{\sigma}(y)\geq\alpha\) for any integral point \(y\in\sigma\cap(\mathds{Z}^{n})^{*}\). A complete rational polyhedral fan \(\Sigma\) is \(\alpha\)_-canonical_ if every cone in \(\Sigma\) is \(\alpha\)-canonical. Furthermore, a cone or fan is _canonical_ if it is \(\alpha\)-canonical for \(\alpha=1\).

We will now give a characterization of the finiteness of the Fine nef value. We assume \(P\) to have the inequality description given by \(\langle a_{i},x\rangle\geq d_{P}^{F}(a_{i})\) for \(1\leq i\leq m\) in a unique minimal way as in (2). Let \(s\geq 0\) and let \(v\) be a vertex of \(P\) that satisfies with equality the inequalities given by the \(a_{i}\) for \(i\in I\), while the inequalities for \(i\notin I\) are strict. If \(\mathcal{N}(P)\) is \(\mathds{Q}\)-Gorenstein of index \(r\) then, using the notation of Definition 2.13 for the cone \(\sigma\in\mathcal{N}(P)\) corresponding to \(v\), we may define

\[v(s):=v+\frac{s}{r_{\sigma}}u_{\sigma}.\]

We have that \(v(s)\) is linear as a function of \(s\) and that \(\langle a_{i},v(s)\rangle=d_{P}^{F}(a_{i})+s\) for \(i\in I\).

**Theorem 2.15**.: _The Fine nef value satisfies \(\tau^{F}(P)<\infty\) if and only if \(\mathcal{N}(P)\) is \(\mathds{Q}\)-Gorenstein and canonical._

Proof.: To see the forward implication, assume that \(\tau^{F}(P)<\infty\). Then there exists some small enough \(s>0\), where without loss of generality we can assume \(s\in\mathds{Q}\), such that \(P^{F(s)}\) and \(P\) are combinatorially equivalent. Let \(v\) be a vertex of \(P\) and let \(v^{\prime}\in P^{F(s)}\) be the vertex corresponding to \(v\) under this equivalence. Since \(s>0\), we have \(v^{\prime}-v\neq 0\). Now, take any linear functional defining a facet incident to \(v\), and let this be given by some primitive \(a_{i}\in(\mathds{Z}^{n})^{*}\). Then we have that

\[\langle a_{i},v^{\prime}-v\rangle=s,\]

which holds for all such \(a_{i}\). This shows that \(\mathcal{N}(P)\) is \(\mathds{Q}\)-Gorenstein. Now, assume that \(\mathcal{N}(P)\) is not canonical. Then for some vertex \(v\) of \(P\) there is a linear functional \(a\) such that \(a=\sum_{i\in I}\lambda_{i}a_{i}\), where \(I\) indexes the facet defining linear inequalities at \(v\) and \(\sum_{i\in I}\lambda_{i}\) is minimal and strictly less than \(1\). Then for any small \(s>0\), the functional \(a\) defines a facet of \(P^{F(s)}\) although it did not define a facet of \(P\). Thus \(\tau^{F}(P)\) is infinite.
To see the reverse implication, let us assume that \(\mathcal{N}(P)\) is \(\mathds{Q}\)-Gorenstein and canonical. Then we can define \(v(s)\) for all vertices \(v\) of \(P\) as in our remarks above. We will show that this implies that for some small \(s>0\)

\[P^{F(s)}=\operatorname{conv}(v(s)\mid v\text{ is a vertex of }P). \tag{3}\]

Then, using \(v(s)\neq v^{\prime}(s)\) for \(v\neq v^{\prime}\) and small enough \(s\), we obtain a bijection between the vertices of \(P\) and \(P^{F(s)}\) which preserves incidences with facets. Hence, their face lattices are isomorphic and \(\tau^{F}(P)<\infty\).

For the inclusion \(\operatorname{conv}(v(s))\subseteq P^{F(s)}\), let \(\langle a,x\rangle\geq c\) be a valid inequality for \(P\). We need to show that for some small \(s>0\) and every vertex \(v\) of \(P\) we have \(\langle a,v(s)\rangle\geq c+s\). If \(\langle a,v\rangle>c\), any small enough \(s\) suffices. If \(\langle a,v\rangle=c\), then \(a\) belongs to the normal cone of \(v\). Using the facet defining inequalities \(\langle a_{i},x\rangle\geq c_{i}\) which are sharp at \(v\), we can write \(a=\sum_{i}\lambda_{i}a_{i}\) and \(c=\sum_{i}\lambda_{i}c_{i}\) with all \(\lambda_{i}\geq 0\). As we assume \(\mathcal{N}(P)\) to be canonical, we have \(\sum_{i}\lambda_{i}\geq 1\). Hence,

\[\langle a,v(s)\rangle=\sum_{i}\lambda_{i}\langle a_{i},v(s)\rangle=\sum_{i}\lambda_{i}(c_{i}+s)\geq c+s\,.\]

For the other containment, suppose that there is a \(w\in P^{F(s)}\) such that \(w\notin\operatorname{conv}(v(s))\). Then there exists a linear functional \(a\in(\mathds{Z}^{n})^{*}\) separating \(w\) from all the \(v(s)\). This \(a\) must be contained in the normal cone of some vertex \(v\) of \(P\). Set \(c\coloneqq\langle a,v\rangle=\min\{\langle a,x\rangle\mid x\in P\}\). Using the facet defining inequalities \(\langle a_{i},x\rangle\geq c_{i}\) which are sharp at \(v\), we can write \(a=\sum_{i}\lambda_{i}a_{i}\) and \(c=\sum_{i}\lambda_{i}c_{i}\) with all \(\lambda_{i}\geq 0\). The fact that \(w\in P^{F(s)}\) translates to the inequalities \(\langle a_{i},w\rangle\geq c_{i}+s\) for all \(i\). But the fact that \(a\) separates \(w\) from \(\operatorname{conv}(v(s))\) translates to the inequality \(\langle a,w\rangle<\langle a,v(s)\rangle\), which can be rewritten as \(\sum_{i}\lambda_{i}\langle a_{i},w\rangle<\sum_{i}\lambda_{i}(c_{i}+s)\), a contradiction. 
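As a simple illustration of Theorem 2.15 (an example of our own choosing), consider the unit square \(P=[0,1]^{2}\). Each vertex cone of \(\mathcal{N}(P)\) is generated by a pair of coordinate normals, e.g. \(a_{1}=(1,0)\) and \(a_{2}=(0,1)\) at the vertex \(v=(0,0)\); taking \(u_{\sigma}=(1,1)\) gives \(\langle a_{i},u_{\sigma}\rangle=1\), so \(\mathcal{N}(P)\) is Gorenstein, and it is canonical since any nonzero integral \(y=\lambda_{1}a_{1}+\lambda_{2}a_{2}\) in this cone has \(\lambda_{1}+\lambda_{2}\geq 1\). Moreover, for a primitive \(a=(p,q)\) one has \(d_{P}^{F}(a)=\min(0,p)+\min(0,q)\), and a short computation shows that on \([s,1-s]^{2}\) every nonzero component of \(a\) contributes at least \(s\) to \(\langle a,x\rangle-d_{P}^{F}(a)\). Hence, in accordance with (3),

\[P^{F(s)}=\operatorname{conv}(v(s)\mid v\text{ is a vertex of }P)=[s,1-s]^{2}\quad\text{for }0\leq s\leq\tfrac{1}{2},\]

so the normal fan is preserved for all \(s<\tfrac{1}{2}\) and \(\tau^{F}(P)=\mu^{F}(P)=2<\infty\), with \(\operatorname{core}^{F}(P)=\{(\tfrac{1}{2},\tfrac{1}{2})\}\).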
## 3. Natural Projections in the Fine case

We now want to study the behaviour of the Fine \(\mathds{Q}\)-codegree under projections, so we introduce the following definition.

**Definition 3.1**.: Let \(K(P)\) be the linear space parallel to \(\operatorname{aff}(\operatorname{core}^{F}(P))\). Then the projection \(\pi_{P}:\mathds{R}^{n}\to\mathds{R}^{n}/K(P)\) is called the natural projection associated with \(P\).

We now have the following lemma from [10], which holds with the same proof in the case of Fine adjunction theory.

**Lemma 3.2**.: _Let \(x\in\operatorname{relint}(\operatorname{core}^{F}(P))\). Let us denote by \(f_{1},...,f_{t}\) the relevant valid inequalities for \(P\) with \(d_{f_{i}}(x)=\mu^{F}(P)^{-1}\). Then their primitive inner normals \(a_{1},...,a_{t}\) positively span the linear subspace \(K(P)^{\perp}\). Moreover, if \(\operatorname{core}^{F}(P)=\{x\}\), then_

\[\{y\in\mathds{R}^{n}\mid d_{f_{i}}(y)\geq 0\text{ for all }i=1,...,t\}\]

_is a rational polytope containing \(P\)._

However, we can prove the following stronger result, which does not hold in the classical polyhedral adjunction case. We include here the proof of the direction that was previously not valid.

**Proposition 3.3**.: _The image \(Q:=\pi_{P}(P)\) of the natural projection of \(P\) is a rational polytope satisfying \(\mu^{F}(Q)=\mu^{F}(P)\). Moreover \(\operatorname{core}^{F}(Q)\) is the point \(\pi_{P}(\operatorname{core}^{F}(P))\)._

Proof.: To prove that \(\mu^{F}(P)^{-1}\leq\mu^{F}(Q)^{-1}\), let \(g\) be a valid inequality for \(Q\) and let \(p\in P\) with \(\pi_{P}(p)=q\in\operatorname{core}^{F}(Q)\). Then, for the pullback \(\pi_{P}^{*}g\), we have

\[\mu^{F}(Q)^{-1}=g(q)-\min_{\tilde{q}\in Q}g(\tilde{q})=\pi_{P}^{*}g(p)-\min_{\tilde{p}\in P}\pi_{P}^{*}g(\tilde{p})\geq\mu^{F}(P)^{-1},\]

which concludes our proof. 

**Remark 3.4**.: Note that we have described the behaviour of the Fine \(\mathds{Q}\)-codegree under the natural projection of \(P\). However, under any projection \(\pi^{\prime}\) of \(P\), we still have that \(\mu^{F}(\pi^{\prime}(P))\leq\mu^{F}(P)\).

## 4. Cayley Decompositions and the Fine structure theorem

We let \(P\subseteq\mathds{R}^{n}\) be an \(n\)-dimensional lattice polytope and we recall the following definition.

**Definition 4.1**.: Given lattice polytopes \(P_{0},...,P_{t}\subseteq\mathds{R}^{k}\), the _Cayley sum_ \(P_{0}\star\cdots\star P_{t}\) is the convex hull of

\[(P_{0}\times 0)\cup(P_{1}\times e_{1})\cup\cdots\cup(P_{t}\times e_{t})\subseteq\mathds{R}^{k}\times\mathds{R}^{t}\]

for \(e_{1},...,e_{t}\) the standard basis of \(\mathds{R}^{t}\).

As a means of comparison, we will now define the notion of codegree which comes up in the context of Ehrhart theory of lattice polytopes [1].

**Definition 4.2**.: Let \(P\) be a rational polytope. We define the _codegree_ as

\[\operatorname{cd}(P):=\min\{k\in\mathds{N}_{\geq 1}\mid\operatorname{int}(kP)\cap\mathds{Z}^{n}\neq\emptyset\}.\]

Now, let us define the value

\[d^{F}(P):=\begin{cases}2(n-\lfloor\mu^{F}(P)\rfloor),&\text{if }\mu^{F}(P)\notin\mathds{N},\\ 2(n-\mu^{F}(P))+1,&\text{if }\mu^{F}(P)\in\mathds{N}.\end{cases}\]

We have that \(P\cong\Delta_{n}\) if and only if \(\operatorname{cd}(P)=n+1\). Moreover, \(\mu(P)\leq\mu^{F}(P)\leq\operatorname{cd}(P)\leq n+1\), where this relation is obtained from the original adjoint polytopes case in [1]. Since \(\mu^{F}(\Delta_{n})=n+1\), we see that \(P\cong\Delta_{n}\) if and only if \(\mu^{F}(P)=n+1\). Hence we come to the following strengthening of the Decomposition Theorem for Cayley sums [1, Theorem 3.4], whose proof follows the one presented in [1], slightly adapted to the Fine case.

**Theorem 4.3**.: _Let \(P\) be an \(n\)-dimensional lattice polytope with \(P\not\cong\Delta_{n}\). If \(n>d^{F}(P)\), then \(P\) is a Cayley sum of lattice polytopes in \(\mathds{R}^{m}\) with \(m\leq d^{F}(P)\)._

Let us consider the following example, where we compute the codegree in our three settings.

**Example 4.4**.: Let \(\Delta_{n}(a):=\operatorname{conv}(0,ae_{1},e_{2},...,e_{n})\) for positive integers \(a\in\mathds{Z}_{>0}\), where the \(e_{i}\) for \(1\leq i\leq n\) form the standard basis of \(\mathds{R}^{n}\). In the case \(a=1\) this is the standard simplex, which, as argued before, satisfies

\[\operatorname{cd}(\Delta_{n}(1))=\mu(\Delta_{n}(1))=\mu^{F}(\Delta_{n}(1))=n+1.\]

Thus, let us consider the case where \(a>1\) and \(n\geq 2\). It is easy to check that \(\operatorname{cd}(\Delta_{n}(a))=n\). Moreover, it has been computed in [10] that in this case the \(\mathds{Q}\)-codegree is given by

\[\mu(\Delta_{n}(a))=n-1+\frac{2}{a}.\]

Finally, since in the Fine case the inequality \(\sum_{i=2}^{n}x_{i}\leq 1\) is valid, it can be computed that

\[\mu^{F}(\Delta_{n}(a))=n.\]

Thus, we obtain that \(\mu(\Delta_{n}(a))<\mu^{F}(\Delta_{n}(a))=\operatorname{cd}(\Delta_{n}(a))\). From this example we see that the \(\mathds{Q}\)-codegree and the Fine \(\mathds{Q}\)-codegree can take different values on the same polytope \(P\).
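The values in this example can also be verified numerically. By Corollary 2.7 only finitely many normals matter, and \(\mu^{F}(P)^{-1}=\sup\{s>0\mid P^{F(s)}\neq\emptyset\}\) is the optimum of a small linear program in \((x,s)\). The following Python sketch (our own illustration, assuming SciPy is available) carries this out for \(n=2\), \(a=4\); the dominated relevant normal \((0,-2)\) is omitted since it never binds:

```python
from scipy.optimize import linprog

def inverse_codegree(normals_with_d):
    """sup{s : the shifted system <a, x> >= d + s is feasible},
    computed as an LP in the variables (x1, x2, s)."""
    A_ub, b_ub = [], []
    for (a1, a2), d in normals_with_d:
        # d + s - <a, x> <= 0  rewritten as  -a1*x1 - a2*x2 + s <= -d
        A_ub.append([-a1, -a2, 1.0])
        b_ub.append(-d)
    res = linprog(c=[0, 0, -1], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
    return -res.fun  # the maximal s

# Delta_2(4) = conv{(0,0), (4,0), (0,1)}: facet normals with lattice distances.
facets = [((1, 0), 0), ((0, 1), 0), ((-1, -4), -4)]
# The extra relevant valid inequality x_2 <= 1 of the Fine construction.
fine = facets + [((0, -1), -1)]

print(1 / inverse_codegree(facets))  # 1.5 = n - 1 + 2/a, the Q-codegree
print(1 / inverse_codegree(fine))    # 2.0 = n, the Fine Q-codegree
```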
## 5. Finiteness of the Fine \(\mathds{Q}\)-codegree spectrum

It has already been proven that, under certain conditions, the set of values which the \(\mathds{Q}\)-codegree can take above any bound \(\varepsilon>0\) is finite. We shortly review these conditions in the case of the original polyhedral adjunction theory. Let \(P\subseteq\mathds{R}^{n}\) be a lattice polytope of dimension \(n\), i.e., full-dimensional. We define the following sets as in [10],

\[\mathcal{S}(n,\varepsilon):=\{P\mid P\text{ is an $n$-dimensional lattice polytope},\mu(P)\geq\varepsilon\},\]

\[\mathcal{S}_{\alpha}^{can}(n,\varepsilon):=\{P\mid P\in\mathcal{S}(n,\varepsilon)\text{ and }\mathcal{N}(P)\text{ is $\alpha$-canonical}\}.\]

The theorem proven in [10] is stated as follows.

**Theorem 5.1** (Paffenholz, [10, Theorem 3.1]).: _Let \(n\in\mathds{N}\) and \(\alpha,\varepsilon>0\) be given. Then_

\[\{\mu(P)\mid P\in\mathcal{S}_{\alpha}^{can}(n,\varepsilon)\}\]

_is finite._

Note that in this result the \(\alpha\)-canonical assumption on the polytopes was necessary.

**Example 5.2**.: A natural example to consider in order to see the importance of this assumption is the family of polytopes

\[\Delta_{n}(a)=\operatorname{conv}(0,ae_{1},...,e_{n})\]

where the \(e_{1},...,e_{n}\) are the standard basis vectors and \(a\in\mathds{Z}_{>0}\), which was previously studied in Example 4.4. For these polytopes, the normal fan is \(\mathds{Q}\)-Gorenstein with index \(a\) and, if \(a>1\), then

\[\mu(\Delta_{n}(a))=n-1+\frac{2}{a},\]

but for any fixed \(\alpha>0\) the polytopes \(\Delta_{n}(a)\) fail to be \(\alpha\)-canonical once \(a\) is large enough, and their \(\mathds{Q}\)-codegree takes an infinite number of values.

In what follows we will study a generalization of this theorem to the case of Fine adjunction theory. We will follow the proof presented in [10] and adapt it to the Fine polyhedral adjunction case; the remarkable difference is that we do not assume the polytopes to be \(\alpha\)-canonical, hence in this new setting the theorem holds in much greater generality, under much weaker assumptions. We first introduce the following definition.

**Definition 5.3**.: A vector \(a_{i}\) is a _Fine core normal_ if for all \(y\in\operatorname{core}^{F}(P)\),

\[\langle a_{i},y\rangle=d_{P}^{F}(a_{i})+\mu^{F}(P)^{-1}.\]

We also define the set

\[\mathcal{S}^{F}(n,\varepsilon):=\{P\mid P\text{ is an $n$-dimensional lattice polytope and }\mu^{F}(P)\geq\varepsilon\}.\]

We can now state our main result.

**Theorem 5.4**.: _Let \(n\in\mathds{N}\) and \(\varepsilon>0\) be given. Then_

\[\{\mu^{F}(P)\mid P\in\mathcal{S}^{F}(n,\varepsilon)\}\]

_is finite._

The proof of our main theorem will consist of two main parts. First, we will show that, up to lattice equivalence, for a fixed \(n\in\mathds{Z}_{>0}\), there are only finitely many sets of Fine core normals for \(n\)-dimensional lattice polytopes. Then we will show that each such configuration of core normals gives rise to finitely many values for the Fine \(\mathds{Q}\)-codegree above any positive threshold.
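Both steps can be previewed on the running example \(\Delta_{2}(4)\) from Example 4.4. Continuing the numerical sketch above (again our own illustration, not part of the original text), the Fine core normals of Definition 5.3 can be identified by testing, for each relevant normal \(a\), whether \(\langle a,\cdot\rangle\) is constantly equal to \(d_{P}^{F}(a)+\mu^{F}(P)^{-1}\) on \(\operatorname{core}^{F}(P)\):

```python
from scipy.optimize import linprog

normals = [(1, 0), (0, 1), (-1, -4), (0, -1)]        # relevant normals of Delta_2(4)
d = {(1, 0): 0, (0, 1): 0, (-1, -4): -4, (0, -1): -1}
s_max = 0.5                                          # = mu^F(P)^{-1} from above

# core^F(P) = P^{F(s_max)}: the points satisfying every shifted inequality.
A_ub = [[-a[0], -a[1]] for a in normals]
b_ub = [-(d[a] + s_max) for a in normals]

core_normals = []
for a in normals:
    # a is a Fine core normal iff <a, .> attains d(a) + s_max as both its
    # minimum and its maximum over core^F(P), i.e. it is constant there.
    lo = linprog([a[0], a[1]], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2).fun
    hi = -linprog([-a[0], -a[1]], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2).fun
    if abs(lo - (d[a] + s_max)) < 1e-9 and abs(hi - (d[a] + s_max)) < 1e-9:
        core_normals.append(a)

print(core_normals)  # [(0, 1), (0, -1)]
```

Here \(\operatorname{core}^{F}(\Delta_{2}(4))\) is the segment from \((1/2,1/2)\) to \((3/2,1/2)\), the convex hull of the Fine core normals is the segment from \((0,1)\) to \((0,-1)\), and its relative interior meets the lattice only in the origin, in line with Lemma 5.8 below.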
Thus, if we let \(P\) be described by all relevant inequalities as

\[P=\{x\in\mathds{R}^{n}\mid\langle a_{i},x\rangle\geq c_{i},i=1,...,m\},\]

note that, up to relabelling, we can assume that the set of Fine core normals, consisting of \(a_{1},...,a_{k}\) for some \(k\leq m\), is the set of valid inequalities defining the affine hull of the Fine core of \(P\), i.e.,

\[\operatorname{aff}(\operatorname{core}^{F}(P))=\{x\mid\langle a_{i},x\rangle=c_{i}+\mu^{F}(P)^{-1},1\leq i\leq k\}.\]

**Definition 5.5**.: Define the set \(A_{\operatorname{core}}^{F}\) to be

\[A_{\operatorname{core}}^{F}:=\operatorname{conv}(a_{1},...,a_{k})\subseteq(\mathds{R}^{n})^{*},\]

the convex hull of the Fine core normals.

For \(P\) as defined above, the following lemmas will show that all the \(a_{i}\) are vertices of \(A_{\operatorname{core}}^{F}\) and that the origin is a relatively interior point.

**Lemma 5.6** ([11, Lemma 2.2]).: _The origin is in the relative interior of \(A_{\operatorname{core}}^{F}\)._

Moreover, the following result proven in [10] gives us precisely the vertices of \(A_{\operatorname{core}}^{F}\).

**Lemma 5.7** ([10, Lemma 3.6]).: _The vertices of \(A_{\operatorname{core}}^{F}\) are \(a_{1},...,a_{k}\)._

We now want to show that, independently of whether the polytope is \(\alpha\)-canonical or not, the origin is the only lattice point in the relative interior of \(A_{\operatorname{core}}^{F}\).

**Lemma 5.8**.: _For \(A_{\operatorname{core}}^{F}\) as above, it follows that \(\operatorname{relint}(A_{\operatorname{core}}^{F})\cap(\mathds{Z}^{n})^{*}=\{0\}\)._

Proof.: We prove this by contradiction. Assume there is some vector \(a\in(\mathds{Z}^{n})^{*}\setminus\{0\}\) contained in the relative interior of \(A_{\operatorname{core}}^{F}\). As \(0\in\operatorname{relint}(A_{\operatorname{core}}^{F})\), the point \(a\) is contained in the cone spanned by the vertices of some facet \(F\) of \(A_{\operatorname{core}}^{F}\). Letting \(a_{1},...,a_{k}\) be the vertices of \(A_{\operatorname{core}}^{F}\), we can find \(\lambda_{1},...,\lambda_{k}\geq 0\) with \(\lambda_{i}=0\) if \(a_{i}\notin F\) such that \(a=\sum_{i=1}^{k}\lambda_{i}a_{i}\) and \(\sum_{i=1}^{k}\lambda_{i}<1\), where the last inequality follows from the fact that \(a\) is in the relative interior of \(A_{\operatorname{core}}^{F}\). Then \(a\) defines the valid inequality \(\langle a,x\rangle\geq b\) for \(P\), with \(b:=\sum_{i=1}^{k}\lambda_{i}c_{i}\). Let \(x_{\text{core}}\in\operatorname{relint}(\operatorname{core}^{F}(P))\). By definition of the Fine core, and since \(d_{P}^{F}(a)\geq b\), we have that \(\langle a,x_{\text{core}}\rangle-b\geq(\mu^{F}(P))^{-1}\). On the other hand, writing the inequality as the corresponding combination of the inequalities associated to the core normals \(a_{i}\) for \(1\leq i\leq k\), we obtain

\[\langle a,x_{\text{core}}\rangle-b=\sum_{i=1}^{k}\lambda_{i}(\langle a_{i},x_{\text{core}}\rangle-c_{i})=\sum_{i=1}^{k}\lambda_{i}(\mu^{F}(P))^{-1}<(\mu^{F}(P))^{-1},\]

where the last inequality follows from the fact that \(\sum_{i=1}^{k}\lambda_{i}<1\). This contradicts the previous relation. 

The last result we need in this first part of the proof is the following one by Lagarias and Ziegler. We say that two lattice polytopes \(P\) and \(Q\) are _lattice equivalent_ if there is an affine lattice isomorphism mapping \(P\) onto \(Q\).

**Theorem 5.9** (Lagarias, Ziegler [10, Theorem 1]).: _Let integers \(n,l\geq 1\) be given.
There are, up to lattice equivalence, only finitely many different lattice polytopes of dimension \(n\) with exactly \(l\) interior points in the lattice \(\mathds{Z}^{n}\)._

Since we have proven that the only lattice point in the relative interior of \(A_{\operatorname{core}}^{F}\) is the origin, combining this with Theorem 5.9 shows that for a fixed \(n\in\mathds{Z}_{>0}\), only finitely many sets define the Fine core normals of an \(n\)-dimensional lattice polytope. We record this as the following result.

**Corollary 5.10**.: _Let \(n\in\mathds{Z}_{>0}\) be fixed. Then, up to lattice equivalence, only finitely many sets define the Fine core normals of some \(n\)-dimensional lattice polytope \(P\)._

In what follows, we continue with the second step of the proof. We make use of the following lemma proven in [10], where we do not require the \(\alpha\)-canonicity of \(P\).

**Lemma 5.11** ([10, Lemma 3.10]).: _Fix some \(\varepsilon>0\) and some \(n\in\mathds{Z}_{>0}\). Let \(P\) be an \(n\)-dimensional lattice polytope with set of Fine core normals \(\mathcal{A}\). Then the set_

\[\{\mu^{F}(P)\,|\,P\text{ is $n$-dimensional with set of Fine core normals }\mathcal{A},\mu^{F}(P)\geq\varepsilon\}\]

_is finite._

We now have all the ingredients to prove our main result.

Proof of Theorem 5.4.: We combine the last lemma with the previous results. First of all, by Corollary 5.10, up to lattice equivalence there are only finitely many sets of Fine core normals of an \(n\)-dimensional lattice polytope. Moreover, by Lemma 5.11, for each fixed set \(\mathcal{A}\) of Fine core normals the set of values \(\mu^{F}(P)\geq\varepsilon\) attained by \(n\)-dimensional lattice polytopes is finite. Since \(\mu^{F}\) is invariant under lattice equivalence, the set \(\{\mu^{F}(P)\mid P\in\mathcal{S}^{F}(n,\varepsilon)\}\) is a finite union of finite sets, and hence finite. 

We have thus shown that in the Fine case, a version of the theorem on the finiteness of the \(\mathds{Q}\)-codegree spectrum holds without the \(\alpha\)-canonicity assumption. The reason is that considering all valid inequalities highly restricts the shape and properties of the polytope \(A_{\operatorname{core}}^{F}\), i.e., the convex hull of the Fine core normals of a polytope \(P\): every such polytope contains just one lattice point, namely the origin, in its relative interior. It is due to this that we are able to prove the result in greater generality for the Fine \(\mathds{Q}\)-codegree spectrum.
2306.16187
Thermodynamics of accelerating AdS$_4$ black holes from the covariant phase space
We study the charges and first law of thermodynamics for accelerating, non-rotating black holes with dyonic charges in AdS$_4$ using the covariant phase space formalism. In order to apply the formalism to these solutions (which are asymptotically locally AdS and admit a non-smooth conformal boundary $\mathscr{I}$) we make two key improvements: 1) We relax the requirement to impose Dirichlet boundary conditions and demand merely a well-posed variational problem. 2) We keep careful track of the codimension-2 corner term induced by the holographic counterterms, a necessary requirement due to the presence of "cosmic strings" piercing $\mathscr{I}$. Using these improvements we are able to match the holographic Noether charges to the Wald Hamiltonians of the covariant phase space and derive the first law of black hole thermodynamics with the correct "thermodynamic length" terms arising from the strings. We investigate the relationship between the charges imposed by supersymmetry and show that our first law can be consistently applied to various classes of non-supersymmetric solutions for which the cross-sections of the horizon are spindles.
Hyojoong Kim, Nakwoo Kim, Yein Lee, Aaron Poole
2023-06-28T13:08:49Z
http://arxiv.org/abs/2306.16187v3
# Thermodynamics of accelerating AdS\({}_{4}\) black holes from the covariant phase space

###### Abstract

We study the charges and first law of thermodynamics for accelerating, non-rotating black holes with dyonic charges in AdS\({}_{4}\) using the covariant phase space formalism. In order to apply the formalism to these solutions (which are asymptotically locally AdS and admit a non-smooth conformal boundary \(\mathscr{I}\)) we make two key improvements: 1) We relax the requirement to impose Dirichlet boundary conditions and demand merely a well-posed variational problem. 2) We keep careful track of the codimension-2 corner term induced by the holographic counterterms, a necessary requirement due to the presence of "cosmic strings" piercing \(\mathscr{I}\). Using these improvements we are able to match the holographic Noether charges to the Wald Hamiltonians of the covariant phase space and derive the first law of black hole thermodynamics with the correct "thermodynamic length" terms arising from the strings. We investigate the relationship between the charges imposed by supersymmetry and show that our first law can be consistently applied to various classes of non-supersymmetric solutions for which the cross-sections of the horizon are spindles.

**Contents**

1 Introduction

2 Accelerating solutions

3 Asymptotic analysis: 3.1 Gauge field; 3.2 Metric: Fefferman-Graham expansion; 3.3 Boundary Cotton tensor; 3.4 Variational problem; 3.5 Comment on the time scaling parameter

4 Charges: 4.1 Covariant phase space; 4.2 Corner improvement; 4.3 Mass; 4.4 Electric charge; 4.5 Magnetic charge

5 First law of thermodynamics

6 Application to spindles

7 Conclusion

A Magnetic charges from the covariant phase space: A.1 Topological term; A.2 No contribution to the variational problem; A.3 No contribution to the first law

B Comparison with other literature: B.1 Consistency with the "horizon polynomial" method; B.2 Differences in thermodynamic lengths

## 1 Introduction

The understanding of black holes as thermodynamic objects is one of the key directions in uncovering the quantum nature of gravity.
The origin of this field lies in the pioneering work by Bekenstein [1; 2], who conjectured that the entropy of a black hole should be proportional to the horizon area \(\mathcal{A}\), and later Hawking [3], where it was demonstrated that by taking into account the effects of quantum particle creation near the horizon, black holes possess a temperature \(T\). This confirmed Bekenstein's conjecture and resulted in the famous Bekenstein-Hawking entropy formula

\[S_{\rm BH}=\frac{\mathcal{A}}{4G}. \tag{1.1}\]

Alongside this identification of black hole entropy was the realisation that black holes obey certain laws of mechanics closely analogous to the ordinary laws of thermodynamics. Of particular focus in this work will be the _first law_ of black hole thermodynamics, which was originally formulated for stationary, asymptotically flat black holes as [4]

\[\delta\mathcal{M}=T\delta S_{\rm BH}+\Omega_{\mathcal{H}}\delta J+\Phi_{e}\delta Q_{e}, \tag{1.2}\]

a formula which relates variations in the charges \(\mathcal{M},J,Q_{e}\) (mass, angular momentum and electric charge) of the black hole to variations in the entropy (\(\Omega_{\mathcal{H}}\) is the angular velocity of the horizon and \(\Phi_{e}\) the electrostatic potential). Such a formula was generalised by Wald [5] to all diffeomorphism invariant theories of gravity (i.e. beyond just general relativity), with the entropy taking the form of a local integral over the bifurcation surface of the horizon \(\Sigma_{\mathcal{H}}\):

\[S_{\rm BH}=\frac{2\pi}{\kappa_{\rm sg}}\int_{\Sigma_{\mathcal{H}}}\mathbf{Q}, \tag{1.3}\]

where \(\mathbf{Q}\) is the so-called Noether charge \((d-2)\)-form of the theory and \(\kappa_{\rm sg}\) the surface gravity of the black hole. This approach uses a technique known as the _covariant phase space formalism_ [5; 6; 7] and not only has the advantage of extending to other theories but also gives an elegant geometrical derivation of the first law in terms of covariant expressions, most importantly the local formula for the entropy above.

In this work we will study _accelerating_ black holes in _asymptotically locally anti-de Sitter_ (AlAdS) spacetime using the covariant phase space formalism. Black holes in AdS have proven to be particularly rich hunting grounds for those looking to understand their quantum properties thanks to the AdS/CFT correspondence [8; 9; 10]. This allows for entropy counting on the gravitational side to be reformulated in terms of an index computation in the dual CFT, see e.g. [11; 12; 13; 14] for black holes in \(d=4\) and [15; 16; 17] for \(d=5\). These techniques have made it possible to recover the Bekenstein-Hawking entropy (1.1) from the dual theory. For classical AdS gravity, the analogous first law to (1.2) has been derived for a wide class of AlAdS black holes [18] using the covariant phase space [5; 7] together with the necessary implementation of _holographic renormalisation_ [19; 20; 21; 22; 23; 24] at the level of the on-shell action. The use of the covariant phase space has been extended to theories beyond those initially considered in [18] (see for example the recent works [25; 26; 27] on various \(d=5\) supergravity theories) but has not yet been adapted to accelerating AdS\({}_{4}\) black holes. This important gap in the literature will be addressed in this work.

The prototypical accelerating black hole in AdS\({}_{4}\) is the famous C-metric solution [28], a member of the more general Plebanski-Demianski class of stationary, axisymmetric solutions [29; 30; 31; 32].
These black holes possess conical singularities due to the presence of cosmic strings stretching from the horizon of the black hole out to infinity. The cosmic strings have associated tensions which exert a force on the black hole, resulting in acceleration and displacing the object from the "centre" of the spacetime. In this work we will consider black holes which are said to be _slowly_ accelerating [33; 34], meaning that they possess an event horizon but no acceleration horizons. We will take the solutions to carry mass, electric, and magnetic charges but, importantly, no rotation. We will thus work with static solutions with

\[J=0, \tag{1.4}\]

for reasons we will discuss in the main text. As we shall see, the fact that these spacetimes contain conical singularities, together with the fact that one cannot apply Dirichlet boundary conditions when varying all of the parameters, are the crucial obstructions to applying the methods of [18]. In this work we will provide a suitable extension of the methods developed in [18] in order to discuss the charges and thermodynamics of accelerating solutions.

The covariant phase space [5; 7; 18] has yet to be applied to accelerating AlAdS black holes, although the analysis of their charges and associated thermodynamics using different techniques has been the subject of a slew of recent works [35; 36; 37; 38; 39; 40], which we will follow closely (see also [41; 42; 43; 44; 45] for related works). A major feature present in these papers was the seeming inevitability of being forced to choose a particular parameter-dependent normalisation of the time coordinate in order to arrive at the correct form of the first law. Some justification for this was given in [38] in terms of asymptotic observers, although the conformal invariance at the boundary should negate the need to study a particular representative of the conformal class. This scaling is thus a somewhat unsatisfactory feature which is also not at all clear from the perspective of the dual field theory. We note that this story is somewhat similar in spirit to that of [46], where it was argued that the normalisation of the Killing vector in the first law was crucial in defining the charges and first law, before [18] demonstrated that the correct application of the covariant phase space overrides such issues and the first law is satisfied for all non-accelerating AlAdS black holes. It is in this vein that we expect the application of the covariant phase space formalism to shed new light on the time scaling and uncover the physics of this poorly-understood feature of black hole thermodynamics. In particular, we will show that the previous choice of the time scaling is only a well-posed choice when one also fixes the overall conical deficit in the spacetime. In this work we will consider the more general problem of well-posed variations, without explicitly fixing the time scaling.

Accelerating solutions are also of interest in _supergravity_ due to their relation to _spindle solutions_ [47, 48, 49, 40]. If the cosmic strings associated to acceleration are arranged in a particular way, then the surfaces of constant time and radius \(\Sigma\) can be given the topology of a spindle \(\Sigma\cong\mathbb{WCP}^{1}_{[n_{-},n_{+}]}\), a complex projective space parameterised by two coprime positive integers \(\{n_{-},n_{+}\}\).
Such solutions are interesting because despite exhibiting conical singularities in \(d=4\), they are rendered completely smooth in \(d=11\) supergravity when uplifted on a suitably chosen Sasaki-Einstein manifold \(SE_{7}\) [48]. Following in the style of [40], we will work in \(d=4\) for the entirety of this paper and the uplift will not come into play. Supersymmetry will be preserved in \(d=11\) if it is satisfied in \(d=4\) and thus it is of interest to constrain the parameters of the solution via the supersymmetry conditions discussed in [48, 40, 50]. A further important subclass of these solutions is given by the _supersymmetric and extremal_ AdS\({}_{4}\) black holes with \(\Sigma\cong\mathbb{WCP}^{1}_{[n_{-},n_{+}]}\). These exhibit a near-horizon geometry of AdS\({}_{2}\times\mathbb{WCP}^{1}_{[n_{-},n_{+}]}\) and uplift in \(d=11\) to solutions with near-horizon regions of the form AdS\({}_{2}\times Y_{9}\), where \(Y_{9}\) is a geometry of the type discussed in [51, 52]. A thorough understanding of the \(d=4\) solutions may also shed new light on the class of solutions with an AdS\({}_{2}\) factor and thus one is also motivated to apply extremality as well as supersymmetry for solutions in \(d=4\). We will use the supersymmetry relations in order to derive a "supersymmetric locus" of conserved charges but will stop short of being able to apply our first law to the supersymmetric solutions. This is because such solutions must contain either acceleration horizons (when extremal) or naked singularities (when non-extremal) [48] and thus fall outside the class of slowly accelerating solutions that we consider. Instead, we will apply our law to non-supersymmetric spindles, including the classes of _close-to-supersymmetric_ and _close-to-supersymmetric and close-to-extremal_ solutions, which are smoothly connected to the supersymmetric cases [48].

This paper is organised as follows: in Section 2 we provide a brief introduction to the family of solutions that we consider and discuss the physics of the parameters that specify the solutions. In Section 3 we perform a careful asymptotic analysis of the metric and gauge field which specify the solution. This includes a presentation of the Fefferman-Graham [53] expansion for the metric as well as an analysis of the boundary Cotton tensor. We use the asymptotic analysis to discuss the variational problem and derive a master equation for well-posedness. In Section 4 we use the covariant phase space formalism [5, 18] to construct the conserved charges for the solution. This section includes an introduction to the formalism as well as a discussion of the required corner modifications in order to allow for application to spacetimes with conical singularities. We use this to give expressions for the mass, electric, and magnetic charges of the accelerating solutions. In Section 5 we focus on the thermodynamics of accelerating black holes, again using the covariant phase space approach to write down the first law of thermodynamics. We provide a comment on the form of our law relative to others in the literature [35; 36; 37; 38; 39; 40]. In Section 6 we provide an application of our results for the conserved charges and first law to spindle solutions: we fix the string tensions and apply various other constraints related to supersymmetry and extremality. In Section 7 we conclude and discuss some interesting directions for future work.
Also included are two appendices: A discusses the nature of magnetic charges from the covariant phase space and B provides a detailed comparison with other literature [35; 36; 37; 38; 39; 40]. This includes a demonstration of equivalence between the covariant phase space and "horizon polynomial" methods of deriving the first law, as well as a more detailed discussion concerning the discrepancies in our laws.

## 2 Accelerating solutions

In this work we study Einstein-Maxwell theory in the presence of a cosmological constant \(\Lambda=-3/\ell^{2}<0\) on a \(d=4\) dimensional spacetime manifold \(M\). We will consider the following bulk action:

\[S_{\rm bulk}=\frac{1}{16\pi G}\int_{M}(R-2\Lambda)\mathbf{\epsilon}-2\mathbf{F}\wedge*\mathbf{F}, \tag{2.1}\]

where \(\mathbf{\epsilon}\) is the volume 4-form, oriented such that \(\epsilon_{0123}=\sqrt{-g}\), and \(\mathbf{F}=d\mathbf{A}\) is the 2-form field strength tensor with \(\mathbf{A}\) the 1-form gauge potential. We will discuss the possibility of adding a purely topological term related to magnetic charges in appendix A, but this bulk action will be sufficient for all of our analysis of the first law. We consider the following family of accelerating, static solutions [28; 29; 30; 31; 32; 54] with metric

\[ds^{2}=\frac{1}{H^{2}}\left\{-\frac{Q}{r^{2}}\frac{1}{\kappa^{2}}dt^{2}+\frac{r^{2}}{Q}dr^{2}+\frac{r^{2}}{P}d\theta^{2}+Pr^{2}K^{2}\sin^{2}\theta d\varphi^{2}\right\}, \tag{2.2}\]

and gauge field

\[\mathbf{A}=-\frac{e}{r}\frac{1}{\kappa}dt-g\cos\theta Kd\varphi, \tag{2.3}\]

where

\[H(r,\theta)=1-\alpha r\cos\theta,\qquad Q(r)=(r^{2}-2mr+e^{2}+g^{2})(1-\alpha^{2}r^{2})+\frac{r^{4}}{\ell^{2}}, \tag{2.4}\]

\[P(\theta)=1-2\alpha m\cos\theta+\alpha^{2}(e^{2}+g^{2})\cos^{2}\theta.\]

The solution is determined by five physical parameters, \(\{m,e,g,\alpha,K\}\), as well as the cosmological constant \(\Lambda=-3/\ell^{2}\) and the time scaling \(\kappa>0\), which as in [39; 40] we take to be a spacetime constant. The physical parameters roughly correspond to mass, electric charge, magnetic charge, acceleration and deficit angle respectively, and we will refer to them as such throughout the text. We will make explicit their relation to the true charges of the spacetime in Section 4. Following [40; 48] we consider w.l.o.g. the following ranges of parameters

\[m,K>0,\qquad\alpha,e,g\geq 0, \tag{2.5}\]

although in general we will be interested in the case of all parameters being strictly positive.

The metric is determined by three functions \(\{H,Q,P\}\) given in equation (2.4), which we now describe in some detail in order to explain the physics of this solution. Firstly, \(Q\) is the _horizon polynomial_ and the roots of \(Q\) are the locations of horizons in the spacetime. We will demand that the solution contains a black hole and thus the largest positive root \(r_{+}\) corresponds to the location of the (outer) event horizon \(\mathcal{H}\) in the spacetime:

\[Q(r_{+})=0,\qquad r_{+}>0. \tag{2.6}\]

In the entirety of this work, following [33; 34; 35; 36; 37; 38; 39; 40], we will restrict to the case of _slowly accelerating_ solutions, i.e. those without an acceleration horizon. This assumption corresponds to there being no further roots of \(Q\) between \(\mathcal{H}\) and the conformal boundary \(\mathscr{I}\). For a technical discussion of this in terms of the parameters of the solution we point the reader to [33; 34; 37], an analysis which we omit here as we will only use this assumption implicitly.
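To make the slow-acceleration assumption concrete, one can inspect the root structure of \(Q\) numerically. The following SymPy sketch (with parameter values chosen purely for illustration, not taken from this paper) exhibits a slowly accelerating solution: \(Q\) has exactly two real roots, the inner and outer horizons, and no acceleration horizon beyond \(r_{+}\):

```python
import sympy as sp

r = sp.symbols('r')
m, e, g, alpha, ell = 1, 0.1, 0.1, 0.2, 1   # illustrative parameter choice

Q = (r**2 - 2*m*r + e**2 + g**2)*(1 - alpha**2*r**2) + r**4/ell**2
roots = sp.Poly(Q, r).nroots()
real_roots = sorted(z for z in roots if z.is_real)
r_plus = real_roots[-1]

print(real_roots)        # two positive real roots: inner and outer horizons
print(r_plus < 1/alpha)  # True: the horizon stays away from the boundary
```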
We will be interested in studying the region of the solution outside the black hole, and thus we restrict consideration to the coordinate range

\[r>r_{+}>0. \tag{2.7}\]

\(H\) is the _conformal factor_ and thus the conformal boundary \(\mathscr{I}\) is located at \(H=0\). This sets the upper bound on the radial coordinate as

\[r<\frac{1}{\alpha\cos\theta}, \tag{2.8}\]

where we note that \(r\) is not a good coordinate to analyse the conformal boundary for \(\theta\geq\pi/2\) and we will utilise a different choice for the asymptotic analysis in Section 3. Combining equations (2.7) and (2.8) in the region of validity, we note that this sets

\[r_{+}<\frac{1}{\alpha}, \tag{2.9}\]

a condition which can be physically interpreted as ensuring that the horizon does not touch the conformal boundary [40].

\(P\) is a function which encodes the fact that the spacetime contains conical singularities, physically interpreted as cosmic strings stretching from \(\mathcal{H}\) to \(\mathscr{I}\). In order to see this explicitly [48; 38], one can perform an analysis of the metric near the poles of the azimuthal coordinate \(\theta_{\pm}\),

\[\theta_{-}=0,\qquad\theta_{+}=\pi, \tag{2.10}\]

which are the North and South poles respectively. Near the poles, the metric on the constant \((t,r)\) surfaces takes the form [48; 38]

\[ds^{2}_{\theta,\varphi}\simeq\left[\frac{r^{2}}{PH^{2}}\right]_{\theta=\theta_{\pm}}[d\theta^{2}+P_{\pm}^{2}K^{2}(\theta-\theta_{\pm})^{2}d\varphi^{2}], \tag{2.11}\]

where

\[P_{\pm}=P(\theta_{\pm})=\Xi\pm 2\alpha m, \tag{2.12}\]

and following [40; 48; 38] we have introduced

\[\Xi=1+\alpha^{2}(e^{2}+g^{2}). \tag{2.13}\]

Returning to (2.11), we note that \(\varphi\) is a \(2\pi\)-periodic coordinate and thus the metric near each pole takes a form similar to the usual plane polar coordinates on \(\mathbb{R}^{2}\), with \(\theta\) acting as a radial coordinate and \(\varphi\) as the polar angle. In order to remove the possibility of conical singularities, one needs to choose the parameter \(K\) such that \(P_{\pm}K=1\), although this is clearly impossible when \(P_{-}\neq P_{+}\iff\alpha m\neq 0\). The resulting spacetime thus contains conical singularities at \(\theta_{\pm}\), with deficit angles given by

\[\delta_{\pm}=2\pi(1-P_{\pm}K). \tag{2.14}\]

We note that one can choose \(K=1/P_{-}\) or \(K=1/P_{+}\) in order to remove one of the singularities and leave a spacetime with one smooth pole and one singular one. This was the approach taken in [35; 36; 37], where the North pole was taken to be regular, corresponding to the choice \(K=1/P_{-}\) and clearly fixing \(K\) in terms of the other parameters. In this work we will follow more closely in the footsteps of [40; 38; 39], where \(K\) is allowed to remain generic and thus we allow for conical singularities at both poles. Physically, these singularities correspond to the presence of _cosmic strings_ stretching from the black hole horizon to conformal infinity, as shown in Fig. 1. These cosmic strings have associated tensions given by

\[\mu_{\pm}=\frac{\delta_{\pm}}{8\pi G}=\frac{1}{4G}(1-P_{\pm}K), \tag{2.15}\]

which _accelerate_ the black hole. We see explicitly that the difference in tensions is

\[\mu_{-}-\mu_{+}=\frac{\alpha mK}{G}>0 \tag{2.16}\]

and thus the black hole accelerates in the North direction by virtue of \(\alpha mK>0\). We also note the value of the overall deficit in the spacetime,

\[\mu_{-}+\mu_{+}=\frac{1}{2G}(1-\Xi K), \tag{2.17}\]

explicitly demonstrating that \(K\) acts as a parameter for the conical deficit.
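The simple algebra relating the tensions to the parameters can be checked symbolically; the following SymPy sketch (our own check, not part of the original text) verifies (2.16) and (2.17) from the definitions (2.12) and (2.15):

```python
import sympy as sp

alpha, m, K, G, Xi = sp.symbols('alpha m K G Xi', positive=True)

P_plus, P_minus = Xi + 2*alpha*m, Xi - 2*alpha*m     # P_pm = Xi +/- 2*alpha*m
mu_plus = (1 - P_plus*K) / (4*G)                     # string tensions mu_pm
mu_minus = (1 - P_minus*K) / (4*G)

print(sp.simplify(mu_minus - mu_plus))  # alpha*m*K/G: the net tension (2.16)
print(sp.simplify(mu_minus + mu_plus))  # (1 - Xi*K)/(2*G): overall deficit (2.17)
```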
We finally note that we also require \(P>0\) in order to have the correct signature of the full metric (2.2). As discussed in [48; 39], this means that we also have the following constraints between the parameters

\[m\alpha<\begin{cases}\frac{\Xi}{2}\quad\text{for}\quad\Xi\in(0,2],\\ \sqrt{\Xi-1}\quad\text{for}\quad\Xi>2,\end{cases} \tag{2.18}\]

which we will never use explicitly in any calculations in this paper, in a similar style to the assumption of slow acceleration.

## 3 Asymptotic analysis

In this section we will perform an asymptotic (i.e. near \(\mathscr{I}\)) analysis of the solution presented in equations (2.2) and (2.3). This will allow us to demonstrate that the geometry is explicitly an asymptotically locally AdS (AlAdS) solution and, through the analysis of the variational problem, derive a constraint between the variations of the parameters. We also note that from this point on we will always use the normalisation

\[\Lambda=-3\iff\ell=1, \tag{3.1}\]

which can be reinstated via the usual dimensional analysis. For the asymptotic analysis of both the metric and the gauge field, we will often use the inverse radial coordinate \(z>0\) of [40], defined by

\[\frac{1}{r}=\alpha\cos\theta+z, \tag{3.2}\]

where \(z=0\) gives the location of \(\mathscr{I}\) as this clearly corresponds to \(H=0\).

### Gauge field

The gauge field (2.3) is smooth as one takes the limit \(z=\epsilon\to 0\) and takes the boundary value

\[\mathbf{A}_{(0)}=\lim_{\epsilon\to 0}A_{i}|_{z=\epsilon}dx^{i}=-\cos\theta\left[\frac{\alpha}{\kappa}edt+gKd\varphi\right]. \tag{3.3}\]

Figure 1: A cartoon of a constant-\(t\) slice of the accelerating black hole solution. The dark object in the interior is the black hole region with horizon cross-section at \(r=r_{+}\) denoted by \(\Sigma_{\mathcal{H}}\). Stretching from the horizon along the poles \(\theta_{\pm}\) are two cosmic strings \(\mathcal{S}_{\pm}\), providing conical deficits \(\delta_{\pm}\) at the poles and physically understood to accelerate the black hole along the North pole axis, resulting in the black hole being moved from the "centre" of the spacetime. The outer boundary \(\Sigma_{\infty}\) is a cross-section of the conformal boundary \(\mathscr{I}\). The axial coordinate \(\varphi\) is suppressed in this picture, which should be understood as a volume of revolution about the string axis.

This can be used to compute the boundary field strength via \(\mathbf{F}_{(0)}=d\mathbf{A}_{(0)}\). In doing this, we note that we will sometimes switch between the usual azimuthal angle coordinate \(\theta\) and an alternative coordinate \(x\) given by

\[x=\cos\theta, \tag{3.4}\]

and thus the boundary field strength takes the form

\[\mathbf{F}_{(0)}=\frac{\alpha e}{\kappa}dt\wedge dx-gKdx\wedge d\varphi. \tag{3.5}\]

The final asymptotic quantity that it will be important to define here is the electric current [40]

\[j^{i}=-\frac{1}{4\pi G}\lim_{\epsilon\to 0}\left[\frac{1}{\epsilon^{3}}n_{\mu}F^{\mu i}\right]_{z=\epsilon}, \tag{3.6}\]

where \(n\) is the _outward pointing_ unit normal to the hypersurfaces of constant \(z\). The only non-trivial components of the electric current are

\[j^{t}=\kappa\frac{e}{4\pi G},\qquad j^{\varphi}=\frac{\alpha g}{4K\pi G}. \tag{3.7}\]
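As a quick consistency check (our own, with SymPy), one can verify that the boundary field strength (3.5) is indeed the exterior derivative of the boundary gauge field (3.3) in the coordinates \((t,x,\varphi)\):

```python
import sympy as sp

t, x, phi = sp.symbols('t x phi')
alpha, kappa, e, g, K = sp.symbols('alpha kappa e g K', positive=True)

coords = [t, x, phi]
A = [-x*alpha*e/kappa, 0, -x*g*K]   # components (A_t, A_x, A_phi) of A_(0)

# F_ij = d_i A_j - d_j A_i
F = sp.Matrix(3, 3, lambda i, j: sp.diff(A[j], coords[i]) - sp.diff(A[i], coords[j]))

print(F[0, 1])  # alpha*e/kappa: the dt ^ dx component of F_(0)
print(F[1, 2])  # -g*K: the dx ^ dphi component of F_(0)
```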
### Metric: Fefferman-Graham expansion

We begin the asymptotic analysis of the metric (2.2) by providing the Fefferman-Graham expansion [53]. This calculation has already been performed in [38; 39] (the asymptotic analysis was performed via an alternative ADM approach in [40]) and here we will merely collect all of the prior results together and set our conventions. We begin by recalling that the Fefferman-Graham expansion for any AlAdS spacetime takes the form

\[ds^{2}=\frac{1}{\rho^{2}}\left[d\rho^{2}+\left(g_{ij}^{(0)}+\rho^{2}g_{ij}^{(2)}+\rho^{3}g_{ij}^{(3)}+\ldots\right)dx^{i}dx^{j}\right], \tag{3.8}\]

where \(\rho>0\) is an inverse radial coordinate and the conformal boundary \(\mathscr{I}\) is located at \(\rho=0\)2. This gauge has proved extremely useful in studying AlAdS spacetimes in the AdS/CFT correspondence [21; 22; 23; 19]. The two key pieces of data in the expansion above are \(g^{(0)}\) and \(g^{(3)}\), which act as the CFT background metric and the expectation value of the CFT energy-momentum tensor respectively. The precise relationship [21] is

\[T_{ij}=\frac{3}{16\pi G}g_{ij}^{(3)}. \tag{3.9}\]

Footnote 2: Although \(\mathscr{I}\) also corresponds to \(z=0\) as defined in (3.2), \(z\neq\rho\) away from \(\mathscr{I}\). One can see this by applying the explicit coordinate transformation (3.2) to the metric (2.2).

The explicit coordinate transformation which is required to put the metric (2.2) into the gauge (3.8) was given in [38; 39] and for brevity we will not reproduce the steps here but merely summarise the important results of the expansion. Following the boundary coordinate convention of [40], our chosen representative of the conformal class is given by

\[ds_{(0)}^{2}=-\frac{\tilde{P}}{\kappa^{2}}dt^{2}+\frac{1}{P\tilde{P}}d\theta^{2}+PK^{2}\sin^{2}\theta d\varphi^{2},\qquad\tilde{P}(\theta)=1-\alpha^{2}P(\theta)\sin^{2}\theta, \tag{3.10}\]

and the non-zero components of the energy-momentum tensor are (now using the coordinate \(x\) defined in (3.4)):

\[T_{t}^{t}=\frac{\left\{\alpha m-2(\Xi-1)x\right\}\left\{3\alpha^{2}\left[x^{2}-1\right]\left[x(2\alpha m-\Xi x+x)-1\right]-2\right\}}{8\pi G\alpha}, \tag{3.11a}\]

\[T_{x}^{x}=\frac{\alpha m-2(\Xi-1)x}{8\pi G\alpha}, \tag{3.11b}\]

\[T_{\varphi}^{\varphi}=-\frac{\left\{\alpha m-2(\Xi-1)x\right\}\left\{3\alpha^{2}\left[x^{2}-1\right]\left[x(2\alpha m-\Xi x+x)-1\right]-1\right\}}{8\pi G\alpha}. \tag{3.11c}\]

These formulae are an extension of [39] which now includes the magnetic charge parameter \(g\); they can be obtained via the simple exchange of \(e^{2}\to e^{2}+g^{2}\) in equation (100) of that work. With all of the important boundary quantities defined, we note that a number of Ward identities are satisfied due to the bulk equations of motion. These take the form of conservation identities related to boundary diffeomorphisms and \(U(1)\) gauge transformations respectively:

\[\nabla_{i}^{(0)}T_{j}^{i}=-j^{i}F_{ij}^{(0)}=\frac{1-\Xi}{4\pi\alpha G}\delta_{j}^{x}, \tag{3.12}\]

\[\nabla_{i}^{(0)}j^{i}=0, \tag{3.13}\]

where \(\nabla^{(0)}\) is the Levi-Civita connection associated with (3.10) and all indices are understood to be moved with \(g_{(0)}\). There is also a trace identity

\[T_{i}^{i}=\mathscr{A}=0, \tag{3.14}\]

where the right hand side of the above equation vanishes due to the vanishing of the trace anomaly \(\mathscr{A}\) in four bulk dimensions [19].
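The components listed above can be checked against the trace identity (3.14); the following SymPy sketch (our own verification, not part of the original text) confirms that they sum to zero:

```python
import sympy as sp

x, alpha, m, Xi, G = sp.symbols('x alpha m Xi G')

pref = (alpha*m - 2*(Xi - 1)*x) / (8*sp.pi*G*alpha)
blob = 3*alpha**2*(x**2 - 1)*(x*(2*alpha*m - Xi*x + x) - 1)

T_t_t = pref * (blob - 2)       # (3.11a)
T_x_x = pref                    # (3.11b)
T_ph_ph = -pref * (blob - 1)    # (3.11c)

print(sp.simplify(T_t_t + T_x_x + T_ph_ph))  # 0: the trace identity (3.14)
```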
### Boundary Cotton tensor

The boundary conformal class \([g_{(0)}]\) determines (in part) the asymptotic classification of the spacetime. In particular, we will follow [23; 24] in classifying a spacetime as _asymptotically AdS_ if \(g_{(0)}\) is conformally flat and \(\mathscr{I}\cong\mathbb{R}\times S^{2}\). If either of these conditions fails to hold then the spacetime will be _asymptotically locally AdS_ (AlAdS). Restricting consideration to the case of \(m\alpha\neq 0\)3, the family of solutions (2.2) is AlAdS as it fails both of the criteria listed above. Firstly, we note that the presence of cosmic strings stretching from \(\mathcal{H}\) to \(\mathscr{I}\) gives a boundary topology of \(\mathscr{I}\cong\mathbb{R}\times\Sigma_{\infty}\), where \(\Sigma_{\infty}\) is a surface with one or two conical deficits due to the strings piercing the poles, and thus the topological condition is not satisfied. We will see that this plays a role in the construction of the conserved charges of these solutions in Section 4.

Footnote 3: The \(\alpha=0\) solutions are asymptotically AdS.

More important for our current analysis is the failure of the boundary to be conformally flat. The conformal invariant we will use is the Cotton tensor of \(g_{(0)}\), defined as \[C_{(0)}^{ij}=\varepsilon_{(0)}^{ikl}\nabla_{k}^{(0)}\left(R^{(0)j}_{\phantom{(0)}l}-\frac{1}{4}\delta_{l}^{j}R^{(0)}\right), \tag{3.15}\] where \(\boldsymbol{\varepsilon}_{(0)}\) is the volume form for \(g_{(0)}\), oriented as \(\varepsilon^{(0)}_{t\theta\varphi}=\sqrt{-g_{(0)}}\). The Cotton tensor is symmetric and vanishes for any conformally flat 3-metric. Using the representative (3.10), we explicitly compute \[C^{t\varphi}_{(0)}=C^{\varphi t}_{(0)}=\frac{6\kappa(\Xi-1)x-3\alpha\kappa m}{K}, \tag{3.16}\] which in particular is non-zero, demonstrating that the conformal boundary is not conformally flat and thus providing another criterion for this spacetime to be AlAdS. We conclude this subsection by noting that the tensor density \(\sqrt{-g_{(0)}}C_{(0)\ j}^{\ \ \ i}\) is invariant under local conformal transformations, the non-trivial components of which are: \[\sqrt{-g_{(0)}}C_{(0)\ \varphi}^{\ \ \ \ t}=3K^{2}\left(1-x^{2}\right)^{3/2}\left[\alpha m-2(\Xi-1)x\right]\left[x(2\alpha m-\Xi x+x)-1\right], \tag{3.17}\] with the remaining non-trivial component \(\sqrt{-g_{(0)}}C_{(0)\ t}^{\ \ \ \ \varphi}\) taking a similar form. Demanding that these conformally invariant densities do not vary then restricts the allowed variations of the parameters; in particular, one finds \[\delta\Xi=0. \tag{3.27}\]

Despite this seeming like a very strong restriction upon the space of parameters, we note that \(\delta\Xi=0\) does not entirely fix the electric and magnetic parameters \(e\) and \(g\). Instead it allows for a circle on the phase space \[e^{2}+g^{2}=c^{2}, \tag{3.28}\] where \(c\) is a phase space constant. This is to be expected as \(e,g\) only enter the metric via \(\Xi\), so analysis of the metric will not put any constraints upon them individually. We will return to analyse the variations of the gauge parameters in the next section.

### Variational problem

We analyse the variational problem for the family of spacetimes (2.2), an issue which will be crucial in determining the class of variations which are allowed to enter into the first law of accelerating black hole thermodynamics. We begin by noting that the bulk action (2.1) must first be supplemented by a boundary action consisting of the Gibbons-Hawking-York boundary term as well as the usual holographic counterterms [20; 21; 22; 23]. We present this action at a regulated boundary \(z=\epsilon>0\): \[S_{\rm bdy}=S_{\rm GHY}+S_{\rm ct}=\frac{1}{16\pi G}\int_{z=\epsilon}d^{3}x\,\sqrt{-h}(2\mathcal{K}-4-\mathcal{R}), \tag{3.29}\] where \(h_{ij}\) is the induced metric on the hypersurface \(z=\epsilon\), \(\mathcal{K}\) is its trace extrinsic curvature when embedded in the bulk spacetime and \(\mathcal{R}\) is the Ricci scalar of \(h\).
The total action is thus \[S=S_{\rm bulk}+S_{\rm bdy} \tag{3.30}\] and we define the renormalised action as \[S_{\rm ren}=\lim_{\epsilon\to 0}S. \tag{3.31}\] The variational problem is well-posed when variations of the renormalised action vanish iff the equations of motion are satisfied. The general formula for the variation of the renormalised on-shell action is [18] \[\delta S_{\rm ren}\approx\int_{\mathscr{I}}d^{3}x\sqrt{-g_{(0)}}\,\left(\frac {1}{2}T^{ij}\delta g_{ij}^{(0)}+j^{i}\delta A_{i}^{(0)}\right) \tag{3.32}\] and thus the variational problem is well-posed when the right hand term above vanishes. The typical way of ensuring this is to select Dirichlet boundary conditions [18], i.e. to demand that the variations satisfy \[\delta g_{ij}^{(0)}\propto g_{ij}^{(0)},\qquad\delta A_{i}^{(0)}=0, \tag{3.33}\] which clearly makes the variational problem well-posed due to the tracelessness condition (3.14). We will now show explicitly that these boundary conditions cannot be satisfied non-trivially for the solutions (2.2) when we treat the parameters \(m,e,g,\alpha,K,\kappa\) as changing under variation. We start with the metric boundary condition in (3.33) which we have already built up to in the analysis of the Cotton tensor (3.16) in the previous subsection. Due to the fact that \(\sqrt{-g_{(0)}}C_{(0)\ j}^{\ \ \ i}\) is invariant under local conformal transformations and varies under changes in the conformal class, we have the following relationship amongst the variations: \[\delta g_{ij}^{(0)}\propto g_{ij}^{(0)}\iff\delta\left(\sqrt{-g_{(0)}}C_{(0)\ j}^{\ \ \ i}\right)=0, \tag{3.34}\] where we have already determined that space of variations in the right hand set above are those given in equations (3.27) and (3.28). Moving on to the gauge field, we start by using (3.3) to write the constraint of [18] as \[0=\delta\mathbf{A}_{(0)}=\delta\left(-\cos\theta\left[\frac{\alpha}{\kappa}edt +gKd\varphi\right]\right), \tag{3.35}\] resulting in \[\delta e=\delta g=0, \tag{3.36}\] which we note is a much stronger constraint than the one imposed for the solutions with \(\alpha=g=0\) (such as those considered in [18]), where the boundary condition (3.35) is satisfied trivially. This analysis demonstrates that if one wants to apply Dirichlet boundary conditions (3.33) then the only allowed perturbation is the trivial one \[\delta m=\delta\alpha=\delta K=\delta e=\delta g=\delta\kappa=0. \tag{3.37}\] At first sight this appears to be a troubling result. It means that studying the black hole first law for the class of accelerating solutions (2.2) via the approach of [18] is not possible. Indeed, this tension was remarked upon in [40] due to the variations in that work changing the boundary conformal class \([g_{(0)}]\). It is clear that if we wish to apply techniques along the line of [18] then we need to consider more general boundary conditions than the Dirichlet conditions (3.33). In order to resolve these tensions, we instead follow [55] in considering the most general solutions to the variational problem i.e. we look to solve \[\delta S_{\text{ren}}\approx\int_{\mathscr{I}}d^{3}x\sqrt{-g_{(0)}}\,\left( \frac{1}{2}T^{ij}\delta g_{ij}^{(0)}+j^{i}\delta A_{i}^{(0)}\right)=0, \tag{3.38}\] without applying any specific boundary conditions upon the metric and gauge field. 
The analysis of the \(U(1)\) term goes through very straightforwardly due to the gauge choice (2.3): applying equations (3.3) and (3.7), we obtain \[\sqrt{-g_{(0)}}j^{i}\delta A_{i}^{(0)}=-\frac{\sin 2\theta}{8\pi G}\frac{K}{ \kappa}\left[e\kappa\delta\left(\frac{\alpha e}{\kappa}\right)+g\alpha\delta (gK)\right], \tag{3.39}\] which vanishes under the integration performed in (3.38), reducing that equation to a purely gravitational problem, i.e. we only need to solve \[\delta S_{\text{ren}}\approx\frac{1}{2}\int_{\mathscr{I}}d^{3}x\sqrt{-g_{(0)} }\,T^{ij}\delta g_{ij}^{(0)}=0. \tag{3.40}\] Using (3.10) and (3.11) and performing the \(\theta\) (equivalently \(x\)) integral we reach \[-2K\alpha\kappa\Xi\delta\alpha+2K(\alpha^{2}\Xi-1)\delta\kappa+\kappa(2\alpha^ {2}\Xi-1)\delta K=0, \tag{3.41}\] which is the master formula constraining the variations of the parameters in order to have a well-posed variational problem. We will use this relation in order to derive the first law. ### Comment on the time scaling parameter The master equation for well-posedness (3.41) involves the "time scaling" parameter \(\kappa\), which has proven to be one of the main mysteries of the first law of accelerating black hole thermodynamics in previous work [35; 36; 37; 38; 39; 40]. In all of these papers, it was argued that \(\kappa\) had to take a particular form in order to arrive at the correct form of the first law, namely \[\kappa=\sqrt{\Xi(1-\alpha^{2}\Xi)}. \tag{3.42}\] With our new master formula (3.41) we can investigate this choice of \(\kappa\) more carefully. By applying the above formula for \(\kappa\) we reach \[\delta(\Xi K)=0, \tag{3.43}\] a constraint which can be interpreted as fixing the overall deficit in the spacetime (2.17). It is interesting to note that this is actually a smaller set of constraints than those postulated in [39; 40], where both tensions \(\mu_{\pm}\) were chosen to be fixed, rather than just their sum. More importantly, the fact that we now have the formula (3.41) illustrates that \(\kappa\) as given in (3.42) is just one possible consistent choice rather than the value that one is forced to take. The general philosophy (and technical tool) is simply that one needs to consider a space of variations amongst the parameters for which the variational problem (3.38) is well-posed. This new approach to the accelerating solutions allows us to study thermodynamics for (many) alternative choices of \(\kappa\). In proceeding, we will work completely generically, i.e. we will not fix \(\kappa\) to the form of (3.42) but rather we will merely require that the master constraint equation (3.41) holds. We will now construct the conserved charges and derive a consistent first law, highlighting the importance of well-posedness along the way. ## 4 Charges Before discussing the thermodynamics of the solution specified by (2.2) and (2.3), we first need to establish the appropriate conserved charges which will later appear in the first law. In order to define these charges we utilise the _covariant phase space_ formalism, following closely in the style of [5; 6; 7; 56; 57; 58; 59]. We will now briefly review this formalism for a generic diffeomorphism covariant Lagrangian theory in \(d\)-dimensions in order to familiarise the reader with the notation, before applying the tools to our theory of interest (2.1). ### Covariant phase space We begin by considering a variation of the Lagrangian \(d\)-form \(\mathbf{L}[\psi]\), where \(\psi\) denotes the dynamical fields of the theory. 
A variation of \(\mathbf{L}\) takes the generic form \[\delta\mathbf{L}=\mathbf{E}[\psi]\delta\psi+d\mathbf{\Theta}[\psi;\delta\psi], \tag{4.1}\] where \(\mathbf{E}\) is the equation of motion \(d\)-form (\(\mathbf{E}\approx 0\)) and \(\mathbf{\Theta}\) is a \((d-1)\)-spacetime form called the _symplectic potential_.4 We note that \(\mathbf{\Theta}\) is also a \(1\)-form on the phase space. Using \(\mathbf{\Theta}\), one can construct the _symplectic current_ \[\boldsymbol{\omega}[\psi;\delta_{1}\psi,\delta_{2}\psi]=\delta_{2}\mathbf{\Theta} [\psi;\delta_{1}\psi]-\delta_{1}\mathbf{\Theta}[\psi;\delta_{2}\psi], \tag{4.2}\] a \((d-1)\)-form on spacetime and a 2-form on phase space. Integrating the symplectic current over a partial Cauchy slice \(C\) defines the _symplectic form_ \[\Omega_{C}(\psi;\delta_{1}\psi,\delta_{2}\psi)=\int_{C}\boldsymbol{\omega}[ \psi;\delta_{1}\psi,\delta_{2}\psi], \tag{4.3}\] a spacetime scalar and a phase space 2-form. In order to construct the charges, we will often be interested in the case when one of the variations is generated by a Killing vector field \(\xi\) (\(\delta_{\xi}=\mathcal{L}_{\xi}\)) or a \(U(1)\) gauge transformation \(f\). For these cases we are able to define the _Wald Hamiltonians_ corresponding to these transformations via \[\delta H_{\xi}=\Omega_{C}(\psi;\delta\psi,\mathcal{L}_{\xi}\psi)=\int_{C} \boldsymbol{\omega}[\psi;\delta\psi,\mathcal{L}_{\xi}\psi] \tag{4.4}\] and \[\delta H_{f}=\Omega_{C}(\psi;\delta\psi,\delta_{f}\psi)=\int_{C}\boldsymbol{ \omega}[\psi;\delta\psi,\delta_{f}\psi]. \tag{4.5}\] It remains to be seen that these Wald Hamiltonians are boundary quantities. We show this first for \(H_{\xi}\) by defining the Noether current \((d-1)\)-form \[\mathbf{J}[\xi]=\mathbf{\Theta}[\psi;\mathcal{L}_{\xi}\psi]-i_{\xi}\mathbf{L}, \tag{4.6}\] an object which is locally exact on-shell i.e. when \(\mathbf{E}=0\), we have \[\mathbf{J}[\xi]=d\mathbf{Q}[\xi], \tag{4.7}\] where \(\mathbf{Q}\) is the _Noether charge_\((d-2)\)-form. Restricting to the case when both the equations of motion \(\mathbf{E}=0\) and the linearised equations of motion \(\delta\mathbf{E}=0\) are satisfied, one is able to show that [7; 18] \[\boldsymbol{\omega}[\psi;\delta\psi,\mathcal{L}_{\xi}\psi]=d\left(\delta \mathbf{Q}[\xi]-i_{\xi}\mathbf{\Theta}[\psi;\delta\psi]\right) \tag{4.8}\] and thus \[\delta H_{\xi}=\Omega_{C}(\psi;\delta\psi,\mathcal{L}_{\xi}\psi)=\int_{C} \boldsymbol{\omega}[\psi;\delta\psi,\mathcal{L}_{\xi}\psi]=\int_{\partial C_{ \infty}}\delta\mathbf{Q}[\xi]-i_{\xi}\mathbf{\Theta}[\psi;\delta\psi], \tag{4.9}\] where \(\partial C_{\infty}\) is the intersection of \(C\) with the conformal boundary. For the \(U(1)\) transformation, the analysis is even simpler in that the Noether current takes the form \[\mathbf{J}[f]=\mathbf{\Theta}[\psi;\delta_{f}\psi]\approx d\mathbf{Q}[f]. \tag{4.10}\] By gauge invariance of the symplectic potential we have \[\boldsymbol{\omega}[\psi;\delta\psi,\delta_{f}\psi]=\delta\mathbf{\Theta}[ \psi;\delta_{f}\psi]\approx d\delta\mathbf{Q}[f], \tag{4.11}\] and thus the Wald Hamiltonian is \[\delta H_{f}=\Omega_{C}(\psi;\delta\psi,\delta_{f}\psi)=\int_{C}\boldsymbol{ \omega}[\psi;\delta\psi,\delta_{f}\psi]=\int_{\partial C_{\infty}}\delta \mathbf{Q}[f]. \tag{4.12}\] This can be immediately integrated on phase space to give \[H_{f}=\int_{\partial C_{\infty}}\mathbf{Q}[f]. 
\tag{4.13}\]

### Corner improvement

With the general terminology of the covariant phase space now introduced, we now study Einstein-Maxwell theory (2.1) using these techniques. We return to working explicitly in \(d=4\) and note that the dynamical fields of this theory amount to \[\psi=\{g_{\mu\nu},A_{\rho}\}. \tag{4.14}\] The variations of the metric under the diffeomorphism and \(U(1)\) transformations are \[\delta_{\xi}g_{\mu\nu}=\mathcal{L}_{\xi}g_{\mu\nu},\qquad\delta_{f}g_{\mu\nu}=0, \tag{4.15}\] and the variations of the gauge field are \[\delta_{\xi}A_{\mu}=\mathcal{L}_{\xi}A_{\mu},\qquad\delta_{f}A_{\mu}=\partial_{\mu}f, \tag{4.16}\] where we note that we have taken the diffeomorphism to also act on the gauge field with the Lie derivative. This is in order to preserve our gauge choice (2.3), which will turn out to be a particularly convenient choice to analyse the charges and thermodynamics of the solution. One can instead work in an entirely gauge-independent manner [60; 61; 62; 63], in which case the diffeomorphism transformation above includes an additional \(\xi\)-dependent gauge transformation: \(\delta_{\xi}A_{\mu}=\mathcal{L}_{\xi}A_{\mu}+d\chi_{\xi}\). When restricting to \(\xi\) as a Killing vector, our gauge choice (2.3) is such that \(\mathcal{L}_{\xi}A_{\mu}=0\) and thus the only allowed gauge transformations would be those with \(d\chi_{\xi}=a_{1}dt+a_{2}d\varphi\), with \(a_{1,2}\) constants. These transformations have no effect on the laws of thermodynamics [40] and thus we will use the simpler transformation formulae above. We are almost at the stage of being able to compute the conserved quantities which will appear in the first law. There is one final subtlety for the accelerating solutions (2.2), (2.3) in that the cross-sections of the conformal boundary \(\partial C_{\infty}=\Sigma_{\infty}\) themselves have boundary: \(\partial\Sigma_{\infty}=S_{-}^{1}\sqcup S_{+}^{1}\), where \(S_{\pm}^{1}\) are the small circles around the cosmic strings at \(\theta_{\pm}\) respectively. The inclusion of these boundaries will mean that equation (4.9) needs to be supplemented by a _corner improvement_ in order to give the correct charges. The form of this corner term can be discerned from the holographic counterterms (3.29). The counterterms are not only responsible for renormalising the action but also the symplectic potential [18]. The full expression for the renormalised symplectic potential, including the corner terms, is given in equation (13) of [64] and in our notation reads \[\boldsymbol{\Theta}_{\text{ren}}[\psi;\delta\psi]\equiv\boldsymbol{\Theta}[\psi;\delta\psi]-\delta\mathbf{L}_{\text{GHY}}[h_{ij}]-\delta\mathbf{L}_{\text{ct}}[h_{ij}]+d\boldsymbol{\Theta}_{\text{ct}}[h_{ij};\delta h_{ij}], \tag{4.17}\] where \(\mathbf{L}_{\text{ct}}\) is the counterterm Lagrangian and \(\boldsymbol{\Theta}_{\text{ct}}\) is the symplectic potential arising from the variation of the counterterm action, defined via \[\delta\mathbf{L}_{\text{ct}}[h_{ij}]=\mathbf{E}_{(3)}^{kl}[h_{ij}]\delta h_{kl}+d\boldsymbol{\Theta}_{\text{ct}}[h_{ij};\delta h_{ij}], \tag{4.18}\] where \(\mathbf{E}_{(3)}^{ij}\) are the "equations of motion" for the boundary metric \(h_{ij}\), explicitly not satisfied as we do not impose dynamical gravity on the boundary. The adjustments to \(\boldsymbol{\Theta}\) in (4.17) can be seen as utilising all of the inherent ambiguities in the construction of the symplectic potential [7].
The \(\delta\mathbf{L}_{\text{bdy}}=\delta\mathbf{L}_{\text{GHY}}+\delta\mathbf{L}_{\text{ct}}\) term does not alter the symplectic current (4.2) and as such it will not affect either the symplectic form (4.3) or the Wald Hamiltonians (4.4), (4.5). We can safely ignore the contribution from this term in our analysis. The \(d\mathbf{\Theta}_{\rm ct}\) term will be important. In order to see this, we recall that this \(d\)-exact term shifts the Noether charge form [7] as \[\mathbf{\mathsf{Q}}_{\rm ren}=\mathbf{\mathsf{Q}}+\mathbf{\Theta}_{\rm ct}[h_{ij};\mathcal{L}_{\xi}h_{ij}] \tag{4.19}\] and thus the renormalised diffeomorphism Hamiltonian is \[\begin{split}\delta H^{\rm ren}_{\xi}&=\delta H_{\xi}+\int_{\Sigma_{\infty}}\delta\mathbf{\Theta}_{\rm ct}[h_{ij};\mathcal{L}_{\xi}h_{ij}]-\mathcal{L}_{\xi}\mathbf{\Theta}_{\rm ct}[h_{ij};\delta h_{ij}]+di_{\xi}\mathbf{\Theta}_{\rm ct}[h_{ij};\delta h_{ij}]\\ &=\delta H_{\xi}+\int_{\Sigma_{\infty}}\mathbf{\omega}_{\rm ct}[h_{ij};\delta h_{ij},\mathcal{L}_{\xi}h_{ij}]+di_{\xi}\mathbf{\Theta}_{\rm ct}[h_{ij};\delta h_{ij}],\end{split} \tag{4.20}\] where we used the form expression for the Lie derivative \(\mathcal{L}_{\xi}=i_{\xi}d+di_{\xi}\) and introduced the counterterm symplectic current \(\mathbf{\omega}_{\rm ct}\) in the second line. If we restrict consideration to the case of \(\xi\) being an asymptotic Killing vector [18], then the vector will preserve the conformal class \([g_{(0)}]\) and thus \[\mathbf{\omega}_{\rm ct}[h_{ij};\delta h_{ij},\mathcal{L}_{\xi}h_{ij}]\big{|}_{\Sigma_{\infty}}=0, \tag{4.21}\] which allows us to write our final formula for the renormalised Hamiltonian as \[\delta H^{\rm ren}_{\xi}=\delta H_{\xi}+\int_{\partial\Sigma_{\infty}}i_{\xi}\mathbf{\Theta}_{\rm ct}[h_{ij};\delta h_{ij}], \tag{4.22}\] a formula which can be viewed as the extension of those in [18; 64] to encompass spacetimes where cross-sections of \(\mathscr{I}\) have non-vanishing boundary. The new term acts as a counterterm at \(\mathcal{O}(1/\rho)\) in the coordinates (3.8) and plays a similar role to the counterterm required to define charges in NUT charged spacetimes [65]. This is perhaps expected as in that case the presence of _Misner strings_ results in singularities in much the same way the cosmic strings do in our setup. We will see further similarities between these cases when discussing the first law of thermodynamics. We conclude this section by noting that due to the transformation properties of the metric (4.15) we have \(\delta_{f}h_{ij}=0\) and thus the \(U(1)\) Hamiltonian is invariant, i.e. \[H^{\rm ren}_{f}=H_{f}. \tag{4.23}\] We will now apply formulae (4.22) and (4.23) to compute the charges for our accelerating solution (2.2)-(2.3).

### Mass

The first charge we compute is the mass charge \(\mathcal{M}\), given by \[\mathcal{M}=H^{\rm ren}_{\xi}, \tag{4.24}\] where we take the timelike Killing vector to be \[\xi=\partial_{t}. \tag{4.25}\] Note that this is the same normalisation as [38; 39; 40], where it was argued that the normalisation was crucial in arriving at the correct law of thermodynamics. The key difference between our approaches is that we use the well-posedness master equation (3.41) as our guiding principle, and thus we will arrive at the correct first law for _any parameter independent_ normalisation, as long as equation (3.41) is satisfied. If the normalisation depends upon the parameters we no longer have \(\delta\xi=0\), which is an important assumption in [5].
Allowing for "field dependent" symmetries in the formalism is a topic of some study, (see e.g. related formulae in [66; 67]) but we do not consider such cases here. We will be explicit in constructing the mass charge as it will provide an important illustration of the corner improvement present in our formula (4.22). We begin by noting that one could completely bypass this discussion of covariant phase space/Wald Hamiltonians and simply use the _holographic mass_ formulae of [18; 68] \[\mathcal{M}_{\rm hol}=-\int_{\Sigma_{\infty}}d^{2}x\,\sqrt{-g_{(0)}}\left(T_{ i}^{t}+j^{t}A_{i}^{(0)}\right)\xi^{i}, \tag{4.26}\] which was the approach taken in [38; 39; 40]. Putting our vector (4.25) into the above formula and using the gauge choice (3.3) we find the holographic mass to be \[\mathcal{M}_{\rm hol}=-\int_{\Sigma_{\infty}}d^{2}x\,\sqrt{-g_{(0)}}T_{t}^{t} =\frac{Km(1-\alpha^{2}\Xi)}{G\kappa}. \tag{4.27}\] It was shown in [18] that the holographic mass was equivalent to the Wald Hamiltonian \(H_{\xi}\) for spacetimes without conical deficit. Here we extend this proof to include those which do. We start by computing the contribution to (4.22) from the first term, i.e. the "bare" Wald Hamiltonian \[\delta H_{\xi}=\int_{\Sigma_{\infty}}\delta\mathbf{Q}[\xi]-i_{\xi}\mathbf{ \Theta}[\psi;\delta\psi], \tag{4.28}\] where \[\mathbf{Q}=\mathbf{Q}_{\rm EH}+\mathbf{Q}_{\rm M},\qquad\mathbf{\Theta}= \mathbf{\Theta}_{\rm EH}+\mathbf{\Theta}_{\rm M} \tag{4.29}\] for the respective Einstein-Hilbert and Maxwell contributions to the bulk action (2.1). The formulae for the symplectic quantities for these theories are well-known, see for example [7; 18; 67] and read as follows5 Footnote 5: The convention for the volume form is \[\varepsilon_{tr\theta\varphi}=\sqrt{-g}=\frac{r^{2}\sin\theta}{H^{4}\kappa}K.\] \[\mathbf{Q}_{\mathrm{EH}} =\frac{1}{16\pi G}\cdot\frac{1}{2!}\varepsilon_{\mu\nu\rho\sigma} \nabla^{\nu}\xi^{\mu}dx^{\rho}\wedge dx^{\sigma}, \tag{4.30}\] \[\mathbf{\Theta}_{\mathrm{EH}} =\frac{1}{16\pi G}\cdot\frac{1}{3!}\varepsilon_{\mu\nu\rho\sigma }\left(\nabla^{\mu}(g_{\alpha\beta}\delta g^{\alpha\beta})-\nabla_{\alpha} \delta g^{\mu\alpha}\right)dx^{\nu}\wedge dx^{\rho}\wedge dx^{\sigma},\] (4.31) \[\mathbf{Q}_{\mathrm{M}} =-\frac{1}{4\pi G}(i_{\xi}\mathbf{A})*\mathbf{F}=-\frac{1}{4\pi G }\cdot\frac{1}{2!}\xi^{\mu}A_{\mu}(*\mathbf{F})_{\rho\sigma}dx^{\rho}\wedge dx ^{\sigma},\] (4.32) \[\mathbf{\Theta}_{\mathrm{M}} =-\frac{1}{4\pi G}\delta\mathbf{A}\wedge*\mathbf{F}=-\frac{1}{4 \pi G}\cdot\frac{1}{2!}\delta A_{\nu}(*\mathbf{F})_{\rho\sigma}dx^{\nu}\wedge dx ^{\rho}\wedge dx^{\sigma}, \tag{4.33}\] which agree with the formulae of [67] up to our differing normalisations of the gauge fields. We can quickly see that all of the Maxwell terms in the charge integral (4.28) drop out due to the gauge choice (3.3) (each azimuthal integral is of the form \(\int_{-1}^{1}x\,dx=0\)) and so the mass charge becomes a purely gravitational issue.6 After working through all of the algebra one finds the bare Hamiltonian contributes Footnote 6: Even if we used a different gauge for \(A_{\mu}\), the analysis would go through in exactly the same manner as [18] and the Maxwell contribution to the Wald Hamiltonian would match the contribution to the holographic mass. 
\[\delta H_{\xi}=\lim_{\epsilon\to 0}\int_{\Sigma}\delta\mathbf{Q}-i_{\xi} \mathbf{\Theta}=\lim_{\epsilon\to 0}\left[\frac{1}{\epsilon\kappa}\delta( \mu_{+}+\mu_{-})+\frac{1}{2\kappa}(3m\delta K+2K\delta m)\right], \tag{4.34}\] where we regulate using the inverse radial coordinate defined in (3.2) as \(z=\epsilon>0\). We see that this term is clearly divergent due to the presence of the \(\mathcal{O}(1/\epsilon)\) term in the asymptotic expansion on the right hand side and will need to be supplemented by the corner term in (4.22). This term is constructed from the counterterm Lagrangian \[\mathbf{L}_{\mathrm{ct}}=-\frac{1}{16\pi G}\sqrt{-h}(4+\mathcal{R})dt\wedge d \theta\wedge d\varphi \tag{4.35}\] and so the corner symplectic potential is just that of three dimensional Einstein gravity, up to an overall minus sign. The only formula we need is thus \[\mathbf{\Theta}_{\mathrm{ct}}=\frac{1}{16\pi G}\cdot\frac{1}{2!}\varepsilon_ {ijk}\left(D_{l}\delta h^{li}-D^{i}(h_{lm}\delta h^{lm})\right)dx^{j}\wedge dx ^{k}, \tag{4.36}\] where \(\varepsilon_{t\theta\phi}=\sqrt{-h}\), \(D\) is the Levi-Civita connection associated with \(h\) and all indices are understood to be moved with \(h\). This will be sufficient to compute the corner improvement in (4.22). Explicitly, we have \[\begin{split}\lim_{\epsilon\to 0}\int_{\partial\Sigma}i_{\xi} \mathbf{\Theta}_{\mathrm{ct}}&=\lim_{\epsilon\to 0}\left(\int_{S_{+}^{1}}i_{ \xi}\mathbf{\Theta}_{\mathrm{ct}}-\int_{S_{-}^{1}}i_{\xi}\mathbf{\Theta}_{ \mathrm{ct}}\right)\\ &=\lim_{\epsilon\to 0}\left\{-\frac{1}{\epsilon\kappa} \delta(\mu_{+}+\mu_{-})-\frac{\alpha}{\kappa}\left[\delta(\Xi Km\alpha)+\Xi \alpha m\delta K\right]\right\},\end{split} \tag{4.37}\] where we can immediately see that this term acts as an \(\mathcal{O}(1/\epsilon)\) correction to the bare charge formula (4.34). The finite term is a little less obvious but after some algebraic manipulation we can show that the fully renormalised Hamiltonian is \[\delta H^{\rm ren}_{\xi}=\delta\mathcal{M}_{\rm hol}+\frac{m}{2\kappa^{2}}\left\{ 2K\alpha\kappa\Xi\delta\alpha-2K(\alpha^{2}\Xi-1)\delta\kappa-\kappa(2\alpha^ {2}\Xi-1)\delta K\right\}=\delta\mathcal{M}_{\rm hol}, \tag{4.38}\] where in the second equality we used the well-posedness constraint (3.41). This can trivially be integrated in phase space to prove that \[H^{\rm ren}_{\xi}=\mathcal{M}_{\rm hol}=\mathcal{M}, \tag{4.39}\] concluding the proof of equivalence between the holographic mass formula and Wald Hamiltonian for the solution (2.2)-(2.3). ### Electric charge The next charge we will need to define is the electric charge, which can be done in a straightforward manner using (4.13), taking \(f\) to be a constant. We follow [18] in computing the \(U(1)\) Noether charge form as \[\mathbf{Q}[f]=-\frac{1}{4\pi G}f*\mathbf{F} \tag{4.40}\] and the electric charge is defined as \[Q_{e}=H_{-1}=\int_{\Sigma_{\infty}}\mathbf{Q}[-1]=\frac{1}{4\pi G}\int_{ \Sigma_{\infty}}*\mathbf{F}=\int_{\Sigma_{\infty}}d^{2}x\sqrt{-g_{(0)}}j^{t}, \tag{4.41}\] where we assume w.l.o.g that \(f\to-1\) on \(\mathscr{I}\). This picks the opposite sign convention to [18] but will give the same value for \(Q_{e}\) as we use a volume element with the opposite sign to that work. Our result matches equation (3.24) of [40] and thus completes the derivation of the electric charge from the covariant phase space. 
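The algebra behind (4.38) is mechanical and lends itself to a machine check. The sketch below (sympy; we set \(G=1\), impose \(\delta\Xi=0\) as in (3.27), and the variation symbols are ours) confirms that the finite parts of (4.34) and (4.37) combine into \(\delta\mathcal{M}_{\rm hol}\) plus a multiple of the master constraint (3.41):

```python
import sympy as sp

m, a, K, k, Xi = sp.symbols('m alpha K kappa Xi')
dm, da, dK, dk = sp.symbols('delta_m delta_alpha delta_K delta_kappa')

def var(expr):
    """First variation in (m, alpha, K, kappa); delta(Xi) = 0 as in (3.27)."""
    return (sp.diff(expr, m)*dm + sp.diff(expr, a)*da
            + sp.diff(expr, K)*dK + sp.diff(expr, k)*dk)

# Finite part of the bare Hamiltonian (4.34) and of the corner term (4.37)
bare   = (3*m*dK + 2*K*dm) / (2*k)
corner = -(a/k) * (var(Xi*K*m*a) + Xi*a*m*dK)

# Holographic mass (4.27) with G = 1, plus the extra term displayed in (4.38)
M_hol = K*m*(1 - a**2*Xi) / k
extra = (m/(2*k**2)) * (2*K*a*k*Xi*da - 2*K*(a**2*Xi - 1)*dk
                        - k*(2*a**2*Xi - 1)*dK)

print(sp.simplify(bare + corner - (var(M_hol) + extra)))  # -> 0
```

The printed zero is exactly the statement of (4.38): once the master constraint (3.41) kills the extra term, the renormalised Hamiltonian variation is \(\delta\mathcal{M}_{\rm hol}\).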
In order to evaluate the electric charge explicitly for the solution (2.3), we compute \[\mathbf{F} =gK\sin\theta d\theta\wedge d\varphi-\frac{e}{\kappa r^{2}}dt\wedge dr, \tag{4.42}\] \[*\mathbf{F} =eK\sin\theta d\theta\wedge d\varphi+\frac{g}{\kappa r^{2}}dt\wedge dr \tag{4.43}\] and thus the electric charge is given by \[Q_{e}=\frac{1}{4\pi G}\int_{0}^{\pi}d\theta\int_{0}^{2\pi}d\varphi\,eK\sin\theta=\frac{eK}{G}. \tag{4.44}\] Looking ahead to the first law, we also note that the electric charge can be defined as an integral over a cross-section of the horizon \(\Sigma_{\mathcal{H}}\) (as opposed to a cross-section of the conformal boundary \(\Sigma_{\infty}\)), which we can take w.l.o.g. to be the bifurcation 2-surface [5; 69]. To see this, we recall Maxwell's equations \[\mathbf{E}_{\rm M}=d*\mathbf{F} \tag{4.45}\] and consider now a constant time slice \(C\) which stretches between the black hole horizon and the conformal boundary. When on-shell we have \[0\approx\int_{C}d*\mathbf{F}=\int_{\partial C}*\mathbf{F}=\int_{\Sigma_{\infty}}*\mathbf{F}-\int_{\Sigma_{\mathcal{H}}}*\mathbf{F}+\int_{\mathcal{S}_{-}}*\mathbf{F}-\int_{\mathcal{S}_{+}}*\mathbf{F}, \tag{4.46}\] by Stokes' theorem. Using (4.43) we see that \((*\mathbf{F})_{r\varphi}=0\) and thus the string terms do not contribute. This allows us to write the electric charge as an integral over the horizon \[Q_{e}=\frac{1}{4\pi G}\int_{\Sigma_{\infty}}*\mathbf{F}=\frac{1}{4\pi G}\int_{\Sigma_{\mathcal{H}}}*\mathbf{F}, \tag{4.47}\] a fact which will be crucial in our derivation of the first law.

### Magnetic charge

Our final conserved charge is the magnetic charge \(Q_{m}\), an object which seems somewhat difficult to define using the covariant phase space approach: we have already assigned conserved charges to both the time translation Killing vector \(\partial_{t}\) and the constant \(U(1)\) gauge transformation, so it seems like there is nothing left to produce additional charges! (The axial Killing field \(\partial_{\varphi}\) is associated to angular momentum, which vanishes trivially in the case we consider.) As it turns out, one can define the magnetic charge using the covariant phase space formalism by adding a _topological term_ to the action, the details of which we provide in appendix A. This term will play no role in the first law, so we leave the detailed discussion of this term as an aside. The magnetic charge is given by \[Q_{m}=\frac{1}{4\pi G}\int_{\Sigma_{\infty}}\mathbf{F}=\frac{gK}{G}, \tag{4.48}\] where we used (4.42) to evaluate the charge explicitly. Following a similar line of logic to the electric charge, the magnetic charge can also be written as an integral over the bifurcation surface by using the Bianchi identity for the field strength tensor \[0=\int_{C}d\mathbf{F}=\int_{\partial C}\mathbf{F}=\int_{\Sigma_{\infty}}\mathbf{F}-\int_{\Sigma_{\mathcal{H}}}\mathbf{F}+\int_{\mathcal{S}_{-}}\mathbf{F}-\int_{\mathcal{S}_{+}}\mathbf{F} \tag{4.49}\] and in much the same manner as the electric argument, the cosmic string terms do not contribute, leaving \[Q_{m}=\frac{1}{4\pi G}\int_{\Sigma_{\infty}}\mathbf{F}=\frac{1}{4\pi G}\int_{\Sigma_{\mathcal{H}}}\mathbf{F}. \tag{4.50}\]
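Both flux integrals (4.44) and (4.48) are elementary; as an illustrative check (our notation):

```python
import sympy as sp

th, ph, e, g, K, G = sp.symbols('theta varphi e g K G', positive=True)

# Angular components of F and *F, read off from (4.42)-(4.43)
F_th_ph     = g*K*sp.sin(th)  # F_{theta phi}
starF_th_ph = e*K*sp.sin(th)  # (*F)_{theta phi}

Qe = sp.integrate(sp.integrate(starF_th_ph, (th, 0, sp.pi)), (ph, 0, 2*sp.pi)) / (4*sp.pi*G)
Qm = sp.integrate(sp.integrate(F_th_ph,     (th, 0, sp.pi)), (ph, 0, 2*sp.pi)) / (4*sp.pi*G)

print(Qe, Qm)  # e*K/G and g*K/G, as in (4.44) and (4.48)
```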
## 5 First law

With all of the charges defined, we are almost ready to move on to the derivation of the first law. First we have to establish the definitions of the other important quantities in the law which are not explicit conserved charges. The first is the Bekenstein-Hawking entropy, given by the usual formula in terms of the horizon area \(\mathcal{A}\) \[S_{\mathrm{BH}}=\frac{\mathcal{A}}{4G}=\frac{1}{4G}\int_{0}^{\pi}d\theta\int_{0}^{2\pi}d\varphi\left.\sqrt{g_{\theta\theta}g_{\varphi\varphi}}\right|_{r=r_{+}}=\frac{K\pi r_{+}^{2}}{G(1-\alpha^{2}r_{+}^{2})}. \tag{5.1}\] The second quantity is the black hole temperature \(T\). We recall that this is defined as \(T=\beta^{-1}=\frac{\kappa_{\rm sg}}{2\pi}\), where the surface gravity \(\kappa_{\rm sg}\) is constructed from the horizon generator \(\xi=\partial_{t}\) via \(\kappa_{\rm sg}^{2}=-\frac{1}{2}\nabla_{\mu}\xi_{\nu}\nabla^{\mu}\xi^{\nu}\). Utilising these definitions, we find the temperature of the black hole to be \[T=\frac{Q^{\prime}(r_{+})}{4\kappa\pi r_{+}^{2}}. \tag{5.2}\] We note consistency with [5; 7] in that these objects can be constructed in terms of the gravitational part of the Noether charge form (4.30) \[\int_{\Sigma_{\mathcal{H}}}\mathbf{Q}_{\rm EH}[\xi]=TS_{\rm BH}, \tag{5.3}\] a fact which will reappear in our derivation of the first law. The next quantities we need to define are the potentials dual to the electric and magnetic charges respectively [18; 40]. In the electric case we have the electrostatic potential defined via \[\Phi_{e}\equiv\Phi_{\infty}-\Phi_{H}=-\Phi_{H}=-\left.i_{\xi}\mathbf{A}\right|_{r=r_{+}}, \tag{5.4}\] where we used \(\Phi_{\infty}=0\), which is a result of the gauge choice (2.3) as there are no \(\theta\)-independent terms in \(i_{\xi}\mathbf{A}_{(0)}\)[40]. Now applying equation (2.3) allows us to read off the potential as \[\Phi_{e}=\frac{e}{\kappa r_{+}}. \tag{5.5}\] The magnetic potential is slightly more subtle in that in [40] it was simply introduced as the electric-magnetic dual of \(\Phi_{e}\) by replacing \(e\to g\). Here we will discuss how this can be realised as a potential dual to the magnetic charge. We note that we can write the magnetic charge as \[Q_{m}=\frac{1}{4\pi G}\int_{\Sigma_{\infty}}*\mathbf{G}, \tag{5.6}\] where \(\mathbf{G}=-*\mathbf{F}\). In order to compute the magnetic potential, we first compute the dual gauge field \(\tilde{\mathbf{A}}\) that sources \(\mathbf{G}\), i.e. \(d\tilde{\mathbf{A}}=\mathbf{G}\). Using (4.43) we find \[\tilde{\mathbf{A}}=-\frac{g}{r\kappa}dt+eK\cos\theta d\varphi, \tag{5.7}\] where we note that \[\tilde{\mathbf{A}}_{(0)}=\lim_{\epsilon\to 0}\tilde{A}_{i}|_{z=\epsilon}dx^{i}=\cos\theta\left(-\frac{g\alpha}{\kappa}dt+eKd\varphi\right) \tag{5.8}\] and we have again chosen a gauge s.t. \(\int_{0}^{\pi}d\theta\tilde{A}_{i}^{(0)}=0\). Having established the magnetic gauge field, we can now define the magnetic potential as \[\Phi_{m}=\tilde{\Phi}_{\infty}-\tilde{\Phi}_{H}=-\tilde{\Phi}_{H}=-\left.i_{\xi}\tilde{\mathbf{A}}\right|_{r=r_{+}}=\frac{g}{\kappa r_{+}}, \tag{5.9}\] which we see can be obtained from \(\Phi_{e}\) under the replacement \(e\to g\).
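One can quickly confirm that (5.7) indeed sources \(\mathbf{G}=-*\mathbf{F}\) and reproduces (5.9); an illustrative sympy sketch (our symbol names):

```python
import sympy as sp

r, th, e, g, K, kappa = sp.symbols('r theta e g K kappa', positive=True)
rp = sp.symbols('r_+', positive=True)

# Dual gauge field (5.7): Atilde = -(g/(kappa r)) dt + e K cos(theta) dphi
At_t  = -g/(r*kappa)
At_ph = e*K*sp.cos(th)

# dAtilde should equal G = -*F, with *F read off from (4.43):
# G has dr ^ dt coefficient +g/(kappa r^2) and dtheta ^ dphi coefficient -e K sin(theta)
print(sp.simplify(sp.diff(At_t, r) - g/(kappa*r**2)))    # -> 0
print(sp.simplify(sp.diff(At_ph, th) + e*K*sp.sin(th)))  # -> 0

# Magnetic potential (5.9): Phi_m = -i_xi Atilde at r = r_+
print(-At_t.subs(r, rp))  # g/(kappa*r_+)
```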
### Geometric derivation

With all of the charges (\(\mathcal{M},\,Q_{e},\,Q_{m}\)) and auxiliary quantities (\(S_{\text{BH}},\,T,\,\Phi_{e},\,\Phi_{m}\)) defined, we are finally ready to derive the first law using the covariant phase space. In doing this, we follow [5; 7] by considering a spacelike slice of spacetime \(C\) which stretches from a cross-section of the black hole horizon \(\Sigma_{\mathcal{H}}\) (which can be taken without loss of generality to be the bifurcation surface [69]) out to a cross-section of conformal infinity \(\Sigma_{\infty}\). The novel aspect of this surface in the case of the accelerating solution is that the boundary of \(C\) does not merely consist of the two aforementioned surfaces but also includes the two cosmic strings responsible for the black hole acceleration! Technically speaking, we have \[\partial C=\Sigma_{\infty}-(\Sigma_{\mathcal{H}}-\mathcal{S}_{-}+\mathcal{S}_{+}), \tag{5.10}\] where the signs are due to the induced orientations on each surface. A similar boundary structure has already been considered in the study of accelerating black hole thermodynamics in asymptotically (locally) flat spacetime [67; 70] and we will see that this will also play a crucial role in the thermodynamics of the AlAdS case. The derivation of the first law follows the same logic as [5], namely in that we begin with an integral of the symplectic current over the surface \(C\) \[0=\int_{C}\mathbf{\omega}[\psi;\delta\psi,\mathcal{L}_{\xi}\psi]=\int_{\partial C}\mathbf{k}_{\xi}=\int_{\Sigma_{\infty}}\mathbf{k}_{\xi}-\int_{\Sigma_{\mathcal{H}}}\mathbf{k}_{\xi}+\int_{\mathcal{S}_{-}}\mathbf{k}_{\xi}-\int_{\mathcal{S}_{+}}\mathbf{k}_{\xi}, \tag{5.11}\] where \(\xi\) is the horizon generator given in equation (4.25) and the integral vanishes by virtue of \(\xi\) being a Killing vector. In the series of equalities on the right hand side above we have introduced \[\mathbf{k}_{\xi}=\delta\mathbf{Q}[\xi]-i_{\xi}\mathbf{\Theta}[\psi;\delta\psi]+di_{\xi}\mathbf{\Theta}_{\text{ct}}[h_{ij};\delta h_{ij}], \tag{5.12}\] which can be seen from equation (4.22), and we have chosen to add the additional \(d\)-exact term on the right hand side. We have already seen that the inclusion of this exact term gives the correct definition of the charges (4.22) and it will also be an elegant choice in explaining the role of each boundary contribution in the first law. Note however that this term is not necessary as the first law is invariant under transformations of the form \(\mathbf{k}_{\xi}\to\mathbf{k}_{\xi}+d\mathbf{B}\). This can be seen directly from (5.11) where the orientations mean that all corner contributions cancel. We will now provide the analysis of each term in (5.11) in order to derive the first law.

#### 5.1.1 Conformal boundary term

The first term we will analyse is the contribution at the conformal boundary, namely \[\int_{\Sigma_{\infty}}\mathbf{k}_{\xi}=\int_{\Sigma_{\infty}}\delta\mathbf{Q}[\xi]-i_{\xi}\mathbf{\Theta}[\psi;\delta\psi]+di_{\xi}\mathbf{\Theta}_{\text{ct}}[h_{ij};\delta h_{ij}]. \tag{5.13}\] Upon using equations (4.22) and (4.38) we immediately see \[\int_{\Sigma_{\infty}}\mathbf{k}_{\xi}=\delta H^{\text{ren}}_{\xi}=\delta\mathcal{M}, \tag{5.14}\] so the term at the conformal boundary contributes precisely the variation of the mass charge.

#### 5.1.2 Horizon term

The horizon term is \[\int_{\Sigma_{\mathcal{H}}}\mathbf{k}_{\xi}=\int_{\Sigma_{\mathcal{H}}}\delta\mathbf{Q}-i_{\xi}\mathbf{\Theta} \tag{5.15}\] and to analyse this we will use the split of the symplectic quantities into their "Einstein-Hilbert" and "Maxwell" components as in (4.29). The gravitational piece contributes \[\int_{\Sigma_{\mathcal{H}}}\mathbf{k}_{\xi}^{\text{EH}}=\int_{\Sigma_{\mathcal{H}}}\delta\mathbf{Q}_{\text{EH}}=\delta(TS_{\text{BH}}), \tag{5.16}\] where the fact that \(\xi\) vanishes at the bifurcation surface removes the \(i_{\xi}\mathbf{\Theta}_{\text{EH}}\) term and then the Wald entropy formula (5.3) ensures the right equality.
This term simplifies further in that we follow [5; 18] in choosing perturbations such that we match the horizons of the perturbed and unperturbed solutions, as well as the unit surface gravity generators of the horizons \(\tilde{\xi}=\frac{1}{\kappa_{\text{sg}}}\xi\). We immediately have \(\delta\kappa_{\text{sg}}=0\) (as \(\delta\xi=0\)), so this in turn leads to \(\delta T=0\). This allows us to write the final form of the horizon term as \[\int_{\Sigma_{\mathcal{H}}}\mathbf{k}_{\xi}^{\text{EH}}=T\delta S_{\text{BH}}, \tag{5.17}\] precisely as one would find for AlAdS black holes without cosmic string insertion [18]. The electromagnetic piece is \[\int_{\Sigma_{\mathcal{H}}}\mathbf{k}_{\xi}^{\text{M}}=\int_{\Sigma_{\mathcal{H}} }\delta\mathbf{Q}_{\text{M}}-i_{\xi}\mathbf{\Theta}_{\text{M}}, \tag{5.18}\] where we note that the \(i_{\xi}\mathbf{\Theta}_{\text{M}}\) can no longer be ignored. We observe from (5.4) that \(i_{\xi}\mathbf{A}|_{r=r_{+}}\) is non-zero and thus one needs to treat the contractions of the Maxwell symplectic potential and the horizon generator more carefully. We also note that the gauge field (2.3) is not regular at the black hole horizon [40; 48], a statement which is generically true for spacetimes with magnetic charge. Analysing this term more carefully we find \[\delta\mathbf{Q}_{\text{M}}-i_{\xi}\mathbf{\Theta}_{\text{M}}=-\frac{1}{4\pi G} \left[i_{\xi}\mathbf{A}\left(\delta*\mathbf{F}\right)+\delta\mathbf{A}\wedge i _{\xi}*\mathbf{F}\right] \tag{5.19}\] and working on-shell so that \(d*\mathbf{F}\approx 0\) and recalling that \(\delta_{\xi}*\mathbf{F}=\mathcal{L}_{\xi}*\mathbf{F}=0\), we have \[0=(i_{\xi}d+di_{\xi})*\mathbf{F}\approx di_{\xi}*\mathbf{F}\implies i_{\xi}* \mathbf{F}\approx d\mathbf{X}\qquad\text{(locally)}. \tag{5.20}\] In order to solve for \(\mathbf{X}\), we recall the dual field strength \(d\tilde{\mathbf{A}}=\mathbf{G}=-*\mathbf{F}\) and using \(\delta_{\xi}\tilde{\mathbf{A}}=0\) we can write the equation above as \[i_{\xi}*\mathbf{F}=-i_{\xi}d\tilde{\mathbf{A}}=di_{\xi}\tilde{\mathbf{A}} \approx d\mathbf{X}\implies i_{\xi}\tilde{\mathbf{A}}=\mathbf{X}. \tag{5.21}\] We apply this to equation (5.19) and use "by parts" type manipulations in order to obtain \[\delta\mathbf{Q}_{\text{M}}-i_{\xi}\mathbf{\Theta}_{\text{M}}=-\frac{1}{4\pi G} \left[i_{\xi}\mathbf{A}\left(\delta*\mathbf{F}\right)+i_{\xi}\tilde{\mathbf{A} }\left(\delta\mathbf{F}\right)-d\left(\delta\mathbf{A}i_{\xi}\tilde{\mathbf{A} }\right)\right], \tag{5.22}\] which we note is consistent with previous formulae derived in [71] and utilised in [72]. When integrated over the bifurcation surface (5.22) gives \[\int_{\Sigma_{\mathcal{H}}}\mathbf{k}_{\xi}^{\mathrm{M}}=\Phi_{e}\delta Q_{e}+\Phi_{ m}\delta Q_{m}, \tag{5.23}\] where we used equations (4.47), (4.50), (5.4), (5.9) and the fact that the corner integrals over the poles of the bifurcation surface cancel one another out. We note that due to this careful analysis, we have _not_ been forced to fix the electric and magnetic potentials between the perturbed and unperturbed solutions i.e. (5.23) will still hold when \(\delta\Phi_{e}\neq 0\) and \(\delta\Phi_{m}\neq 0\). This is an advancement upon [18], where the \(i_{\xi}\mathbf{\Theta}\) term was not considered carefully enough at the horizon and they were forced to fix the value of the electric potential \(\delta\Phi_{e}=0\). 
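To make (5.23) concrete: on the bifurcation surface \(i_{\xi}\mathbf{A}\) and \(i_{\xi}\tilde{\mathbf{A}}\) take the constant values \(-e/(\kappa r_{+})\) and \(-g/(\kappa r_{+})\) (cf. (5.4) and (5.9)), while the horizon fluxes of \(*\mathbf{F}\) and \(\mathbf{F}\) are \(4\pi eK\) and \(4\pi gK\) by (4.47)-(4.50). A small sympy sketch (schematic, varying only \(e,g,K\); symbol names are ours) then reproduces the right hand side of (5.23):

```python
import sympy as sp

e, g, K, G, kappa, rp = sp.symbols('e g K G kappa r_+', positive=True)
de, dg, dK = sp.symbols('delta_e delta_g delta_K')

def var(expr):
    # first variation in the parameters (e, g, K)
    return sp.diff(expr, e)*de + sp.diff(expr, g)*dg + sp.diff(expr, K)*dK

# Contractions on the bifurcation surface and the horizon fluxes
ixA, ixAt = -e/(kappa*rp), -g/(kappa*rp)
flux_starF, flux_F = 4*sp.pi*e*K, 4*sp.pi*g*K

# Integrated form of (5.22), dropping the exact corner piece
horizon_term = -(1/(4*sp.pi*G)) * (ixA*var(flux_starF) + ixAt*var(flux_F))

Phi_e, Phi_m = e/(kappa*rp), g/(kappa*rp)
Qe, Qm = e*K/G, g*K/G
print(sp.simplify(horizon_term - (Phi_e*var(Qe) + Phi_m*var(Qm))))  # -> 0, eq. (5.23)
```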
#### 5.1.3 Cosmic string terms

The final contributions to the first law are those which are special to accelerating solutions, namely the thermodynamic length and tension terms which arise from the presence of the cosmic strings. We begin by recalling that the strings are located at \(\theta_{-}=0\) and \(\theta_{+}=\pi\) respectively and thus \[\int_{\mathcal{S}_{\pm}}\mathbf{k}_{\xi}=\lim_{\epsilon\to 0}\int_{r_{+}}^{\frac{1}{\epsilon+\alpha}}\int_{0}^{2\pi}\left.k_{r\varphi}\right|_{\theta_{\pm}}\,drd\varphi. \tag{5.24}\] Using the general formula for the Maxwell contribution (5.22) we see that this term contributes nothing to the integral above except for the \(d\)-exact term, which will not contribute to the first law. Thus we can treat the cosmic string terms as purely gravitational \[\int_{\mathcal{S}_{\pm}}\mathbf{k}_{\xi}=\int_{\mathcal{S}_{\pm}}\delta\mathbf{Q}_{\mathrm{EH}}-i_{\xi}\mathbf{\Theta}_{\mathrm{EH}}+di_{\xi}\mathbf{\Theta}_{\mathrm{ct}}. \tag{5.25}\] Explicit computation using (4.30) shows that the Noether charge 2-form does not contribute due to \(\mathbf{Q}\stackrel{{\mathcal{S}_{\pm}}}{{=}}0\) and thus we have \[\int_{\mathcal{S}_{\pm}}\mathbf{k}_{\xi}=\int_{\mathcal{S}_{\pm}}-i_{\xi}\mathbf{\Theta}_{\mathrm{EH}}+di_{\xi}\mathbf{\Theta}_{\mathrm{ct}}=-\int_{\mathcal{S}_{\pm}}i_{\xi}\mathbf{\Theta}_{\mathrm{EH}}+\int_{S_{\pm}^{1}}i_{\xi}\mathbf{\Theta}_{\mathrm{ct}}, \tag{5.26}\] where we have used Stokes' theorem in order to separate the integral into bare and counterterm pieces, just as we did for the mass charge. We will present the results of these terms one by one, illustrating again the elegance of the corner term. The bare contribution is \[\int_{\mathcal{S}_{\pm}}\mathbf{k}_{\xi}^{\mathrm{bare}}=-\int_{\mathcal{S}_{\pm}}i_{\xi}\mathbf{\Theta}_{\mathrm{EH}}=\mp\lambda_{\pm}^{\mathrm{bare}}\delta\mu_{\pm}, \tag{5.27}\] where \[\lambda_{\pm}^{\mathrm{bare}}=-\frac{1}{\kappa\epsilon}+\frac{r_{+}}{\kappa(1\pm\alpha r_{+})} \tag{5.28}\] are the _bare thermodynamic lengths_ and \(\mu_{\pm}\) are the string tensions defined in (2.15). This term clearly diverges as \(\epsilon\to 0\). The counterterm contribution is \[\int_{\mathcal{S}_{\pm}}\mathbf{k}_{\xi}^{\text{ct}}=\int_{S_{\pm}^{1}}i_{\xi}\mathbf{\Theta}_{\text{ct}}=\mp\frac{1}{\kappa}\left[\frac{1}{\epsilon}-\alpha(2\alpha m\pm\Xi)\right]\delta\mu_{\pm} \tag{5.29}\] and thus, combining this term appropriately with (5.27), we arrive at \[\int_{\mathcal{S}_{\pm}}\mathbf{k}_{\xi}=\mp\lambda_{\pm}\delta\mu_{\pm}, \tag{5.30}\] where the _renormalised thermodynamic lengths_ are defined as \[\lambda_{\pm}=\frac{r_{+}}{\kappa(1\pm\alpha r_{+})}-\frac{\alpha}{\kappa}(2\alpha m\pm\Xi)=\frac{r_{+}}{\kappa(1\pm\alpha r_{+})}\mp\frac{\alpha}{\kappa}P_{\pm}. \tag{5.31}\]
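The cancellation of the \(1/\epsilon\) divergence between (5.28) and (5.29) is easily verified; an illustrative sketch (our notation, checking only the bracketed combinations common to both signs):

```python
import sympy as sp

eps, kappa, alpha, m, Xi, rp = sp.symbols('epsilon kappa alpha m Xi r_+', positive=True)

for s in (+1, -1):  # the upper/lower string S_+/S_-
    bare = -1/(kappa*eps) + rp/(kappa*(1 + s*alpha*rp))               # (5.28)
    ct   = 1/(kappa*eps) - (alpha/kappa)*(2*alpha*m + s*Xi)           # bracket in (5.29)
    lam  = rp/(kappa*(1 + s*alpha*rp)) - (alpha/kappa)*(2*alpha*m + s*Xi)  # (5.31)
    print(sp.simplify(bare + ct - lam))  # -> 0: divergences cancel, leaving lambda
```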
### Final statement of the law

Combining equations (5.14), (5.17), (5.23) and (5.30) with the signs given in (5.10), we arrive at the final form of the first law \[T\delta S_{\text{BH}}=\delta\mathcal{M}-\Phi_{e}\delta Q_{e}-\Phi_{m}\delta Q_{m}+\lambda_{-}\delta\mu_{-}+\lambda_{+}\delta\mu_{+}, \tag{5.32}\] where, to recap, the relevant quantities are defined as \[\mathcal{M} =\frac{Km(1-\alpha^{2}\Xi)}{\kappa G},\] \[S_{\text{BH}} =\frac{\mathcal{A}}{4G}=\frac{K\pi r_{+}^{2}}{G(1-\alpha^{2}r_{+}^{2})},\qquad T=\frac{Q^{\prime}(r_{+})}{4\kappa\pi r_{+}^{2}},\] \[Q_{e} =\frac{eK}{G},\qquad\Phi_{e}=\frac{e}{\kappa r_{+}}, \tag{5.33}\] \[Q_{m} =\frac{gK}{G},\qquad\Phi_{m}=\frac{g}{\kappa r_{+}},\] \[\mu_{\pm} =\frac{1}{4G}\left(1-P_{\pm}K\right),\qquad\lambda_{\pm}=\frac{r_{+}}{\kappa(1\pm\alpha r_{+})}\mp\frac{\alpha}{\kappa}P_{\pm}.\] We note that all of these quantities are identical to those of [40], _except_ \(\lambda_{\pm}\), which differ because our phase space of parameters is different and not because we derived our law using different techniques. We demonstrate this by deriving the first law (5.32) using the "horizon polynomial" method of [35] in Appendix B, where we also demonstrate explicitly the reasons for the differences in the thermodynamic length terms. The law presented above can be seen as a five-parameter law: one starts with the six parameters \(\{m,e,g,\alpha,K,\kappa\}\) and then the constraint equation (3.41) reduces this by one. It is important to discuss some of the key differences between the law presented in [40] and ours: the law presented in equation (103) of [40] is a full cohomogeneity law with an ill-posed variational problem, whereas (5.32) is a cohomogeneity-1 law with a well-posed variational problem. In fact, the variations that enter the first laws of [35; 36; 37; 38; 39; 40] are all generically _ill-posed_ and this results in their expressions for the thermodynamic lengths \(\lambda_{\pm}\) differing from ours. The crucial feature of well-posedness is in demonstrating the equivalence of the holographic mass and the Wald Hamiltonian, as was shown in equation (4.39). Due to this equivalence, our first law (5.32) can be read with either quantity acting as \(\mathcal{M}\) in the law and is thus entirely consistent. This is in great contrast to [40], where the holographic mass is not equivalent to the Wald Hamiltonian and thus the first law changes form depending on the charge that appears in the law. As explained in detail in Appendix B.2, if one writes their first law with the Wald Hamiltonian \(H_{\xi}\) then one finds the same \(\lambda_{\pm}\) as given in (5.33). If one uses the holographic mass \(\mathcal{M}_{\text{hol}}\) then one finds the \(\lambda_{\pm}\) as given in equation (3.43) of [40]. This inconsistency in the form of the law is a manifestation of ill-posedness and thus we strongly advocate a first law where the variations satisfy the well-posedness constraint (3.41). There are various choices of \(\kappa\) which solve the master equation (3.41) that one may now want to examine. One obvious choice is to take \(\kappa\) as a phase space constant such that \(\delta\kappa=0\). This is the clearest limit from the perspective of both Einstein-Maxwell theory in four dimensions and the dual CFT\({}_{3}\): all of the parameters \(\{m,e,g,\alpha,K\}\) are physically well understood, and the fact that \(\kappa\) is fixed on phase space means that different choices correspond to scalings of dimensionful quantities in the dual field theory [40].
On the other hand, allowing for a phase space dependent \(\kappa\) may yet be crucial to discuss the thermodynamics of the supersymmetric solutions [50] and thus the uplift into \(d=11\) supergravity [48]. We shall now show that a phase space dependent \(\kappa\) is important in consistently analysing the thermodynamics of a class of solutions that we shall define as _close-to-supersymmetric and close-to-extremal_ spindles.

## 6 Spindles and supersymmetry

In this section we will discuss various relations between the conserved charges \(\{\mathcal{M},Q_{e},Q_{m}\}\) as given in (5.33) and applications of the first law (5.32) when we constrain the parameters of the solution. These constraints will arise by requiring various combinations of supersymmetry, extremality, and for the surfaces of constant \((t,r)\) to have the topology of a _spindle_. We will begin with the technical details of these requirements, before applying them to derive loci satisfied by the conserved charges and a number of applications of the first law.

### Overview

We have so far kept the choice of the deficit parameter \(K\) entirely generic and thus considered a setup with conical singularities at both poles due to the presence of two cosmic strings. Within this class of solutions, a particularly interesting case is that of the constant \((t,r)\) surfaces having the topology of a _spindle_. Following [40; 48], we note that such a topology is obtained when we choose \[K=\frac{1}{n_{+}P_{+}}=\frac{1}{n_{-}P_{-}}, \tag{6.1}\] where \(n_{\pm}\) are _coprime positive integers_, i.e. \(\gcd(n_{-},n_{+})=1\). With this choice, one has \(\Sigma\cong\mathbb{WCP}^{1}_{[n_{-},n_{+}]}\), the orbifold space known as a spindle. Such objects have been the topic of much recent study in the supergravity context [40; 47; 48; 49] due to their remarkable property that, despite being singular surfaces (and thus inducing conical singularities when present in the low-dimensional spacetime), they admit smooth solutions when suitably uplifted into \(d=11\) supergravity. When working with a spindle we can rewrite the cosmic string tensions \(\mu_{\pm}\) entirely in terms of the spindle data, namely the coprime integers \(n_{\pm}\). Using equation (2.15) together with (6.1) we find \[\frac{1}{n_{\pm}}=1-4G\mu_{\pm} \tag{6.2}\] and we also note that the orbifold Euler characteristic of \(\Sigma\) is given by [40] \[\chi=\frac{1}{n_{-}}+\frac{1}{n_{+}}=2-4G(\mu_{-}+\mu_{+}) \tag{6.3}\] and is thus determined purely by the overall conical deficit (2.17) present in the spacetime. A crucial property for the uplift into regular solutions in supergravity is that the four dimensional solutions (2.2)-(2.3) are supersymmetric, i.e. that there exists a solution to the Killing spinor equation. Such equations have been analysed in [40; 48; 50] and result in the following constraints between the parameters \[g =\alpha m, \tag{6.4}\] \[g^{2} =\Xi(\Xi-1). \tag{6.5}\] A supersymmetric solution is also _extremal_ (\(T=0\)) if [48] \[e=0\implies Q_{e}=0. \tag{6.6}\]

### Charge loci

It is natural to ask what algebraic constraints are placed upon the conserved charges \(\{{\cal M},\,Q_{e},\,Q_{m}\}\) given in (5.33) when one applies various combinations of the supersymmetry, extremality and spindle topology conditions. We will now examine some interesting combinations and derive the various charge loci that result.
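Before doing so, we note that the spindle relations (6.2)-(6.3) admit a quick symbolic check. In the illustrative sketch below we use \(P_{\pm}=\Xi\pm 2\alpha m\), as can be read off from the second equality in (5.31), together with the tensions \(\mu_{\pm}\) from (5.33):

```python
import sympy as sp

G, K, alpha, m, Xi = sp.symbols('G K alpha m Xi', positive=True)
n_p, n_m = sp.symbols('n_+ n_-', positive=True)

P_plus, P_minus = Xi + 2*alpha*m, Xi - 2*alpha*m   # P at the poles, cf. (5.31)
mu = lambda P: (1 - P*K)/(4*G)                     # string tensions, cf. (5.33)

# Impose the spindle condition (6.1): K = 1/(n_+ P_+) = 1/(n_- P_-)
print(sp.simplify(1 - 4*G*mu(P_plus).subs(K, 1/(n_p*P_plus))))    # -> 1/n_+
print(sp.simplify(1 - 4*G*mu(P_minus).subs(K, 1/(n_m*P_minus))))  # -> 1/n_-
```

The orbifold Euler characteristic (6.3) then follows immediately by summing the two printed quantities.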
#### 6.2.1 Supersymmetric locus The first case of interest is to apply the supersymmetry constraints (6.4)-(6.5) in order to write down the supersymmetric locus of charges. We note that supersymmetric solutions no longer have a black hole horizon7[48], although they can still be slowly accelerating and exhibit a single conformal boundary with representative (3.10) and energy-momentum tensor (3.11). It is for these reasons that our analysis of the conserved charges should extend without issue to this class of solutions. In order to derive the locus, we find it helpful to rewrite (6.4)-(6.5) as Footnote 7: They exhibit a naked curvature singularity, visible at the conformal boundary \(\mathscr{I}\). \[m=\frac{g}{\alpha},\qquad\alpha^{2}\Xi=\frac{g^{2}}{e^{2}+g^{2}}=\frac{Q_{m}^ {2}}{Q_{e}^{2}+Q_{m}^{2}} \tag{6.7}\] and thus we find the supersymmetric locus of charges is given by \[{\cal M}=\frac{Q_{m}Q_{e}^{2}}{\alpha\kappa(Q_{e}^{2}+Q_{m}^{2})}. \tag{6.8}\] #### 6.2.2 Supersymmetric and extremal locus An immediate application of the supersymmetry locus (6.8) is to the _supersymmetric and extremal_ black holes. Application of the extremality constraint (6.6) immediately results in \[Q_{e}=\mathcal{M}=0. \tag{6.9}\] This result seems somewhat surprising at first but these are still genuine black hole solutions as discussed in [48]. The vanishing of \(\mathcal{M}\) imposes no further constraints upon the parameters than those already given in (6.4)-(6.6), in particular one still has \(m,g,\alpha,K\neq 0\) and a highly non-trivial global structure, including the presence of acceleration horizons [48]. It is important to note that these acceleration horizons split the conformal boundary into two pieces [48] and thus it is unclear if (6.9) is really a relationship between the true conserved charges of the supersymmetric and extremal solution. It would be interesting to investigate this issue more deeply in future work. #### 6.2.3 Supersymmetric spindle locus We note that (6.8) is quantitatively different from the \(a\to 0\) limit of the rotating supersymmetric locus as given in equation (1.1) of [40]. This is because we have applied the supersymmetry conditions _before_ fixing the solution to have the topology of a spindle. In order to work explicitly with a spindle, we start as in [40] by requiring that we fix the spindle data (6.2), namely we require \[\delta n_{\pm}=0\iff\delta\mu_{\pm}=0, \tag{6.10}\] where we used equation (6.2) in writing down the iff statement. Using equations (2.16) and (2.17) we see that this corresponds to fixing the products \(\alpha mK\) and \(\Xi K\) which can be physically understood as fixing the overall tension and deficit respectively. Fixing the overall deficit has an important implication for the well-posedness constraint (3.41): it is equivalent to equation (3.43) and thus in order to ensure well-posedness we must fix \(\kappa\) as in equation (3.42), namely \[\kappa=\sqrt{\Xi(1-\alpha^{2}\Xi)}. \tag{6.11}\] This is consistent with the observation made in [38; 39; 40] that when one fixes the string tensions the choice of \(\kappa\) given above ensures well-posedness. Using (6.7), we find \[\kappa=\frac{Q_{e}Q_{m}}{\alpha(Q_{e}^{2}+Q_{m}^{2})} \tag{6.12}\] and thus the supersymmetric locus for a spindle is \[\mathcal{M}=Q_{e}. \tag{6.13}\] We note that fixing a spindle is actually a stronger than necessary requirement in order to arrive at this locus. 
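Both loci above are straightforward to confirm symbolically. In the illustrative sketch below (symbol names are ours) we substitute the supersymmetric values (6.7) directly into the charges of (5.33), and then impose the choice (6.11) to recover the spindle locus:

```python
import sympy as sp

e, g, K, G, alpha = sp.symbols('e g K G alpha', positive=True)

# Supersymmetry in the form (6.7): m = g/alpha, alpha^2 Xi = g^2/(e^2 + g^2)
m  = g/alpha
Xi = g**2/(alpha**2*(e**2 + g**2))

Qe, Qm = e*K/G, g*K/G                      # charges from (5.33)
kappa  = sp.sqrt(Xi*(1 - alpha**2*Xi))     # the choice (6.11)
M      = K*m*(1 - alpha**2*Xi)/(G*kappa)   # mass from (5.33)

print(sp.simplify(M - Qm*Qe**2/(alpha*kappa*(Qe**2 + Qm**2))))  # -> 0: locus (6.8)
print(sp.simplify(M - Qe))                                      # -> 0: locus (6.13)
```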
Indeed, the locus would be equivalent if one merely demands supersymmetry (6.4)-(6.5) and a fixed Euler characteristic (6.3) (which is equivalent to fixing the overall deficit (2.17)). The supersymmetric and extremal spindle exhibits the same locus as (6.9).

### Applications of the first law

We would like to apply our first law of slowly accelerating black hole thermodynamics (5.32) to the supersymmetric spindle solutions, although this is made difficult by the global nature of such solutions. As mentioned, the supersymmetric and extremal solutions are not slowly accelerating and are thus beyond the scope of our first law. The next possible case of interest is the supersymmetric and non-extremal solutions (i.e. those where both (6.4) and (6.5) are satisfied but (6.6) is not) but as was shown in [48], these do not even possess a black hole horizon and thus are also entirely unsuitable! We will instead first apply our law to solutions with a fixed overall deficit angle, then those with a spindle as the constant \((t,r)\) surfaces. Finally, we will show that our law can be further restricted to the families of _close-to-supersymmetric_ and _close-to-supersymmetric and close-to-extremal_ spindles, which we shall introduce using criteria inspired by those given in Section 6.3 of [48]. We note that such solutions were referred to as "non-supersymmetric and non-extremal" in [48] and we will alter this terminology here to avoid confusion with the other cases we study, none of which are supersymmetric or extremal.

#### 6.3.1 Fixed deficit

The first case we consider is to fix the overall deficit (2.17) (equivalently the orbifold Euler characteristic (6.3)) \[\delta(\mu_{+}+\mu_{-})=\delta\chi=0, \tag{6.14}\] a condition which results in equation (6.11) for well-posedness. Applying this relation to the first law (5.32) we find the following reduction of the law \[T\delta S_{\rm BH}=\delta{\cal M}-\Phi_{e}\delta Q_{e}-\Phi_{m}\delta Q_{m}+(\lambda_{+}-\lambda_{-})\delta\mu_{+}, \tag{6.15}\] which is now a four-parameter law.

#### 6.3.2 Fixed spindle

In order to write down the first law for a spindle, we start by applying equations (6.10) and (6.11) in order to fix the spindle topology and ensure well-posedness. We note that fixing the spindle topology corresponds to fixing the overall deficit (2.17) and the overall tension (2.16) and thus can be obtained directly from (6.15) by imposing \(\delta\mu_{+}=0\). We can then immediately write down the first law for a spindle \[T\delta S_{\rm BH}=\delta{\cal M}-\Phi_{e}\delta Q_{e}-\Phi_{m}\delta Q_{m}, \tag{6.16}\] which equivalently follows from (5.32) after fixing the string tensions and is a three-parameter law.

#### 6.3.3 Close-to-supersymmetric spindle

The next case is to apply some supersymmetry condition on top of (6.10). As we cannot apply both supersymmetry conditions, we follow Section 6.3 of [48] in only applying \[g=\alpha m, \tag{6.17}\] an equation which we take with (6.10) to define a _close-to-supersymmetric_ spindle solution. The application of this constraint is straightforward in that we have \[0=\delta(\mu_{-}-\mu_{+})=\frac{1}{G}\delta(\alpha mK)=\frac{1}{G}\delta(gK)=\delta Q_{m} \tag{6.18}\] and so this just amounts to fixing the magnetic charge, reducing the first law of (6.16) down to the "standard" form of \[T\delta S_{\rm BH}=\delta\mathcal{M}-\Phi_{e}\delta Q_{e}. \tag{6.19}\] We note that this is equivalent to that written down in [40] (with \(J=0\)) but is now a two-parameter law.
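The first equality in the chain (6.18) is again elementary to verify; an illustrative sketch (using \(P_{\pm}=\Xi\pm 2\alpha m\) and the tensions from (5.33) as before):

```python
import sympy as sp

G, K, alpha, m, Xi, g = sp.symbols('G K alpha m Xi g', positive=True)

mu_m = (1 - (Xi - 2*alpha*m)*K)/(4*G)   # mu_-, with P_- = Xi - 2*alpha*m
mu_p = (1 - (Xi + 2*alpha*m)*K)/(4*G)   # mu_+, with P_+ = Xi + 2*alpha*m

# The tension difference equals alpha*m*K/G, which under g = alpha*m (6.17) is Q_m
print(sp.simplify(mu_m - mu_p - alpha*m*K/G))               # -> 0
print(sp.simplify((mu_m - mu_p).subs(m, g/alpha) - g*K/G))  # -> 0, i.e. Q_m
```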
#### 6.3.4 Close-to-supersymmetric and close-to-extremal spindle

The final case of interest is the _close-to-supersymmetric and close-to-extremal_ spindle solution, which in addition to (6.10) and (6.17) also has \[e=Q_{e}=0. \tag{6.20}\] Note that this solution is not extremal, as this condition only corresponds to extremality when the solution is supersymmetric, i.e. when _both_ supersymmetry conditions (6.4)-(6.5) hold. This solution is now characterised by a single parameter, which we can take to be the magnetic charge parameter \(g\). In order to apply our first law to such a solution we need to specify the range of \(g\) for which the solution is slowly accelerating. Such a range was computed in [48] and reads \[g<g_{\rm BPS}=\frac{\sqrt{1-\alpha^{2}}}{\alpha^{2}}, \tag{6.21}\] where \(g_{\rm BPS}\) corresponds to the supersymmetric and extremal solution. Restricting \(g\) to the range above, the first law for this solution is \[T\delta S_{\rm BH}=\delta\mathcal{M}, \tag{6.22}\] which completes the application of our current law to slowly accelerating spindle solutions. We note that this reduction would not have been possible if \(\kappa\) were not treated as a phase space dependent parameter. If \(\kappa\) were taken to be a phase space constant, the master equation (3.41) would reduce to a non-trivial constraint, and subsequently applying (6.10), (6.17) and (6.20) would overconstrain the first law (5.32) down to a trivial (0-parameter) statement. This application thus provides important motivation to treat \(\kappa\) as a generic parameter of the solution. It will be of future interest to study more carefully the effect of rotation: as well as adding an additional parameter, rotation also allows for slowly accelerating supersymmetric and extremal solutions [40]. This would open up many more interesting reductions of the more general first law, although the key issue of extracting the true mass for accelerating solutions must first be tackled. We leave this work to future studies.

## 7 Conclusions and outlook

In this work we have extended the techniques of [18] in describing the charges and thermodynamics of AlAdS solutions to encompass _accelerating_ solutions in \(d=4\) spacetime dimensions. This effort relied on two important developments: firstly, the relaxation of Dirichlet boundary conditions to the more general demand that the variational problem is well-posed (3.38); secondly, the need to supplement the definition of the Wald Hamiltonians by a suitable _corner improvement_ due to the topology of \(\mathscr{I}\): a hypersurface which is not smooth due to the presence of cosmic strings piercing the poles of the constant time cross-sections. With these improvements in place, we were able to show agreement for the conserved charges between the usual holographic definitions [18; 68] and the charges constructed via the covariant phase space [5]. The main motivation in developing these techniques was to examine the first law of thermodynamics for these solutions, and in particular to make comparison and contrast with the results of [35; 36; 37; 38; 39; 40]. Of crucial importance in these works was the choice of the time scaling parameter \(\kappa\), on which we have now shed more light: we have shown that one can use the techniques developed in this paper to derive an entirely consistent first law of thermodynamics with \(\kappa\) as long as one imposes the constraint (3.41).
We emphasise that the key idea in deriving this constraint is the requirement of a well-posed variational problem (3.38). Such a requirement was not in place for the variations of the accelerating solution considered in [35; 36; 37; 38; 39; 40] and this results in a mismatch between the holographic mass and the Wald Hamiltonian associated with the time translation. This mismatch results in an ambiguous form of the first law depending upon the "mass" that appears in the law, manifested explicitly in finding different values for the thermodynamic lengths \(\lambda_{\pm}\) for the holographic and Wald masses. In contrast, our first law (5.33) is valid for both the holographic and Wald masses (because they are equivalent) and thus gives the true expression for the thermodynamic lengths \(\lambda_{\pm}\). We note that when one fixes the overall deficit angle in the spacetime and one chooses the value of \(\kappa\) as in (3.42) (as considered in [35; 36; 37; 38; 39; 40]), the well-posedness constraint (3.38) _is_ solved. This gives a concrete reason for the value of \(\kappa\) used in [40] (the authors noted at the time that \(\kappa\) "gives the first law by trial and error").

This work thus provides a platform to study the thermodynamics of spindle solutions from the covariant phase space, but there are still many future directions which need to be pursued for a more complete understanding. The first issue is that of rotation: in particular, one first needs to establish the true mass charge for accelerating, rotating solutions. The difficulty in doing this lies in the fact that the true mass charge is associated to the timelike Killing vector of the solution when the boundary is in a _non-rotating frame_, an issue which was resolved for AdS-Kerr-Newman black holes in [18; 46] but is yet to be entirely settled for the accelerating solutions. In [39; 40; 48], the mass charge is associated with the timelike Killing vector in a _rotating_ frame, and only matches the true mass along the poles of the spindle (\(\theta=\theta_{\pm}\)). We believe that a more careful analysis of the mass charge must first be carried out, with the \(\Lambda\)-BMS gauge fixing of [66] providing a possible algorithm in order to find the correct non-rotating frame. It would be interesting to observe how this would affect the first law, and it would also open the door to applying the law to supersymmetric and extremal spindle solutions, where the rotation can be tuned so as to ensure slow acceleration [40]. In addition to rotation, it would also be interesting to include the other parameters of the Plebanski-Demianski family as spelled out in [32]. One could for example promote to thermodynamic objects the cosmological constant \(\ell\) [73; 74; 75; 76; 77] or Newton constant \(G\) [78; 79], the former of which may be understood in terms of a braneworld interpretation as recently discussed in [80]. Perhaps more interesting would be to include a non-zero NUT parameter \(N\)8. The thermodynamics of this parameter have been the subject of much recent study [65; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93] and we note that the conical singularity structure we dealt with in this work via the covariant phase space is remarkably similar to that of Taub-NUT-AdS as discussed in [65], with the difference being that the NUT charge introduces _Misner strings_ rather than cosmic strings along the poles of the constant \((t,r)\) surfaces.
It would be interesting to combine these techniques to derive the thermodynamics for the entire class of solutions [32]. It may also be of interest to understand the role of the NUT parameter in accelerating solutions where the underlying theory is something other than Einstein-Maxwell. As an example of this: accelerating, NUT charged black holes have recently been found in the theory of Einstein gravity conformally coupled to a scalar field (for \(\Lambda=0\)) [94]. If such solutions are found in the \(\Lambda<0\) case, then the methods discussed in this paper may prove fruitful in analysing their charges and first law. Footnote 8: This must be performed together with non-zero "Kerr-like" rotation, as otherwise the accelerating, NUT-charged black holes fall outside the Plebanski-Demianski family [81; 82; 83].

Another important future direction would be to move away from the slowly accelerating case and apply these techniques to AlAdS solutions with an acceleration horizon. In the asymptotically (locally) flat setting, the lack of a cosmological constant forces the inclusion of an acceleration horizon, and covariant phase space techniques have been used [67; 70] to derive a first law using the background subtraction of the massless cosmic string spacetime. Such techniques should be readily applicable to the AlAdS case, and in fact should go further as the holographic counterterms will negate the need for background subtraction. Physically speaking, spacetimes containing multiple horizons with different surface gravities make the assignment of a thermodynamic temperature unclear [77] and thus this issue would also have to be explored more deeply in the rapidly accelerating case. A direct application of this would be the _supersymmetric and extremal_ non-rotating solutions, as these must contain (for certain values of the azimuthal coordinate \(\theta\)) acceleration horizons [48]. These solutions possess a near-horizon geometry of AdS\({}_{2}\times\mathbb{WCP}^{1}_{[n_{-},n_{+}]}\) and thus are an important direction in understanding AdS\({}_{2}\) solutions, both from the lower dimensional and uplifted perspectives [95]. Finally, we note that the techniques pioneered here should be applicable to spacetimes whose conformal boundary cross-sections \(\Sigma_{\infty}\) are generic 2-manifolds-with-boundary, as (4.22) will still hold. We conjecture that this formula should apply immediately to higher even dimensions whereas odd dimensions may be more subtle due to the different boundary conditions one imposes due to the conformal anomaly [18]. While it is proven that there is no analogue of the C-metric in higher dimensions [96] (and thus no good candidate solution for describing accelerating spacetimes), such formulae may still be crucial in describing the charges and thermodynamics of more exotic solutions to the field equations.

###### Acknowledgements

This work is supported by the National Research Foundation of Korea under the grants, NRF-2022R1A2B5B02002247 (H.K, N.K, Y.L, and A.P.) and NRF-2020R1A2C1008497 (H.K. and A.P.). This research was partially supported by the Asia Pacific Center for Theoretical Physics (APCTP) via H.K. and A.P. participating in the APCTP SAG workshop, "Entanglement, Large N and Black hole". A.P. would like to thank Matthew Roberts and Finn Larsen for insightful comments given during the workshop.
## Appendix A Magnetic charges from the covariant phase space

In this appendix we show that the magnetic charge (4.48) can be realised as a true conserved charge in the covariant phase space formalism by adding an additional term to the action (3.30). This term will be purely topological and thus will not contribute to the equations of motion, but will modify the symplectic structure (and thus the charges). We will show that this term does not affect either the variational problem (as studied in Section 3.4) or the derivation of the first law (Section 5) and thus the analysis of the main text is entirely sufficient to discuss the first law.

### Topological term

As discussed in Section 4.5, the standard discussion of Maxwell theory does not derive the magnetic charge as a conserved quantity from the covariant phase space point of view. In order to see how the magnetic charge is constructed on phase space we follow [65] by considering the action (3.30) supplemented by an additional term \[S=S_{\rm bulk}+S_{\rm bdy}+S_{\rm top},\] (A.1) where the new term is \[S_{\rm top}=-\frac{1}{8\pi G}\int_{M}\mathbf{F}\wedge\mathbf{F}.\] (A.2) This term is often referred to as the "\(\theta\)-term" [97; 98] when added to the usual Yang-Mills Lagrangian, and is responsible for the breaking of CP-symmetry in the quantum theory. As we will treat the theory purely classically we will not worry about such features, and we use it here simply to illustrate this feature of the magnetic charge. The contribution from this new term to the equation of motion is trivial (\(\mathbf{E}_{\rm top}\equiv 0\)) due to the closed field strength tensor \(d\mathbf{F}=0\) and thus we see that this term is entirely topological. Despite this, there will be a modification in the symplectic potential. Computing explicitly, we find \[\mathbf{\Theta}_{\rm top}=-\,\frac{1}{4\pi G}\delta\mathbf{A}\wedge\mathbf{F},\] (A.3) which shows that the effect of this topological term will be identical to that of the Maxwell term (100) with a replacement of \(*{\bf F}\to{\bf F}\). Putting everything together, we see that the total \(U(1)\) Hamiltonian (101) is modified from the form given in (102) and now reads \[H_{-1}=\frac{1}{4\pi G}\int_{\Sigma_{\infty}}\left(*{\bf F}+{\bf F}\right)=Q_{e}+Q_{m}, \tag{103}\] i.e. the \(U(1)\) charge is now the sum of the electric and magnetic charges. We note that even though only the linear combination \(Q_{e}+Q_{m}\) appears in the definition of the \(U(1)\) Hamiltonian, both can be understood as conserved charges independently. Such a result follows immediately from the equation of motion \(d*{\bf F}\approx 0\) and the Bianchi identity \(d{\bf F}=0\). As an aside, we note that (as expected) the electric and magnetic charges appear from the covariant phase space point of view as "dual charges", a topic of recent interest in gravitation [99, 100, 101, 102]. We will not need these more sophisticated notions as our solution of interest (2) does not contain non-trivial dual gravitational charges. It would be of interest to generalise the solutions to include spacetimes which do (for example those with non-trivial NUT parameter), although we leave these discussions to future work.

### No contribution to the variational problem

As we have modified the action via the addition of the topological term (A.2), it is natural to ask whether this may affect the variational problem analysis that was performed without the term in Section 3.4.
We note that the variation of the topological term is \[\delta S_{\rm top}\approx\int d^{3}x\sqrt{-g_{(0)}}j_{m}^{i}\delta A_{i}^{(0)}, \tag{104}\] where we have introduced the magnetic current vector field \(j_{m}\), defined analogously to the electric current (11) via \[j_{m}^{i}=\frac{1}{4\pi G}\lim_{\epsilon\to 0}\left[\frac{1}{\epsilon^{3}}n_{\mu}(*F)^{\mu i}\right], \tag{105}\] which can also be used to define the magnetic charge \[Q_{m}=\int_{\Sigma_{\infty}}d^{2}x\,\sqrt{-g_{(0)}}j_{m}^{t}=\frac{1}{4\pi G}\int_{\Sigma_{\infty}}{\bf F}. \tag{106}\] Computing the values of the magnetic current explicitly for (3), we find as the only non-zero components \[j_{m}^{t}=\kappa\frac{g}{4\pi G},\qquad j_{m}^{\varphi}=-\frac{\alpha e}{4K\pi G}, \tag{107}\] which clearly take constant values. Combining this with \(\int_{0}^{\pi}\,d\theta\sqrt{-g_{(0)}}\delta A_{i}^{(0)}=0\) immediately tells us that \[\delta S_{\rm top}\approx 0 \tag{108}\] and thus the question of well-posedness is independent of whether or not one adds the topological term (A.2) to the action. We also note that the presence of the magnetic current will not affect the Ward identities as these are determined by the equations of motion, which are unaffected by the addition of the topological term.

### No contribution to the first law

Finally, we show that this topological term gives no contribution to the first law. We start by noting that this will be the case if we can show \[\mathbf{k}_{\xi}^{\rm top}=\delta\mathbf{Q}_{\rm top}-i_{\xi}\mathbf{\Theta}_{\rm top}=d\mathbf{Z}, \tag{104}\] for some 1-form \(\mathbf{Z}\), as then all contributions from the corners will cancel in the full statement of the law by the logic argued in the paragraph below equation (125). The topological contribution to the Noether charge is \[\mathbf{Q}_{\rm top}[\xi]=-\frac{1}{4\pi G}(i_{\xi}\mathbf{A})\mathbf{F}, \tag{105}\] which when combined with equation (A.3) allows us to write \[\mathbf{k}_{\xi}^{\rm top}=\delta\mathbf{Q}_{\rm top}-i_{\xi}\mathbf{\Theta}_{\rm top}=-\frac{1}{4\pi G}\left[(i_{\xi}\mathbf{A})\delta\mathbf{F}+\delta\mathbf{A}\wedge i_{\xi}\mathbf{F}\right]. \tag{106}\] Using \(d\mathbf{A}=\mathbf{F}\) together with \(\delta_{\xi}\mathbf{A}=0\), we can now perform elementary manipulations to reach \[\mathbf{k}_{\xi}^{\rm top}=-\frac{1}{4\pi G}d\left[(i_{\xi}\mathbf{A})\delta\mathbf{A}\right]\implies\mathbf{Z}=-\frac{1}{4\pi G}(i_{\xi}\mathbf{A})\delta\mathbf{A}, \tag{107}\] allowing us to conclude that the topological term does not contribute to the first law.

## Appendix B Comparison with other literature

In this appendix we provide a careful comparison of our first law (5.33) with other literature, namely that of [35; 36; 37; 38; 39; 40]. In the first subsection we show that our law can also be consistently derived using the "horizon polynomial" method of [35]. In the second subsection we show explicitly that ill-posedness of the variational problem results in the discrepancies between our thermodynamic lengths \(\lambda_{\pm}\) given in (5.33) and those of [40].

### Consistency with the "horizon polynomial" method

In equation (5.33) we wrote down the first law of accelerating black hole thermodynamics using the covariant phase space formalism, an elegant approach as this immediately allows one to identify the appearance of the conserved charges and entropy entering the law.
In previous works [35; 36; 38; 39; 40] a _different_ first law was obtained by studying the variation of the horizon polynomial \[\delta Q(r_{+})=0, \tag{108}\] where the right hand side comes from the fact that the perturbed horizon polynomial vanishes at the perturbed horizon location. It is natural to ask whether our law is consistent with those of [35; 36; 38; 39; 40]: we have used a different method and arrived at a different result. Here we will provide a derivation of our law using equation (108), providing a useful consistency check and demonstrating that the differences between our result and those of [35; 36; 38; 39; 40] are purely due to our different choices of phase space. We begin by recalling the definition of the horizon polynomial from (4) (with \(\ell=1\)) \[Q(r_{+})=(r_{+}^{2}-2mr_{+}+e^{2}+g^{2})(1-\alpha^{2}r_{+}^{2})+r_{+}^{4}=0, \tag{114}\] an equation we will both vary and utilise on its own. Computing the variation (108) explicitly, we find \[\begin{split} 0&=\kappa T\frac{2\pi Kr_{+}}{(1-\alpha^{2}r_{+}^{2})^{2}}\delta r_{+}-\frac{K}{1-\alpha^{2}r_{+}^{2}}\delta m\\ &\qquad+\frac{K}{r_{+}(1-\alpha^{2}r_{+}^{2})}\left(e\delta e+g\delta g\right)-K\alpha r_{+}^{2}\frac{r_{+}-2m}{(1-\alpha^{2}r_{+}^{2})^{2}}\delta\alpha\end{split} \tag{115}\] and recalling the definition of the Bekenstein-Hawking entropy, we have \[\delta S_{\rm BH}=\frac{2\pi Kr_{+}}{(1-\alpha^{2}r_{+}^{2})^{2}}\delta r_{+}+\frac{\pi r_{+}^{2}}{1-\alpha^{2}r_{+}^{2}}\delta K+\frac{2K\pi r_{+}^{4}\alpha}{(1-\alpha^{2}r_{+}^{2})^{2}}\delta\alpha, \tag{116}\] allowing us to write the law in (115) as \[\begin{split} T\delta S_{\rm BH}=\frac{1}{\kappa(1-\alpha^{2}r_{+}^{2})}\Bigg{[}&\kappa T\pi r_{+}^{2}\delta K+\frac{Kr_{+}(2\kappa T\pi r_{+}^{3}+r_{+}^{2}-2mr_{+}+e^{2}+g^{2})}{1-\alpha^{2}r_{+}^{2}}\alpha\delta\alpha\\ &\qquad+K\delta m-\frac{K}{r_{+}}\left(e\delta e+g\delta g\right)\Bigg{]}.\end{split} \tag{117}\] All that remains now is to write the right hand side as \(\delta\mathcal{M}-\Phi_{e}\delta Q_{e}-\Phi_{m}\delta Q_{m}+\lambda_{-}\delta\mu_{-}+\lambda_{+}\delta\mu_{+}\). This calculation is straightforward but a little tedious and requires the definitions of \(\mathcal{M},\Phi_{e},Q_{e},\Phi_{m},Q_{m},\lambda_{\pm},\mu_{\pm}\).
After some algebraic manipulation we arrive at \[T\delta S_{\rm BH}=\delta\mathcal{M}-\Phi_{e}\delta Q_{e}-\Phi_{m}\delta Q_{m}+\lambda_{-}\delta\mu_{-}+\lambda_{+}\delta\mu_{+}+C, \tag{118}\] where \[\begin{split} C&=\frac{Km}{\kappa^{2}}(1-\alpha^{2}\Xi)\delta\kappa\\ &\qquad+\bigg{(}\frac{r_{+}}{\kappa}-\frac{r_{+}^{3}}{(r_{+}^{2}\alpha^{2}-1)\kappa}-\frac{m(3+2\alpha^{2})}{2\kappa}-(e^{2}+g^{2})\frac{mr_{+}\alpha^{4}-1}{r_{+}\kappa}\bigg{)}\delta K\\ &\qquad+\bigg{(}m\left[\alpha^{2}r_{+}^{2}-1\right]\left[-\alpha^{2}\left(e^{2}+g^{2}\right)+r_{+}^{2}\left(\alpha^{2}+\alpha^{4}\left(e^{2}+g^{2}\right)+4\right)-1\right]\\ &\qquad\qquad+2r_{+}\left[-\alpha^{2}r_{+}^{2}\left(e^{2}+g^{2}+r_{+}^{2}\right)+e^{2}+g^{2}+r_{+}^{4}+r_{+}^{2}\right]\bigg{)}\frac{\alpha K\delta\alpha}{(1-\alpha^{2}r_{+}^{2})^{2}\kappa}\end{split} \tag{119}\] and now utilising the phase space relationship (109) required for well-posedness of the variational problem we can simplify this to \[C=Q(r_{+})\frac{\delta K(1-\alpha^{2}r_{+}^{2})+2\alpha\delta\alpha Kr_{+}^{2}}{\kappa r_{+}\left(\alpha^{2}r_{+}^{2}-1\right)^{2}}=0, \tag{120}\] where we used (114) in the final step, thus demonstrating equivalence between the covariant phase space and horizon polynomial approaches.

### Differences in thermodynamic lengths

We will now demonstrate the reason for the differences between our "thermodynamic length" parameters \(\lambda_{\pm}\) and those of the previous literature [35; 36; 37; 38; 39; 40]. We compare our law with that of [40], as that is the case with the largest number of non-trivial parameters and in particular has \(g\neq 0\). Their general first law is given in equation (3.46) of that work and in order to compare with ours we set \[J=0,\qquad\ell=1,\qquad\delta\ell=0,\qquad\delta G=0,\] (B.9) reducing equation (3.46) of [40] down to \[\delta\mathcal{M}_{\rm hol}=T\delta S_{\rm BH}+\Phi_{e}\delta Q_{e}+\Phi_{m}\delta Q_{m}-\tilde{\lambda}_{-}\delta\mu_{-}-\tilde{\lambda}_{+}\delta\mu_{+},\] (B.10) where all quantities above are equivalent to those defined in (5.33), _except_ for the thermodynamic lengths, which are given by \[\tilde{\lambda}_{\pm}=\frac{r_{+}}{\kappa(1\pm\alpha r_{+})}-\frac{m}{\kappa\Xi}\mp\frac{\alpha\Xi}{\kappa}.\] (B.11) We also note that we have explicitly written the holographic mass \(\mathcal{M}_{\rm hol}\), as defined in (4.26), in the first law (B.10). This is a crucial feature of this law as we shall now demonstrate. In order to compare with [40] we remove the requirement of well-posedness (3.41) and choose \(\kappa\) as given in (3.42). Using equation (4.38) together with the definitions of the string tensions (2.15) we find that the difference between the variations of the masses is now \[\delta H^{\rm ren}_{\partial_{t}}-\delta\mathcal{M}_{\rm hol}=\frac{m(2\alpha^{2}\Xi-1)}{\kappa\Xi}\delta(\mu_{+}+\mu_{-}),\] (B.12) which we note is no longer zero.9 This means that the first law is sensitive to the choice of mass that appears; for example, we can rewrite (B.10) with the Wald Hamiltonian as Footnote 9: Unless one fixes the overall deficit \(\delta(\mu_{+}+\mu_{-})=0\), e.g. in the case of a spindle. \[\delta H^{\rm ren}_{\partial_{t}}=T\delta S_{\rm BH}+\Phi_{e}\delta Q_{e}+\Phi_{m}\delta Q_{m}-\lambda_{-}\delta\mu_{-}-\lambda_{+}\delta\mu_{+},\] (B.13) where the right hand side of (B.12) has been absorbed into the thermodynamic lengths, which are now in agreement with our expression for \(\lambda_{\pm}\) given in (5.33).
This calculation demonstrates explicitly that relaxing the requirement of a well-posed variational problem results in an ambiguous first law, which changes form depending on the choice of mass charge that one uses. If the variational problem is well-posed then both notions of mass are equivalent and one finds a single consistent first law, as given in (5.32). We thus strongly advocate well-posedness as a requirement in order to establish equality between various notions of mass and to obtain consistent thermodynamic relations.
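As an independent sanity check on the horizon-polynomial algebra above, one can confirm that the three coefficients quoted for \(\delta S_{\rm BH}\) in (116) are the partial derivatives of a single entropy function. The sympy sketch below (ours, not from the paper) assumes \(S_{\rm BH}=\pi K r_{+}^{2}/(1-\alpha^{2}r_{+}^{2})\), a form inferred from that variation rather than stated explicitly in this excerpt.

```python
import sympy as sp

r, K, alpha = sp.symbols('r_+ K alpha', positive=True)
S_BH = sp.pi * K * r**2 / (1 - alpha**2 * r**2)   # assumed entropy function

# The coefficients of delta r_+, delta K and delta alpha quoted in (116):
assert sp.simplify(sp.diff(S_BH, r) - 2*sp.pi*K*r / (1 - alpha**2*r**2)**2) == 0
assert sp.simplify(sp.diff(S_BH, K) - sp.pi*r**2 / (1 - alpha**2*r**2)) == 0
assert sp.simplify(sp.diff(S_BH, alpha) - 2*sp.pi*K*alpha*r**4 / (1 - alpha**2*r**2)**2) == 0
print("delta S_BH coefficients reproduced")
```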
2302.12616
Does an IRS Degrade Out-of-Band Performance?
Intelligent reflecting surfaces (IRSs) were introduced to enhance the performance of wireless systems. However, from a cellular service provider's view, a concern with the use of an IRS is its effect on out-of-band (OOB) quality of service. Specifically, given two operators, say X and Y, providing services in a geographical area using non-overlapping frequency bands, if operator-X uses an IRS to optimally enhance the throughput of its users, does the IRS degrade the performance of operator-Y? We answer this by deriving the ergodic sum spectral efficiency (SE) of both operators under round-robin scheduling. We also derive the complementary cumulative distribution function of the change in effective channel at an OOB user with and without the IRS, which provides deeper insights into OOB performance. Surprisingly, we find that even though the IRS is randomly configured from operator-Y's view, the OOB operator still benefits from the IRS, witnessing a performance enhancement for free. This happens because the IRS introduces additional paths between the nodes, increasing the signal power at the receiver and providing diversity benefits. We verify our findings numerically and conclude that an IRS is beneficial to every operator, even when the IRS is deployed to optimally serve only one operator.
L. Yashvanth, Chandra R. Murthy
2023-02-24T13:21:55Z
http://arxiv.org/abs/2302.12616v2
# Does an IRS Degrade Out-of-Band Performance?

###### Abstract

Intelligent reflecting surfaces (IRSs) were introduced in the literature in order to enhance the performance of wireless systems. However, from a cellular service provider's point of view, a concern with the use of an IRS is its effect on out-of-band (OOB) quality of service. Specifically, if there are two operators, say X and Y, providing services in a given geographical area using non-overlapping frequency bands, and if operator X uses an IRS to optimally enhance the throughput of its users, does the IRS degrade the performance of operator Y? We study this scenario by analyzing the ergodic sum-rates achieved by both operators under round-robin scheduling. We also derive the complementary cumulative distribution function of the change in the effective channel gain at an OOB user with and without the IRS, which provides deeper insights into the effect of the IRS on the overall channel quality. Surprisingly, we find that even though the IRS is randomly configured from operator Y's point of view, the OOB operator still benefits from the presence of the IRS, witnessing a performance enhancement for free. This happens because the IRS introduces additional paths between the transmitter and receiver, increasing the overall signal power arriving at the receiver and providing diversity benefits. We verify our findings via numerical simulations, and conclude that an IRS is always beneficial to every operator, even when the IRS is deployed to optimally serve only one operator in the system. Intelligent reflecting surfaces, Out-of-band performance.

## I Introduction

Intelligent reflecting surfaces (IRS) have been extensively studied in the literature as a means to enhance the performance of both indoor and outdoor wireless systems [1, 2, 3]. An IRS is a passive electromagnetic surface which comprises IRS elements made of metamaterials. The IRS elements can introduce a small delay/phase shift in the radio frequency (RF) signal impinging on them before reflecting it, thereby allowing the IRS as a whole to steer the signal in any desired direction. This can be achieved by appropriately configuring the reflection coefficient at every IRS element. Further, due to the passive nature of the IRS, the performance of an IRS-aided system is enhanced while maintaining energy efficiency [4]. In order to obtain the professed benefits of an IRS, every IRS element needs to be configured to introduce a phase shift in the signal that is optimal for the scheduled user (UE) in terms of a metric of interest, e.g., the signal-to-noise ratio (SNR). This, however, requires knowledge of the channel state information (CSI) of all the links from the base station (BS) to the user through every IRS element [5]. In practice, multiple network operators co-exist in a given geographical area, each operating in a different frequency band. As a consequence, at any given point in time, multiple UEs are served by different operators in the system. In such a scenario, if an IRS is optimized to cater to the needs of one of the operators, it is not clear whether the IRS will boost or degrade the performance of the other operators in the system. In particular, since the IRS elements are passive, they will reflect the RF signals impinging on them in all frequency bands. So, it is important to understand how an IRS which is controlled by only one operator affects the performance of other operators (referred to as _out-of-band performance_ in this paper).
Although a few works consider the scenario of the presence of an IRS in multi-band systems [6, 7], these works proceed along the lines of jointly optimizing the IRS phase configurations among all the operators. This approach requires inter-operator coordination, which is not practical. Moreover, the solutions and analysis provided in these works are not scalable with the number of operators (or frequency bands) in the system. More fundamentally, none of these works address the question of out-of-band (OOB) performance even in the scenario where two operators operate in non-overlapping bands and the IRS is optimized for only one of them. In this paper, we address this question, and to the best of our knowledge, this is the first work which considers the effect on OOB performance due to the presence of an IRS in a system under practical cellular network deployment scenarios. The contributions of this paper are summarized as follows: We consider a system with two network operators operating in different frequency bands, and analyze the OOB throughput performance in the presence of an IRS that is optimized to serve the users subscribed to an operator offering wireless services in a different frequency band. Specifically,

* We derive the ergodic sum spectral efficiencies (SE) of the two operators as a function of the number of IRS elements, under round-robin scheduling of UEs. We show that the ergodic sum-SE scales quadratically and linearly with the number of IRS elements for the in-band and OOB networks, respectively, even when the OOB operator has no control over the IRS in the environment.
* We provide an exact characterization of the complementary cumulative distribution function (CCDF) of the difference in the channel gain at an OOB UE with and without the IRS. We determine the probability with which the difference is non-negative as a function of the number of IRS elements, and we also show that the channel gain with an IRS _stochastically dominates_ the gain without the IRS, with the gap being an increasing function of the number of IRS elements. This confirms that even an OOB UE witnesses benefits that monotonically increase with the number of IRS elements, even though the OOB system has no control over the IRS phase configuration.
* Through numerical simulations, we illustrate that an IRS _enhances_ the OOB performance, i.e., the presence of an IRS is beneficial even if its reflection coefficients are chosen randomly from the OOB operator's viewpoint.

Our results show that deploying an IRS not only improves the throughput of the operator who controls the IRS phase configuration to optimally serve its own users, but also enhances the throughput of users associated with an OOB operator who has no control over the IRS, albeit by a smaller amount compared to the in-band users. Furthermore, in rich scattering environments, the throughput enhancement increases with the number of IRS elements deployed. Thus, deploying an IRS enriches the overall wireless channels, and thereby benefits all wireless operators in the area.

_Notation:_ \([N]\) stands for the set of natural numbers from \(1\) to \(N\); \(|\cdot|\), \(\angle\) stand for the magnitude and phase of a complex number (vector); \(\mathcal{CN}(\mathbf{0},\mathbf{\Sigma})\) denotes a circularly symmetric complex Gaussian random vector with mean \(\mathbf{0}\) and covariance matrix \(\mathbf{\Sigma}\), \(\exp(\lambda)\) denotes an exponentially distributed random variable with parameter \(\lambda\); i.i.d.
stands for independent and identically distributed, \(A\overset{d}{=}B\) denotes that random variables \(A\) and \(B\) have the same distribution, \(\mathsf{Pr}(\cdot)\) refers to the probability measure, and \(\mathbb{E}[\cdot]\) denotes the statistical expectation. \(\mathcal{O}(\cdot)\) is Landau's Big-O notation. Finally, \(\mathbbm{1}_{\{\cdot\}}\) is the indicator function, and \(\mod(A,B)\) yields the integer remainder when \(A\) is divided by \(B\).

## II System Model

We consider two mobile network operators X and Y who provide service to \(K\) and \(Q\) UEs, respectively. The UEs are arbitrarily distributed over a single cell covering the same geographical area, and operators X and Y use non-overlapping frequency bands. The base stations (BSs) of operators X and Y (referred to as BS-X and BS-Y, respectively) and the UEs are equipped with a single antenna, and all the channels in the system undergo frequency-flat fading.1 An \(N\)-element IRS is deployed by operator X in order to enhance the quality of service (QoS) to the UEs being served by it. That is, operator X configures the IRS with the optimal phase configuration for a UE scheduled by BS-X in every time slot. Footnote 1: The extension to more general cases such as multiple antenna systems and frequency selective channels does not change the main message of the paper, but will be addressed in our future work.

### _Channel Model_

We model the downlink signal received at the \(k\)th UE (served by BS-X) as \[y_{k}=\left(h_{d,k}+\mathbf{g}_{k}^{T}\mathbf{\Theta}\mathbf{f}^{X}\right)x_{k}+n_{k}, \tag{1}\] where \(\mathbf{g}_{k}\in\mathbb{C}^{N\times 1}\) represents the channel from the IRS to the \(k\)th UE, \(\mathbf{f}^{X}\in\mathbb{C}^{N\times 1}\) represents the channel from the BS-X to the IRS, \(\mathbf{\Theta}\in\mathbb{C}^{N\times N}\) is a diagonal matrix containing the IRS reflection coefficients of the form \(e^{j\theta}\), and \(h_{d,k}\) is the direct (non-IRS) path from the BS to the UE-\(k\). Also, \(x_{k}\) is the data symbol for UE-\(k\) with average power \(\mathbb{E}[|x_{k}|^{2}]=P\), and \(n_{k}\) is the AWGN \(\sim\mathcal{CN}(0,\sigma^{2})\) at UE-\(k\). Similarly, at UE-\(q\) served by BS-Y, we have \[y_{q}=\left(h_{d,q}+\mathbf{g}_{q}^{T}\mathbf{\Theta}\mathbf{f}^{Y}\right)x_{q}+n_{q}, \tag{2}\] where the symbols have similar meanings as in (1). Fig. 1 summarizes the considered network. Similar to [8], we consider that all the fading channels in the system are statistically independent and follow the Rayleigh distribution.2 Specifically, \(h_{d,k}=\sqrt{\beta_{d,k}}\tilde{h}_{d,k},\ h_{d,q}=\sqrt{\beta_{d,q}}\tilde{h}_{d,q};\ \tilde{h}_{d,k},\tilde{h}_{d,q}\overset{\text{i.i.d.}}{\sim}\mathcal{CN}(0,1)\); \(\mathbf{g}_{k}=\sqrt{\beta_{\mathbf{g},k}}\tilde{\mathbf{g}}_{k},\ \mathbf{g}_{q}=\sqrt{\beta_{\mathbf{g},q}}\tilde{\mathbf{g}}_{q};\ \tilde{\mathbf{g}}_{k},\tilde{\mathbf{g}}_{q}\overset{\text{i.i.d.}}{\sim}\mathcal{CN}(\mathbf{0},\mathbf{I}_{N})\); \(\mathbf{f}^{X}=\sqrt{\beta_{\mathbf{f}^{X}}}\tilde{\mathbf{f}}^{X},\ \mathbf{f}^{Y}=\sqrt{\beta_{\mathbf{f}^{Y}}}\tilde{\mathbf{f}}^{Y};\ \tilde{\mathbf{f}}^{X},\tilde{\mathbf{f}}^{Y}\overset{\text{i.i.d.}}{\sim}\mathcal{CN}(\mathbf{0},\mathbf{I}_{N})\). All terms of the form \(\beta_{x}\) represent the pathloss factor in the \(x\)th link. Footnote 2: We do not consider spatial correlation in the channels across the IRS elements, which can otherwise occur if the inter-element spacing in the IRS is \(<\lambda/2\) where \(\lambda\) is the wavelength of the signal [9].
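For concreteness, one realization of the channel model above can be drawn in a few lines. The numpy sketch below is ours (not the authors' code), and the path-loss values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cn(shape, var=1.0):
    """Draw CN(0, var) samples (circularly symmetric complex Gaussian)."""
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

N = 64                                         # number of IRS elements
beta_fX, beta_gk, beta_dk = 1e-3, 1e-4, 1e-6   # illustrative path-loss factors

f_X  = np.sqrt(beta_fX) * cn(N)     # BS-X -> IRS link, CN(0, beta_fX * I_N)
g_k  = np.sqrt(beta_gk) * cn(N)     # IRS -> UE-k link
h_dk = np.sqrt(beta_dk) * cn(1)[0]  # direct BS-X -> UE-k link in (1)
```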
## III Out-of-band Performance Analysis

As mentioned earlier, in this work, we consider a scenario where operator X deploys and controls an IRS in order to enhance the throughput performance of the users being served by it, and are interested in the effect of the IRS on an operator Y that is providing services in a different frequency band. Thus, in order to serve the \(k\)th UE, operator X configures the IRS with the rate-optimal phase angles [5, Lemma 1] \[\theta_{n,k}^{*}=\angle h_{d,k}-\left(\angle f_{n}^{X}+\angle g_{k,n}\right),\qquad n=1,\ldots,N, \tag{3}\] which results in coherent addition of the signals along the direct path as well as through all the IRS elements, leading to the maximum possible received SNR and hence data rate. The rate achieved by the \(k\)th UE is given by \[R_{k}^{BF}=\log_{2}\!\left(\!1+\frac{P}{\sigma^{2}}\left|\left|h_{d,k}\right|\!+\sum_{n=1}^{N}\!\left|f_{n}^{X}g_{k,n}\right|\right|^{2}\!\right). \tag{4}\] Now, due to the independence of the channels of the users served by operators X and Y, the phase configuration used by operator X to serve its own users appears as a _random_ phase configuration of the IRS for any UE served by operator Y. In the sequel, we quantify the impact of the IRS on the throughput achieved by the users served by operator Y, which has no control over the phase configuration used at the IRS. In order to study the impact on the OOB performance, we consider the scheduling of UEs in a round-robin (RR) fashion at both BS-X and BS-Y. We note that the performance under opportunistic scheduling at either or both BSs can also be derived along similar lines, e.g., following the approach in [5].

Fig. 1: Network scenario of an IRS aided multiple-operator system.

Since the BSs are equipped with a single antenna, only one UE from each network is scheduled for data transmission in every time slot. In particular, BS-X configures the IRS optimally to maximize the SNR at the scheduled UE. A summary of the protocol is given in Algorithm 1.

```
 1: Input: UE indices at BS-X: 0, 1, ..., K-1.
 2: Input: UE indices at BS-Y: 0, 1, ..., Q-1.
 3: Initialize the UE index at BS-X to 0.
 4: Initialize the UE index at BS-Y to 0.
 5: repeat
 6:   for time slot t do
 7:     k# = mod(t, K).       > RR Scheduling at BS-X.
 8:     q# = mod(t, Q).       > RR Scheduling at BS-Y.
 9:     Set Theta = diag(theta*_{1,k#}, ..., theta*_{N,k#}) as per (3).
10:       > Hence, a random Theta gets realized for UE q#.
11:     Data transmission from BS-X to k#.
12:     Data transmission from BS-Y to q#.
13:   end for
14: until service time
```
**Algorithm 1** Protocol at BS-X and BS-Y

We characterize the OOB performance of the network by deriving the ergodic sum-SE of both networks, and then infer the degree of degradation/enhancement of the OOB performance caused by the IRS to operator Y. The ergodic SE at UE-\(k\) is \[\langle R_{k}^{(X)}\rangle=\mathbb{E}\!\left[\log_{2}\left(\!1\!+\!\frac{\left|\sum_{n=1}^{N}\!\left|f_{n}^{X}g_{k,n}\right|\!+\!\left|h_{d,k}\right|\right|^{2}P}{\sigma^{2}}\right)\right], \tag{5}\] since the IRS is configured with the optimal phase configuration for (scheduled) UE-\(k\).
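Before turning to operator Y, the effect of the optimal phases (3) is easy to verify numerically. The sketch below is ours (unit path losses for brevity); it checks that (3) aligns all cascaded paths with the direct path, so the effective channel magnitude in (4) is \(|h_{d,k}|+\sum_{n}|f_{n}^{X}g_{k,n}|\).

```python
import numpy as np

rng = np.random.default_rng(1)
cn = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

N = 32
f, g, h_d = cn(N), cn(N), cn(1)[0]                   # unit path losses for brevity

theta = np.angle(h_d) - (np.angle(f) + np.angle(g))  # rate-optimal phases, eq. (3)
h_eff = h_d + np.sum(g * np.exp(1j * theta) * f)     # g^T Theta f + h_d, cf. eq. (1)

# Coherent addition: |h_eff| = |h_d| + sum_n |f_n g_n|, maximizing the SNR in (4)
assert np.isclose(abs(h_eff), abs(h_d) + np.sum(np.abs(f * g)))
print(abs(h_eff)**2 / abs(h_d + g @ f)**2)           # gain over an un-phased IRS
```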
On the other hand, the ergodic SE for (scheduled) UE-\(q\) of operator Y is given by \[\langle R_{q}^{(Y)}\rangle=\mathbb{E}\!\left[\log_{2}\left(\!1\!+\!\frac{\left|\sum_{n=1}^{N}f_{n}^{Y}g_{q,n}+h_{d,q}\right|^{2}P}{\sigma^{2}}\right)\right], \tag{6}\] where we used the fact that the channels are circularly symmetric random variables, so \(g_{q,n}e^{j\theta}\overset{d}{=}g_{q,n}\) for any \(\theta\). Here, the expectations are taken with respect to the distribution of the channels to the respective UEs. With RR scheduling, the ergodic sum-SEs of the two operators are given by \[\bar{R}^{(X)}\triangleq\frac{1}{K}\sum_{k=1}^{K}\langle R_{k}^{(X)}\rangle,\text{ and }\bar{R}^{(Y)}\triangleq\frac{1}{Q}\sum_{q=1}^{Q}\langle R_{q}^{(Y)}\rangle, \tag{7}\] where the factors \(1/K\) and \(1/Q\) account for the fact that, under RR scheduling, every UE gets scheduled only for a fraction of the time slots, inversely proportional to the total number of UEs in the system. Closed-form expressions for the ergodic sum-rates are difficult to obtain due to the complicated nature of the distribution of the SNR terms in (5) and (6). Instead, we provide an approximate characterization of the achievable SE below by bounding the data rates using Jensen's inequality.

**Theorem 1**.: _With RR scheduling, and under independent Rayleigh fading channels, the ergodic sum-SEs of the operators X and Y when the IRS is optimized to serve the UEs of operator X scale as_ \[\bar{R}^{(X)}=\frac{1}{K}\sum_{k=1}^{K}\mathcal{O}\left\{\log_{2}\left(1+\left[N^{2}\left(\frac{\pi^{2}}{16}\beta_{r,k}\right)+N\left(\beta_{r,k}-\frac{\pi^{2}}{16}\beta_{r,k}+\frac{\pi^{3/2}}{4}\sqrt{\beta_{d,k}\beta_{r,k}}\right)+\beta_{d,k}\right]\frac{P}{\sigma^{2}}\right)\right\}, \tag{8}\] _where \(\beta_{r,k}\triangleq\beta_{\mathbf{f}^{X}}\beta_{\mathbf{g},k}\), and_ \[\bar{R}^{(Y)}=\frac{1}{Q}\sum_{q=1}^{Q}\mathcal{O}\left\{\log_{2}\left(1+\left[N\beta_{r,q}+\beta_{d,q}\right]\frac{P}{\sigma^{2}}\right)\right\}, \tag{9}\] _where \(\beta_{r,q}\triangleq\beta_{\mathbf{f}^{Y}}\beta_{\mathbf{g},q}\)._

Proof.: See Appendix A.

From the above theorem, we infer the following on the performance of an IRS-aided system where several network operators co-exist in the same geographical area, providing services in different frequency bands:

* The IRS enhances the average received SNR by a factor of \(N^{2}\) at any scheduled (in-band) UE of operator X when the IRS is optimized by BS-X. This is the benefit that operator X obtains by using an optimized \(N\)-element IRS.
* Operator Y, who does not control the IRS, also witnesses an enhancement of the average SNR by a factor of \(N\) free of cost, i.e., without any coordination with the IRS. This happens because the IRS makes the wireless environment richer in scattering, and hence facilitates reception of multiple copies of the signals at the (out-of-band) UEs.

We now make the analysis more concrete by analyzing the stochastic behavior of the channel gains witnessed by an OOB UE with and without an IRS; a simulation sketch of the two scalings above follows after the definitions below. Define the following random variables for an arbitrary UE, say UE-\(q\), served by BS-Y. \[\left|h_{1,q}\right|^{2}\triangleq\left|\sum_{n=1}^{N}f_{n}^{Y}g_{q,n}+h_{d,q}\right|^{2};\quad\left|h_{2,q}\right|^{2}\triangleq\left|h_{d,q}\right|^{2}. \tag{10}\] Note that \(\left|h_{1,q}\right|^{2}\) and \(\left|h_{2,q}\right|^{2}\) represent the channel power gain of UE-\(q\) in the presence and absence of the IRS, respectively.
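The two scalings inferred from Theorem 1 can be reproduced with a short Monte Carlo experiment. The sketch below is ours, with unit path losses as an illustrative assumption; it estimates the mean channel gains underlying (5) and (6) and exhibits the \(N^{2}\) and \(N\) growth, respectively.

```python
import numpy as np

rng = np.random.default_rng(2)
cn = lambda s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

beta_r, beta_d, T = 1.0, 1.0, 20_000                 # unit path losses, T trials
for N in (8, 16, 32, 64):
    fg = np.sqrt(beta_r) * cn((T, N)) * cn((T, N))   # cascaded terms f_n * g_n
    h  = np.sqrt(beta_d) * cn(T)                     # direct path h_d
    gain_X = (np.abs(fg).sum(1) + np.abs(h))**2      # coherent IRS, cf. (5)
    gain_Y = np.abs(fg.sum(1) + h)**2                # random phases, cf. (6)
    # Expect gain_X/N^2 -> (pi^2/16) beta_r ~ 0.617 and gain_Y/N -> beta_r
    print(N, gain_X.mean() / N**2, gain_Y.mean() / N)
```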
We now characterize the change in the channel gain at UE-\(q\) served by BS-Y in the presence and absence of the IRS as \[Z_{N}^{(Y)}\triangleq\left|h_{1,q}\right|^{2}-\left|h_{2,q}\right|^{2}. \tag{11}\] The random variable \(Z_{N}^{(Y)}\), as defined above, provides a conservative comparison of the instantaneous channel gains in the presence and absence of an \(N\)-element IRS over the entire support of their respective probability distributions. In fact, the event characterized by \(Z_{N}^{(Y)}<0\) indicates an adverse effect of the IRS on an OOB UE, and in the sequel we show that \(Z_{N}^{(Y)}\) is non-negative with probability approaching one as \(N\) grows. Towards that end, we derive the CCDF of \(Z_{N}^{(Y)}\), given by \[\text{CCDF}(Z_{N}^{(Y)})\triangleq\bar{F}_{Z_{N}^{(Y)}}(z)=\mathsf{Pr}(Z_{N}^{(Y)}\geq z). \tag{12}\]

**Theorem 2**.: _The CCDF of the random variable \(Z_{N}^{(Y)}\) when \(N\) is reasonably large is given by_ \[\bar{F}_{Z_{N}^{(Y)}}(z)=\left\{\begin{array}{ll}1-\dfrac{1}{1+\left(1+N\tilde{\beta}\right)^{2}}\,e^{\frac{z}{2\beta_{d,q}^{2}}},&\text{if }z<0,\\ \\ \dfrac{\left(1+N\tilde{\beta}\right)^{2}}{1+\left(1+N\tilde{\beta}\right)^{2}}\,e^{-\frac{z}{2\beta_{d,q}^{2}\left(1+N\tilde{\beta}\right)^{2}}},&\text{if }z\geq 0,\end{array}\right. \tag{13}\] _where \(\tilde{\beta}\triangleq\dfrac{\beta_{r,q}}{\beta_{d,q}}\)._

Proof.: See Appendix B.

From the above theorem, we have \(\bar{F}_{Z_{N}^{(Y)}}(0)=1-1/\Big{(}1+(1+N\tilde{\beta})^{2}\Big{)}\), i.e., for a given \(\tilde{\beta}\), the probability that the SNR/gain offset in (11) is negative decays as \(\mathcal{O}\left(1/N^{2}\right)\). This is also corroborated by the numerical result we show in Fig. 4. Moreover, we see that \(\bar{F}_{Z_{N^{\prime}}^{(Y)}}(z)\geq\bar{F}_{Z_{N^{\prime\prime}}^{(Y)}}(z)\) for all \(z\) and any \(N^{\prime}>N^{\prime\prime}\). Consequently, we have the following proposition.

**Proposition 1**.: _For any \(M,N\in\mathbb{N}\cup\{0\}\) with \(M>N\), the random variable \(Z_{M}^{(Y)}\) stochastically dominates3\(Z_{N}^{(Y)}\). In particular, the channel gain in the presence of the IRS stochastically dominates the channel gain in its absence._ Footnote 3: A real-valued random variable \(X\) is stochastically larger than, or stochastically dominates, the random variable \(Y\), written \(X>_{st}Y\), if \[\mathsf{Pr}(X>a)\geq\mathsf{Pr}(Y>a),\ \ \text{for all}\ a. \tag{14}\] Note that, if the random variables \(X\) and \(Y\) have CCDFs \(\bar{F}\) and \(\bar{G}\), respectively, then \(X>_{st}Y\Longleftrightarrow\bar{F}(a)\geq\bar{G}(a)\) for all \(a\)[10].

The above proposition states that the random variables \(\left\{Z_{n}^{(Y)}\right\}_{n\in\mathbb{N}\cup\{0\}}\) form a sequence of _stochastically larger_ random variables as a function of the number of IRS elements, where \(\mathbb{N}\) is the set of natural numbers. Thus, the SNR offset increases with the number of IRS elements even at an OOB UE, i.e., the IRS only enhances the channel quality at an OOB UE at any point in time, with high probability. Therefore, the performance of OOB operators _does not degrade_ even when the operator is completely oblivious to the presence of the IRS. Note that this holds true for any operator in the area, and hence no operator will be at a disadvantage due to the presence of an IRS being controlled by only one operator. In the next section, we numerically illustrate the above points.
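As a quick numerical illustration of Theorem 2 (ours, with an illustrative value of \(\tilde{\beta}\)), the snippet below evaluates \(\mathsf{Pr}(Z_{N}^{(Y)}<0)=1/\big(1+(1+N\tilde{\beta})^{2}\big)\) from (13) and shows its \(\mathcal{O}(1/N^{2})\) decay.

```python
beta_tilde = 0.1  # beta_{r,q} / beta_{d,q}, an illustrative assumed value
for N in (0, 8, 16, 32, 64):
    p_neg = 1.0 / (1.0 + (1.0 + N * beta_tilde) ** 2)  # 1 - CCDF(0), from (13)
    print(f"N = {N:3d}   Pr(Z_N < 0) = {p_neg:.4f}")
```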
## IV Numerical Results

In this section, we validate the analytical results derived and empirically show that an IRS does not cause any degradation in the OOB performance. The single-antenna BS-X and BS-Y are located at coordinates \((0,200)\) and \((200,0)\) (in metres), the IRS is at \((0,0)\), and single-antenna UEs are located uniformly at random locations in the rectangular region with diagonally opposite corners \((0,0)\) and \((200,200)\). The path losses are computed as \(\beta=1/d^{\alpha}\) where \(d\) is the distance and \(\alpha\) is the path loss exponent. We use \(\alpha=1.5,2\) and \(3\) in the BS X/Y-IRS, IRS-UE and BS X/Y-UE (direct) links, respectively, similar to [5]. The fading channels are randomly generated as per Sec. II-A.

In Fig. 2, we plot the empirical ergodic sum-SE vs. the transmit SNR \(\left(\triangleq P/\sigma^{2}\right)\) for both the operators as a function of the number of IRS elements. We also plot the sum-SE obtained from the analytical expressions in Theorem 1. We considered a scenario with \(K=Q=10\), and all UEs being served over a total of \(1000\) time slots under RR scheduling by the respective operators. Further, the IRS is optimized to serve the UEs of operator X. We see that while the IRS uniformly enhances the signal strength for operator X at all SNRs, it also boosts the SNR for any UE served by BS-Y (for any number of IRS elements), which has no control over the IRS phase configurations. This corroborates our observation from Theorem 1 that the IRS does not degrade the OOB performance. Also, the derived analytical expressions tightly match the simulated values, i.e., the approximation error introduced by the use of Jensen's inequality is small.

Fig. 2: Spectral efficiency vs transmit SNR.

Fig. 3: Spectral efficiency vs \(\log_{2}(N)\).

Next, in Fig. 3, we examine the effect of the number of IRS elements, \(N\), by plotting the ergodic sum-SE vs. \(\log_{2}(N)\) for transmit SNRs of \(70\) dB and \(90\) dB (denoted by T. SNR) to validate the scaling of the received SNR as \(N^{2}\) for operator X and as \(N\) for operator Y. On the plot we mark the slope of the different curves; and as expected from Theorem 1, it is clear that while the received SNR for a user served by operator X scales as \(N^{2}\), it also scales as \(N\) for a user served by operator Y. Finally, we study the effect of the IRS on the OOB operator (namely, Y), by considering the behavior of the random variable \(Z_{N}^{(Y)}\) (see (11)), which represents the difference in the SNR/channel gain at a UE \(q\) served by BS-Y (which does not control the IRS) with and without the IRS in the environment. In Fig. 4, we plot the CCDF of \(Z_{N}^{(Y)}\) (at a transmit SNR of \(70\) and \(90\) dB), given by (12). First, we observe that \(Z_{N}^{(Y)}\) is a non-negative random variable for any \(N>0\) with high probability, which again confirms that, with high probability, the channel gain at an OOB UE _with_ an IRS is at least as good as the channel gain at the same UE _without_ an IRS. Next, we observe that the CCDF shifts to the right as the number of IRS elements is increased. On the same plot, we also show the CCDF of the received SNR in the absence of the IRS, which is the left-most curve in the figure. This shows that the probability that an operator benefits from the presence of a randomly configured IRS in the vicinity increases with \(N\), even for operators who do not control the IRS. These observations confirm our inference from Theorem 2.
Further, the instantaneous SNR witnessed at an arbitrary UE of an OOB operator stochastically dominates the SNR seen by the same UE in the absence of the IRS. Thus, the IRS only enhances the performance of any operator regardless of the frequency band of its operation.

## V Conclusions

In this paper, we analyzed the effect of deploying an IRS on the performance of an OOB operator that has no control over the IRS. We showed that while the IRS optimally serves the in-band UEs, it simultaneously, and at no additional cost, enhances the quality of the channels of the OOB UEs. This performance enhancement in the OOB case is a result of multiple copies of the signals arriving at the UEs. Our numerical results corroborate our theoretical results, and we conclude that deployment of an IRS benefits all the co-existing network operators, albeit to a lesser extent than the operator that has control over the IRS phase configuration. Future work can include the effect of multiple antennas at the BSs/UEs, and consider other scheduling schemes such as opportunistic scheduling. In particular, with opportunistic scheduling, and if sufficiently many users are served by operator Y, any phase configuration selected by operator X can be near-optimal for some user associated with operator Y. Selecting and serving such a user can potentially procure near-optimal benefits from the IRS for both operators.

## Appendix A Proof of Theorem 1

We derive the ergodic sum-SEs for operators X and Y in the following two subsections.

### _System ergodic sum-SE of operator X_

We first compute \(\langle R_{k}^{(X)}\rangle\) for a given \(k\). By Jensen's inequality, we obtain \[\langle R_{k}^{(X)}\rangle\leq\log_{2}\!\!\left(\!1\!+\!\frac{\mathbb{E}\left[\left|\sum_{n=1}^{N}\!\left|f_{n}^{X}g_{k,n}\right|\!+\!\left|h_{d,k}\right|\right|^{2}\right]\!P}{\sigma^{2}}\!\right)\!. \tag{15}\] We expand the expectation term as follows. \[\left|\sum_{n=1}^{N}\!|f_{n}^{X}g_{k,n}|\!+\!|h_{d,k}|\right|^{2}=\sum_{n=1}^{N}\sum_{m=1}^{N}\!|f_{n}^{X}||g_{k,n}||f_{m}^{X}||g_{k,m}|+2\left(\sum_{n=1}^{N}\!|f_{n}^{X}||g_{k,n}||h_{d,k}|\right)+|h_{d,k}|^{2}=\sum_{n=1}^{N}\!|f_{n}^{X}|^{2}|g_{k,n}|^{2}\!+\!\sum_{\begin{subarray}{c}n,m=1\\ n\neq m\end{subarray}}^{N}\!|f_{n}^{X}||g_{k,n}||f_{m}^{X}||g_{k,m}|+2\left(\sum_{n=1}^{N}\!|f_{n}^{X}||g_{k,n}||h_{d,k}|\right)+|h_{d,k}|^{2}. \tag{16}\] Under Rayleigh fading, \(\mathbb{E}[|f_{n}^{X}|^{2}]=\beta_{\mathbf{f}^{X}},\mathbb{E}[|g_{k,n}|^{2}]=\beta_{\mathbf{g},k},\mathbb{E}[|h_{d,k}|^{2}]=\beta_{d,k}\), \(\mathbb{E}[|f_{n}^{X}|]=\sqrt{\frac{\pi}{4}\beta_{\mathbf{f}^{X}}}\), \(\mathbb{E}[|g_{k,n}|]=\sqrt{\frac{\pi}{4}\beta_{\mathbf{g},k}}\), \(\mathbb{E}[|h_{d,k}|]=\sqrt{\frac{\pi}{4}\beta_{d,k}},\ \forall k\in[K],n\in[N]\). Further, all the random variables are independent. Taking the expectation in (16), and substituting for these values, we get \[\mathbb{E}\left[\left|\sum_{n=1}^{N}\!|f_{n}^{X}g_{k,n}|\!+\!|h_{d,k}|\right|^{2}\right]=N^{2}\left(\frac{\pi^{2}}{16}\beta_{r,k}\right)+N\left(\beta_{r,k}-\frac{\pi^{2}}{16}\beta_{r,k}+\frac{\pi^{3/2}}{4}\sqrt{\beta_{d,k}\beta_{r,k}}\right)+\beta_{d,k}, \tag{17}\] where \(\beta_{r,k}\) is as defined in the statement of the Theorem. Substituting (17) in (15), and plugging in the resulting expression in (7) yields (8) as desired.

Fig. 4: CCDF of \(Z_{N}^{(Y)}\) as a function of \(N\).
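The closed-form mean in (17) can be cross-checked by simulation. The numpy sketch below is ours, with arbitrary illustrative path-loss values; it compares the empirical average of the expanded quantity in (16) against (17).

```python
import numpy as np

rng = np.random.default_rng(3)
cn = lambda s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

N, T = 16, 100_000
beta_fX, beta_gk, beta_dk = 2.0, 0.5, 1.0    # arbitrary illustrative values
beta_rk = beta_fX * beta_gk                  # beta_{r,k} = beta_fX * beta_gk

f = np.sqrt(beta_fX) * cn((T, N))
g = np.sqrt(beta_gk) * cn((T, N))
h = np.sqrt(beta_dk) * cn(T)

empirical = ((np.abs(f * g).sum(1) + np.abs(h)) ** 2).mean()
closed_form = (N**2 * (np.pi**2 / 16) * beta_rk
               + N * (beta_rk - (np.pi**2 / 16) * beta_rk
                      + (np.pi**1.5 / 4) * np.sqrt(beta_dk * beta_rk))
               + beta_dk)                    # right-hand side of (17)
print(empirical, closed_form)                # agree to Monte Carlo accuracy
```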
### _System ergodic sum-SE of operator \(Y\)_

As above, from Jensen's inequality we have \[\langle R_{q}^{(Y)}\rangle\leq\log_{2}\left(1+\frac{\mathbb{E}\left[\left|\sum_{n=1}^{N}f_{n}^{Y}g_{q,n}+h_{d,q}\right|^{2}\right]P}{\sigma^{2}}\right). \tag{18}\] Proceeding along the same lines, we get \[\left|\sum_{n=1}^{N}f_{n}^{Y}g_{q,n}+h_{d,q}\right|^{2}=\sum_{n=1}^{N}|f_{n}^{Y}|^{2}|g_{q,n}|^{2}+\sum_{\begin{subarray}{c}n,m=1\\ n\neq m\end{subarray}}^{N}f_{n}^{Y}g_{q,n}f_{m}^{Y*}g_{q,m}^{*}+\left(\sum_{n=1}^{N}f_{n}^{Y}g_{q,n}h_{d,q}^{*}+\sum_{m=1}^{N}f_{m}^{Y*}g_{q,m}^{*}h_{d,q}\right)+|h_{d,q}|^{2}.\] Taking the expectation and simplifying, \[\mathbb{E}\left[\left|\sum_{n=1}^{N}f_{n}^{Y}g_{q,n}+h_{d,q}\right|^{2}\right]=N\beta_{r,q}+\beta_{d,q}. \tag{19}\] Substituting (19) in (18), and plugging in the resulting expression in (7) yields (9).

## Appendix B Proof of Theorem 2

When \(N\) is reasonably large, following along the lines of [5, Proposition 1], we can approximate \(h_{1,q}\sim\mathcal{CN}(0,N\beta_{r,q}+\beta_{d,q})\).4 As a consequence, we have \(|h_{1,q}|^{2}\sim\exp(1/(N\beta_{r,q}+\beta_{d,q}))\), and \(|h_{2,q}|^{2}\sim\exp(1/\beta_{d,q})\). For notational brevity, we define two real-valued random variables \(\tilde{h}_{1,q}\triangleq|h_{1,q}|^{2}\), and \(\tilde{h}_{2,q}\triangleq|h_{2,q}|^{2}\). Hence, \(Z_{N}^{(Y)}\) is the difference of two non-identical exponentially distributed random variables. From (10), we see that although \(\tilde{h}_{1,q}\) and \(\tilde{h}_{2,q}\) are dependent, their dependence arises due to the single common term, \(h_{d,q}\). Now, since \(h_{1,q}\) contains the sum of \(N\) independent terms in addition to the \(h_{d,q}\) term, we expect that the two terms can be considered to be approximately independent when \(N\) is large. We examine this by first determining the correlation coefficient between \(\tilde{h}_{1,q}\) and \(\tilde{h}_{2,q}\), and showing that it decays inversely with \(N\). Recall that the correlation coefficient is defined as Footnote 4: We consider larger \(N\) only for the sake of analytical tractability. We showed in [5] that this approximation works well even with \(N=5\). \[\rho_{12}\triangleq\frac{\mathbb{E}\left[(\tilde{h}_{1,q}-\mathbb{E}[\tilde{h}_{1,q}])(\tilde{h}_{2,q}-\mathbb{E}[\tilde{h}_{2,q}])\right]}{\sigma_{1}\sigma_{2}}, \tag{20}\] where \(\sigma_{1}^{2}\) and \(\sigma_{2}^{2}\) are the variances of \(\tilde{h}_{1,q}\) and \(\tilde{h}_{2,q}\), respectively. Since \(\tilde{h}_{1,q}\) and \(\tilde{h}_{2,q}\) are exponentially distributed, 1. \(\mathbb{E}[\tilde{h}_{1,q}]=N\beta_{r,q}+\beta_{d,q}\), and \(\mathbb{E}[\tilde{h}_{2,q}]=\beta_{d,q}\), 2. \(\sigma_{1}^{2}=(N\beta_{r,q}+\beta_{d,q})^{2}\), and \(\sigma_{2}^{2}=\beta_{d,q}^{2}\). Substituting for these values and expanding (20), we get \[\rho_{12}=\frac{\mathbb{E}\left[\tilde{h}_{1,q}\tilde{h}_{2,q}\right]-(N\beta_{r,q}+\beta_{d,q})\,\beta_{d,q}}{(N\beta_{r,q}+\beta_{d,q})\beta_{d,q}}. \tag{21}\] Using the expressions for \(\tilde{h}_{1,q}\) and \(\tilde{h}_{2,q}\) from (10) in the above equation, it is easy to verify that \(\mathbb{E}\left[\tilde{h}_{1,q}\tilde{h}_{2,q}\right]=N\beta_{r,q}\beta_{d,q}+2\beta_{d,q}^{2}\). After simplification, we arrive at \[\rho_{12}=\frac{1}{1+N\left(\frac{\beta_{r,q}}{\beta_{d,q}}\right)}. \tag{22}\] Clearly, \(\rho_{12}\) decays inversely with \(N\). We now use a result from [11, Eq.
4.24] which characterizes the distribution of the difference of two dependent and non-identically distributed chi-square random variables, and obtain the CDF of \(Z_{N}^{(Y)}\) as \[F_{Z_{N}^{(Y)}}(z)=\left\{\begin{array}{ll}\frac{2}{\sigma_{1}^{2}\sigma_{2}^{2}(1-\rho_{12}^{2})\gamma\alpha^{-}}e^{\left(\frac{\alpha^{-}z}{4}\right)},&\text{if }z<0,\\ 1-\frac{2}{\sigma_{1}^{2}\sigma_{2}^{2}(1-\rho_{12}^{2})\gamma\alpha^{+}}e^{-\left(\frac{\alpha^{+}z}{4}\right)},&\text{if }z\geq 0,\end{array}\right. \tag{23}\] where \[\gamma\!=\!\frac{\sqrt{\left(\sigma_{2}^{2}\!-\!\sigma_{1}^{2}\right)^{2}+4\sigma_{1}^{2}\sigma_{2}^{2}(1-\rho_{12}^{2})}}{\sigma_{1}^{2}\sigma_{2}^{2}(1-\rho_{12}^{2})};\qquad\alpha^{\pm}\!=\!\gamma\pm\frac{\sigma_{2}^{2}\!-\!\sigma_{1}^{2}}{\sigma_{1}^{2}\sigma_{2}^{2}(1-\rho_{12}^{2})}, \tag{24}\] and the other symbols have meanings as defined above. We further simplify the CDF by considering large \(N\) and letting \(\rho_{12}\to 0\) in (23) and (24), as per (22). Finally, recognizing that \(\bar{F}_{Z_{N}^{(Y)}}(z)=1-F_{Z_{N}^{(Y)}}(z),\,\forall z\), we obtain the CCDF of \(Z_{N}^{(Y)}\) as \[\bar{F}_{Z_{N}^{(Y)}}(z)=\left\{\begin{array}{ll}1-\frac{\sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}e^{\frac{z}{2\sigma_{2}^{2}}},&\text{if }z<0,\\ \frac{\sigma_{1}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}e^{-\frac{z}{2\sigma_{1}^{2}}},&\text{if }z\geq 0.\end{array}\right. \tag{25}\] Substituting for \(\sigma_{1}^{2}\) and \(\sigma_{2}^{2}\) into (25) completes the proof.
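The \(\rho_{12}\to 0\) simplification used in the last step can be audited symbolically. The sympy sketch below (ours) confirms that with \(\rho_{12}=0\) the quantities in (24) reduce to \(\alpha^{+}=2/\sigma_{1}^{2}\) and \(\alpha^{-}=2/\sigma_{2}^{2}\), and that the prefactors in (23) collapse to those in (25).

```python
import sympy as sp

s1, s2 = sp.symbols('sigma_1 sigma_2', positive=True)

# gamma and alpha^{pm} of (24) evaluated at rho_12 = 0:
gamma = sp.sqrt(sp.factor((s2**2 - s1**2)**2 + 4*s1**2*s2**2)) / (s1**2 * s2**2)
a_p = gamma + (s2**2 - s1**2) / (s1**2 * s2**2)
a_m = gamma - (s2**2 - s1**2) / (s1**2 * s2**2)

assert sp.simplify(a_p - 2 / s1**2) == 0
assert sp.simplify(a_m - 2 / s2**2) == 0
# Prefactors of (23) collapse to those of (25):
assert sp.simplify(2/(s1**2*s2**2*gamma*a_p) - s1**2/(s1**2 + s2**2)) == 0
assert sp.simplify(2/(s1**2*s2**2*gamma*a_m) - s2**2/(s1**2 + s2**2)) == 0
print("(23)-(24) reduce to (25) as rho_12 -> 0")
```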
2308.07526
Protecting the Future Grid: An Electric Vehicle Robust Mitigation Scheme Against Load Altering Attacks on Power Grids
Due to the growing threat of climate change, the world's governments have been encouraging the adoption of Electric Vehicles (EVs). As a result, EV numbers have been growing exponentially, which will introduce a large EV charging load into the power grid. On this basis, we present a scheme to utilize EVs as a defense mechanism to mitigate Load-Altering (LA) attacks against the grid. The developed scheme relies on robust control theory and Linear Matrix Inequalities (LMIs). Our EV-based defense mechanism is formulated as a feedback controller synthesized using H-2 and H-infinity control techniques to eliminate the impact of unknown LA attacks. The controller synthesis considers the grid topology and the uncertainties of the EV connection to the grid. To demonstrate the effectiveness of the proposed mitigation scheme, it is tested against three types of LA attacks on the New England 39-bus grid. We test our mitigation scheme against 800 MW static, switching, and dynamic attacks in the presence of multiple sources of uncertainty that can affect the EV load during deployment. The results demonstrate how the grid remains stable under LA attacks that would otherwise lead to serious instabilities.
Mohammad Ali Sayed, Mohsen Ghafouri, Ribal Atallah, Mourad Debbabi, Chadi Assi
2023-08-15T01:47:25Z
http://arxiv.org/abs/2308.07526v1
Protecting the Future Grid: An Electric Vehicle Robust Mitigation Scheme Against Load Altering Attacks on Power Grids ###### Abstract Due to the growing threat of climate change, the world's governments have been encouraging the adoption of Electric Vehicles (EVs). As a result, EV numbers have been growing exponentially which will introduce a large EV charging load into the power grid. On this basis, we present a scheme to utilize EVs as a defense mechanism to mitigate Load-Altering (LA) attacks against the grid. The developed scheme relies on robust control theory and Linear Matrix Inequalities (LMIs). Our EV-based defense mechanism is formulated as a feedback controller synthesized using H-2 and H-\(\infty\) control techniques to eliminate the impact of unknown LA attacks. The controller synthesis considers the grid topology and the uncertainties of the EV connection to the grid. To demonstrate the effectiveness of the proposed mitigation scheme, it is tested against three types of LA attacks on the New England 39-bus grid. We test our mitigation scheme against 800 MW static, switching, and dynamic attacks in the presence of multiple sources of uncertainty that can affect the EV load during deployment. The results demonstrate how the grid remains stable under the LA attacks that would otherwise lead to serious instabilities. Electric Vehicle, Grid Stability, Robust Control, Mixed Controller, Linear Matrix Inequalities, Load Altering Attack, Attack Mitigation, Dynamic Attack, Switching Attack. ## 1 Introduction Humanity's increasing reliance on electricity has transformed the course of society's development over the past couple of centuries [1]. The power grid has become the center of any advanced society and its security and stability are at the center of any country's national security. To this end, smart technologies have been introduced to support reliable grid operation transforming it into a smart grid [2][3]. The smart grid, however, became an interconnected system of physical and cyber components leaving it open to attacks initiated through the cyberinfrastructure that can have detrimental impacts on its stability and security [4]. One such attack is the False Data Injection (FDI) attack [5] in which attackers tamper with the grid's measurements to manipulate the state estimation and cause operators to take actions that might damage the grid. Stealthy FDI attacks also remain hidden from the Bad Data Detection (BDD) mechanism employed by utilities, even when attackers have incomplete topology information [5]. To this end, multiple attempts have been made to secure the communication layer of the grid [5][6]. Yet Load-Altering (LA) attacks against the grid demand side, rather than state estimation, can only be seen through their impact [7] bypassing the BDD. LA attacks can be broadly classified into 3 subfamilies which are static attacks [7][8] switching attacks [9][10], and dynamic attacks [11][12]. These LA attacks are stealthier and stronger than attacks targeting the grid's cyber layer alone as demonstrated below. The authors of [7] and [8] demonstrated how static attacks can be initiated by manipulating smart home high-wattage Internet of Things (IoT) devices to cause line tripping and load shedding while remaining unobservable to the utility. The switching attacks proposed in [8] manipulate distribution feeders to cause a disturbance that led to generator tripping while mimicking natural phenomena making them hard to be detected by the utility. 
Finally, the dynamic attacks in [11][12] achieved grid instability and blackouts by targeting smart loads which cannot be directly monitored by the utility. Most studies related to cyber intrusions into the power grid focus on attack detection on the cyber layer with little focus on mitigation. The authors of [10], for instance, utilized Neural Networks (NN) to detect switching attacks and achieved a 70% detection accuracy after examining 20s of charging requests data. Although the NN in [13] achieved near-perfect accuracy, it still requires 5s after the attack is initiated to classify it correctly by which time certain attacks would have already damaged the grid. The authors of [14] proposed an accurate detection algorithm based on extremely randomized trees to detect FDI attacks but ignored attack mitigation. On the other hand, most protection mechanisms found in the literature target very specific phenomena and disregard others. The authors of [8], for example, consider that the current N-1 contingency criterion is enough to overcome the impact of static attacks. The authors, however, propose a variation of the attack that causes load shedding even in the presence of N-1 contingency. The switching attack mitigation scheme in [10] uses a wide area controller to mitigate switching attacks with a frequency below 2 Hz making this scheme less effective against higher frequency attacks and dynamic attacks. An optimal output feedback controller was used in [15] to eliminate interarea oscillation following a contingency on the grid but not persistent attacks. These examples, however, should not overweight the advantages introduced by the smart technologies incorporated into the smart grid. One such technology is the EV and its charging infrastructure. Faced with the presented reality of the grid's vulnerability to LA attacks, we intend to create an LA attack mitigation scheme that takes advantage of the EVs' unique properties that are ideal for such a purpose. EVs can support the power grid by acting as distributed battery storage as well as distributed generators owing to the Vehicle-to-Grid (V2G) power flow capability in new EV Charging Stations (EVCSs). These EV loads are also spread throughout the power grid such that their distribution covers all load buses. This widespread distribution makes EVs optimal for usage in a wide area controller. This distribution also means that the EVs are collocated with the other system loads that will be used by adversaries for LA attacks. This colocation gives EVs an edge since the disturbances can be efficiently eliminated at their source bus with minimal propagation to the rest of the grid. Finally, EVs have another advantage over generators when used to mitigate fast switching attacks, which is the speed at which EVs can change their load. Turbine generators are rotating machines, and their reaction times are determined by their size, type, weight, and control mechanisms and are usually in the order of several seconds [16][17]. On the other hand, EVCSs are based on bidirectional power converters [18] that can change their charging rate and toggle between on, off, and V2G instantly [17] in the order of 1ms. This fast reaction time is needed to react to LA attacks especially those initiated from converter-based IoT loads. Previous studies that have considered using EV loads to support the power grid fall short of achieving the mitigation capabilities suggested in this study. 
Most of these studies, some of which are discussed in Section 2, focus on using EVs to support the power grid during its steady-state operation. Other studies use EVs passively for load balancing [19]. Based on the above discussion, in this paper, we create a robust wide-area controller based on mixed H-2/\(\infty\) controller synthesis that utilizes the EVs as its control inputs to mitigate the impact of persistent static, switching, and dynamic attacks even when they are sustained for long durations. We follow a detailed methodology to evaluate the performance of the controller and examine its performance in comparison to H-2 and H-\(\infty\) controllers. The performance of this family of controllers makes them an ideal starting point for our mitigation scheme. The Linear Matrix Inequalities (LMIs) of the controllers are modified to fit our system and incorporate the uncertainties of the EV connection to the grid resulting in the formulation of a family of robust controllers. The robust controller formulation is meant to overcome the deployment obstacles that can cause uncertainties in the control signal sent from the utility to the participating EVs. To the best of our knowledge, this is the first work to consider using EV charging loads in a scheme meant to mitigate the 3 known types of LA attacks. The contributions of this paper can be summarized as: * We are the first to propose a robust mixed sensitivity wide-area controller that utilizes EV active and reactive charging load to stabilize the power grid during the 3 known types of LA attacks. Our control mechanism successfully eliminates the impact of persisting attacks without the need for a detection mechanism. Our mixed controller eliminates over 99.6% of the attack impact and returns the frequency to its normal operating range instantaneously. * We design our robust feedback controller to account for uncertainties introduced by real-life deployment obstacles. We mathematically model this uncertainty and incorporate it into our controller synthesis. We are the first to study the stability of wide-area controllers under these uncertainties in the context of smart grid cyber-physical security. * We demonstrate the effectiveness of our proposed EV-based mitigation mechanism through extensive time-domain simulations. These simulations show how the devastating impact of the 3 known types of LA attacks is eliminated completely and instantaneously while having a negligible impact on the range of the participating EVs as well as negligible cost. The rest of the paper is organized as follows. Section 2 briefly presents the system preliminaries and the related studies. Section 3 discusses the grid modeling and the mathematical formulation of the controllers. Section 4 discusses the case studies and Section 5 examines the scheme's stability and the effects of uncertainties. Finally, Section 6 concludes the paper. ## 2 Preliminaries and Related Studies In this section, we present the EV numbers and take a brief look at the EV technology we utilize in our mitigation scheme. We then briefly discuss the LA attacker models that require such a mitigation scheme to be present. Finally, we present the related studies in the field of EVs and power grid protection. ### EV Numbers and Technology The world's governments have been encouraging the adoption of EVs to reduce the emissions of the transportation sector. As such, we are witnessing exponential growth in EV numbers on the road. 
This exponential trend is demonstrated by the record EV sales reaching 3.2 million EVs in 2020 [20] and 6.6 million in 2021 [20]. The trend has continued with 10.5 million EV sales in 2022 and an anticipated 14 million EV sales in 2023 according to the IEA [21]. To support this rapid deployment, EVCS manufacturers have been introducing faster and cheaper EVCSs. While Level 2 EVCSs had a rate of 7.2 kW a few years ago, it has now increased to 11 kW [18] and 19 kW. Level 3 fast EVCSs have rates between 40kW and 360 kW [18]. While all Level 3 EVCSs are DC chargers, Level 2 can be AC or DC. Nonetheless, DC chargers are becoming more common owing to their bidirectional inverter/converter circuits [18] that allow higher charging rates and support the V2G functionality. Furthermore, the current EV infrastructure for public EVCSs provides us with the communication and control mechanisms needed for our mitigation scheme. Public EVCSs are connected to a Cloud Management System (CMS) [18][22][23]. This CMS utilizes the Open Charge Point Protocol (OCPP) to communicate, monitor, and control all functionalities of the EVCSs in real-time. By utilizing this underlying infrastructure, our control mechanism removes the need for the addition of any new control software or hardware. These EVCSs are connected to the internet through onboard 5 routers. 5G networks are intended to function in areas of high user density and achieve a speed of 10 Gigabits per second (Gbps) and a latency of 1 millisecond [24]. This is the fastest technology currently available to connect widely disbursed users and achieve reliable and fast communication, making it ideal for our fast-reacting EV-based mitigation scheme. Moreover, OCPP specifies the Network Time Protocol (NTP) as the main protocol used to ensure synchronization of the EVCS clocks [25][26]. NTP ensures the synchronization of the EVCSs' clocks with a guaranteed accuracy of 10ms over the public Internet [27] and 1ms over a local area network [27]. ### LA Attack Types and Attacker Model As mentioned above, LA attacks manipulate actual power consumption to harm the grid [7]. Soltan et al. [7] proposed a family of large-scale static attacks, sudden spikes in load, against the grid using a botnet of compromised high-wattage IoT devices. These attacks only need high-level geographical information on the IoT devices' distribution in the grid. Their attacks cause frequency instability, increase operation costs, or cause line tripping. The authors found that compromising a small fraction of the available water heaters in a grid is sufficient to disrupt its operation. Moreover, the authors of [8] presented a multi-step static attack based on the grid's transient conditions. This variation requires the attacker to have the ability to monitor the grid's transient behavior to launch the attack steps accordingly. Switching attacks are another form of LA attack that can be launched to excite certain unstable modes present in the grid, e.g., inter-area oscillation [9][10]. No topology information is required for this attack. However, during the reconnaissance phase, the attacker introduces a chirp signal into the grid using the compromised loads. This signal allows the attacker to monitor the grid's response to different attack frequencies and calculate the impulse response of the system. From this response, the attacker can now determine the specific frequency that would excite the existing unstable mode. 
The attacker now switches the compromised loads on/off at this specific frequency. The inter-area oscillation mode was excited by a switching attack in [9] and [10]. However, the largest impact of the switching attack can only be achieved when it is done at a frequency of an existing unstable mode. The third LA attack is the dynamic attack described in [11] and [12]. During the reconnaissance phase, the attacker gathers information about the power grid's topology and parameters to build a state-space model of the system [11]. This state-space is then used to craft the attack as a feedback controller that manipulates the magnitude and oscillation frequency of the compromised load to shift the grid's eigenvalues to unstable operation regions. In [12] the feedback gain was calculated using LMIs and caused the generators to oscillate against each other and the frequency to deviate beyond the 2.5% limit tripping the generators. The dynamic attack load is tailored to the instantaneous changes in the grid's response. The authors of [12] also demonstrated that the success of the attack does not require using 100% accurate grid parameters. The above examples stress the necessity for a fast-reacting protection scheme against LA attacks. This is especially true since the world is witnessing an increasing ability of attackers to manipulate high-wattage IoT devices as demonstrated by studies performed in partnership with multiple utilities [8] and cyber security companies [28]. ### Related Studies Multiple studies have considered using EV charging to support the power grid in its steady-state operation. The work in [29] for example discusses the scheduling of EV charging at non-unity power factor to inject or draw reactive power into the grid. The reactive power flow is then included in the optimization of the EV charging schedule [29]. This strategy reduced the overall cost of EV integration into the distribution grid and improved voltage steady-state stability [29]. A similar study was performed in [30] in which the bidirectional EV charging is optimized to perform peak shaving for the grid during peak demand times. EVs were also considered in [31] as a virtual distributed storage system to mitigate the intermittency of wind generation. The EV charging is optimized to store excess wind generation and then inject it back into the grid at times when wind generation was lower than expected. Another EV usage to support the power grid was suggested in [32] where the authors suggested a local control scheme for 3-phase EV chargers coupled with photovoltaic inverters to balance 3-phase distribution grids. The EVCSs would draw the power from the lightly loaded phases while the inverters inject power into the highly loaded phases. The authors were able to improve 3 phase balance and reduced the power losses by 28%. Other studies have considered EV-based frequency regulation mechanisms against disturbances caused by renewable energy intermittency [33][34][35]. As a result, the control mechanisms in [33][34] are designed to handle frequency deviations below 0.07Hz. Furthermore, the work performed in [35] is designed to deal with frequency fluctuations below 0.06Hz with the occasional sudden spike or drop in renewable energy output power. As such this study is optimized to deal with a single sudden spike or drop in generation and not persistent LA attacks. 
On the other hand, in our work, we will utilize the EVs to support the small-signal stability of the transmission grid against persistent LA attacks by treating these attacks as persisting disturbances to be attenuated through the action of our EV-based mitigation scheme. ## 3 System Modeling and Mitigation Methodology In the following section, we discuss the defender model and the synthesis of our proposed EV-based controller to be used as our mitigation scheme against LA attacks targeting power grids. In this section, we explain the utility's state-space model of the grid. We then present our observer design that is needed to recover and incorporate the power grid's states into the state-space model. We then move on to discuss our EV-based mitigation controller selection and walk through the synthesis of the controller to fit our scenario and achieve the desired results. Finally, we address the looming issue of the impact on EV users under such a scheme. ### Power System Representation and Defender Model As a utility, the defender is assumed to have all knowledge of the grid's parameters and is thus able to represent its behavior with extremely high accuracy. In our study herein, the power system's dynamic behavior is considered mostly dependent on the generators and their control systems. For the modeling of the generators and control systems, we use models which are widely accepted in similar studies [36], i.e., (i) round rotor synchronous machine with order 6, (ii) generator exciter Model IEEE T1, (iii) single mass IEEE G2 steam turbine prime mover, and (iv) Power System Stabilizer (PSS) based on IEEE Std 421.2. We assume that the defender has perfect knowledge of these parameters as well as the line and load parameters to represent the grid. Finally, the grid is linearized into a state-space model with the active and reactive power of the aggregate EV-defender loads as the inputs and the generator frequencies being the outputs. Additionally, the unknown LA attack is represented by a disturbance to the grid. This representation is expressed in (1) where _x, y, u,_ and \(\omega\) represent the vectors of system states, outputs, inputs, and disturbances, respectively. Additionally, \(\Delta P_{EV_{n}},\Delta Q_{EV_{n}}\), and \(\mathrm{f_{Gen}}_{m}\) represent the change in active and reactive power of the aggregate EV load at bus n and the frequency of generator m respectively. A, B, C, and D are the state-space matrices that represent the power grid and its dynamic behavior. \(B_{d}\) and \(D_{d}\) represent the impact of the LA attacks on the grid's states and outputs respectively. \[\dot{x}=Ax+Bu+B_{d}\omega \tag{1a}\] \[y=Cx+Du+D_{d}\omega\] (1b) \[\mathrm{such\ that}\quad y=(f_{Gen_{1}}\quad f_{Gen_{2}}\quad...\quad f _{Gen_{m}})\] (1c) \[\mathrm{and}\quad\quad\mathrm{u}=\Delta PQ_{EV}= \big{(}\Delta P_{EV_{1}}\,\Delta Q_{EV_{1}}\,\cdots\,\Delta P_{EV_{n}}\, \Delta Q_{EV_{n}}\big{)} \tag{1d}\] The authors of [37], [38], and [39] discuss how the power grid can be linearized to maintain all its behavioral properties while the work presented in [40] and [41] discusses how power grids are linearized for the sake of designing their control techniques. Finally, the details of modeling the power grid as a state-space are presented in [42] and [12]. Fig. 1 represents our system model including the LA attack against the grid and the defender's mitigation scheme we are proposing. The attacker in the dotted box above the grid relies on the LA attack models discussed above. 
The state-space representation in the dashed box, below the grid, is constructed by the utility as part of the EV-based controller scheme we are proposing. This state-space representation is then used to calculate the EV gain matrix \(K_{def}\), which determines the required EV active and reactive power and captures the behavior of the input \(u=\Delta PQ_{EV}\) needed for the system to eliminate the impact of the attack/disturbance. The relation between the disturbance and the generator frequency can be expressed as \(y=T\omega\), where \(T\) is a transfer function written in terms of the state-space matrices. The behavior of \(u=\Delta PQ_{EV}\) is then captured and replicated on the set of secure EVs located throughout the grid. The mitigation controller is designed as a full-state feedback controller, i.e., \(u=K_{def}x\), where \(K_{def}\) is the gain matrix that is optimized to eliminate the disturbances caused by the attack. The state-space matrices are also used to calculate the observer gain matrix \(L\) needed by the utility to recover the grid states, \(x\). Since not all the states are measurable, the utility obtains an estimate, \(\hat{x}\), by feeding the measured grid outputs, \(y\), through an observer with gain \(L\). As such, the power of the aggregate EV load, \(\Delta PQ_{EV}\), is determined based on \(\Delta PQ_{EV}=u=K_{def}\hat{x}\). Designing the EV-based controller based on this feedback control law changes the \(A\) matrix to its closed-loop form, i.e., \(A_{cl}=A+BK_{def}\). Since the eigenvalues of \(A_{cl}\) define the stability of the power grid, it is of paramount importance that the methodology used to calculate \(K_{def}\) guarantees the mitigation scheme's performance under different types of attacks when faced with the different uncertainties and obstacles of a real-life deployment. The mentioned sources of uncertainty in the EV load are discussed in the controller design section and studied in Section 5.

Figure 1: Defender's state-space model and interaction with the attacked grid

### Observer Design

Since our EV-based LA attack mitigation strategy relies on full-state feedback control, the utility also needs to choose an appropriate design for the observer gain matrix \(L\). This observer facilitates the accurate recovery and estimation of the grid states \(\hat{x}\) needed to calculate the feedback control input \(\Delta PQ_{EV}=u=K_{def}\hat{x}\). With the introduction of the gain \(K_{def}\) and the observer \(L\), the state-space (1) becomes (2).

\[\dot{\hat{x}}=A_{cl}\hat{x}+L(y-\hat{y})+B_{d}\omega\tag{2a}\]
\[y=C_{cl}x+D_{d}\omega\tag{2b}\]

where \(C_{cl}=C+DK_{def}\) is the closed-loop form of matrix \(C\) that relates the system outputs to its states; the disturbance matrices are unchanged by the feedback, i.e., \(B_{cl}=B_{d}\) and \(D_{cl}=D_{d}\). For the observer based on the gain matrix \(L\) to be accurate, \(L\) must be designed to ensure that \((A_{cl}-LC_{cl})\) has stable poles [43]. Therefore, we employ a Linear Quadratic Regulator (LQR) [43] as our observer design for accurate state recovery. LQR is a control technique that optimizes the balance between the energy of the states and of the control signals to achieve the desired output accurately. This balance is controlled by the respective weights \(Q\) and \(R\) of the states and inputs. \(Q\) is a square symmetric positive semi-definite matrix and \(R\) is a square symmetric positive definite matrix.
The observer gain \(L\) is designed as an optimization problem having cost function J: \[minimize\ J\ =\ \int_{0}^{\infty}{x_{o}}^{\top}Qx_{o}\ +\ {u_{o}}^{\top}Ru_{o}\ dt \tag{3}\] where \(x_{o}=(y-\hat{y})\) is the difference between the actual measured outputs and their estimated value, and \(u_{o}\) is the observer output being fed back into the state-space. Given that \(Q=Q^{T}\geq\ 0\) and \(R=R^{T}>\ 0\), this problem can be solved by finding S that satisfies the Algebraic Riccati Equation (4) \[0\ =\ SA_{o}+{A_{o}}^{\top}S-SB_{o}R^{-1}{B_{o}}^{\top}S\ +\ Q \tag{4}\] where \(A_{o}={A_{cl}}^{\top}\), \(B_{o}={C_{cl}}^{\top}\)_and_\(S\)= \(S^{T}\geq\ 0\). The detailed derivation of (4) from (3) is omitted for compactness. Equation (4) is quadratic in S and has no trivial solution, but it has a single positive definite solution that makes the observer stable. Thus, the observer gain \(L\) is determined as \[L=-R^{-1}{B_{o}}^{\top}S. \tag{5}\] ### Choice of Controller for EV-Based Mitigation Scheme Unlike the studies in [29]-[32] that support the grid's steady-state operation using EVs, we utilize EVs to create a mitigation scheme to attenuate the impact of LA attacks. Additionally, unlike traditional approaches that create controllers to mitigate specific disturbances, we use EVs to mitigate the three known types of LA attacks by optimizing the controller to accomplish multiple objectives. Also, by formulating our mitigation scheme as a feedback controller, we eliminate the need for attack detection tools since the controller reacts to the changes in the grid's states. These states are recovered by the utility by using the observer suggested above. We suggest a family of H-2, H-\(\infty\), and mixed H-2/\(\infty\) control techniques to synthesize our problem as a convex optimization with LMI constraints [44] to achieve a guaranteed attenuation level of the attack impacts. H-\(\infty\) controllers minimize the maximum singular value of a function while H-2 controllers minimize the energy of the output signal over the entire frequency range. This would result in the H-2 controller performing better than H-\(\infty\) over most frequencies but failing at specific frequencies. H-\(\infty\) controllers have been adopted in mechanical systems such as missile or satellite trajectory control [45] and suspension systems [46]. However, we intend to adopt the usage of such controllers into the domain of power grid protection against LA attacks. Although, H-2 controllers have received less attention in the literature than H-\(\infty\), their ability to outperform H-\(\infty\) at most frequencies merits their usage in our study. Given the complexity of our problem and the need to address multiple objectives simultaneously, we ultimately choose the mixed H-2/\(\infty\) controller to enable our EV-based mitigation scheme to handle the different types of LA attacks. ### H-2 and H-\(\infty\) Controller Design In this subsection, the EV-based mitigation controller scheme is developed, and its mathematical formulation is obtained based on the LMIs of the desired control law [44]. Writing our controller equations as LMIs gives us the flexibility to (i) finetune their design, (ii) implement complex control schemes, and (iii) combine multiple objectives into a single optimization problem. In our study, the unknown LA attack is treated as a persisting disturbance, \(w\), whose impact we aim to attenuate. 
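Before deriving the controllers, the observer computation in (3)-(5) above can be prototyped in a few lines. The following is a minimal sketch assuming SciPy's Riccati solver; \(A_{cl}\), \(C_{cl}\), \(Q\), and \(R\) are toy placeholders, not the New England grid model, and the gain is formed in the transposed convention of (5) so that the stability of \((A_{cl}-LC_{cl})\) can be checked directly:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch of the LQR-based observer design in (3)-(5) on a toy 2-state system.
A_cl = np.array([[-1.0,  2.0],
                 [ 0.0, -3.0]])
C_cl = np.array([[1.0, 0.0]])
Q = np.eye(2)   # weight on the output estimation error (Q = Q^T >= 0)
R = np.eye(1)   # weight on the observer correction effort (R = R^T > 0)

# By duality, the observer Riccati equation (4) uses A_o = A_cl^T, B_o = C_cl^T.
A_o, B_o = A_cl.T, C_cl.T
S = solve_continuous_are(A_o, B_o, Q, R)   # positive-definite solution of (4)

# Observer gain, written in the transposed form of (5) so that (A_cl - L C_cl)
# is square and its eigenvalues can be inspected.
L = S @ C_cl.T @ np.linalg.inv(R)
print(np.linalg.eigvals(A_cl - L @ C_cl))  # all real parts negative => stable
```

In a deployment, such a computation would be run once offline on the (reduced-order) grid model and the resulting \(L\) fixed thereafter.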
The relationship between the disturbance and the system states and outputs is governed by (2) presented above. Our mitigation scheme aims to simultaneously minimize the impact of the attack and the EV load involved in the feedback control signal. To this end, we design our control methods below and modify the design to achieve all our desired objectives in the sub-sections that follow.

1. H-2 controllers [44] aim to minimize the \(L^{2}\) norm of a system across the entire frequency range; the cost function of the H-2 controller is the Euclidean distance of the outputs from the origin. This allows the H-2 controller to rapidly react to and eliminate disturbances in a system by using rapidly increasing input signals. The transfer function \(T_{2}\), representing the influence of the disturbance \(\omega\) on the grid output, i.e., the generator frequencies \(y\), is presented in (6):

\[T_{2}=\frac{y}{\omega}=\big{(}C+DK_{def}\big{)}\big{(}sI-(A+BK_{def})\big{)}^{-1}B_{d}\tag{6}\]

The cost function of the H-2 controller becomes \(||T_{2}||_{L^{2}}<\gamma\). This condition is rearranged in terms of the LMIs (7)-(8):

\[(AX+BK_{def}X)^{T}+AX+BK_{def}X+B_{d}B_{d}^{T}<0\tag{7}\]
\[\text{trace}\left\{(C+DK_{def})X(C+DK_{def})^{T}\right\}<\text{trace}(Z)<\gamma^{2}\tag{8}\]

Based on Schur's formulation for partitioned matrices [44], inequality (8) is rewritten as (9) and (10):

\[\begin{bmatrix}-Z&CX+DK_{def}X\\ (CX+DK_{def}X)^{T}&-X\end{bmatrix}<0\tag{9}\]
\[\text{trace}(Z)<\gamma^{2}\tag{10}\]

Considering the change of variables \(K_{def}=K_{\text{H-2}}=WX^{-1}\), i.e., \(W=K_{def}X\), inequalities (9) and (7) become (13) and (14), respectively, and the optimization problem that reduces the disturbance impact can be written as (11)-(14):

\[\text{Minimize }\gamma^{2}\tag{11}\]
\[\text{s.t.}\quad\text{trace}(Z)<\gamma^{2}\tag{12}\]
\[\begin{bmatrix}-Z&CX+DW\\ (CX+DW)^{T}&-X\end{bmatrix}<0\tag{13}\]
\[(AX+BW)^{T}+AX+BW+B_{d}B_{d}^{T}<0\tag{14}\]

A numerical sketch of this H-2 synthesis is given below.
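The H-2 program (11)-(14) maps almost line-by-line onto a semidefinite program. The following sketch assumes the cvxpy package and toy placeholder matrices (not the grid model), with the strict LMIs relaxed by a small margin:

```python
import cvxpy as cp
import numpy as np

# Sketch of the H-2 synthesis (11)-(14) on a toy system.
n, m, p = 2, 1, 1
A  = np.array([[0.0, 1.0], [-2.0, -0.5]])
B  = np.array([[0.0], [1.0]])
Bd = np.array([[0.0], [1.0]])
C  = np.array([[1.0, 0.0]])
D  = np.zeros((p, m))
eps = 1e-6

X = cp.Variable((n, n), symmetric=True)
W = cp.Variable((m, n))
Z = cp.Variable((p, p), symmetric=True)
gamma_sq = cp.Variable(nonneg=True)

M = A @ X + B @ W
constraints = [
    X >> eps * np.eye(n),                      # X > 0
    cp.trace(Z) <= gamma_sq,                   # (12)
    cp.bmat([[-Z,                C @ X + D @ W],
             [(C @ X + D @ W).T, -X           ]]) << -eps * np.eye(n + p),  # (13)
    M + M.T + Bd @ Bd.T << -eps * np.eye(n),   # (14)
]
prob = cp.Problem(cp.Minimize(gamma_sq), constraints)
prob.solve(solver=cp.SCS)

K_def = W.value @ np.linalg.inv(X.value)       # K_def = W X^{-1}
print(prob.status, np.sqrt(gamma_sq.value), K_def)
```

The H-\(\infty\) program (16)-(18) and the mixed program (40)-(47) can be encoded the same way by swapping in the corresponding `cp.bmat` blocks.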
This, however, only guarantees the performance of the controller in the frequency domain. To improve the controller performance in the time domain, i.e., reduce settling time, we add the pole placement constraint (19). \[AX\ +\ XA^{T}+BW+W^{T}B^{T}+2A_{1}X\ <\ 0 \tag{19}\] where \(X\) is a positive semi-definite matrix. This LMI constraint is based on a D-stabilization pole placement technique [12] that shifts the eigenvalues of the system into a region where the real part of the eigenvalues is less than \(-\Lambda_{1}\). To guarantee performance and fast settling time, we choose \(l<\Lambda_{1}<0\) to ensure stable poles. \(l\) should also be chosen small enough to avoid an aggressive controller behavior that is not desirable. ### Robust Controller Under Uncertain EV Feedback Load In this subsection, we address the issue of uncertainty of the EV load. This uncertainty can arise from different sources. The first is the user behavior that can be accurately estimated but never guaranteed by the utility. The second source of uncertainty arises from the possibility of attackers targeting the EV ecosystem. Attackers might be able to compromise part of the connected EVs while giving the utility the impression they are secure. This would mean that the control signal would reach less secure EVs than the utility intended. The third source of uncertainty can be introduced in the system by the clustering method we suggest below as a privacy-preserving measure. Clustering is the only case where uncertainty can be positive. These three types of uncertainties are modeled as an uncertain matrix \(\vartheta\) in the feedback loop after the output of the controller \(K_{def}\) which changes the value of the EV load to \(APQ_{EV-\vartheta}=u_{\vartheta}=\partial K_{def}\dot{X}=\vartheta u\). As a result, the state-space representation in (1) becomes (20). \[\dot{x}=Ax+B\vartheta K_{def}\dot{X}\ +B_{d}\omega \tag{20}\] By rewriting \(\Delta PQ_{EV-\vartheta}\) as a function of the original input \(u\), (20) becomes (13) which is restructured through (22) to become the state-space representation in (23) which models the uncertainty in the feedback signal as uncertainty in matrix \(B\). \[\dot{x}=Ax+B\vartheta u+B_{d}\omega \tag{21}\] \[\dot{x}=Ax+\{B+B(\vartheta-I)\}u+\ B_{d}\omega\] (22) \[\dot{x}=Ax+\ (B+\Delta B)u+\ B_{d}\omega \tag{23}\] It is worth noting that since the utility is responsible for this mitigation scheme, it is considered that the system parameters are accurate, thus the uncertainty in matrix A is zero. The matrix uncertainties are written as (24) and (25). \[\Delta A = HFE_{1} \tag{24}\] \[\Delta B = HFE_{2} \tag{25}\] Where H, \(E_{1}\), and\(E_{2}\) are known quantities while F is an uncertain matrix. Since \(\Delta A\) is zero, then \(E_{1}\) is zero and excluded from further calculations. H is usually chosen to be an identity matrix. The uncertain matrix F can be written in the form of (26) if it satisfies condition (27). \[F = \delta_{1}F_{1}+\delta_{2}F_{2}+\cdots+\delta_{k}F_{k} \tag{26}\] \[FF^{T} \leq I \tag{27}\] Writing F in the form of (26) means that \(\Delta B\) can be rewritten as (28) and consequently as (29) which represents a family of uncertain matrices. \[\Delta B = H(\delta_{1}F_{1}+\delta_{2}F_{2}+\cdots+\delta_{k}F_{k})E_{2} \tag{28}\] \[\Delta B = \delta_{1}B_{1}+\delta_{2}B_{2}+\cdots+\delta_{k}B_{k} \tag{29}\] The H-2 controller constraint (14) is now rewritten as (30) and simplified as (31) to account for the uncertainty in B. 
The H-2 controller constraint (14) is now rewritten as (30) and simplified as (31) to account for the uncertainty in \(B\):

\[\{AX+(B+\Delta B)W\}^{T}+AX+(B+\Delta B)W+B_{d}B_{d}^{T}<0\tag{30}\]
\[(AX+BW)^{T}+AX+BW+B_{d}B_{d}^{T}+\beta<0\tag{31}\]
\[\text{where }\beta=\Delta BW+(\Delta BW)^{T}\tag{32}\]

Based on (25)-(29), \(\beta\) is rewritten as (33) and (34):

\[\beta=HFE_{2}W+(HFE_{2}W)^{T}\tag{33}\]
\[\beta=HFE_{2}W+W^{T}E_{2}^{T}F^{T}H^{T}\tag{34}\]

Using the variable elimination lemma in [44], which states that any term of the form (34) under the condition stated in (27) can be rewritten as (35), \(\beta\) becomes

\[\beta=\alpha HH^{T}+\alpha^{-1}(E_{2}W)^{T}(E_{2}W)\tag{35}\]

if there exists a scalar \(\alpha>0\). After substituting the value of \(\beta\) derived in (35) back into (31), we apply Schur's complement lemma [47] to rearrange the inequality containing \(\alpha^{-1}\) into the equivalent inequality (36), which replaces constraint (14) in the original H-2 formulation, making it a robust H-2 controller capable of handling uncertainty in the feedback loop:

\[\begin{bmatrix}\Lambda_{1}+\alpha HH^{T}&(E_{2}W)^{T}\\ E_{2}W&-\alpha I\end{bmatrix}<0\tag{36}\]
\[\text{where }\Lambda_{1}=(AX+BW)^{T}+AX+BW+B_{d}B_{d}^{T}.\tag{37}\]

Following steps similar to (25)-(35), whose details we omit for compactness, the inequality in (18) is rewritten as (38):

\[\begin{bmatrix}\Lambda_{2}+\alpha HH^{T}&B_{d}&(CX+DW)^{T}&(E_{2}W)^{T}\\ B_{d}^{T}&-\rho I&D_{d}^{T}&0\\ CX+DW&D_{d}&-\rho I&0\\ E_{2}W&0&0&-\alpha I\end{bmatrix}<0\tag{38}\]
\[\text{where }\Lambda_{2}=(AX+BW)^{T}+AX+BW\tag{39}\]

This formulation represents a robust H-\(\infty\) controller capable of handling the uncertain parameters represented by \(\Delta B\), corresponding to our feedback-loop EV load uncertainty. A sketch of how the robust H-2 constraint (36) can be posed numerically follows.
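For completeness, the robust H-2 constraint (36)-(37) is affine in \((X,W,\alpha)\) and can be posed directly as an LMI. The following is a minimal feasibility sketch, again assuming cvxpy and small placeholder matrices:

```python
import cvxpy as cp
import numpy as np

# Sketch of the robust H-2 constraint (36)-(37): the nominal constraint (14)
# is replaced by an LMI in (X, W, alpha) that absorbs the uncertainty
# Delta B = H F E2 with F F^T <= I.
n, m = 2, 1
A  = np.array([[0.0, 1.0], [-2.0, -0.5]])
B  = np.array([[0.0], [1.0]])
Bd = np.array([[0.0], [1.0]])
H  = np.eye(n)                      # as in the text, H is taken as identity
E2 = 0.2 * np.eye(m)                # size of the admissible uncertainty (placeholder)
eps = 1e-6

X = cp.Variable((n, n), symmetric=True)
W = cp.Variable((m, n))
alpha = cp.Variable(nonneg=True)

M = A @ X + B @ W
Lambda1 = M + M.T + Bd @ Bd.T                      # (37)
robust_h2 = cp.bmat([[Lambda1 + alpha * (H @ H.T), (E2 @ W).T],
                     [E2 @ W,                      -alpha * np.eye(m)]])
constraints = [X >> eps * np.eye(n),
               robust_h2 << -eps * np.eye(n + m)]  # (36)

prob = cp.Problem(cp.Minimize(0), constraints)     # feasibility check only
prob.solve(solver=cp.SCS)
print(prob.status)
```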
### Mixed Robust H-2/\(\infty\) Controller

After presenting the robust controller formulations above, we discuss herein the robust mixed H-2/\(\infty\) controller. Our mitigation strategy ultimately aims at eliminating the impact of the three types of LA attacks, hence the importance of its success in the different attack ranges and of the maximum possible attenuation level across these ranges. To this end, we develop a mixed controller that combines the LMI constraints of both the H-2 and H-\(\infty\) robust controllers and arrive at the formulation in (40)-(47). The optimization objective function (40) is a weighted mix of objectives (11) and (16):

\[\text{Minimize }a_{1}\gamma^{2}+a_{2}\rho\tag{40}\]
\[\text{s.t.}\quad AX+XA^{T}+BW+W^{T}B^{T}+2\Lambda_{1}X<0\tag{41}\]
\[X>0\tag{42}\]
\[\text{trace}(Z)<\gamma^{2}\tag{43}\]
\[\begin{bmatrix}-Z&CX+DW\\ (CX+DW)^{T}&-X\end{bmatrix}<0\tag{44}\]
\[\begin{bmatrix}\xi+B_{d}B_{d}^{T}+\alpha HH^{T}&(E_{2}W)^{T}\\ E_{2}W&-\alpha I\end{bmatrix}<0\tag{45}\]
\[\begin{bmatrix}\xi+\alpha HH^{T}&B_{d}&(CX+DW)^{T}&(E_{2}W)^{T}\\ B_{d}^{T}&-\rho I&D_{d}^{T}&0\\ CX+DW&D_{d}&-\rho I&0\\ E_{2}W&0&0&-\alpha I\end{bmatrix}<0\tag{46}\]
\[\text{where }\xi=(AX+BW)^{T}+AX+BW\tag{47}\]

As mentioned above, as a privacy-preserving measure, the control signal is not addressed to individual EVs; the participating EVCSs are instead aggregated into clusters of 35 EVCSs. This, however, introduces uncertainty in the actual EV load used as a feedback control signal in the mitigation scheme since the number of connected EVs per cluster cannot be guaranteed. The impact of this clustering method is examined in Section 5 as part of our EV-based mitigation scheme's stability study.

## 4 Case Studies and Simulations

In this section, we demonstrate our EV mitigation scheme against static, switching, and dynamic attacks on the New England (NE) 39-bus grid shown in Fig. 2 [50]. We first demonstrate the impact of the attacks in the absence of our scheme to highlight their devastating impact. The NE grid has 39 buses, 10 generators, and 19 loads with a total of 6,097 MW. The simulations were performed on the MATLAB-Simulink 2021a Specialized Power Systems Toolbox using a variable step size of \(1\times 10^{-12}\) to \(1\times 10^{-9}\).

Figure 2: New England 39-bus grid

While we acknowledge that the current number of EVs is not enough to exploit the full potential of the suggested mitigation scheme, the following example demonstrates its feasibility with increased EV adoption levels. To demonstrate this, we choose a similarly sized grid, the New South Wales (NSW) grid, on a day in December 2021 [51]. The average load is 6,968 MW [51] and the total number of registered vehicles is 5,892,206 [52]. Scaled down to fit our test grid, the total number of vehicles is 5,155,681. At future EV penetration levels, the EV load will become huge, especially with the increasing EVCS charging rates.
Thus, only a small fraction of the EVs in our grid needs to be connected for our proposed EV-based LA attack mitigation scheme to achieve the results presented below. However, the EV load varies depending on the time of day, so a more detailed examination of the EV load is required. As per the International Energy Agency (IEA) [53], EVCS operators, utilities, and governments have maintained a ratio of 1 public EVCS for every 9.9 EVs on the road to guarantee the quality of service. Thus, at a future 50% EV penetration level, there will be 260,387 public EVCSs in our NE grid. With an average public EVCS occupancy of 33% [54], we can estimate the average number of EVs connected to the grid to be 86,000. Furthermore, according to the IEA [53], based on the mixture of different EVCS rates, the average charging rate per EVCS is 24 kW. From these statistics, we can estimate that at a future 50% EV penetration level, there will be 2,064 MW of EV load connected to the grid on average at public EVCSs.

To examine the specific EV load during the different times of the day, we create a data-driven model for the arrival and charging times of the EVs. To achieve realistic EV charging behavior, we independently simulate a Poisson arrival process of EVs at each EVCS [55]. The charging time of these EVs is assumed to follow a truncated Gaussian distribution [55]. The parameters of these models are specified for 1-h windows over a 24-h period. These parameters are tuned based on a real dataset containing five years of records for 7,500 EVCSs in Quebec, Canada; this dataset was obtained from Hydro-Quebec as part of a research collaboration. Additionally, we extract the EVCS utilization information from the dataset. The average utilization rate of EVCSs in Quebec is 31%, with the peak charging demand occurring in the afternoon. By examining this dataset, we were able to extract the average hourly arrival rates and charging times and to simulate them as a Poisson process and a truncated Gaussian distribution, respectively. From the presented statistics and the data-driven EV fleet model, we generate the public EVCS load profile presented in Fig. 3; a code sketch of this fleet model follows below. Additionally, we acquire an approximation of the private EVCS load profile, for the presented number of EVs, by using the Electric Vehicle Infrastructure Projection Tool provided by the Alternative Fuels Data Center (AFDC) [56] and add the acquired data to Fig. 3.

Fig. 3 shows the change in the EV load over an entire 24-h period and demonstrates the presence of a minimum of 1,196 MW of EV charging load connected to the grid at 3:30 am. Furthermore, there are roughly 9 private EVCSs for every 10 EVs [53]. This means that at 50% penetration, there will be over 2.3 million private EVCSs. Given the tendency of users, in the presence of an incentive scheme like _Hilo_, to connect EVs to home EVCSs even if no significant charging is needed, there will be large numbers of EVs connected at residences at the disposal of the utility that do not factor into the normal charging load presented in Fig. 3. Once we factor in the EVs that will be connected due to the incentives, the utility will have a much larger EV charging load at its disposal to participate in the presented mitigation scheme.

Figure 3: EVCS public and private charging load for 24 hours
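A minimal sketch of this fleet model, assuming NumPy/SciPy and with made-up hourly parameters standing in for the Hydro-Quebec statistics, is given below:

```python
import numpy as np
from scipy.stats import truncnorm

# Sketch of the data-driven EVCS load model: Poisson arrivals per station and
# truncated-Gaussian charging durations, parameterized per 1-h window.
rng = np.random.default_rng(7)
n_evcs = 1000                 # stations simulated (scaled down from 260,387)
rate_kw = 24.0                # average charging rate per occupied EVCS [53]
arrivals_per_h = 0.3 + 0.25 * np.sin(np.pi * (np.arange(24) - 6) / 12).clip(0)
mean_dur_h, sd_h, lo, hi = 1.5, 0.5, 0.25, 4.0    # charging-time distribution

load_mw = np.zeros(24)
for h in range(24):
    n_arr = rng.poisson(arrivals_per_h[h] * n_evcs)          # arrivals in hour h
    a, b = (lo - mean_dur_h) / sd_h, (hi - mean_dur_h) / sd_h
    durs = truncnorm.rvs(a, b, loc=mean_dur_h, scale=sd_h,
                         size=n_arr, random_state=rng)
    for d in durs:                                           # spread each session
        end = min(24, int(np.ceil(h + d)))
        load_mw[h:end] += rate_kw / 1000.0                   # kW -> MW

print(load_mw.round(1))       # 24-h public-EVCS load profile, in MW
```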
### Reduced Power Grid State-Space Model

Due to the complexity of power grids, the size of the state-space matrices is relatively large, hindering the design of the controllers; the NE grid, for instance, has 300 states. To preserve the correct system behavior while reducing its order, we use Hankel model reduction [57]. For this purpose, we calculate the Hankel Singular Values (HSVs) of the system states to trade off the system order reduction against the preserved system accuracy. Fig. 4 represents the energy contained within the HSVs of the first 40 states and shows the sharp decrease in the energy of the HSVs as the order of the system states increases. Table 1 shows the preserved system energy for different reduced system orders. Since 99.81% of the total system energy is contained within the first 30 HSVs, we reduce the system order to 30 states; beyond 30, any order increase does not significantly improve accuracy while considerably adding to the controller synthesis complexity. This order reduction is only used for controller synthesis, while the presented simulations are performed on the actual grid, not on the linearized or reduced model.

Figure 4: First 40 Hankel singular value weights

Table 1: Possibilities for reducing the system order

| Reduced System Order | Percentage of Preserved System Energy |
| --- | --- |
| 10 | 83.93% |
| 20 | 98.38% |
| 30 | 99.81% |
| 40 | 99.93% |
| 50 | 99.97% |
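This reduction step can be prototyped with the python-control package; the sketch below, run on a random stable placeholder system rather than the 300-state grid model, computes the HSVs and truncates at the order that retains roughly 99.8% of the Hankel energy:

```python
import control
import numpy as np

# Sketch of Hankel-based order reduction on a random stable placeholder system.
rng = np.random.default_rng(3)
n = 50
A = rng.standard_normal((n, n))
A = A - (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)   # make A stable
sys = control.ss(A, rng.standard_normal((n, 2)),
                 rng.standard_normal((2, n)), np.zeros((2, 2)))

hsv = control.hsvd(sys)                          # Hankel singular values
energy = np.cumsum(hsv) / np.sum(hsv)
order = int(np.searchsorted(energy, 0.998)) + 1  # smallest order keeping ~99.8%
rsys = control.balred(sys, order)                # balanced truncation
print(order, rsys.nstates)
```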
### Case Study 1: Attacks in Absence of Mitigation without PSS

Based on the work presented in [7], [10], and [12], which discuss static attacks, switching attacks, and dynamic attacks, respectively, we formulate our attack scenarios against the NE grid in the absence of the suggested mitigation strategy to demonstrate their impact as a base case. The mathematical formulations of these attacks are presented in [7], [10], and [12] and are thus not repeated herein. The first batch of attacks is simulated in the absence of a PSS, and the impact is discussed in the following three cases.

1. Attack 1: an 800 MW static attack initiated at t=5s, split on buses 3, 4, 24, and 29. This attack is a single spike in load equal to 13% of the system demand. Fig. 5 demonstrates the frequency behavior of the generators in response to this attack: Attack 1 causes a frequency drop to 59 Hz followed by growing oscillations that are sustained due to the absence of a PSS.
2. Attack 2: an 800 MW switching attack split equally on buses 3, 4, 24, and 29. Fig. 6 demonstrates how this attack causes a huge frequency deviation that reaches 71 Hz in 10 s. This attack would trip the generators, resulting in a blackout.
3. Attack 3: an 800 MW dynamic attack split on buses 3, 4, 24, and 29. Since the attack is formulated as a feedback loop that shifts the eigenvalues of the grid into an unstable region, based on the attack methodology in [12], the loads do not always oscillate in phase. The result of this attack is demonstrated in Fig. 7: the frequencies of all the generators experience wild oscillations that continue to grow, reaching 74 Hz, while oscillating against each other. This would trip all the generators to avoid damaging their shafts due to the violation of the safe frequency operation thresholds.

Fig. 5: Generator frequency response under attack 1 with no PSS
Fig. 6: Generator frequency response under attack 2 with no PSS
Fig. 7: Generator frequency response under attack 3 with no PSS

### Case Study 2: Attacks in Absence of Mitigation with PSS

In this case study, we repeat the three attack scenarios in the presence of the PSS at the generators. The PSS is meant to stabilize the grid by damping generator frequency swings but falls short of eliminating LA attack impacts.

1) Attack 1: is now repeated in the presence of the PSS. This static attack causes a 0.4 Hz frequency drop and minor oscillations that are eventually eliminated by the PSS as the grid regains stability.
2) Attack 2: is also repeated in the presence of the PSS, which significantly limits the impact. However, the grid sustains frequency oscillations reaching a maximum deviation of 1.08 Hz. Fig. 8 demonstrates this impact, which causes the operator to shed 5% of the grid's load [8]. Fig. 9 demonstrates a sample Attack 2 load. The average frequency deviation among the 10 generators was 0.74 Hz.
3) Attack 3: is also repeated in the presence of the PSS and again proves to be the strongest attack. The grid experiences the rapid frequency oscillation depicted in Fig. 10. The maximum frequency deviation reaches 1.58 Hz, which causes the generators to trip instantaneously, leading to a blackout. Fig. 11 demonstrates a sample Attack 3 load and Fig. 12 represents the aggregate of the 4 attack loads. The average frequency deviation among the 10 generators was 1.06 Hz.

This case study demonstrates that, even in the presence of the PSS, the attack impact is not eliminated when the attack is crafted properly. To this end, adding a mitigation mechanism that can eliminate LA attack impacts is necessary.

Figure 8: Generator frequency response under attack 2 with PSS
Figure 9: Attack 1 load on each of the buses
Figure 10: Generator frequency response under attack 3 with PSS

### Case Study 3: H-2 and H-\(\infty\) Control EV-Based Mitigation

Based on the methodology presented above, we design our EV-based mitigation scheme using H-2 and H-\(\infty\) robust control and evaluate the performance of both controller designs against the three types of LA attacks. The EV defender load is calculated as \(\Delta PQ_{EV}=K_{def}\hat{x}\) and replicated on the NE grid that is under attack. It is important to mention that both the H-2 and H-\(\infty\) EV-based mitigation immediately eliminate all traces of the static attack. The cases of the switching and dynamic attacks are presented below.

1. H-2 EV mitigation: We repeat Attack 2 and Attack 3 on the NE grid after adding the EV feedback loop gain, \(K_{def}\), calculated based on H-2 control, and the results are as follows.
* Attack 2: is repeated, and the frequency responses of the generators are presented in Fig. 13. The maximum frequency deviation was reduced to 0.065 Hz, which represents a 93.9% decrease in the attack impact. The average frequency deviation was reduced to 0.04 Hz, which is equal to a 99.6% reduction in attack impact.
* Attack 3: is repeated, and the frequency response of the generators is tremendously improved. As presented in Fig. 14, the maximum frequency deviation was reduced to 0.007 Hz, which is roughly equivalent to a 99.6% reduction in the original attack impact, practically eliminating it. The average frequency deviation was reduced to 0.004 Hz, which is equal to a 99.6% reduction in attack impact.

Fig. 11: Attack 3 load on bus 4
Fig. 12: Attack 3 aggregate load
Fig. 13: Generator frequency - attack 2 with H-2 controller
Fig. 14: Generator frequency - attack 3 with H-2 controller

2.
H-\(\infty\) EV mitigation: We repeat Attack 2 and Attack 3 on the NE grid after adding the EV feedback loop gain \(K_{def}\) calculated based on H-\(\infty\) control, and the results are as follows.
* Attack 2: is repeated and causes a frequency drop of 0.01 Hz, which then recovers to 60 Hz at t=30s, representing a 100% recovery to the pre-attack state, as seen in the frequency responses of the generators in Fig. 15.
* Attack 3: is repeated in the presence of the H-\(\infty\) EV mitigation scheme. Although this controller eliminates the impact of switching attacks, it falls short of achieving the same against the dynamic attack. The frequency responses of the generators are presented in Fig. 16. The maximum frequency deviation was reduced to 0.1 Hz, which is a 93.7% reduction of the LA attack impact. The average frequency deviation was reduced to 0.05 Hz, which is equal to a 95.3% reduction in attack impact.

Fig. 15: Generator frequency - attack 2 with H-\(\infty\) controller
Fig. 16: Generator frequency - attack 3 with H-\(\infty\) controller

This case study demonstrates that the H-\(\infty\) EV-based mitigation scheme performs better against switching attacks while the H-2 mitigation scheme performs better against dynamic attacks. In the case of the switching attack under the H-2 controller and the dynamic attack under the H-\(\infty\) controller, the governors of the generators would have to react, since the frequency returns to within the safe operation limit but not to the normal frequency range. Since these are persisting attacks, the governors would have to keep correcting the frequency constantly. As a result, we recommend a mixed controller.

We now repeat the attacks to test the mitigation scheme's success when the defender does not control any EVs on one of the attacked buses. This is to demonstrate the success of our EV mitigation even when we eliminate one of its advantages, namely the colocation with the attack load. The controller is successful in eliminating the attack impact, but with slightly reduced performance. Attack 2 is repeated against the grid that has the H-\(\infty\) controller and results in a maximum 0.1 Hz frequency deviation, which is equivalent to a 90.7% reduction in impact. The average frequency deviation was reduced to 0.04 Hz, which is equivalent to a 94.6% reduction. Attack 3 is also repeated against the grid that has the H-2 mitigation scheme, which results in a maximum 0.06 Hz frequency deviation, or a 96.2% reduction in attack impact. The average frequency deviation was reduced to 0.03 Hz, or a 97.2% reduction. This demonstrates the success of our attack mitigation even when the utility loses its resources on an attacked bus.

### Case Study 4: Robust Mixed H-2/\(\infty\) EV-Based Mitigation

In this case study, we demonstrate the effectiveness of the mixed H-2/\(\infty\) robust control mitigation strategy and its advantage in our EV-based LA attack mitigation. Once again, the H-2/\(\infty\) controller eliminates any trace of the static attack.

1) Attack 2: is repeated, and Fig. 17 demonstrates the success of the mixed control strategy in mitigating the impact of the switching attack. The performance is much better than that of the H-2 controller, with a reduction of the switching attack impact by 99.5%. The average frequency drops slightly below 60 Hz but stabilizes towards t=30s. Also, the sustained oscillations reach a maximum of 0.006 Hz.
The average value of the sustained oscillations across all generators was also reduced to 0.003 Hz, representing a 99.6% drop in average attack impact.

2) Attack 3: is repeated, and the frequency responses of the generators are presented in Fig. 18, which demonstrates the success of the controller in counteracting the impact of the dynamic attack. The maximum impact of the attack was reduced from 1.5 Hz to 0.01 Hz, representing a 99.4% impact reduction. This is also 10 times smaller than the impact of the same attack in the presence of the H-\(\infty\) EV-based controller. The average value of the sustained oscillations was also reduced to 0.008 Hz, representing a 99.3% drop in average attack impact.

3) We also study the case where the utility has no resources on the attacked buses. The switching attack and the dynamic attack cause maximum frequency deviations of 0.09 Hz and 0.07 Hz, respectively, equivalent to a 91.7% and a 95.6% reduction in maximum attack impact.

Fig. 17: Generator frequency - attack 2 with mixed H-2/\(\infty\) controller
Fig. 18: Generator frequency - attack 3 with mixed H-2/\(\infty\) controller

This case study proves that the mixed H-2/\(\infty\) controller is superior to the individual controllers by addressing their gaps. By reducing the frequency oscillation and deviation caused by all types of LA attacks to below 0.01 Hz, the mixed controller returns the grid to a state where the frequency is well within the normal range, in which the turbine governors are not engaged and the system behaves as it would normally in the absence of any attack, even when the attacks are sustained and persistent. Based on the above discussions and results, the best course of action is the adoption of the EV-based robust mixed H-2/\(\infty\) controller for the LA attack mitigation scheme. The complexity of designing the controller in the presence of uncertainty is incurred only during the planning phase; during deployment, the performance of the controller is not impacted, since applying the controller amounts to a matrix multiplication regardless of the method of its synthesis. Lastly, the presented case studies demonstrate the instantaneous reaction time of the presented mitigation scheme, owing to the advantage introduced by the power converters in the EVCSs, which results in the immediate elimination of the attack impact.

### Case Study 5: Smaller LA Attacks

The previous case studies aimed at demonstrating the EV-based robust mitigation scheme's success against large LA attacks (800 MW) to showcase the mitigation scheme's effectiveness. In this case study, however, we examine the impact of smaller attacks and the performance of our proposed EV-based mitigation scheme in such scenarios. As a starting point, we repeat Attack 2 and Attack 3 and cap their attack loads at different magnitudes between 100 MW and 800 MW. Table 2 demonstrates the impacts of such attacks before and after the addition of our EV-based robust mixed H-2/\(\infty\) mitigation scheme.

Table 2: Maximum frequency deviation vs. different Attack 2 and Attack 3 magnitudes

| Attack Magnitude (MW) | Attack 2, No Mitigation | Attack 2, Mixed H-2/\(\infty\) | Attack 3, No Mitigation | Attack 3, Mixed H-2/\(\infty\) |
| --- | --- | --- | --- | --- |
| 800 | 1.08 Hz | 0.006 Hz | 1.58 Hz | 0.01 Hz |
| 700 | 0.97 Hz | 0.006 Hz | 1.42 Hz | 0.01 Hz |
| 600 | 0.87 Hz | 0.005 Hz | 1.21 Hz | 0.009 Hz |
| 500 | 0.79 Hz | 0.004 Hz | 1.03 Hz | 0.008 Hz |
| 400 | 0.65 Hz | 0.003 Hz | 0.91 Hz | 0.006 Hz |
| 300 | 0.54 Hz | 0.003 Hz | 0.76 Hz | 0.004 Hz |
| 200 | 0.32 Hz | 0.001 Hz | 0.46 Hz | 0.002 Hz |
| 100 | 0.17 Hz | 0.001 Hz | 0.26 Hz | 0.001 Hz |

We now examine a new dynamic attack (Attack 4) based on the methodology in [12], while shifting the eigenvalues of the system further right (into the unstable region) than Attack 3. This results in a faster oscillation of the attack load. Attack 4 is initiated at t=5s against buses 3, 4, 18, and 39 with a magnitude of 19% of the load on each bus, for a total of 395.96 MW. Attack 4 causes the average frequency to reach 61.25 Hz while the forced oscillations reach 61.49 Hz, as depicted in Fig. 19.
While this behavior does not cause instantaneous generator tripping, since it does not exceed the 1.5 Hz limit, sustaining it for 30s will cause the generator protection relays to trip. Moreover, some utilities enforce stricter limits (61 Hz), in which case such an attack would trip the generators instantaneously. Attack 4 is now repeated in the presence of the mixed H-2/\(\infty\) mitigation strategy. The frequency deviation/oscillation is reduced below 0.01 Hz, meaning that the attack impact was successfully eliminated.

Table 2: Max Frequency Deviation vs Different Attack 2 and 3 Magnitudes

| Attack Magnitude (MW) | Attack 2, No Mitigation | Attack 2, Mixed H-2/\(\infty\) Mitigation | Attack 3, No Mitigation | Attack 3, Mixed H-2/\(\infty\) Mitigation |
|---|---|---|---|---|
| 800 | 1.08 Hz | 0.006 Hz | 1.58 Hz | 0.01 Hz |
| 700 | 0.97 Hz | 0.006 Hz | 1.42 Hz | 0.01 Hz |
| 600 | 0.87 Hz | 0.005 Hz | 1.21 Hz | 0.009 Hz |
| 500 | 0.79 Hz | 0.004 Hz | 1.03 Hz | 0.008 Hz |
| 400 | 0.65 Hz | 0.003 Hz | 0.91 Hz | 0.006 Hz |
| 300 | 0.54 Hz | 0.003 Hz | 0.76 Hz | 0.004 Hz |
| 200 | 0.32 Hz | 0.001 Hz | 0.46 Hz | 0.002 Hz |
| 100 | 0.17 Hz | 0.001 Hz | 0.26 Hz | 0.001 Hz |

Figure 19: Generator frequency - attack 4 with mixed H-2/\(\infty\) controller

### Impact on EV Range

We now evaluate the impact of our mitigation scheme on the EVs' range. This evaluation is based on an average charging rate of 24 kW. Mitigating Attacks 2 or 3 requires the EV to alternate between charging and V2G such that the net charge is approximately 0. Attacks 2 and 3 cause losses of 0.001 kWh and 0.009 kWh respectively, and the EVs also lose the opportunity to charge 0.2 kWh; the total loss is equivalent to a range of 1 mile for each EV. For an EV that was connected to an EVCS but not charging, the net impact is almost 0 kWh. Attack 1, on the other hand, results in a total loss of 0.4 kWh, or 2 miles.

### Mitigation Scheme Feasibility

An added advantage of using EV charging in our mitigation scheme is that the required communication and control infrastructure is already in place. The central authority needed to communicate with the distributed resources (EVCSs) is the CMS that already exists in the EV ecosystem. This CMS communicates with and controls all the public EVCSs in real time. Using the OCPP protocol, the CMS can turn the individual EVCSs on and off, change their charging rate, and discharge using V2G [25][26]. The Hilo project gives the utility the same control capabilities over private EVCSs. This means that adopting our mitigation scheme would not require the addition of any software, hardware, or communication capabilities to the ecosystem. Furthermore, since our mitigation scheme only requires the frequencies at the generators, which are already monitored by the utility, implementing it does not require the addition of any measurement devices.

Mitigating these attacks using our suggested scheme requires an extremely minimal cost to be incurred by the utility. To demonstrate this, we consider 2 different EVCS charging levels in Quebec, with charging rates of 24 and 50 kW [58]. The hourly price of charging on these EVCSs is 7.53 CAD and 12.77 CAD respectively, billed per second [58].
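To make this per-second billing concrete, a short sketch of the reimbursement arithmetic is given below. Only the two hourly tariffs come from the text; the engagement duration passed in the example is a hypothetical placeholder, not a value reported in this work.

```python
# Per-second billing of EVCS energy used during mitigation (sketch).
HOURLY_RATE_CAD = {"24kW": 7.53, "50kW": 12.77}  # Quebec tariffs cited above

def mitigation_cost_cents(evcs_level: str, engaged_seconds: float) -> float:
    """Cost in CAD cents for one EV engaged in mitigation for `engaged_seconds`."""
    per_second_cad = HOURLY_RATE_CAD[evcs_level] / 3600.0
    return per_second_cad * engaged_seconds * 100.0

# Example: a hypothetical 30 s engagement window on a 24 kW EVCS
print(f"{mitigation_cost_cents('24kW', 30.0):.2f} cents")  # ~6.3 cents
```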
At these rates, mitigating Attack 2 or Attack 3 costs the individual EV user 6.28 cents on the 24 kW EVCS and 5.1 cents on the 50 kW EVCS. When the utility reimburses the EV users for this cost, it will have to pay a total of 1,683-2,072 CAD (1,242-1,530 USD), plus whatever extra value the utility determines in its incentive program. Mitigating Attack 1, however, requires double the cost, since the energy loss from the EV batteries is double that of the other 2 attacks.

### Mitigation Scheme Reaction To Non-Attack Scenarios

One issue that arises from the absence of a detection scheme is the EV controller's reaction to frequency fluctuations that are not caused by attacks. Our proposed scheme can react to sudden changes in power grid behavior, such as the abrupt line or generator tripping studied in [15]; such sudden events resemble static attacks. Fig. 20 demonstrates the grid's behavior after the line connecting bus 4 to bus 5 tripped in the presence of the H-2/\(\infty\) controller. It is evident that our EV-based mitigation successfully brings the grid back to stability after such a singular event and eliminates all traces of the impact. Additionally, Fig. 21 demonstrates the frequency behavior of the grid after tripping the generator connected at bus 39. This generator has an output of 1104 MW, the largest in our grid. In the absence of our proposed mitigation scheme, the frequency deviation surpasses 1 Hz and the generators start oscillating wildly, leading to tripping. With our proposed EV-based H-2/\(\infty\) controller, however, the initial deviation is immediately limited to 0.1 Hz and the frequency is then brought back to the nominal 60 Hz.

Fig. 20: Grid frequency after tripping line 4-5

Fig. 21: Grid frequency after Generator 1 tripping

These 2 simulation results demonstrate the effectiveness of the proposed EV-based LA attack mitigation scheme against the singular events usually studied in the literature in the context of EV frequency support to the grid. However, such events are rare during the normal operation of a power grid. To avoid having the presented EV-based mixed H-2/\(\infty\) controller react to the normal frequency fluctuations caused by random consumer behavior, we set a frequency deviation threshold of 0.03 Hz before the mitigation scheme is engaged. This threshold can be changed by the utility depending on their historical data, which indicates the maximum value of benign frequency fluctuations caused by random user behavior.

To simulate the normal frequency fluctuations of a real grid, we add random load blocks to all load buses. IEEE benchmark grids have constant loads representing the average load of the individual buses. Utilities depend on historical data to estimate the load at a certain time of day; during short windows (e.g., 1 min), however, the load cannot be predicted, since real consumer behavior is random but centered around the average bus load. This gives rise to the need to simulate random perturbations in the loads of our grid, which lead to the normal frequency variations seen in Fig. 22. The random load variations follow a Gaussian distribution in our simulations. To avoid the repetitiveness of pseudorandom number generator patterns (pattern effect) [59], we use the Mersenne Twister algorithm, whose period of \(2^{19937}-1\) overcomes the pattern effect [59]. Additionally, we shuffle the random generator's seed before each simulation.
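The interplay between the random load blocks, the 0.03 Hz engagement threshold, and the EV feedback can be sketched with a deliberately simplified one-machine model; all constants, the scalar stand-in gain, and the attack size below are illustrative assumptions, not the NE-grid values or the actual \(K_{def}\).

```python
import numpy as np

rng = np.random.RandomState()     # Mersenne Twister (period 2**19937 - 1), fresh seed per run

# Toy single-area frequency model; constants are illustrative only.
H, D = 5.0, 1.5                   # inertia and damping (p.u.)
K_DEF = 40.0                      # hypothetical scalar stand-in for the EV feedback gain
THRESHOLD_HZ = 0.03               # engagement threshold from the text
DT, STEPS = 0.01, 6000            # 60 s at 10 ms resolution

df, engaged = 0.0, False          # frequency deviation from 60 Hz
trace = np.empty(STEPS)
for k in range(STEPS):
    p_noise = rng.normal(0.0, 0.01)            # Gaussian random load block (p.u.)
    p_attack = 0.5 if k * DT >= 30.0 else 0.0  # sustained toy LA attack from t = 30 s
    if abs(df) > THRESHOLD_HZ:                 # engage mitigation only beyond the threshold
        engaged = True                         # (latched once engaged, for simplicity)
    p_ev = -K_DEF * df if engaged else 0.0     # EV feedback law u = -K_def * x (scalar toy)
    df += DT * (-D * df + p_noise + p_attack + p_ev) / (2.0 * H)
    trace[k] = df

# Deviation peaks near the threshold, then the feedback pulls it back to ~0.01 Hz.
print(f"peak |deviation| = {np.abs(trace).max():.3f} Hz, final = {trace[-1]:.3f} Hz")
```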
After setting up this simulation environment, we simulated 3 weeks (21 days) of power grid behavior, collected the frequency readings, and extracted the highest frequency deviation within each 1-minute window. A histogram representing the frequency deviation probability during normal behavior is presented in Fig. 23. It demonstrates that the normal frequency fluctuation caused by consumer behavior does not exceed 0.022 Hz; thus, our threshold for engaging the mitigation scheme was set at 0.03 Hz. In the following simulation, we repeat Attack 3, the most impactful attack, in the presence of the mixed H-2/\(\infty\) controller after setting its engagement threshold to 0.03 Hz. Fig. 24 demonstrates the success of our mitigation strategy: the frequency deviation rises to the 0.03 Hz threshold, at which point the mitigation scheme is engaged and brings it back down to 0.01 Hz.

Our proposed scheme is also superior to the EV-based frequency regulation schemes proposed in [33]-[35], since it is designed from the outset to deal with continuous, persisting attacks, especially the dynamic attack that adapts to the grid's reaction. The work in [35] proposed a frequency regulation scheme for the small frequency fluctuations (\(<\)0.06 Hz) caused by the intermittency of renewable energy, optimized to handle sudden drops or spikes in renewable generation of 120 MW; such singular events resemble Attack 1, whose impact our scheme eliminated completely. The methods in [33][34], on the other hand, address frequency fluctuations between 0.06 Hz and 0.07 Hz caused by renewable energy intermittency and reduce the frequency deviation to 0.05 Hz (a 29% reduction). In comparison, our EV-based LA attack mitigation scheme is designed to handle persistent static, switching, and dynamic attacks with much larger magnitudes than the events in [33]-[35], and it reduces their impact by over 99%, from the devastating range of 1.5 Hz to a normal 0.01 Hz range.

## 5 Stability and Performance Evaluation

In this section, we examine the performance of our EV-based mitigation when faced with real-life deployment obstacles.

### Uncertainty and Controller Stability

In this subsection, we examine the stability of the proposed control strategy in the face of the uncertainty of the feedback EV defender load. By modeling this uncertainty into the design of the controller, the value of the matrix \(K_{def}\) changes to allow the mitigation scheme to perform well under these uncertainties. When no uncertainty is present, a carefully designed robust controller achieves a response extremely similar to that of the original non-robust controller, but it outperforms the non-robust controller when uncertainty is added to the system. Fig. 25 presents the response of the worst-performing generator to Attack 3 in the presence of a random uncertainty of \(\pm 5\%\) on each of the feedback EV inputs, and demonstrates how the mixed robust controller performs better than a normal mixed controller when the feedback channel is not 100% certain.

Fig. 25: Generator frequency with normal and robust controllers

The three types of robust controllers were evaluated using MATLAB's "diskmargin" function to calculate their disk-based stability margins [57]. To this end, we consider the system states, x, as the object of our analysis, so that we have a system with 30 inputs and 30 states.
Table 3 demonstrates the stability range of the system when all the inputs are varied independently with an uncertainty \(F_{j}\) as given in (48), where \(j\) is the index of the input/state and \(\delta_{j}\) is a random complex number with \(|\delta_{j}|<1\) representing the uncertainty in the channel gain and phase. The variable \(a\) is the multiplicative gain of the uncertainty, and \(\sigma\) represents the skewness of \(F\), shifting the probability of the uncertainty in the positive or negative direction.

\[F_{j}=\frac{1+\left(a(1-\sigma)/2\right)\delta_{j}}{1-\left(a(1+\sigma)/2\right)\delta_{j}} \tag{48}\]

Table 3 demonstrates the range of uncertainty over which our proposed robust mitigation controller remains stable. The uncertainty \(\delta_{j}\) is a complex number so as to represent uncertainty in both the magnitude and the angle of the control signal. Uncertainty in magnitude represents the possibility of the control input being smaller or larger than the control signal due to the uncertainties discussed in Section 3; uncertainty in phase represents the uncertainty added by a time delay in the feedback control channel due to communication channel delays. The skewness \(\sigma\) is used to study the stability of the controller when the uncertainty is biased in a given direction, i.e., when the uncertainties can either increase or decrease the gain of the feedback control input \(\Delta PQ_{EV}\). The actual value of the EVCS real and reactive power load/injections can be smaller than the calculated \(\Delta PQ_{EV}\) if attackers compromise some EVCSs or if the utility overestimates the number of EVCSs the control signal is reaching. Conversely, it can be larger than the calculated \(\Delta PQ_{EV}\) if the utility underestimates the number of EVCSs the control signal is reaching, especially if the clustering mechanism discussed in Section 3 is used.

### Smaller Defender Load

Another issue associated with the uncertainty of the feedback EV defender load is that the defender load might become smaller than the attack load. If the attacker successfully compromises enough EVs that the total defender load is smaller than the total attack load, the defender can partially mitigate the attack impact but not eliminate it. This scenario, however, would require an attacker with huge resources to compromise enough load and EVs, as well as to keep the EV compromise hidden from the utility. We repeat Attack 2 with the defender controlling 25% less load than the attacker; the mitigation scheme still reduced the oscillations resulting from the attack to 0.25 Hz (a 75% reduction). To demonstrate the impact of limiting the utility's resources below those of the attacker, Fig. 26 presents the maximum frequency deviation achieved under Attack 2 when the utility EV load is capped at different levels, in steps of 10% of the total attack load. We conclude that even as the defender's resources decrease, the attack impact is still reduced below its original value; only after the utility's EV load drops to 20% does the mitigation scheme stop having a positive impact. Finally, we examine the possibility of an attacker gaining access to a large portion of EVCSs (e.g., insecure private EVCSs [60]).
However, the modeling of the uncertainty in the feedback load of the controller, u = \(\Delta PQ_{EV}\), accounts for such a scenario, and the mitigation scheme remains successful as long as the attacker does not control a larger portion of the EVCSs than the utility.

Figure 26: Maximum frequency deviation vs defender resources level

To put things into perspective, let us assume the attacker attempts to compromise several EVCSs in tandem with the previous 800 MW attack. Based on the total (public and private) EVCS load curve in Fig. 3, the average EVCS charging load is 2960 MW. Thus, the attacker would need to compromise 1080 MW of EVCS load in addition to the original 800 MW to start degrading the performance of the mitigation scheme. Under such a condition, the attacker has compromised a total of 1880 MW while the defender still has access to 1880 MW of EV load. Even attackers knowledgeable of the presence of the mitigation scheme are required to compromise this much EVCS load to have any impact on the performance of our proposed EV-based mitigation; beyond this point, the aggregate defender load becomes smaller than the aggregate attacker load, following the behavior in Fig. 26. We stress, however, that this scenario is far-fetched and practically implausible, as it would require an attacker with enough resources to compromise 31% of the total load of the power grid.

### Effect of Communication Delay

In this subsection, we consider another aspect usually ignored in similar studies: the communication and synchronization delay present during real deployment. To this end, we simulate Attack 3 after adding a random Gaussian delay between 0 and 10 ms to each of the feedback signals, to examine the impact of synchronization delay on the system. The obtained generator frequency response is depicted in Fig. 27, and it is evident that the mitigation scheme is still successful despite this delay in the communication channel. In terms of controller success, the maximum and average frequency deviations were reduced to 0.026 Hz and 0.013 Hz respectively, equivalent to eliminating 98.4% and 98.8% of the maximum and average frequency deviations caused by the LA attack. Table 4 demonstrates the maximum frequency deviation caused by Attack 3 under the robust H-2, H-\(\infty\), and mixed sensitivity H-2/\(\infty\) control strategies; in this table, the delay is assumed equal in all the feedback channels. Table 4 shows that the EV-based mitigation scheme remains successful, albeit with slightly reduced performance, indicating that the maximum possible 10 ms synchronization delay has been effectively accounted for in the controller synthesis.

Table 4: Max Frequency Deviation (Hz) vs Communication/Synchronization Delay

| Delay (ms) | 0 | 1 | 2 | 4 | 6 | 8 | 10 |
|---|---|---|---|---|---|---|---|
| H-2 | 0.006 | 0.012 | 0.02 | 0.025 | 0.029 | 0.03 | 0.032 |
| H-\(\infty\) | 0.1 | 0.13 | 0.17 | 0.19 | 0.21 | 0.24 | 0.26 |
| Mixed H-2/\(\infty\) | 0.01 | 0.04 | 0.07 | 0.1 | 0.11 | 0.12 | 0.13 |

Figure 27: Generator frequency with controller feedback communication delay

### Impact of Clustering

In this subsection, we briefly discuss the impact that the clustering suggested in Section 3 has on the performance of the EV attack mitigation. As an example, the utility clusters EVs into groups such that the expected participation is 10 EVs per cluster.
To this end, we sample the feedback control signal and send a signal equal to a multiple of the power per cluster, with each cluster assigned a load value equal to that of 10 EVs. With this clustering technique, Attack 2 is repeated and results in the frequency responses seen in Fig. 28. This technique reduces the effectiveness of the controller, but the achieved response is still within the acceptable range of normal frequency operation: the maximum and average frequency deviations were reduced by 95.4% and 97.4%, to 0.05 Hz and 0.019 Hz respectively.

### Impact of Grid Operating Point

Finally, we examine the impact of the grid's operating point on the success of the LA attack mitigation scheme. To this end, we scale the NE grid based on the NSW load profile [51]; the average load in the NSW grid is 6968 MW, while the peak and lowest loads are 8214 MW and 5897 MW respectively. We repeated Case Study 4 on the scaled grids and observed that, when the correct state-space matrices are used, the operating point has very little impact on the performance of our proposed mitigation scheme. This demonstrates that even when the dynamic behavior of the power grid changes, the mitigation scheme remains successful as long as the utility maintains updated and correct parameters of its grid in the calculation of the controller gain matrix. Alternatively, the utility can model the grid's parameter uncertainty by modifying (23) and assigning a non-zero variable to \(\Delta A\) in (24); the uncertainty in matrix A of the state-space representation can then be modeled using steps resembling (26) to (37), which were used to represent the uncertainty in the feedback loop and matrix B. However, as mentioned earlier, it is widely accepted to model the power grid as a state space [37]-[39] and to use this state space for controller design [40]-[42] while assuming the utility has complete and accurate knowledge of its own power grid parameters. Finally, in the far-fetched case that a utility does not have an accurate state-space representation, it can follow the approach proposed in [15], which utilizes a system identification technique consisting of introducing a small probing signal and retrieving the grid's impulse response to create the state-space matrices using the Eigenvalue Realization Algorithm [61].

## 6 Conclusion

The exponentially growing number of EVs, coupled with their presence on the load buses of the grid and their fast communication infrastructure, makes them ideal for our EV-based LA attack mitigation scheme. In this work, we demonstrated how EVs can be modeled as a feedback loop controller used to stabilize the grid under LA attacks. The controller synthesis is based on the state-space model of the grid, which is needed to design the feedback gain based on robust mixed H-2/\(\infty\) control.
The mixed controller design incorporates the uncertainty in the feedback EV signal to achieve grid stability under static, switching, and dynamic attacks. The initial 1.5 Hz frequency deviation caused by an 800 MW attack was attenuated by our scheme to below 0.01 Hz, guaranteeing system stability. The controller was also successful under different operating conditions such as EV load uncertainty and communication delay. We also presented an EV clustering technique that can be used to preserve the privacy of home EVCSs; the performance of the controller varied slightly when clustering was implemented, but the result remained well within the normal frequency behavior of the grid.

## Acknowledgment

The work of Ph.D. candidate M. A. Sayed is supported by Fonds de Recherche du Quebec (FRQNT), project 2022-2023 - B2X - 317973. This research was conducted and funded as part of the Concordia University/ Hydro-Quebec/ NSERC research collaboration project "Large-Scale Integration of EVCSs into the Smart Grid", Grant reference: ALLRP 567144-21.
2306.11830
UMM: Unsupervised Mean-difference Maximization
Many brain-computer interfaces make use of brain signals that are elicited in response to a visual, auditory or tactile stimulus, so-called event-related potentials (ERPs). In visual ERP speller applications, sets of letters shown on a screen are flashed randomly, and the participant attends to the target letter they want to spell. When this letter flashes, the resulting ERP is different compared to when any other non-target letter flashes. We propose a new unsupervised approach to detect this attended letter. In each trial, for every available letter our approach makes the hypothesis that it is in fact the attended letter, and calculates the ERPs based on each of these hypotheses. We leverage the fact that only the true hypothesis produces the largest difference between the class means. Note that this unsupervised method does not require any changes to the underlying experimental paradigm and therefore can be employed in almost any ERP-based setup. To deal with limited data, we use a block-Toeplitz regularized covariance matrix that models the background activity. We implemented the proposed novel unsupervised mean-difference maximization (UMM) method and evaluated it in offline replays of brain-computer interface visual speller datasets. For a dataset that used 16 flashes per symbol per trial, UMM correctly classifies 3651 out of 3654 letters ($99.92\,\%$) across 25 participants. In another dataset with fewer and shorter trials, 7344 out of 7383 letters ($99.47\,\%$) are classified correctly across 54 participants with two sessions each. Even in more challenging datasets obtained from patients with amyotrophic lateral sclerosis ($77.86\,\%$) or when using auditory ERPs ($82.52\,\%$), the classification rates obtained by UMM are competitive. In addition, UMM provides stable confidence measures which can be used to monitor convergence.
Jan Sosulski, Michael Tangermann
2023-06-20T18:39:12Z
http://arxiv.org/abs/2306.11830v1
# UMM: Unsupervised Mean-Difference Maximization

###### Abstract

Many brain-computer interfaces make use of brain signals that are elicited in response to a visual, auditory or tactile stimulus, so-called event-related potentials (ERPs). In the predominantly used visual ERP speller applications, sets of letters shown on a screen are flashed randomly, and the participant attends to the target letter they want to spell. When this letter flashes, the resulting ERP is different compared to when any other non-target letter flashes, and by using a sequence of binary classifications of the observed ERP responses, the brain-computer interface can detect which letter was the target. We propose a new unsupervised approach to detect the attended letter. In each trial, for every available letter our approach makes the hypothesis that it is in fact the attended letter, and calculates the ERPs based on each of these hypotheses. By leveraging the fact that only the true hypothesis produces the largest difference between the class means, we can detect the attended letter. Note that this unsupervised method does not require any changes to the underlying experimental paradigm and therefore can be employed in almost any ERP-based setup. To deal with the very noisy electroencephalogram data, we use a block-Toeplitz regularized covariance matrix to model the background activity. We implemented the proposed novel unsupervised mean-difference maximization (UMM) method and evaluated it in offline replays of brain-computer interface visual speller datasets. For a dataset that used 16 flashes per symbol per trial, UMM correctly classifies 3651 out of 3654 letters (\(99.92\,\%\)) across 25 participants. In another dataset with fewer, shorter trials, 7344 out of 7383 letters (\(99.47\,\%\)) are classified correctly across 54 participants with two sessions each. Even in more challenging datasets obtained from patients with amyotrophic lateral sclerosis (\(77.86\,\%\)) or when using auditory ERPs (\(82.52\,\%\)), the classification rates obtained by UMM are competitive. As an additional benefit, stable confidence measures are provided by this novel method, which can be used to monitor convergence of UMM.

In the typical supervised approach, a calibration recording is performed first, in which label information is exploited for subsequent prediction, i.e., the productive online usage of the speller application. However, it is often unclear how much calibration data is required. Please note that in the calibration period, the participant cannot use the BCI productively. There exist some unsupervised approaches that make the calibration phase superfluous, enabling the participant to immediately start using the BCI. For example, expectation maximization can be used to find the target/non-target ERP responses (Kindermans et al., 2012). Alternatively, a slight modification of a visual speller paradigm can enable learning by label proportions (Hubner et al., 2017), or one can even combine both (Verhoeven et al., 2017; Hubner et al., 2018) into a mixed approach. While there is no calibration phase in these examples, the approaches typically require a few trials worth of data (e.g., around 7 letters in (Hubner et al., 2018)) to reach a satisfactory performance. Each of these previous approaches uses unsupervised learning to obtain the target and non-target ERP responses.
These serve as the class means for an LDA classifier, which is then used to classify each individual epoch of a trial; the multi-class prediction (i.e., which letter was attended) is obtained by aggregating these many binary classifications. Instead of aggregating the outcomes of the binary events (target/non-target) into a multi-class decision (letter), we propose to make use of the whole trial in ERP-based BCIs, by forming each possible selection as a hypothesis and choosing the one that maximizes the distance between the hypothesized ERP target and non-target means. This simple, yet surprisingly effective, unsupervised mean-difference maximization (UMM) method is computationally light and does not require any modifications of the underlying BCI paradigm, making deployment in current ERP-based paradigms straightforward.

## 2 Methods

### Preliminaries

We consider a binary BCI ERP classification problem, where one trial consists of \(N_{e}\) epochs, of which \(N_{e}^{+}\) are targets and \(N_{e}^{-}\) are non-targets. Additionally, \(N_{e}^{+}<N_{e}^{-}\) and \(N_{e}^{+},N_{e}^{-}>1\), both of which are true in virtually all ERP-based BCI applications. Furthermore, for each epoch \(e_{k},1\leq k\leq N_{e}\) we know which symbols \(s\) out of the set of available symbols \(S\) were flashed during the \(k\)-th highlight event. In this setting, without label information--i.e., which symbol was focused by the participant--the task is to find the assignment \(A^{+}\) such that all epochs \(\{e_{k}\,|\,k\in A^{+}\}\) correspond to target events (i.e., the attended symbol was flashed) and all other epochs \(\{e_{k}\,|\,k\in A^{-}\}\) with \(A^{-}\coloneqq\{k\,|\,k\notin A^{+}\}\) are non-targets. Without incorporating experimental constraints, enumerating assignments quickly becomes prohibitive in practice, as the number of possible assignments is \(\binom{N_{e}}{N_{e}^{+}}\), e.g., for a common row-column speller with a 1:5 target/non-target ratio and 60 highlights per letter, this is \(\binom{60}{10}=7.54\cdot 10^{10}\). However, we know that in all feasible assignments \(A^{+}\) one letter has to be common among all assigned epochs, therefore the number of possible assignments reduces to the number \(|S|\) of available symbols.

### Unsupervised Mean-difference Maximization (UMM)

```
Require: available symbols S, epochs of i-th trial E^(i)
1: for every trial i do
2:   Σ ← cov(E^(i))
3:   for every symbol s ∈ S do
4:     A^{s+} ← {k | s flashed in epoch e_k}, A^{s-} ← {k | k ∉ A^{s+}}
5:     μ_{s+} ← mean(E^(i)_{A^{s+}}), μ_{s-} ← mean(E^(i)_{A^{s-}})
6:     d^Σ(s) ← (μ_{s+} − μ_{s-}) Σ^{-1} (μ_{s+} − μ_{s-})^T
7:   end for
8:   predict s* ← argmax_s d^Σ(s)
9: end for
```

For each symbol \(s\in S\), UMM hypothesizes that \(s\) is the attended symbol, assigns all epochs in which \(s\) was flashed to the hypothetical target class and all remaining epochs to the hypothetical non-target class, and computes the hypothetical class means

\[\mathbf{\mu}_{s^{+}}=\operatorname{mean}\left(E^{(i)}_{A^{s^{+}}}\right),\qquad\mathbf{\mu}_{s^{-}}=\operatorname{mean}\left(E^{(i)}_{A^{s^{-}}}\right), \tag{1}\]

along with their difference \(\Delta\mathbf{\mu}_{s}=\mathbf{\mu}_{s^{+}}-\mathbf{\mu}_{s^{-}}\) and the squared Euclidean distance

\[d^{2}(s)=(\Delta\mathbf{\mu}_{s})(\Delta\mathbf{\mu}_{s})^{\top}. \tag{2}\]

Only for the correct hypothesis does the assumed target class contain exclusively target epochs; under every wrong hypothesis, the assumed classes (one or both) contain mixtures of non-target and target epochs. As a result, averaging the epochs of different classes in each assignment will mix the corresponding ERPs, which in turn reduces the distance between the hypothetical class means. Therefore, the squared distance \(d^{2}(s)\) between the assumed class means will be maximal if our assumed symbol is the true symbol that was attended. However, in a high-dimensional noisy setting, with correlated data and few samples to estimate the class means from, the squared distance is not reliable.
As a remedy, we propose to use the inverted global covariance matrix \(\Sigma^{-1}\) to remove the influence of correlated dimensions and to dampen dimensions that have a high variance in general. The resulting covariance-corrected distance metric \[d^{\Sigma}(s)=(\Delta\mathbf{\mu}_{s})\Sigma^{-1}(\Delta\mathbf{\mu}_{s})^{\top} \tag{3}\] is also known as the squared Mahalanobis distance. Note that both distances are equivalent if the covariance is the identity matrix, e.g., as would be the case for whitened data. The symbol \(s^{*}\) that was actually attended can now be obtained by \[s^{*}=\operatorname*{argmax}_{s}d^{\Sigma}(s). \tag{4}\] Note that the traditional Mahalanobis distance would require the within-class covariance matrices, i.e., label information would be necessary. However, as shown by Hubner (2020), in some cases the global covariance--i.e., pooling data of both classes and ignoring class-specific means--can be used instead, particularly when the matrix is multiplied with a vector that points in the direction of the difference between the (unknown) class means. In this case, the matrix-vector product is merely scaled by some factor when using the global instead of the within-class covariance. As the same \(\Sigma^{-1}\) is used for all symbols, every distance is scaled equally, i.e., this does not affect Equation (4). Note that regardless of the hypothesis on which symbol is attended, the expected direction of the hypothesized class mean difference vector \(\Delta\mathbf{\mu}_{s}\) points in the same direction as the true class mean difference vector. While noise may change the direction of \(\Delta\mathbf{\mu}\), we assume that the overall reduction of the distance between class means caused by wrong assignments dominates.

To illustrate the novel approach, Figure 1 shows a toy example of the unsupervised mean-difference maximization (UMM) for simulated data of a four-letter spelling problem. Each letter is hypothesized as the target ('C' and 'D' as the target are not shown), but only the true target 'B' produces the largest vector between the class means. The basic UMM method is described as pseudo-code in Algorithm 1.

#### 2.2.1 Confidence

While our method assumes the attended symbol is the one that maximizes Equation (3), we can make use of all distance values generated by the other hypotheses to define a notion of UMM's confidence. After UMM determined \(s^{*}\) to be the attended symbol, let \(S^{-}\) describe the set of all other symbols, i.e., \(S^{-}\coloneqq\{s\,|\,s\neq s^{*},s\in S\}\) with corresponding distances \(D^{\Sigma}_{S^{-}}\coloneqq\{d^{\Sigma}(s)\,|\,s\in S^{-}\}\). From \(D^{\Sigma}_{S^{-}}\) we can calculate the standard deviation \(\sigma_{S^{-}}\) of the class mean distances of the presumably not attended symbols. It can be used to standardize a comparison of the distance produced by the symbol assignment \(s^{*}\) with the runner-up assignment \(s^{r}\): \[c=\frac{d^{\Sigma}(s^{*})-d^{\Sigma}(s^{r})}{\sigma_{S^{-}}}, \tag{5}\] where the runner-up is determined by \(s^{r}=\operatorname*{argmax}_{s\neq s^{*}}d^{\Sigma}(s)\). The obtained confidence value \(c\) for this choice of \(s^{*}\) is always positive or zero. Intuitively, if the differences in distances are only caused by Gaussian noise, then the distance between the winner and the runner-up should remain small. A large confidence, on the other hand, is unlikely to be caused merely by noise; instead, UMM's decision for \(s^{*}\) is more likely to be correct.
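A minimal sketch of this decision rule on toy data, in the spirit of the four-letter example of Figure 1, could look as follows. The epoch features, the simplified one-symbol-per-flash schedule, and all helper names are illustrative assumptions, not part of the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = ["A", "B", "C", "D"]
n_epochs, target = 40, "B"

# Toy data: each epoch flashes one symbol; epochs of the true target get a shifted mean.
flashed = rng.choice(symbols, size=n_epochs)
X = rng.normal(0.0, 1.0, size=(n_epochs, 2))
X[flashed == target] += np.array([2.0, 1.0])         # simulated target ERP offset

# Global covariance: pooled over all epochs, no label information used
Sigma_inv = np.linalg.pinv(np.cov(X, rowvar=False))

# Hypothesize each symbol as the target and score the hypothetical mean difference (Eq. 3)
d = {}
for s in symbols:
    mask = flashed == s
    delta_mu = X[mask].mean(axis=0) - X[~mask].mean(axis=0)
    d[s] = float(delta_mu @ Sigma_inv @ delta_mu)

ranked = sorted(d, key=d.get, reverse=True)
s_star, s_runner = ranked[0], ranked[1]
sigma_rest = np.std([d[s] for s in ranked[1:]])       # std over non-winning hypotheses
confidence = (d[s_star] - d[s_runner]) / sigma_rest   # Eq. 5
print(s_star, f"confidence={confidence:.2f}")
```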
#### 2.2.2 Learning Across Trials

So far, UMM is applied instantaneously--i.e., using only the epochs of the trial at hand--and does not incorporate any information from previous trials. Making use of data from previous trials is straightforward for the covariance estimation: instead of using only the epochs of the current \(i\)-th trial, \(\Sigma^{1}\coloneqq\operatorname{cov}(E^{(i)})\) (cf. Algorithm 1, line 2), we pool the data of the current trial and all previous trials, i.e., \(\Sigma^{all}\coloneqq\operatorname{cov}(E^{(1)}\cup\ldots\cup E^{(i-1)}\cup E^{(i)})\). Correspondingly, the class mean estimates obtained from the current trial only (cf. Algorithm 1, line 5) can be replaced by a more robust estimate that makes use of previous trials, namely the weighted average between the class mean estimates of the previous \(N_{t}\) trials and the estimate obtained from the current trial, i.e., \[\mathbf{\mu}^{O}_{s^{+}}=\frac{\mathbf{\mu}^{\text{prev}}_{+}\cdot N_{t}+\operatorname{mean}\left(E^{(i)}_{A^{s^{+}}}\right)}{N_{t}+1}, \tag{6}\] with \(\mathbf{\mu}^{\text{prev}}_{+}\) being the target mean of the previous trials. The non-target means can be calculated analogously. Note that we are considering an unsupervised setting, and therefore we do not know the true class means/labels of previous trials. We propose two different options to deal with this. The first option, given by Equation (6), is an optimistic one, as it simply assumes that all of UMM's predictions in previous trials have been correct. Alternatively, the previously derived confidence measure can be used to weigh the means according to their confidence, i.e., \[\boldsymbol{\mu}^{C}_{s^{+}}=\frac{\sum\limits_{l=1}^{N_{t}}\hat{c}^{(l)}\cdot\boldsymbol{\mu}^{(l)}_{+}+c^{(i)}\cdot\operatorname{mean}\left(E^{(i)}_{A^{s^{+}}}\right)}{\sum\limits_{l=1}^{N_{t}}\hat{c}^{(l)}+c^{(i)}}, \tag{7}\] where \(c^{(l)}\) is the confidence and \(\boldsymbol{\mu}^{(l)}_{+}\) the mean obtained from the (already recorded) \(l\)-th trial, and \(\hat{c}^{(l)}=\min(c^{(l)},1)\). Limiting previous confidence values to 1 is needed because--as we show in Section 4.8--UMM is sensitive to the specific stimulation sequence used for a symbol. Note that \(c^{(i)}\) is the confidence of UMM for the current \(i\)-th trial, which cannot be known before calculating \(\boldsymbol{\mu}^{C}_{s^{+}}\); therefore, this \(c^{(i)}\) is derived instantaneously, i.e., using only the current trial (cf. Equation (5)). Note that both approaches make use of naive labeling (Kuncheva et al., 2008), i.e., they use their own past classification decisions and assume them to be the true labels. As such, UMM is dependent on correct classifications, especially during the very first trials.

#### 2.2.3 Stationary Covariance Matrix of Background Activity

Having fewer epochs than feature dimensions in the first few trials is prohibitive for using the vanilla sample covariance matrix. Thus, a shrinkage regularized covariance matrix (Ledoit & Wolf, 2004), denoted by \(\Sigma_{s}\), may be beneficial. Alternatively, a recently introduced block-Toeplitz structured covariance matrix (Sosulski & Tangermann, 2022), denoted by \(\Sigma_{t}\), can be used. It assumes EEG background activity to be stationary within short epochs (Cohen & Sances, 1977). The authors observed improved ERP classification performance for this regularization--especially when little data is available--due to its sample efficiency.
In addition, \(\Sigma_{t}\) reduces the required memory by a factor of \(N_{t}\) compared to \(\Sigma_{s}\), and more efficient algorithms are known for mathematical operations performed on block-Toeplitz matrices. Comparisons between \(\Sigma_{s}\) and \(\Sigma_{t}\) in our work, however, will only focus on potential classification performance differences.

#### 2.2.4 BCI Datasets

We evaluated our method on five publicly available BCI datasets. For the **Hub17** dataset by Hubner et al. (2017), a copy spelling task was performed by 13 healthy participants, and 31-channel EEG was recorded using gel-based electrodes to reflect the visual ERP responses. Each participant had to spell a 63 letter sentence three times. To spell one letter, 68 visual highlighting events were performed, where a pseudo-random set of letters was highlighted, with 16 target and 52 non-target events per letter.

Figure 1: UMM method exemplified for two-dimensional toy data representing a four letter speller paradigm. The same multivariate Gaussian noise was assumed for all letters. The true target letters ‘B’ (orange) were drawn from a different mean than the true non-targets ‘A’, ‘C’ and ‘D’ (blue, green, red). Star markers indicate means of assumed target letters, circles of correspondingly assumed non-target letters. **Left**: Input data (no hypothesis). **Center**: Example of the wrong hypothesis, under which letter ‘A’ (blue) is assumed target. Letters ‘B’, ‘C’, ‘D’ are pooled to form the gray non-targets under this hypothesis. The black dashed line indicates the difference between the hypothesized class means. **Right**: Analogously for the orange ‘B’ being the hypothesized target class, which results in a larger distance between the hypothesized class means. Note that ‘C’ and ‘D’ as target hypotheses are not shown.

The **Hub18** dataset by Hubner et al. (2018) had almost the same setup for 12 healthy participants, except that the sentence to copy spell consisted of only 35 letters. Data obtained afterwards under a later free-spelling condition was not evaluated. Lee et al. (2019) recorded the **Lee19** dataset of 54 healthy participants with two sessions of a visual ERP paradigm each. Per session, participants first performed 33 letters of standard copy spelling (used as calibration data), followed by an online block of spelling a known 36 letter sentence. However, the UI did not indicate the next letter on the screen during the spelling process, so participants had to remember the sentence and their current position in it. While 62-channel EEG had been recorded, we included only the same 32 channels used by the original authors in their ERP classification pipeline. In addition to using pseudo-random sets of letters during each highlighting event, the authors overlaid symbols with a familiar face to potentially evoke an additional N400f ERP response (Kaufmann et al., 2011). In this dataset 60 epochs per letter are available, with 10 target and 50 non-target epochs. In the **Ric13** dataset, Riccio et al. (2013) recorded 35 letters per participant for a visual ERP protocol. The eight participants were ALS patients. The EEG was recorded at eight channels, and the spelling application had a classic row-column layout. Each letter used 120 highlighting events with 20 targets and 100 non-targets. Note that for the UMM method, compared to pseudo-random symbol highlights, the row-column paradigm is harder to classify when using mean estimation methods that make use of past trials.
This can be explained with an example: consider the setting in which UMM chooses a wrong symbol already in the first trial. If this wrongly chosen symbol is in a different row and column than the correct letter, the hypothesized target assignment contains no actual targets, whereas the hypothesized non-target assignment contains all 20 targets and the remaining 80 non-targets. In this case, the vector between the means does not point towards the target but in the opposite direction. As UMM considers distances only, it is not able to detect this wrong orientation, and using this wrong direction in the following letter would pull future mean estimates further in the wrong direction. Since the row-column interface is still popular, we chose to include this more challenging paradigm in our evaluation.

Finally, the **Sch14** dataset by Schreuder (2014) uses auditory evoked ERPs in the AMUSE paradigm. The 21 healthy participants used an auditory BCI to spell a sentence but, in contrast to the other datasets, had to correct for spelling mistakes, i.e., if the online classifier decoded a wrong selection, participants had to select an 'undo' operation to correct this. The spelling application involved a two-step procedure to select a letter: per step, a participant focused on one out of six different tones which were presented from six loudspeaker directions. First the group of letters containing the target was selected, and then the actual letter to be spelled. As a wrong selection in the first step changes the required selection for the second step (i.e., from the actual letter to choosing 'undo'), we could evaluate UMM's performance per selection step with respect to the correct selection of one out of six loudspeaker directions, but not regarding the correct selection of a letter. One selection step consisted of 90 epochs with 15 target and 75 non-target tones. The dataset contains EEG recorded at 32 channels. Note that, similarly to the **Ric13** dataset, this is a challenging paradigm, as wrong decisions create mean-difference vectors that point in the opposite direction. Example code showing how to use UMM on two visual ERP datasets is available in our repository at [https://github.com/jsosulski/umm_demo](https://github.com/jsosulski/umm_demo).

## 3 Statistical Testing

In order to compare the different mean and covariance estimation methods, we use the paired t-test, which requires the obtained average classification performances to come from a normal distribution. As these averages are calculated from a total of 108 participants, we assume, due to the central limit theorem, that the calculated sample means follow a normal distribution (Lumley et al., 2002). While we could employ a non-parametric paired Wilcoxon rank sum test, this test would treat a classification rate difference of, e.g., \(0.98\) to \(0.99\) in the same way as \(0.54\) to \(0.99\), and we want to penalize cases where UMM underperforms severely. To correct for the six tests performed, we used Bonferroni correction and tested against a significance level of \(1\,\%\).
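This testing procedure can be sketched in a few lines; the per-participant accuracy arrays below are random placeholders standing in for the actual results.

```python
import numpy as np
from scipy.stats import ttest_rel

ALPHA, N_TESTS = 0.01, 6                       # significance level and Bonferroni factor

# Placeholder per-participant mean accuracies for two estimator settings (108 values each)
rng = np.random.default_rng(42)
acc_toeplitz = np.clip(rng.normal(0.98, 0.03, 108), 0.0, 1.0)
acc_shrinkage = np.clip(rng.normal(0.95, 0.05, 108), 0.0, 1.0)

t_stat, p_value = ttest_rel(acc_toeplitz, acc_shrinkage)
significant = p_value < ALPHA / N_TESTS        # Bonferroni-corrected threshold
print(f"t={t_stat:.2f}, p={p_value:.4f}, significant={significant}")
```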
## 4 Results

We show results for three different mean estimation methods: using only the single current trial (\(\mu^{1}\)), and then using information from previous trials, first with an optimistic estimation (\(\mu^{O}\)) and second with a confidence-based estimation (\(\mu^{C}\)). The covariance matrix is estimated either only on the single current trial (\(\Sigma^{1}\)) or on the current and all previous trials (\(\Sigma^{all}\)), using either shrinkage (\(\Sigma_{s}\)) or block-Toeplitz (\(\Sigma_{t}\)) regularization.

### Effectiveness of UMM

Performance values of the different mean and covariance estimators in the UMM algorithm are shown in Figure 2. For this plot, the average classification rate for each participant was calculated, and then all 108 participants across all datasets were pooled. While this emphasizes the Lee19 dataset with its 54 participants, it provides an overview of the performance values of the different estimators. First off, UMM with the Toeplitz covariance estimated from all past trials (\(\Sigma_{t}^{all}\)) and confidence-based mean estimation (\(\mu^{C}\)) reaches a median classification rate of \(1.0\), as 58 of the 108 recordings (more than half) could be classified perfectly from the very first trial on. The difference between the Toeplitz and the shrinkage covariance is not significant when using the confidence-based mean, but it is significant when using the optimistic mean. An explanation could be that the Toeplitz covariance reduces the number of wrong classifications especially early on, which is important when using optimistic mean estimation and relying on early classification results; in contrast, confidence-based mean estimation discounts the importance of wrong early classifications (if their confidence is low). Interestingly, only the shrinkage covariance estimation benefits significantly from using more than one trial of data to estimate the covariance matrix, while the Toeplitz covariance estimation does not. For three participants of the Ric13 dataset, the classification performance is \(0.0\) (the theoretical chance level is \(1/36\)) when using the \(\mu^{C}\) and \(\Sigma_{t}^{all}\) strategies. A possible explanation could be that in these cases UMM was initialized unfavorably and was unable to recover (cf. Section 2.2.4); further evidence for this explanation is that this does not occur when using the instantaneous mean (\(\mu^{1}\)). Using \(\mu^{1}\) should be combined with the instantaneous covariance estimation (\(\Sigma^{1}\)) rather than with all pooled data (\(\Sigma^{all}\)); however, this effect is only significant when using shrinkage covariance estimation.

Figure 3 compares UMM classification performances across the different mean and covariance estimators for each respective dataset. As expected, the domain-specific block-Toeplitz covariance is better than using just the shrinkage covariance regularization for most datasets. Generally, the confidence-based mean estimation performs best across all datasets. Only the Hub18 dataset is marginally better with the optimistic mean estimate, which corresponds to one instead of two wrong letters out of 1260 letters. Using UMM, classification rates above \(99\,\%\) are achieved on all three visual speller datasets with healthy participants, without using any calibration data at all. For the Ric13 dataset, a normal shrinkage covariance is better than using the Toeplitz structured matrix. Surprisingly, in the Sch14 dataset, using a covariance matrix calculated from a single trial appears to perform better than using all the data available.
However, this occurs only due to chance, as UMM is sensitive to a correct classification in the first few trials when it learns from its own classification results using optimistic or (less so) confidence-based class mean estimation. Interestingly, when using only the current trial to estimate the mean (\(\mu^{1}\)), it is actually detrimental in every dataset to calculate the covariance matrix on previous trials (\(\Sigma^{all}\)). A possible explanation could be that all datasets had a fixed stimulus onset asynchrony (SOA); as a result, in each trial the frequency that is synchronized with the SOA (e.g., 4 Hz for an SOA of 250 ms) is barely represented in the temporal covariance.

### Learning Curves

The learning curves in Figure 4 show that the Toeplitz covariance matrix (blue curves) tends to be a better choice than the shrinkage regularized covariance alone for all datasets except the Ric13 dataset. This is especially true during the early letters, where for Hub17 and Hub18 virtually no ramp-up period is observed anymore with the Toeplitz regularization. Furthermore, in all datasets, using UMM with confidence-based mean estimation (the brighter curves within one color) outperforms the optimistic UMM approach, especially for the Lee19 dataset in combination with a shrinkage estimated covariance matrix. For the Ric13 dataset, we observed a surprisingly short ramp-up period at the start; however, the performance does not reach the upper ceiling as for the other datasets. This is due to the effect described in the next section, where UMM can fail to perform at all if the initial trials are classified wrongly.

### Participant-wise Results for Sch14

For the auditory dataset, in each trial the attended tone/direction (one out of six) had to be decoded instead of the actual letter, while selecting a letter required at least two trials. As the spelling interface allowed participants to delete wrong letters and undo wrong first-step decisions, the number of trials performed for spelling the same text was different for each participant.

Figure 2: Boxplot of UMM classification rates for different mean and covariance estimators, pooled across all datasets. Whiskers show 1.5 interquartile ranges and diamonds indicate performance of subjects who are more than 1.5 interquartile ranges away from the first quartile.

As shown in Figure 5, the overall performance is worse for this auditory paradigm than for the visual speller datasets. Nevertheless, UMM worked flawlessly or almost flawlessly for some participants. Interestingly, the two participants VPfcj and VPfcm perform below chance level with UMM when using information from previous trials. This is explained by the stimulation protocol, which delivers tone stimuli one after the other instead of in parallel: when UMM chooses the wrong target assignment in the first trials, the mean estimation in subsequent trials will move the class means along the opposite direction (cf. Section 2.2.4). In the visual speller setups, where one stimulation event highlights multiple symbols, this problem is alleviated, especially when the set of highlighted letters is chosen (pseudo-)randomly (datasets: Hub17, Hub18, Lee19) and is not constrained to rows or columns (dataset: Ric13).
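The direction-insensitivity underlying this failure mode can be verified in a few lines: the quadratic form in Equation (3) assigns exactly the same score to a mean-difference vector and to its negation, so a flipped mean estimate looks just as convincing to UMM. The matrix and vector below are illustrative.

```python
import numpy as np

Sigma_inv = np.linalg.inv(np.array([[2.0, 0.5], [0.5, 1.0]]))  # toy inverse covariance

def score(delta_mu: np.ndarray) -> float:
    # Quadratic form of Eq. (3): invariant to the sign of delta_mu
    return float(delta_mu @ Sigma_inv @ delta_mu)

delta_mu = np.array([1.2, -0.4])            # hypothetical mean-difference vector
print(score(delta_mu) == score(-delta_mu))  # True: a flipped direction is not detected
```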
### Improvement over State of the Art Unsupervised Classifiers

To relate UMM to state of the art decoding methods in BCI, we compare our results with the original publications of the Hub17 and Hub18 datasets, as in these datasets the online classifier used was indeed unsupervised. We only report the setting in which the covariance is estimated using a Toeplitz structure (\(\Sigma_{t}\)) and the mean is estimated based on previous confidence values (\(\mu^{C}\)). In Figure 6, the online classification results obtained by the original learning from label proportions ('LLP') method on the Hub17 dataset (Hubner et al., 2017) are compared to the classification results we obtained using UMM. In this plot, each row corresponds to one block of 63 letters spelled by one participant. Whereas in the original experiment a clear ramp-up phase of 'LLP' can be observed for all participants, UMM works from the very start for basically every participant. Even for participant 11 (third last), who had difficulties using the BCI with 'LLP', UMM would allow for a perfect classification performance. Note that the EEG data of the second block of participant 6 could not be loaded due to missing optical markers and was therefore omitted for both methods.

In the original recording of the Hub18 dataset, Hubner et al. (2018) compared learning from label proportions, expectation maximization, and their proposed combination thereof, the so-called 'Mix' method. Note that in this online experiment each block of 35 letters had been classified by one of the three methods. As we could not match which method had been used for which block, we only report the performance of the 'Mix' method, as it had been reported as the best of the compared approaches by Hubner and colleagues and can still be considered the state of the art unsupervised classification method for ERP-based BCI. The results of a comparison between the 'Mix' method and UMM are shown in Figure 7. In this dataset, no clearly badly performing participants can be observed, but UMM notably again works from the very first letter. Note that the performance of 'Mix' (85.71%) seems only slightly better than that of 'LLP' (84.21%), but this is caused by the shorter sentences that had to be spelled (unsupervised methods tend to perform badly especially on the first few letters). Note that in both the Hub17 and the Hub18 datasets, UMM's performance when using no past information at all (see columns \(\Sigma_{t}^{1},\mu^{1}\) and \(\Sigma_{s}^{1},\mu^{1}\) in Figure 3) is already very high, especially when the covariance estimation makes use of the block-Toeplitz regularization.

### Comparison to Supervised Classification

In the Lee19 dataset, the original authors used the first 33 letters recorded in copy-spell mode as calibration data, while the remaining 36 were actually classified online. Note that the latter was a copy-spell task as well; however, the interface did not indicate which letter to spell next after every letter.

Figure 3: Heatmap with the classification rate of UMM for all datasets. Each row describes a dataset with the different covariance and mean estimators indicated in each column. Note that the top three rows correspond to the three similar visual speller datasets with pseudo-random stimulation sequences. Numbers in brackets after each dataset name indicate how many times UMM was used to spell a sentence in total, i.e., per session / block.
For this online block, Lee et al. (2019) report a mean classification accuracy of \(96.8\,\%\). As UMM is an unsupervised approach, it can not only be applied to the online block, but would allow online classification already for the first 33 letters. On the full set of 69 letters, UMM reaches a performance of \(99.47\,\%\). Evaluating UMM's classification rate only on letters from the online block (letters 34-69, comparable to what Lee and colleagues reported), UMM obtains an accuracy of \(99.61\,\%\). For the Ric13 dataset, the first 15 letters had been used by the original authors for calibration and the remaining 20 for online spelling. On this dataset, Riccio et al. (2013) report an average classification accuracy of \(97.5\,\%\), which is much better than the \(77.86\,\%\) UMM achieves. However, this poor average classification performance is mainly caused by participants for whom initial mistakes prohibit UMM from working, even when using confidence-based mean estimation. Detecting and mitigating early mistakes may be key to improving UMM's performance.

Figure 4: Learning curves for UMM using different covariance regularization and mean estimation methods for all visual speller datasets. UMM used the current and all previous trials to predict the letter. A ratio of 1 indicates that the N-th letter was correctly predicted for all sessions/participants contained in a dataset. Please note that results for the Ric13 dataset were obtained from the recordings of 8 patients only, while the other datasets contain the results of between 36 to 107 recordings. The solid black line indicates the maximally possible performance level for each dataset.

Figure 5: Bar plot with classification rate of the Sch14 dataset. Red-colored participants did not finish the original experiment, as their classification performance was not sufficient to finish spelling the sentence.

### Reliability of the Confidence Measure

To assess the reliability of the confidence measure proposed for the unsupervised UMM method, we focus on a setting in which a substantial number of letters were classified incorrectly. For this purpose, we chose the traditional shrinkage covariance estimation with confidence-based mean (\(\Sigma_{s}^{all}\), \(\mu^{C}\), classification rate: \(93.40\,\%\)) on the large Lee19 dataset. In this setting, the distribution of the confidence values is shown in Figure 8. As expected, the confidence is consistently low for incorrect classifications. In fact, all wrong classifications had a confidence of \(1.5\) or lower, while overall confidence values ranged between \(0\) and \(7.5\).

### Confidence as a Predictor of Degenerate Cases

As mentioned previously, when using past information to estimate the means of a trial, in rare cases it can happen that UMM learns the inverted class means at the start. This undesired behavior was mainly observed when using the worse-performing shrinkage regularized covariance matrix (\(\Sigma_{s}\)). However, even when using the block-Toeplitz covariance (\(\Sigma_{t}\)), we observed the undesired behavior for participants 1, 2 and 4 of the Ric13 dataset and participants VPfcj and VPfcm of the Sch14 dataset. If this happens, UMM will reliably misclassify below chance level. We investigated this undesired behavior for the example of participant 40 of the Lee19 dataset in Figure 9, using shrinkage covariance (\(\Sigma_{s}^{all}\)) estimation. This figure provides the cumulative confidence values, i.e., the sum of all confidences obtained over all trials.
### Confidence as a Predictor of Degenerate Cases

As mentioned previously, when using past information to estimate the means of a trial, in rare cases it can happen that UMM learns the inverted class means at the start. This undesired behavior was mainly observed when using the worse shrinkage-regularized covariance matrix (\(\Sigma_{s}\)). However, even when using the block-Toeplitz covariance (\(\Sigma_{t}\)), we observed the undesired behavior for participants 1, 2, and 4 of the Ric13 dataset and participants VPFcj and VPFcm of the Sch14 dataset. If this happens, UMM will misclassify reliably below chance level. We investigated this undesired behavior for the example of participant 40 of the Lee19 dataset in Figure 9, when using shrinkage covariance (\(\Sigma_{s}^{all}\)) estimation. This figure provides the cumulative confidence values, i.e., the sum of all confidences obtained over all trials. In session 1 (top plot), using UMM with past information would lead to almost no correct classification at all for this participant, whereas in session 2 UMM performs perfectly. The cumulative confidence can be used to detect this degenerate case in session 1, as the confidence of UMM using \(\mu^{C}\) (green line) is barely different from the confidence that uses \(\mu^{1}\) (purple line). In contrast, in session 2, \(\mu^{C}\) reliably accumulates to higher confidences. Note that in the UMM implementation, calculating both confidences corresponds to one additional matrix multiplication only, which has a negligible run time impact.

Figure 9: Cumulative confidences for participant 40 of the Lee19 dataset. Blue blocks at the bottom line indicate that the letter was classified correctly using the cumulative mean estimate \(\mu^{C}\); yellow blocks indicate misclassifications.
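A minimal sketch of this degenerate-case check could look as follows; the decision margin is a hypothetical tuning constant introduced here for illustration, not a value from the paper:

```python
import numpy as np

def is_degenerate(conf_cumulative, conf_instant, margin=1.1):
    """Compare the cumulative confidence obtained with the confidence-
    based mean (mu^C) against the one obtained without past information
    (mu^1). If using past information barely increases the accumulated
    confidence, the running mean estimate may have locked onto inverted
    class means and should be discarded. `margin` is an assumption."""
    return float(np.sum(conf_cumulative)) < margin * float(np.sum(conf_instant))

# Toy usage: per-letter confidences from two hypothetical sessions.
print(is_degenerate([0.4, 0.5, 0.6], [0.4, 0.5, 0.55]))  # True  (suspicious)
print(is_degenerate([2.0, 3.5, 4.0], [0.8, 0.9, 1.0]))   # False (healthy)
```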
### Using UMM Confidence to Assess the Quality of the Stimulation Sequence

In the Hub17 and Hub18 datasets, the stimulation sequences were pre-generated before the start of the experiments. The same sequence was used in every block/sentence of each participant across both datasets. This means that, for example, when letter 'A' was the symbol to be attended in the third position of a sentence, each participant was presented with the same order of highlighting events for this letter in this position. The grand average confidence values for each letter for both the Hub17 and the Hub18 datasets are visualized in Figure 10. By design, the first 15 letters participants had to spell were the same between the two datasets, while this happened by chance for letters 30 and 34 in the later part. Interestingly, the average classifier confidences are virtually equal when the letters are the same at a certain position in the sentence to be spelled, even though the 13 participants in the Hub17 study were different from the 12 participants in the Mix study. However, if different letters had to be spelled--i.e., the stimulation sequence of the attended letter is different between Hub17 and Hub18 participants--the confidences do not overlap. The same holds for the same letter being spelled in different positions of the sentence, as then different stimulation sequences have been used. See, for example, the confidence values for the space/underscore character in positions 12 and 15, where the average confidence is around \(3.4\) and around \(5.7\), respectively. This observation strongly indicates that the confidence of UMM can be used--when other experimental parameters are identical--to identify stimulation sequences which are hard or easy to classify for UMM.

Figure 10: Classifier confidence of the first 35 letters of the Hub17 and Hub18 datasets. The sentence a participant had to spell in each dataset is given at the bottom. Areas with a gray background indicate that different letters had to be spelled at that position. Error bars indicate the 95 % confidence interval of the mean.

## 5 Discussion

We proposed the UMM approach: this unsupervised method allows healthy participants to use an ERP-based visual speller BCI without any calibration and with almost no misclassifications from the very first letter on. The experiments on a large number of datasets indicate that UMM performs better than traditionally used supervised and unsupervised LDA binary classification. Note that the latter use aggregated classifier outputs to make predictions in a multi-class setting, whereas UMM solves this multi-class problem directly. A possible explanation of the observed performance difference is that our method does not try to correctly estimate the parameters of an actual projection direction (while for LDA, the normal vector \(\mathbf{w}\) needs to be estimated)--instead, it selects the classification assignment that produces the largest difference between the class means, which is a much simpler task but still sufficient to determine the target ERP stimulus. Solving only this simpler task may enable UMM to flexibly adapt to changes of the underlying signals, while most decoding approaches cannot cope well with such non-stationarities. See, e.g., participant 11 (third last participant from the top in Figure 6), who had an 'overall low signal-to-noise ratio (SNR)' (Hubner et al., 2017), which yields very poor performance using LDA. If this low SNR is caused by, e.g., latency jitter of the discriminative ERP components between trials, the true ERPs cannot be captured by mere class-wise averaging of the epochs across trials, which is used in traditional classification approaches. In contrast, UMM will consider the most discriminative dimensions per trial--albeit with more emphasis on the traditional class-wise means when using optimistic or confidence-based mean estimation.

The proposed mean estimation methods of our approach can be interpreted as a regularization of the mean using past trials and naive labeling of these past trials. Compared to a traditional LDA--where the data is projected on the direction of the difference between class means without considering the current trial--this allows UMM to consider deviations from the directions of the class means when classifying a new trial. Note that our current straightforward approach weights previous trials with the amount of trials available (optimistic mean estimate), which already works well for the majority of participants. However, as it probably is not optimal, it leaves room for future improvement. Our proposed UMM framework would also allow a 'forgetting of very old trials' for mean and covariance estimation, to cope with non-stationarities. A rather extreme case is to use UMM instantaneously, e.g., with \(\Sigma_{t}^{1}\) and \(\mu^{1}\). Here, no past information is used at all; nevertheless, this variant achieves up to \(96.11\,\%\) classification accuracy (on the Hub18 dataset) and is completely immune to all signal non-stationarities between trials (but not within a trial).

In contrast to expectation maximization, which typically is computationally intensive, our proposed approach takes around \(0.5\,\mathrm{s}\) to predict a letter on an i7-8700K CPU (released in 2017) in our provided implementation. Note that the current implementation has further potential for run time improvements, e.g., by optimizing the inversion of the block-Toeplitz matrix (Poletti & Teal, 2021), which currently makes up more than half of the total time required. Additionally, Hubner et al. (2018) observed that expectation maximization sometimes can get stuck in local optima and take many trials before showing a good performance, even on the high quality data of the Mix dataset.
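To make the selection step described above concrete, the following minimal sketch scores every candidate symbol by the squared Mahalanobis length of the class-mean difference it induces. The scoring metric, the confidence definition (gap between the two best scores), and all names are our assumptions based on the description in this paper, not the authors' implementation:

```python
import numpy as np

def umm_select(epochs, highlight_masks, sigma_inv):
    """Pick the attended symbol by mean-difference maximization.

    epochs:          (n_epochs, n_features) feature vectors of one trial.
    highlight_masks: (n_symbols, n_epochs) boolean; [s, i] is True if
                     symbol s was highlighted in epoch i.
    sigma_inv:       (n_features, n_features) inverse covariance estimate.

    For every candidate symbol, the epochs are split into hypothetical
    target / non-target classes and the class-mean difference is scored;
    the best-scoring symbol is selected.
    """
    scores = []
    for mask in highlight_masks:
        d_mu = epochs[mask].mean(axis=0) - epochs[~mask].mean(axis=0)
        scores.append(float(d_mu @ sigma_inv @ d_mu))
    order = np.argsort(scores)
    best = int(order[-1])
    confidence = scores[order[-1]] - scores[order[-2]]  # one plausible choice
    return best, confidence

# Toy usage: 60 epochs, 8 features; every 6th epoch carries the "target" ERP.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8)); X[::6] += 0.8
masks = np.zeros((6, 60), bool)
for s in range(6):
    masks[s, s::6] = True
print(umm_select(X, masks, np.eye(8)))  # selects symbol 0
```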
Aside from better classification performance, another benefit of UMM over the 'Mix' method (which is a combination of 'LLP' and expectation maximization) is that it does not require a paradigm modification: it is readily applicable to any existing ERP-based BCI paradigm we can think of that requires a multi-class selection by solving multiple binary classification problems--which is almost always the case. However, UMM can benefit from suitable paradigm design; for example, it tends to perform better when the set of highlighted letters is not constrained to form a row or column but is chosen as (pseudo-)random subsets.

From a neuroscience perspective, it is also interesting to investigate which stimuli or epochs are easier or harder to classify for UMM, in order to assess the elicited ERPs. However, UMM can only be used to predict the final one-out-of-multiple-classes letter (or another multi-class label) directly and cannot directly operate on a single stimulus. Still, as UMM calculates a mean estimate and the inverted covariance matrix, it is trivial to simply use these to obtain an LDA (\(\mathbf{w}=\Sigma^{-1}\Delta\boldsymbol{\mu}\)), which could then be used for actual binary classification of individual epochs.

For more difficult data (the Ric13 dataset, with patients and a row/column layout) or generally for weaker ERP responses (the Sch14 dataset with auditory evoked ERPs), it can rarely happen that UMM fails to perform at all when using mean information from past trials. This is a known phenomenon in setups using naive labeling (Kuncheva et al., 2008) when early trials are misclassified. It emphasizes the need for methods that work even when extremely limited data is available--for example, using the block-Toeplitz structured covariance matrix--to reliably classify the first few trials/letters in a BCI experiment. Using the proposed cumulative confidence measure (see Figure 9), these degenerate cases could be detected in an online experiment, such that UMM can be informed to, e.g., discard the mean information obtained so far and start over. A different approach to cope with harder datasets could be to initialize the UMM mean estimation either with prior knowledge or using a short calibration phase; however, this would make UMM a supervised method. As the UMM method appears to always perform above chance level when no past information is used, as shown for example in Figure 5, this information could be used to obtain a robust overall mean estimate and may also serve to prevent the undesired cases where UMM performs below chance level. Finally, UMM could also make use of a recalculation procedure similar to the post-hoc re-analysis proposed by Hubner et al. (2017). The authors make use of recent information (i.e., better mean and covariance estimates the more data is recorded) to re-classify previous trials. If this rectifies mistakes on the first few trials, the corrected aggregated mean estimate (confidence-based or optimistic) will become more reliable in future trials.
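As noted above, the quantities UMM maintains anyway suffice to assemble an LDA for epoch-wise binary decisions via \(\mathbf{w}=\Sigma^{-1}\Delta\boldsymbol{\mu}\). A minimal sketch follows; the bias term is a standard textbook choice added here for completeness, not taken from the paper:

```python
import numpy as np

def lda_from_umm(mu_target, mu_nontarget, sigma_inv):
    """Build binary LDA weights from UMM's running estimates:
    w = Sigma^{-1} (mu_target - mu_nontarget). The bias places the
    decision boundary halfway between the projected class means."""
    w = sigma_inv @ (mu_target - mu_nontarget)
    b = -0.5 * float(w @ (mu_target + mu_nontarget))
    return w, b

def classify_epoch(x, w, b):
    """Return True if the single epoch x is classified as target."""
    return float(w @ x) + b > 0.0
```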
## 6 Conclusion

We introduced the simple Unsupervised Mean-difference Maximization (UMM) method for ERP-based BCI systems. It does not require labels, i.e., no calibration phase is needed. UMM delivers a highly competitive classification accuracy over multiple visual speller datasets with healthy participants (\(99.96\,\%\), \(99.84\,\%\) and \(99.47\,\%\)). Rare shortcomings were observed in a patient dataset (\(77.86\,\%\)) and an auditory dataset (\(82.52\,\%\)); these can be detected using the proposed confidence metric, which comes basically for free when using UMM. For BCI practitioners it is important to emphasize that UMM can be applied to any available ERP-based BCI protocol, but benefits from suitable paradigm design. Practitioners should consider incorporating UMM into their BCI systems to eliminate the need for calibration as well as to allow participants to spell instantly.

## Disclaimer

There is currently a patent application pending for applications using UMM.

## Acknowledgements

Our work was supported by the German Research Foundation project SuitAble (DFG, grant number 387670982) and by the Federal Ministry of Education and Research (BMBF, grant number 16SV8012). The authors would also like to acknowledge support by the state of Baden-Württemberg, Germany, through bwHPC and the German Research Foundation (DFG, INST 39/963-1 FUGG).
2304.01282
PEACH: Pre-Training Sequence-to-Sequence Multilingual Models for Translation with Semi-Supervised Pseudo-Parallel Document Generation
Multilingual pre-training significantly improves many multilingual NLP tasks, including machine translation. Most existing methods are based on some variants of masked language modeling and text-denoising objectives on monolingual data. Multilingual pre-training on monolingual data ignores the availability of parallel data in many language pairs. Also, some other works integrate the available human-generated parallel translation data in their pre-training. This kind of parallel data is definitely helpful, but it is limited even in high-resource language pairs. This paper introduces a novel semi-supervised method, SPDG, that generates high-quality pseudo-parallel data for multilingual pre-training. First, a denoising model is pre-trained on monolingual data to reorder, add, remove, and substitute words, enhancing the pre-training documents' quality. Then, we generate different pseudo-translations for each pre-training document using dictionaries for word-by-word translation and applying the pre-trained denoising model. The resulting pseudo-parallel data is then used to pre-train our multilingual sequence-to-sequence model, PEACH. Our experiments show that PEACH outperforms existing approaches used in training mT5 and mBART on various translation tasks, including supervised, zero- and few-shot scenarios. Moreover, PEACH's ability to transfer knowledge between similar languages makes it particularly useful for low-resource languages. Our results demonstrate that with high-quality dictionaries for generating accurate pseudo-parallel, PEACH can be valuable for low-resource languages.
Alireza Salemi, Amirhossein Abaskohi, Sara Tavakoli, Yadollah Yaghoobzadeh, Azadeh Shakery
2023-04-03T18:19:26Z
http://arxiv.org/abs/2304.01282v2
PEACH: Pre-Training Sequence-to-Sequence Multilingual Models for Translation with Semi-Supervised Pseudo-Parallel Document Generation ###### Abstract Multilingual pre-training significantly improves many multilingual NLP tasks, including machine translation. Most existing methods are based on some variants of masked language modeling and text-denoising objectives on monolingual data. Multilingual pre-training on monolingual data ignores the availability of parallel data in many language pairs. Also, some other works integrate the available human-generated parallel translation data in their pre-training. This kind of parallel data is definitely helpful, but it is limited even in high-resource language pairs. This paper introduces a novel semi-supervised method, SPDG, that generates high-quality pseudo-parallel data for multilingual pre-training. First, a denoising model is pre-trained on monolingual data to reorder, add, remove, and substitute words, enhancing the pre-training documents' quality. Then, we generate different pseudo-translations for each pre-training document using dictionaries for word-by-word translation and applying the pre-trained denoising model. The resulting pseudo-parallel data is then used to pre-train our multilingual sequence-to-sequence model, PEACH. Our experiments show that PEACH outperforms existing approaches used in training mT5 Xue et al. (2021) and mBART Liu et al. (2020) on various translation tasks, including supervised, zero- and few-shot scenarios. Moreover, PEACH's ability to transfer knowledge between similar languages makes it particularly useful for low-resource languages. Our results demonstrate that with high-quality dictionaries for generating accurate pseudo-parallel, PEACH can be valuable for low-resource languages. + Footnote †: Equal contribution ## 1 Introduction Machine Translation (MT) involves transferring a text from one language to another. Recent investigations have revealed that multilingual pre-training on a large corpus is profitable for NLP systems' performance on multilingual downstream tasks Liu et al. (2020); Lample and Conneau (2019); Conneau et al. (2020); Xue et al. (2021); Devlin et al. (2019) and knowledge transferability between languages Wu and Dredze (2019); K et al. (2020); Liu et al. (2020). Furthermore, using parallel data in pre-training encoder and encoder-decoder models effectively increases the models' performance in downstream tasks Lample and Conneau (2019); Chi et al. (2021). The existing pre-training approaches are mainly based on Masked Language Modeling (MLM) and its variations Liu et al. (2020); Raffel et al. (2020); Xue et al. (2021); Lewis et al. (2020). Although using parallel data in pre-training multilingual models improves their performance on downstream tasks, the amount of available parallel data is limited Tran et al. (2020). Moreover, MLM-based objectives for sequence-to-sequence (seq2seq) models usually ask the model to generate an output in the same language as input, which is not in the interests of translation tasks. Additionally, MLM-based objectives use shared subwords or alphabets between different languages to learn shared embedding spaces across them Lample and Conneau (2019); Lample et al. (2017); Smith et al. (2017); this would not be possible for languages without shared alphabets. 
Using dictionaries to define anchor points between different languages in cross-lingual pre-training of the encoder of seq2seq models has been investigated and shown to be effective for unsupervised translation Duan et al. (2020). Still, it has never been used as a method for pre-training multilingual seq2seq models. Our proposed method, Semi-Supervised Pseudo-Parallel Document Generation (SPDG), addresses the challenge of limited parallel data for low-resource languages by leveraging dictionaries to generate pseudo-parallel documents. SPDG adopts unsupervised translation techniques Kim et al. (2018); Lample et al. (2017) to generate a high-quality translation for each pre-training document. We use a pre-trained denoising seq2seq model with word reordering, adding, removing, and substituting to enhance the quality of the word-by-word translated document. The improved unsupervised translated text is used as the target text for training our multilingual seq2seq model, PEACH, using SPDG as a new pre-training method. SPDG enables the transfer of knowledge between similar languages, making it particularly useful for low-resource languages. Our experiments show that PEACH outperforms models pre-trained with mT5's MLM and mBART's MLM with Reordering objectives in English, French, and German. Additionally, PEACH demonstrates strong performance in zero- and few-shot scenarios. Moreover, we test our model on other multilingual tasks, such as natural language inference, to investigate the model's ability on this kind of task. Our results show that our model achieves a higher score on this task than the other objectives, which shows PEACH's ability to transfer knowledge between languages. The main contribution of this paper is twofold:

* We propose a novel semi-supervised pre-training method using bilingual dictionaries and pre-trained denoising models for seq2seq multilingual models.
* We show the benefits of the SPDG objective in translation, in supervised and zero- and few-shot cases, and in knowledge transfer between languages.

## 2 Related Work

Among the first endeavors in MT, dictionary- and rule-based methods were popular Dolan et al. (1993); Kaji (1988); Meyers et al. (1998), followed by Knowledge-Based Machine Translation (KBMT) and statistical methods Mitamara et al. (1993); Carbonell et al. (1981); Koehn (2009); Al-Onaizan et al. (1999). The popularity of neural machine translation has only grown in the recent decade with the introduction of the first deep neural model for translation Kalchbrenner and Blunsom (2013). While RNN-based seq2seq models seemed promising for neural machine translation Wu et al. (2016); Bahdanau et al. (2015); Sutskever et al. (2014), the advent of the transformer architecture Vaswani et al. (2017) plays an integral role in modern MT. With the introduction of the transformer architecture, pre-training general-purpose language models proved to be an effective way to improve different NLP tasks Devlin et al. (2019); Liu et al. (2019). In most cases, transformer models were asked to denoise a noisy input to learn a language Lewis et al. (2020); Devlin et al. (2019); Raffel et al. (2020). One of the most popular pre-training objectives for both encoder-only and encoder-decoder models is called Masked Language Modeling (MLM), in which the model should predict the masked part of a document and generate it in its output Raffel et al. (2020). However, many other objectives were also developed for encoder-decoder and encoder-only models Song et al. (2019); Clark et al. (2020).
Meanwhile, unsupervised methods for neural machine translation (NMT) using monolingual corpora, based on adversarial learning Lample et al. (2017) and transformer-based text denoising Kim et al. (2018), were tested and demonstrated promising outcomes. Using bilingual dictionaries for defining anchors in pre-training unsupervised translation models was successful Duan et al. (2020) but has never been used for generating data for supervised translation on a large scale. Our work differs from using dictionaries as anchor points for learning a better representation for tokens in the encoder Duan et al. (2020). We use dictionaries to generate a pseudo-translation of the source language in the target language instead of just defining some anchor points. Thus, during pre-training the model learns to generate a text in the target language based on input in the source language, using only monolingual data and dictionaries on a large scale. Pre-training task-specific models by generating pseudo-summaries was successful in some cases for summarization, question answering, and speech recognition Chen et al. (2017); Salemi et al. (2021); Zhang et al. (2020); Abaskohi et al. (2022), but, to the best of our knowledge, it has not been performed for pre-training encoder-decoder seq2seq models for supervised translation. On the other hand, the endeavors for pre-training specific models for translation ended up in training multilingual language models Xue et al. (2021); Liu et al. (2020). mT5 Xue et al. (2021) is trained with the MLM objective of T5 Raffel et al. (2020). In its pre-training objective, some spans of the input document are masked by specific tokens, and the model has to predict those spans by generating them in its output. mBART Liu et al. (2020) is another multilingual model based on the BART Lewis et al. (2020) model, pre-trained with the MLM with Reordering objective. In mBART's pre-training objective, the order of sentences in the input document is corrupted while a specific token masks some spans of the document. The model has to generate the original document in its output. PEACH is different from both mentioned models because we use a semi-supervised method to generate several pseudo-translations (one for each selected language) of each pre-training document. These translations are then used to pre-train PEACH. Furthermore, in the mentioned models the inputs and outputs are in the same language, while in our pre-training phase we ask the model to translate texts from one language to another.

## 3 PEACH

PEACH is a new sequence-to-sequence multilingual transformer model trained with SPDG, a semi-supervised pseudo-parallel document generation method. This section explains the pre-training objective and the model architecture.

### Semi-Supervised Pseudo-Parallel Document Generation (SPDG)

Our proposed pre-training objective, SPDG, generates a pseudo-translation of the input document. For generating pseudo-translations, we use Kim et al. (2018)'s approach for unsupervised translation with some modifications. Our pipeline for pre-training a model based on SPDG is shown in Figure 2. We pre-train a seq2seq denoising model for the target language using the pre-training corpus of that language. Next, for each pre-training document in the source language, we translate it to the target language word-by-word using dictionaries. Then, we give this word-by-word translated document to the pre-trained model with denoising objectives to improve its quality and restore missing words.
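As a rough illustration of these two stages, a sketch of the pseudo-translation call might look as follows; all names are placeholders and the authors' released implementation may differ:

```python
def pseudo_translate(document, lexicon, denoiser):
    """Two-stage SPDG pseudo-translation (sketch).

    1) Map each source token to a target-language token via a bilingual
       lexicon; tokens without an entry are simply dropped here.
    2) Let a denoising seq2seq model trained on target-language text
       reorder the draft and restore or substitute missing words.
    `denoiser` is any callable str -> str; in the paper it is the
    pre-trained denoising model for the target language.
    """
    draft = " ".join(lexicon[tok] for tok in document.split() if tok in lexicon)
    return denoiser(draft)

# Toy usage with an identity "denoiser" (stand-in for the real model).
lex = {"the": "der", "dog": "Hund", "sleeps": "schläft"}
print(pseudo_translate("the dog sleeps", lex, denoiser=lambda s: s))
```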
Using this method, we can generate the pseudo-translation of each pre-training document from the source language to the target language. We use these pseudo-translations as gold translations for each pre-training document to pre-train a new language model for translation tasks. Since this pre-training objective is similar to translation, we hypothesize that the pre-trained model learns the translation task faster than models trained using monolingual data.

**Word-by-Word Translation Using Dictionaries.** The first step to generate pseudo-parallel documents is to map sentences from one language to another using dictionaries. We used the bilingual dictionaries provided by Conneau et al. (2017) for our work. To map sentences word-by-word from one language to another, we first tokenize sentences using the NLTK1 library. Then, for each token, we find a translation for the token in the target language using a dictionary from the source to the target language. Some tokens, such as punctuation marks and numbers, do not need to be translated to the target language because they are shared between the two; therefore, we just put them in the translated words set. Furthermore, we cannot find any translation for named entities in dictionaries. To solve this issue, spaCy2 small (<lang>_core_news_sm) models for named entity recognition for each language are used to extract named entities. We transliterate the named entities and put them in the translated words set. Tokens without a translation in the dictionaries that are not named entities, punctuation, or numbers are skipped. We expect the denoising objectives to find an appropriate substitute for these tokens in the next step. The implementation details of word-by-word translation can be found in Appendix B.

Footnote 1: https://www.nltk.org/

**Improving Word-by-Word Translations with Denoising Objectives.** A critical problem with word-by-word translation is that the word order in the target language is not usually the same as in the source. Furthermore, some words in the source language might not have any translation in the target language, or vice versa. Additionally, since many words have multiple meanings, word-by-word translation might select the wrong translation for a word. We define four denoising objectives to overcome the mentioned challenges, and train a denoising model for each language. Since Easy Data Augmentation (EDA) has shown great impact on the performance of various tasks in NLP Wei and Zou (2019); Zhong et al. (2020); Abaskohi et al. (2022), our denoising objectives include the word addition, word erasing, word substitution, and shuffling objectives of EDA. The pipeline is shown in Figure 1. First, we shuffle the words in each sentence of a document while keeping the relative order of the shuffled words in different sentences of the document. Next, we remove, add, and replace some of the words in each sentence to encourage the model to resolve the aforementioned issues in word-by-word translation. We use the corrupted document as the model's input and ask the model to generate the original one as its output. The deshuffling objective aims to improve the ability of the model to reorder word-by-word translated documents. Removing and adding words helps the model to correct some translations. Moreover, replacing is especially beneficial for correcting the word-by-word translation of ambiguous words.
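A minimal sketch of this corruption step for training the denoising models is given below; the corruption rates are placeholders, whereas the paper derives language-specific values (Table 7, Appendix C):

```python
import random

def corrupt(sentence_tokens, vocab, p_del=0.1, p_add=0.1, p_sub=0.1, rng=None):
    """Create a noisy training input for the denoiser (sketch).
    Shuffles all words of a sentence, then deletes, substitutes, and
    inserts a small proportion of words; the model is trained to map
    the corrupted sentence back to the original. The rates here are
    assumptions for illustration only."""
    rng = rng or random.Random(0)
    toks = sentence_tokens[:]
    rng.shuffle(toks)                                     # word shuffling
    toks = [t for t in toks if rng.random() >= p_del]     # word removal
    toks = [rng.choice(vocab) if rng.random() < p_sub else t for t in toks]
    out = []
    for t in toks:                                        # word addition
        out.append(t)
        if rng.random() < p_add:
            out.append(rng.choice(vocab))
    return out

print(corrupt("we train a denoising model on monolingual text".split(),
              vocab=["the", "model", "data"]))
```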
Figure 2 depicts our pipeline for pre-training with SPDG on a single example. In the mentioned example, after word-by-word translation, some of the words in the pre-training document cannot be translated into German because they do not exist in the dictionary. Furthermore, the relative order of words in the word-by-word translated text is not grammatically correct, and some words can be substituted with more suitable ones. It can be seen that after applying the denoising model to the word-by-word translated text, the mentioned problems are resolved.

Figure 1: An overview of the denoising objectives used for training denoising models. We use word shuffling, addition, substitution, and removal based on the values in Table 7 in Appendix C.

Figure 2: An overview of our pre-training pipeline for training a model based on SPDG. The method uses the output of the word-by-word translation of a pre-training document as the input of the denoising model trained as in Figure 1 to improve its quality.

### Pre-Training with Multilingual SPDG

Most common multilingual models, such as mT5 (Xue et al., 2021) and mBART (Liu et al., 2020), use MLM and MLM with Reordering as their pre-training objectives. Despite their success, these objectives are not perfectly aligned with the goal of MT. Specifically, these objectives are designed to work on monolingual inputs; they denoise the input document in a specific language and produce the denoised version in the same language. Here, we design Algorithm 1, in which the pre-training task's input is in one language and its output is in another language. The algorithm's inputs are the corpora of all languages that the model should be trained on, as well as their names. The algorithm generates the input-output pairs for pre-training the multilingual model. In Algorithm 1, given a pre-training document, we generate a pseudo-translation of it into each of the other languages. Thus, the model can observe translations in different languages for a single document. This helps the model in learning cross-lingual knowledge even about a language not present in a specific training instance, because the model learns about language differences by translating the same input into multiple languages. It should be noted that, based on the goal of pre-training a language model for translation, it is possible to change Algorithm 1. For example, if the multilingual model is going to be used to translate only from or to English, there is no need to pre-train the model with the task of generating pseudo-translations from German to French. Since we are interested in evaluating our model on all pairs of the pre-training languages, we generate pseudo-translations for all pairs in Algorithm 1.
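A compact sketch of this pair-generation loop, as we read Algorithm 1 (our paraphrase; the actual algorithm additionally handles corpus iteration details not shown here):

```python
from itertools import permutations

def make_pretraining_pairs(corpora, pseudo_translate):
    """Yield (input, target, direction) pre-training examples.
    `corpora` maps a language name to its list of documents;
    `pseudo_translate(doc, src, tgt)` is the SPDG procedure sketched
    earlier. Every document is paired with a pseudo-translation into
    every other pre-training language, so the model sees the same
    input with outputs in several languages."""
    langs = list(corpora)
    for src, tgt in permutations(langs, 2):
        for doc in corpora[src]:
            yield doc, pseudo_translate(doc, src, tgt), f"{src}->{tgt}"

# Toy usage with a stand-in translator.
corpora = {"en": ["a small example document"], "fr": ["un petit document"]}
fake = lambda doc, s, t: f"<pseudo {s}->{t}> {doc}"
for pair in make_pretraining_pairs(corpora, fake):
    print(pair)
```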
**Architecture.** Our model, PEACH, and the other presented denoising models are all based on the transformer Vaswani et al. (2017) encoder-decoder architecture, with a 12-layer encoder and a 12-layer decoder, 768 hidden size, 3072 feed-forward filter size, and 12 self-attention heads.

## 4 Experiments

In this section, we compare the results of PEACH, trained with SPDG, with other common objectives utilized for pre-training multilingual models. To investigate the effectiveness of SPDG in comparison with common objectives, we pre-trained two other models based on mT5's MLM objective Xue et al. (2021) and mBART's MLM with Reordering objective Liu et al. (2020) in the same setup. The code for pre-training and fine-tuning of all models is publicly available on GitHub3.

Footnote 3: https://github.com/AmirAbaskohi/PEACH

### Pre-Training Data and Configuration

We pre-train PEACH on English, French, and German with the CC100 corpora Wenzek et al. (2020); Conneau et al. (2020). Due to the lack of computing power, we cannot use more than around \(550M\) words of text from each language. So, we train our model on around \(1.6B\) total words. Our pre-training batch size is 96, with a maximum of 512 input and output tokens, and we train for 500K steps on Google Colab TPUs (v2-8). The AdaFactor Shazeer and Stern (2018) optimizer with a decay rate of 0.8 and a dropout rate of 0.1 is used in pre-training and fine-tuning. Furthermore, we use the SentencePiece BPE algorithm Gage (1994); Kudo and Richardson (2018) to generate a vocabulary of 32K words for the denoising models and 96K for the multilingual models. We pre-train PEACH with Multilingual SPDG for 75% of its pre-training steps and with mT5's MLM Xue et al. (2021) approach for the other 25% of the pre-training steps. The latter pre-training objective is used because it increases the scope of the fine-tuning tasks that our model can do well. Indeed, multilingual SPDG teaches the model to transform a text from one language to another, but it does not help the model in tasks whose inputs and outputs are in the same language. Therefore, pre-training the model with MLM for a few steps is helpful. We train the denoising models with the same setup as PEACH. An important factor in training denoising models is the rate of corruption of the training documents. We shuffle all words in sentences while removing, adding, and replacing a small proportion of them. We use the word-by-word translation script outputs to decide on these rates. First, we calculate the rate of missing words in word-by-word translation using dictionaries from all languages to a specific language on around 1GB of text for each language. Then, we use a normal distribution with mean and standard deviation equal to the calculated numbers to define the rate of words that should be removed from a sentence. The values of the corruption rates for each language are reported in Table 7 in Appendix C, in which we explain the method to find the best values for the rates. Due to the lack of computing power, we cannot train a large-scale PEACH and compare it with pre-trained models like mT5 or mBART. Instead, we train two models based on the mT5 Xue et al. (2021) objective, which we call MLM, and the mBART Liu et al. (2020) objective, which we call MLM with Reordering, with the same setup as PEACH. Also, we fine-tune a Transformer model with randomly initialized weights on the downstream tasks.

### Results

This section evaluates PEACH in various translation scenarios, including supervised, zero- and few-shot. We also evaluate PEACH's ability for cross-lingual knowledge transfer in translation and natural language inference tasks.

**Supervised Translation.** In order to evaluate PEACH on translation tasks, we fine-tune it on the EN-DE and EN-FR parts of the WMT14 dataset (Bojar et al., 2014). Additionally, we fine-tune our model on the FR-DE part of the WMT19 dataset (Barrault et al., 2019) in the same setup. Since, to the best of our knowledge, the test set of the WMT19 DE-FR dataset is not publicly available, we evaluated the models on its validation set. The model is fine-tuned for 50K steps with a batch size of 96, a learning rate of \(5\times 10^{-5}\), and the same optimizer as in pre-training. We use 10K warmup steps for fine-tuning.
More information about the experiments' setup is reported in Appendix D. It should be noted that while translation downstream datasets usually have millions of samples, we use at most \(50000\times 96\) samples of them due to the lack of computing power. To support the selected number of samples for the downstream task, we report the pre-training and fine-tuning time on the whole datasets for an epoch in Appendix D. This sample count is less than 15% of the samples of the WMT14 English-French dataset. Additionally, since the primary purpose of this paper is to introduce a new method for pre-training multilingual models and the comparisons happen in the same setup for all objectives, the results are fair and valid. The results of our model and the other trained models on translation tasks are reported in Table 1. Additionally, the results of our model on the EN-FR downstream dataset at several pre-training steps are shown in Figure 3. Also, the results for the other downstream datasets are reported in Table 14 in Appendix E.

\begin{table} \begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**WMT14**} & \multicolumn{1}{c}{**WMT19**} \\ & **FR\(\leftrightarrow\)EN** & **DE\(\leftrightarrow\)EN** & **DE\(\leftrightarrow\)FR** \\ \hline **MLM** & \(21.38\leftrightarrow 21.64\) & \(17.88\leftrightarrow 19.54\) & \(16.59\leftrightarrow 16.54\) \\ **MLM with Reordering** & \(29.02\leftrightarrow 28.71\) & \(22.80\leftrightarrow 25.53\) & \(21.39\leftrightarrow 22.45\) \\ **Transformer** & \(9.15\leftrightarrow 9.17\) & \(10.02\leftrightarrow 9.79\) & \(9.16\leftrightarrow 10.31\) \\ \hline **PEACH** & **31.25\(\leftrightarrow\) 29.98** & **23.61\(\leftrightarrow\) 26.97** & **23.13\(\leftrightarrow\) 25.25** \\ \hline \hline \end{tabular} \end{table} Table 1: The supervised translation results evaluated with BLEU score.

Figure 3: PEACH's performance in pre-training steps on WMT14's EN-FR section. Results for EN-DE and DE-FR are reported in Table 14 in Appendix E.

The presented results show that PEACH outperforms the other models, not only with 500K steps of pre-training but even with its 200K-step pre-training checkpoint. Furthermore, the MLM method used in mT5 achieves worse results than the MLM with Reordering objective that mBART used. We believe this is because the MLM objective of mT5 just asks the model to generate the masked spans in the output, while mBART's objective asks the model to reorder and predict the masked spans of the input document simultaneously. Indeed, the objective of mBART asks the model to generate complete sentences in its output, and that is why it can generate better translations. On the other hand, mT5 just predicts spans of text, which are not complete sentences in many cases. We believe that the better results of our model stem from its pre-training objective, which is similar to translation tasks. Indeed, we pre-trained our model on a massive amount of pre-training data with a task similar to translation, which increases the model's ability in translation when it is fine-tuned with a smaller amount of translation samples.

To investigate the effect of pre-training on more than two languages on the performance of our model on translation tasks, we pre-train a model based on SPDG for 200K steps for each pair of languages, and fine-tune them for 50K steps, with the same setup as PEACH. The results are reported in Table 2. We show that our multilingual model with three languages outperforms the other models not only with full pre-training for 200K steps but also with 100K steps of pre-training. We believe this is because we perform the SPDG objective between each pair of languages in its pre-training. Indeed, this approach for pre-training multilingual models helps the model simultaneously gain knowledge about languages other than the pair of languages in each pre-training example, because it observes the same input with different outputs for each language. These results support our claim in Section 3.2.

\begin{table} \begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**WMT14**} & \multicolumn{1}{c}{**WMT19**} \\ & **FR\(\leftrightarrow\)EN** & **DE\(\leftrightarrow\)EN** & **DE\(\leftrightarrow\)FR** \\ \hline \(\mathbf{SPDG}_{EN\leftrightarrow FR}\) **(200k steps)** & \(25.98\leftrightarrow 25.42\) & \(-\) & \(-\) \\ \(\mathbf{SPDG}_{EN\leftrightarrow DE}\) **(200k steps)** & \(-\) & \(17.75\leftrightarrow 22.97\) & \(-\) \\ \(\mathbf{SPDG}_{FR\leftrightarrow DE}\) **(200k steps)** & \(-\) & \(-\) & \(16.24\leftrightarrow 18.77\) \\ \hline \(\mathbf{SPDG}_{EN\leftrightarrow FR\leftrightarrow DE}\) **(100k steps)** & \(27.40\leftrightarrow 26.60\) & \(21.21\leftrightarrow 23.89\) & \(20.49\leftrightarrow 22.32\) \\ \(\mathbf{SPDG}_{EN\leftrightarrow FR\leftrightarrow DE}\) **(200k steps)** & **29.04\(\leftrightarrow\) 28.08** & **22.33\(\leftrightarrow\) 25.29** & **21.67\(\leftrightarrow\) 23.29** \\ \hline \hline \end{tabular} \end{table} Table 2: Results of different models trained with SPDG on either two or three indicated languages. The number of pre-training steps is shown in parentheses.

**Zero- and Few-Shot Translation.** We evaluate the pre-trained models in a zero-shot setting to investigate our model's ability in low-resource scenarios. Each pre-trained model is evaluated on the test set of the WMT14 EN-FR dataset without fine-tuning. The results of this experiment are reported in Figure 4. The results for the EN-DE and DE-FR sections of WMT14 and WMT19 are reported in Table 15 in Appendix E. The results in Figure 4 and Table 15 show that our model, PEACH, outperforms the other models in zero-shot translation. We believe this stems from the similarity of its pre-training objective with actual translation tasks.

Figure 4: Comparing the pre-trained models in a zero-shot setting on the WMT14 EN-FR section. Results for EN-DE and DE-FR are reported in Table 15 in Appendix E.

For few-shot experiments, we fine-tuned PEACH on 50K samples from the English-French section of the WMT14 dataset for a maximum of 50K steps. The results are shown in Figure 5. Accordingly, PEACH outperforms the MLM with Reordering model trained in the same setup. Additionally, PEACH surpasses the MLM and MLM with Reordering models' checkpoints at 50K fine-tuning steps on around 5M samples, after only 10K and 25K steps of fine-tuning on 50K samples. We conclude that PEACH performs well in low-resource scenarios because it is trained on a massive amount of pseudo-translation data.

**Cross-Lingual Transfer for Translation.** Here we evaluate how each model fine-tuned on a language pair performs for other pairs and directions. We use the fine-tuned models of Table 1 for these experiments. The experimental results in Table 3 demonstrate that PEACH can transfer the knowledge learned from one language pair to another better than the MLM with Reordering model. We believe this stems from our pre-training method, in which we ask the model to generate pseudo-translations between each pair of languages.
Figure 5: Results of fine-tuning PEACH with 50K samples of the WMT14 EN-FR dataset for 0 to 50K steps, and its comparison with the MLM and MLM with Reordering objectives trained on \(50000\times 96\) data points. PEACH outperforms the fully-trained MLM models after only 25K fine-tuning steps.

Furthermore, the results confirm Liu et al. (2020)'s experiments and show that whenever a model fine-tuned on a dataset from A to B is evaluated on A to C, C to B, or B to A, the results on the evaluation dataset increase more than for other combinations. Additionally, because the inputs of PEACH's encoder are human-generated texts while the decoder's expected outputs are the outputs of the denoising models, fine-tuning from A to B increases the performance of C to B more than that of A to C. Indeed, fine-tuning from A to B helps the decoder of our model learn to generate better outputs by observing human-generated texts in its decoder. This is because our model did not encounter human-generated texts as gold labels in its output during pre-training. On the other hand, observing more human-generated inputs is not as helpful as human-generated outputs, since the inputs of the model's encoder were human-generated text during its pre-training. In support of the previous point, the results in Table 3 show that PEACH fine-tuned on the DE-EN dataset achieves better results than MLM fine-tuned on the FR-EN dataset, when evaluated on the FR-EN dataset. Additionally, PEACH fine-tuned on the EN-FR dataset achieves a result comparable with MLM with Reordering fine-tuned on the DE-FR dataset, when evaluated on the DE-FR dataset (0.54 difference in BLEU). We believe this experiment shows PEACH's ability to effectively transfer the knowledge learned from one language to another.

**Cross-Lingual Transfer for Natural Language Inference.** We focus on translation in this paper. However, we expect that PEACH's ability to transfer knowledge between languages is suitable for other cross-lingual scenarios as well. To test this hypothesis, we evaluate PEACH on the XNLI benchmark Conneau et al. (2018). We fine-tune our model for 50K steps with a batch size of 256, a learning rate of \(10^{-3}\), and a maximum output length of 16 on the MultiNLI English dataset Williams et al. (2018) and apply it to the XNLI benchmark. The results of this experiment are reported in Table 4. According to Table 4, PEACH outperforms the other models in transferring knowledge from English to German and French. Considering our pre-training objective, in which we ask the model to generate pseudo-translations for each pair of pre-training languages, we believe this objective helps PEACH to transfer the knowledge about the English dataset to other languages better than the other pre-trained models.

## 5 Conclusion

We introduced SPDG, a semi-supervised method for pre-training multilingual seq2seq models, to address the lack of parallel data between different languages. In this new method, we use bilingual dictionaries and denoising models trained with reordering, adding, substituting, and removing words to generate a pseudo-translation for each pre-training document. We use this generated data to train our multilingual model, PEACH, for the English, French, and German languages. Our results show that PEACH outperforms the common pre-training objectives for training multilingual models. Furthermore, PEACH shows a remarkable ability in zero- and few-shot translation and knowledge transfer between languages.
\begin{table} \begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**XNLI**} \\ & **EN** & **FR** & **DE** \\ \hline **MLM** &.676 &.480 &.463 \\ **MLM with Reordering** &.710 &.603 &.527 \\ \hline **PEACH** & **.745** & **.637** & **.636** \\ \hline \hline \end{tabular} \end{table} Table 4: The accuracy results on the XNLI benchmark. \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Fine-Tuned / Evaluated**} & \multicolumn{3}{c}{**PEACH**} & \multicolumn{3}{c}{**MLM with Reordering**} \\ & \multicolumn{3}{c}{**WMT14**} & \multicolumn{3}{c|}{**WMT19**} & \multicolumn{3}{c}{**WMT14**} & \multicolumn{3}{c}{**WMT19**} \\ & **FR\(\leftrightarrow\)EN** & **DE\(\leftrightarrow\)EN** & **DE\(\leftrightarrow\)FR** & **FR\(\leftrightarrow\)EN** & **DE\(\leftrightarrow\)EN** & **DE\(\leftrightarrow\)FR** \\ \hline \(EN\leftrightarrow FR\) & \(-\leftrightarrow 11.38\) & \(12.35\leftrightarrow 16.57\) & \(12.38\leftrightarrow 21.91\) & \(-\leftrightarrow 11.25\) & \(11.52\leftrightarrow 12.51\) & \(11.65\leftrightarrow 11.70\) \\ \(FR\leftrightarrow EN\) & \(11.30\leftrightarrow-\) & \(14.62\leftrightarrow 21.35\) & \(15.05\leftrightarrow 17.28\) & \(11.27\leftrightarrow-\) & \(12.88\leftrightarrow 12.99\) & \(12.68\leftrightarrow 11.28\) \\ \(EN\to DE\) & \(20.63\leftrightarrow 11.99\) & \(-\leftrightarrow 12.70\) & \(19.88\leftrightarrow 13.84\) & \(10.80\leftrightarrow 11.29\) & \(-\leftrightarrow 12.64\) & \(12.89\leftrightarrow 11.07\) \\ \(DE\to EN\) & \(18.97\leftrightarrow 24.54\) & \(13.39\leftrightarrow-\) & \(14.99\leftrightarrow 18.85\) & \(10.99\leftrightarrow 13.85\) & \(12.71\leftrightarrow-\) & \(11.23\leftrightarrow 11.09\) \\ \(FR\to DE\) & \(23.64\leftrightarrow 24.69\) & \(18.59\leftrightarrow 22.69\) & \(-\leftrightarrow 23.35\) & \(12.07\leftrightarrow 11.43\) & \(12.81\leftrightarrow 11.54\) & \(-\leftrightarrow 20.65\) \\ \(DE\to FR\) & \(24.88\leftrightarrow 24.94\) & \(20.12\leftrightarrow 20.74\) & \(23.03\leftrightarrow-\) & \(14.72\leftrightarrow 11.57\) & \(12.86\leftrightarrow 11.92\) & \(21.56\leftrightarrow-\) \\ \hline \hline \end{tabular} \end{table} Table 3: The results of experiments on cross-lingual knowledge transfer for translation. We fine-tune the model on one language and evaluate it on other languages. The results are reported using BLEU score. ### Limitations The main limitations of our work can be classified into two types: 1) SPDG's limitations and 2) Computational limitations. SPDG's LimitationsAlthough our method can address the issue of limited parallel data between different languages, it does not solve the problem completely. First, our method uses bilingual dictionaries to translate each pre-training document from one language to another, which is not always available for low-resource languages. Furthermore, the available dictionaries for low-resource languages do not have a high quality and are not comparable with high-resource languages. Additionally, we use Named Entity Recognition (NER) models to transfer named entities of each pre-training document into its pseudo-translation, which is unavailable for some low-resource languages. Therefore, using unsupervised methods for NER can be a solution for the mentioned problem, which is not investigated in this work. Computational limitationsWe did not have access to clusters of GPU or TPU to train our models on a large scale and compare them with the results reported in other papers about multilingual models. 
However, we tried to provide a realistic setting for our experiments. Further investigation into training models at a larger scale, the same as standard multilingual models, could improve this work.
2301.04524
Specific heat of a driven lattice gas
Calorimetry for equilibrium systems aims to determine the available microscopic occupation and distribution of energy levels by measuring thermal response. Nonequilibrium versions are expected to add information on the dynamical accessibility of those states. We perform calculations on a driven exclusion process on an array of particle stations, confirming that expectation. That Markov model produces a fermionic nonequilibrium steady state where the specific heat is computed exactly by evaluating the heat fluxes that are entirely due to a change in ambient temperature. We observe a zero-temperature divergence (violation of the Third Law) when the Fermi energy and the kinetic barrier for loading and emptying become approximately equal. Finally, when the kinetic barrier is density-dependent, a stable low-temperature regime of negative specific heat appears, indicating an anti-correlation between the temperature--dependence of the stationary occupation and the excess heat.
Pritha Dolai, Christian Maes
2023-01-11T15:42:25Z
http://arxiv.org/abs/2301.04524v2
# Towards many-body nonequilibrium calorimetry: specific heat of a driven lattice gas

###### Abstract

Calorimetry for equilibrium systems reveals the parameter-dependent occupation statistics of energy levels by measuring thermal response. Nonequilibrium versions are expected to add information on their dynamical accessibility. Calculations on a driven many-body system confirm that expectation. Modeling electrons hopping between quantum dots, the asymmetric exclusion process provides a Fermi Golden Rule approximation upon adding births and deaths at each dot. This yields a fermionic nonequilibrium steady state where the heat capacity is computed exactly by evaluating the heat fluxes entirely due to the changing of the ambient temperature. In particular, the heat capacity depends on the symmetric kinetic barrier for loading and emptying the quantum dot, invisible at equilibrium. Moreover, we find a zero-temperature dynamical phase transition, shown by the sudden divergence of the heat capacity when the Fermi energy and the kinetic barrier become approximately equal. The nonvanishing heat capacity at absolute zero, violating an extended Third Law, is caused by the relaxation times exceeding the dissipation time. Finally, when the kinetic barrier is density-dependent, a low-temperature regime of negative heat capacity appears, indicating an anti-correlation between the temperature dependence of the stationary occupation and the excess heat.

## I Introduction

Because of the absence of global thermodynamic potentials, the thermal properties of driven and active systems remain theoretically less understood. The application of thermodynamics to far-from-equilibrium systems is not only more problematic; it is also expected that the thermal features reveal kinetic information, especially at low temperatures. There is therefore a need to fill this theoretical gap and to provide a framework connecting heat with the statistical features of the dynamics, not just with the static fluctuations. For example, the kinetics of low-temperature transitions is in general based solely on empirical information [1]. Specific subjects include cold plasmas, which are often far from equilibrium [2], and electrolyte design, widely relevant in the field of nanotransport, which is reaching ultralow temperatures as well [3]. As a modest beginning, some simple model systems, by preference exactly solvable ones, can be expected to yield relevant conclusions here. As a matter of fact, from a fundamental perspective, nonequilibrium statistical mechanics in general lacks nonperturbative results. Specifically, theory focusing on low-temperature nonequilibrium physics remains largely unexplored [4]. However, there exists a framework of ideas and some results on steady state thermodynamics [5; 6; 7]. From there, a theory of nonequilibrium heat capacities has been started [8; 9; 10], with recent results including their effective computation [11; 12; 13] and low-temperature behavior [14]. Even then, nonequilibrium heat capacities have so far been computed explicitly for (effectively) independent particles only. The present paper extends that to driven lattice gases in a grand-canonical setup, where particles hop between lattice sites subject to exclusion, plus birth and death with density-dependent constraints. The latter summarizes the coupling to a thermochemical bath at chemical potential \(\mu\) and temperature \(T\). The bath is also the reservoir for dissipating the Joule heat due to the external particle-driving in the ring.
This is formalized via the condition of local detailed balance [15]. In this way, we have a periodic array of two-level systems in which a particle current is maintained; see Fig. 1(a). This yields a semiclassical description of transport along an array of quantum dots [16; 17; 18; 19], fed by an electron bath. Reading coupled two-level systems obviously has a great number of applications, e.g., in heat engines [20] and quantum devices [21; 22]. Their thermal properties at low temperatures are the subject of the present paper.

We start in the next section with a precise definition of the model and its mathematical description as a Markov jump process. The thermodynamic parameters are the chemical potential and the temperature of the uniform equilibrium environment. The stationary distribution is determined as a product of Fermi-Dirac distributions. In Section II.2 we introduce elements of nonequilibrium calorimetry. Here the central idea is the notion of excess heat, which is defined to be a geometric and operationally measurable quantity of "extra" heat that appears due to the quasistatic relaxation between two nonequilibrium steady states. In particular, changing the temperature \(T\to T+\mathrm{d}T\) gives rise to an excess heat \(\delta Q=C(T)\,\mathrm{d}T\), which defines the heat capacity \(C(T)\). We refer to [8; 9; 10; 11; 13; 14] for background, examples and more definitions. We obtain the heat capacity as a function of all thermodynamic and kinetic parameters, and for rings of arbitrary size, using an AC-calorimetric method which, we believe, is optimal for experimental measurements. The simplest case of a ring with three sites (quantum dots) is explicitly calculated in Section III using the notion of quasipotential, and Section IV is devoted to the general case. The result shows a heat capacity \(C(T)=C(T,\zeta)\) depending on the driving \(\zeta\) as well, and for \(\zeta=0\) we recover the equilibrium heat capacity of a two-level system. We investigate how the low-temperature heat capacity \(C(T\downarrow 0)\) depends on the chemical potential and the kinetic barriers for nonzero driving, thereby crucially deviating from the equilibrium behavior. By adding a kinetic barrier, we effectively screen the chemical bath and see a zero-temperature transition at some values of the chemical potential \(\mu\). In particular, the heat capacity diverges when the Fermi energy lies between two kinetically defined energies.

The basic reason for the divergence of the heat capacity at zero temperature is explained in Section V. There we deal with a zero-temperature dynamical phase transition which is governed by the relation between relaxation and dissipation times. The latter is inversely proportional to the current or dissipated power, and is infinite under equilibrium conditions. That is the reason why the transition only appears out of equilibrium. The relaxation times can be quite different for current-carrying versus static conditions. At low temperatures and depending on the chemical potential, either the fully occupied or the empty system is typical, but they do not participate in the irreversible dissipation. If there is a sufficiently strong kinetic barrier between those two states and the current-carrying states, then the phase transition happens.
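For later reference, the excess heat defining \(C(T)\) above can be made explicit. One common formalization, following the quasipotential framework of [8; 9; 10] (sign conventions differ between references; this form is our paraphrase), is

\[\delta Q \;=\; -\int_0^{\infty}\!\mathrm{d}t\,\Big[\langle\dot q\rangle_t-\langle\dot q\rangle^{s}\Big]\;=\;C(T)\,\mathrm{d}T,\]

where \(\langle\dot q\rangle_t\) is the expected instantaneous heat flux to the bath during the relaxation that follows the temperature step \(T\to T+\mathrm{d}T\), and \(\langle\dot q\rangle^{s}\) is its new stationary value; subtracting the steady dissipation isolates the transient, "extra" heat, which remains finite even though the total dissipated heat diverges in time.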
The present paper offers a new domain of exploration as it deals with low-temperature thermal properties _taking fully into account_ the driving condition. The approach is based on the idea that, since the nonequilibrium system shows a constant dissipation to the heat bath, the heat capacity must be computed as the _excess_ heat per unit temperature when quasistatically changing the bath temperature. Our modeling is semiclassical, with incoherent coupling to an electron bath in a Fermi Golden Rule approximation, but nonperturbative concerning the distance from equilibrium. Note in particular that we are not investigating here the equilibrium heat capacity of QD-clusters and how quantum confinement modifies e.g. the Einstein-Debye behavior of quantum solids at low temperatures [24], nor do we wish to compute the equilibrium heat capacity for a semiconductor-QD in a magnetic field [25; 26]. Our work focuses on the influence and interest of a genuine and possibly very strong nonequilibrium driving.

### Dynamics

We consider a ring with \(N\) sites (quantum dots), where each site \(x=1,2,\ldots,N\) (with periodic boundary conditions) can be occupied by at most one particle, occupation \(\eta_{x}=0,1\). See Fig. 1. The dynamics consists of two parts: the asymmetric hopping and the onsite exit and entry of particles, both in incoherent contact with a heat bath at inverse temperature \(\beta\) and with chemical potential \(\mu\). The inspiration is indeed quantum dots, which offer the particularly useful possibility of attaching current and voltage leads. The hopping to nearest-neighbor sites is determined by the transition rates

\[k_{x,x+1}(\eta)=\frac{\nu}{1+e^{-\beta\zeta}}\;\eta_{x}(1-\eta_{x+1})+\frac{\nu}{1+e^{\beta\zeta}}\;\eta_{x+1}(1-\eta_{x}) \tag{1}\]

for exchanging the occupation at sites \(x\leftrightarrow x+1\), where \(\zeta\) stands for the work done by a driving force to move one electron from one dot to a neighboring one, say in the clockwise direction. See Fig. 1(b). At very low temperatures (large \(\beta\)), the rate to move a particle from \(x\) to \(x+1\) equals the constant \(\nu\), and the rate for moving it from \(x+1\) to \(x\) can be approximated by \(\nu\,e^{-\beta\zeta}\). We note the exclusion between the particles, which keeps the array dynamics fermionic. The frequency \(\nu\) in (1) sets a timescale.

A quantum dot also has a charging energy, required to add or remove a single electron from the dot. The loading and emptying of the quantum dot at \(x\) is modeled as a birth and death process; see Fig. 1(c). At rate \(\alpha(x,\eta)=e^{-\beta\Delta(x,\eta)}\) the particle is removed from the system, and with rate \(\delta(x,\eta)=\alpha(x,\eta)\,e^{\beta\mu}\) the particle enters, which is summarized in the transition rate

\[k_{x}(\eta)=\tilde{\nu}[\,\alpha(x,\eta)\eta_{x}+\delta(x,\eta)(1-\eta_{x})\,] \tag{2}\]

for the transition \(\eta_{x}\to 1-\eta_{x}\).

Figure 1: (a) Cartoon of a periodic array of QD in an electron bath characterized by an ambient low temperature \(T\) and chemical potential \(\mu\). The driving \(\zeta\) breaks time-reversal invariance and a steady electronic current is maintained. (b) Scheme of transitions with their rates. There is a clockwise bias along the circuit. (c) QD as a two-level system for negative Fermi energy. With kinetic barrier \(\Delta\) and chemical potential \(\mu\), the QD is charged at rate \(\delta=e^{-\beta\Delta}e^{\beta\mu}\), and at rate \(\alpha=e^{-\beta\Delta}\) it is emptied.
The \(\Delta(x,\eta)\geq 0\) is a configuration-dependent kinetic barrier, which we write as

\[\Delta(x,\eta)=\begin{cases}\Delta&\text{when }I[\eta_{x+1},\eta_{x-1}]=1\\ 0&\text{otherwise}\end{cases} \tag{3}\]

equal either to \(\Delta\geq 0\) or to zero, depending on the indicator \(I[\eta_{x+1},\eta_{x-1}]\). For example, when \(I[\eta_{x+1},\eta_{x-1}]=(1-\eta_{x+1})(1-\eta_{x-1})\), the kinetic barrier at \(x\) is only effective when the two nearest neighbors of \(x\) are empty. In other words, the births and deaths at \(x\) then get facilitated when at least one neighbor is occupied. It adds a local interaction, in the form of a density-dependent charging/emptying of the QD. We put \(\alpha=e^{-\beta\Delta}\) and \(\delta=\alpha\,e^{\mu\beta}\), as in Fig. 1(c). We notice that \(\delta(x,\eta)=\alpha(x,\eta)\,e^{\mu\beta}\) as well, which is an instance of local detailed balance; see [15]. The chemical potential \(\mu\) of the electron heat bath can also be interpreted as an energy difference between two levels, and is taken temperature-independent. That allows some abuse of notation, to take the chemical potential equal to the (zero-temperature) Fermi energy, \(\mu=\epsilon_{F}\), as we focus on low temperatures. See again Fig. 1(c). The particles are noninteracting except for the exclusion and the configuration-dependent kinetic barriers. In the case of electrons, the Coulomb repulsion is indeed not thought to be significant for thermal properties [27].

The Pauli-Master Equation for the time-dependent probability \(P_{t}(\eta)\) is

\[\frac{\mathrm{d}}{\mathrm{d}t}P_{t}(\eta)=\sum_{x=1}^{N}[k_{x,x+1}(\eta^{x,x+1})P_{t}(\eta^{x,x+1})-k_{x,x+1}(\eta)P_{t}(\eta)]+\sum_{x=1}^{N}[k_{x}(\eta^{x})P_{t}(\eta^{x})-k_{x}(\eta)P_{t}(\eta)] \tag{4}\]

where \(\eta^{x,x+1}\) is the configuration obtained from \(\eta\) after exchanging the occupations between sites \(x\) and \(x+1\), and \(\eta^{x}\) is obtained by flipping the occupation at \(x\). By putting the left-hand side of (4) equal to zero, it is verified that

\[P^{s}(\eta)\propto\exp[\beta\mu\sum_{x}\eta_{x}] \tag{5}\]

is the stationary probability distribution. There is no detailed balance, but \(P^{s}(\eta)\) is independent of the driving \(\zeta\) and is a product distribution determined by the steady density \(\rho=\)Prob\([\eta_{x}=1]\),

\[\rho=\frac{\delta(x,\eta^{x})}{\alpha(x,\eta^{x})+\delta(x,\eta^{x})}=\frac{1}{\frac{\alpha}{\delta}+1}=\frac{1}{e^{-\mu\beta}+1}\]

In that way, the stationary occupation probability is a product of Fermi-Dirac distributions, independent of all kinetic and nonequilibrium parameters. That is the main simplification of the present model, which allows the exact computation of heat capacities. However, because of the driving, the heat capacities will be very different from equilibrium.

### Calorimetry

Calorimetry starts from the First Law of Thermodynamics. When in configuration \(\eta\), the heat flux \(\dot{q}(\eta)\) to the heat bath has two contributions:

\[\dot{q}(\eta)=\mu\,\tilde{\nu}\sum_{x}\alpha(x,\eta)[e^{\mu\beta}\,(1-\eta_{x})-\,\eta_{x}]+\nu\,\zeta\,\Gamma\sum_{x}\eta_{x}(1-\eta_{x+1}) \tag{6}\]

where \(\Gamma:=\Gamma(\beta\zeta):=\sinh\beta\zeta\,(1+\cosh\beta\zeta)^{-1}\). The first term in (6) refers to the rate of change of the energy \(E(\eta)=-\mu\sum_{x}\eta_{x}\) by loading or emptying the QD. The second term is the work done on the particles by the driving force.
In that sense, (6) expresses conservation of energy. For stationary expectations, we use the notation \(\langle f\rangle^{s}=\sum_{\eta}P^{s}(\eta)\,f(\eta)\). The stationary heat flux to the environment is

\[\dot{q}^{s}=\langle\dot{q}\rangle^{s}=\sum_{\eta}\dot{q}(\eta)P^{s}(\eta)=N\,\mu\,\tilde{\nu}[\langle\delta(x,\eta)\rangle^{s}\,(1-\rho)-\langle\alpha(x,\eta)\rangle^{s}\,\rho]+N\,\zeta\,\nu\,\rho(1-\rho)\,\Gamma \tag{7}\]

which depends on the steady expectations

\[\langle\delta(x,\eta)\rangle^{s}=p\,e^{(\mu-\Delta)\beta}+(1-p)\,e^{\mu\beta},\quad\langle\alpha(x,\eta)\rangle^{s}=p\,e^{-\Delta\beta}+(1-p)\]

in terms of the stationary probability \(p:=\langle I[\eta_{x-1},\eta_{x+1}]\rangle^{s}\) of satisfying the kinetic constraint. For example, when \(p=1\) (always a kinetic constraint), the heat flux to the environment (Joule heating) equals

\[\dot{q}^{s}=\frac{N\,\zeta\,\nu\,e^{\beta\mu}}{(1+e^{\mu\beta})^{2}}\,\Gamma(\zeta\beta) \tag{8}\]

Next, we define the quasipotential \(V(\eta)\) as the function of configurations \(\eta\) which has vanishing stationary expectation \(\langle V\rangle^{s}=0\), and satisfies

\[LV(\eta)=\dot{q}^{s}-\dot{q}(\eta) \tag{9}\]

for the backward generator

\[LV(\eta)=\sum_{x=1}^{N}k_{x,x+1}(\eta)\left[V(\eta^{x,x+1})-V(\eta)\right]+\sum_{x=1}^{N}k_{x}(\eta)\left[V(\eta^{x})-V(\eta)\right]\]

In the case of equilibrium, \(\zeta=0\), \(\dot{q}(\eta)=-LE\,(\eta)\) and \(V(\eta)=E(\eta)-\langle E\rangle^{s}\), where \(\langle E\rangle^{s}\) is then the equilibrium value of the energy. In all cases, the heat capacity \(C(T)\) is the derivative (with \(\beta^{-1}=T,k_{B}=1\)),

\[C(T)=\beta^{2}\left\langle\frac{\mathrm{d}V}{\mathrm{d}\beta}\right\rangle^{s} \tag{10}\]

For the origin of these formulae, see [8; 9; 10; 14]. The main point is to realize that the quasipotential \(V\) in (9) equals

\[V(\eta)=\int_{0}^{\infty}\,\mathrm{d}t\,\left[\langle\dot{q}(\eta_{t})\,|\,\eta(0)=\eta\rangle-\dot{q}^{s}\right] \tag{11}\]

where \(\langle\dot{q}(\eta_{t})\,|\,\eta(0)=\eta\rangle\) is the expected heat flux at time \(t\) when starting the process, at time zero, in configuration \(\eta\). Note how the integrand in (11) is an excess dissipated power. The computational challenge lies mainly in solving the linear equations (9) for the quasipotential \(V\). For small systems we can do that by hand, as illustrated in the next section, and otherwise we use numerically-assisted diagonalization of \(L\). There is however another method, called AC-calorimetry, introduced for nonequilibrium systems in [10] and applied to active particles in [13], which is numerically more stable and which is applied in Section IV for solving the problem for arbitrary ring sizes. That makes the driven fermionic array exactly solvable for its thermal properties.

## III Ring with three sites

We start by illustrating the steps of the previous section for the smallest-size system, \(N=3\) dots in Fig. 1(a). There are 8 states, for a possible total of 0, 1, 2 or 3 particles occupying the system. By the symmetry of that small-sized system, we are allowed to work with those 4 occupation classes and denote by \(s_{0},s_{1},s_{2}\) and \(s_{3}\) respective configurations from those classes.
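This recipe can also be carried out numerically by brute force over the \(2^{N}\) configurations: build the generator \(L\), solve (9) in the least-squares sense (pinning \(\langle V\rangle^{s}=0\)), and differentiate as in (10). The following numpy sketch does exactly that for the uniform kinetic barrier (\(I\equiv 1\)); the function names and the finite-difference step are illustrative choices of ours, not part of the model.

```python
import itertools
import numpy as np

def build_model(N, beta, mu, zeta, Delta, nu=1.0, nut=1.0):
    """Backward generator L, heat flux (Eq. 6) and stationary
    distribution (Eq. 5) for the N-site ring, uniform barrier."""
    states = list(itertools.product((0, 1), repeat=N))
    idx = {s: k for k, s in enumerate(states)}
    M = len(states)
    L, qdot = np.zeros((M, M)), np.zeros(M)
    Gamma = np.sinh(beta * zeta) / (1.0 + np.cosh(beta * zeta))
    alpha, delta = np.exp(-beta * Delta), np.exp(beta * (mu - Delta))
    for s in states:
        k = idx[s]
        for x in range(N):
            y = (x + 1) % N
            # asymmetric exchange of occupations x <-> x+1, Eq. (1)
            r = (nu / (1 + np.exp(-beta * zeta)) * s[x] * (1 - s[y])
                 + nu / (1 + np.exp(beta * zeta)) * s[y] * (1 - s[x]))
            if r > 0:
                t = list(s); t[x], t[y] = s[y], s[x]
                L[k, idx[tuple(t)]] += r; L[k, k] -= r
            # onsite birth/death, Eq. (2)
            r = nut * (alpha * s[x] + delta * (1 - s[x]))
            t = list(s); t[x] = 1 - s[x]
            L[k, idx[tuple(t)]] += r; L[k, k] -= r
            # heat flux to the bath, Eq. (6)
            qdot[k] += mu * nut * (delta * (1 - s[x]) - alpha * s[x])
            qdot[k] += nu * zeta * Gamma * s[x] * (1 - s[y])
    P = np.exp(beta * mu * np.array([sum(s) for s in states]))
    return L, qdot, P / P.sum()

def quasipotential(N, beta, mu, zeta, Delta):
    L, qdot, P = build_model(N, beta, mu, zeta, Delta)
    V = np.linalg.lstsq(L, P @ qdot - qdot, rcond=None)[0]  # Eq. (9)
    return V - P @ V, P                                     # <V>^s = 0

def heat_capacity(N, beta, mu, zeta, Delta, h=1e-5):
    Vp, _ = quasipotential(N, beta + h, mu, zeta, Delta)
    Vm, _ = quasipotential(N, beta - h, mu, zeta, Delta)
    _, P = quasipotential(N, beta, mu, zeta, Delta)
    return beta**2 * (P @ (Vp - Vm)) / (2 * h)              # Eq. (10)

# for N = 3 this is expected to agree with 3 c(T) from Eq. (12)
print(heat_capacity(N=3, beta=5.0, mu=-1.0, zeta=1.0, Delta=0.5))
```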
### Configuration-independent kinetic barrier

In the case that the kinetic barrier holds for all birth/death transitions, \(I[\eta_{x-1},\eta_{x+1}]\equiv 1\), we have from (6) the expected heat fluxes

\[\dot{q}(s_{0})=3\mu\delta,\qquad\dot{q}(s_{1})=-\mu\alpha+2\mu\delta+\zeta\nu\Gamma\]
\[\dot{q}(s_{2})=-2\mu\alpha+\mu\delta+\zeta\nu\Gamma,\qquad\dot{q}(s_{3})=-3\mu\alpha\]

The stationary heat flux (7) is

\[\dot{q}^{s}=\langle\dot{q}\rangle^{s}=3\zeta\nu\Gamma\frac{\alpha\delta}{(\alpha+\delta)^{2}}\]

The quasipotential satisfies (9), with solution plotted in Fig. 2(a-b) for a particular choice of parameters. Note that \(V(s_{3})>V(s_{2})>V(s_{1})>V(s_{0})\) are in the same order as their respective energies for \(\mu<0\).

Figure 2: (a)-(b) Quasipotentials for uniform barrier height, \(\Delta=0.5\), \(\zeta=1.0\) and \(\mu=-1.0\). It corresponds to the orange curve in Fig. 3(a).

The specific heat, the heat capacity divided by the number of QD, is obtained from (10),

\[c(T)=\frac{1}{3}C(T)=\frac{\beta^{2}\mu^{2}e^{-\beta\mu}}{(1+e^{-\beta\mu})^{2}}-\beta^{2}\mu\zeta\frac{\nu}{\tilde{\nu}}\,\frac{(e^{\beta(\Delta-3\mu)}-e^{\beta(\Delta-2\mu)})}{(1+e^{-\beta\mu})^{4}}\tanh(\beta\zeta/2)\geq 0 \tag{12}\]

This is plotted in Fig. 3. Fig. 3(a) shows the \(\mu\)-dependence. Note the divergence when the chemical potential gets small, e.g., when \(0<|\mu|<\Delta/2\). The diverging heat capacity implies that the Third Law (or Nernst postulate for nonequilibria) is violated in that regime of chemical potentials, as indicated in the inset of Fig. 3(a). The interesting physics happens indeed when the Fermi energy \(\epsilon_{F}\) becomes of the order of the barrier energy \(\Delta\): observe that, at least when \(\mu\,\Delta\,\zeta\,\nu\neq 0\), the specific heat diverges, \(c(T\downarrow 0)\rightarrow\infty\), for \(-\Delta<\mu=\epsilon_{F}<\Delta/2\). In all other cases, \(c(T\downarrow 0)\to 0\) as in the Third Law [12; 14]. Note that the kinetic parameters \(\nu,\tilde{\nu},\Delta\) all sit in the second (nonequilibrium) term. The \(\zeta\)-dependence in Fig. 3(b) is of course entirely in the second term in (12) as well. We note from Fig. 3(b) the giant magnification of the peak for large \(\zeta\), as compared to the Schottky peak for an equilibrium two-level system. The peak temperature (inset of Fig. 3(b)) also saturates, at a lower value of the temperature, as \(\zeta\) grows. We can understand in full detail what is happening here and causing the divergence. It is due to the nonequilibrium nature of the dynamics, and more in particular to a phenomenon of dynamical localization; see more in Section V.

Figure 3: (a) Heat capacity for \(N=3\), for different chemical potentials for a uniform kinetic barrier \(\Delta=0.5\) and \(\zeta=1.0\). Inset: Heat capacity as a function of chemical potential \(\mu\) at a fixed inverse temperature \(\beta=15\), \(\Delta=0.5\) and \(\zeta=1.0\). This shows the divergence of heat capacity at low temperatures for a range of small \(|\mu|\neq 0\). (b) Heat capacity plotted for different values of driving (\(\zeta\)) at a fixed kinetic barrier \(\Delta=0.8\) and \(\mu=-1.0\). Inset: Position of the peak (\(T^{*}\)) in the \(C(T)\) plot as a function of \(\zeta\) for different chemical potentials at fixed \(\Delta=0.8\).

The low- and high-density asymptotics \(|\mu|\uparrow\infty\) of (12) are given by

\[c_{\rm low/high}(T)=\beta^{2}\mu^{2}e^{-\beta|\mu|} \tag{13}\]

and the equilibrium (thermodynamic) contribution dominates.
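Since (12) is in closed form, the claimed low-temperature behavior is easy to check numerically; a minimal sketch, assuming \(\nu=\tilde{\nu}=1\) by default:

```python
import numpy as np

def c_uniform(T, mu, zeta, Delta, nu=1.0, nut=1.0):
    """Specific heat per dot of Eq. (12), uniform kinetic barrier."""
    b = 1.0 / T
    eq = b**2 * mu**2 * np.exp(-b * mu) / (1 + np.exp(-b * mu))**2
    neq = (b**2 * mu * zeta * (nu / nut)
           * (np.exp(b * (Delta - 3 * mu)) - np.exp(b * (Delta - 2 * mu)))
           / (1 + np.exp(-b * mu))**4) * np.tanh(b * zeta / 2)
    return eq - neq

# c(T -> 0) diverges inside -Delta < mu < Delta/2 and vanishes outside
for mu in (-1.0, -0.3, 0.2):
    print(mu, [round(c_uniform(T, mu, 1.0, 0.5), 4) for T in (0.2, 0.1, 0.05)])
```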
### Density-dependent kinetic barrier

We now install a configuration-dependent barrier by taking \(I[\eta_{x-1},\eta_{x+1}]=\eta_{x+1}\eta_{x-1}\), i.e., the reservoir gets screened with a factor \(e^{-\beta\Delta}\) in the birth and death rates if and only if both neighbors are occupied. The calculation proceeds along the same lines as before, with the quasipotentials now given in Fig. 4(a-b). The order \(V(s_{0})<V(s_{1})<V(s_{2})\) is as for the energies \(E(s_{i})=-\mu\,i\), since \(\mu<0\). Note however that now the full configuration has a negatively diverging quasipotential, \(V(s_{3})<0\). Recalling (11), we have that the difference in time-integrated heat flux to the environment during the relaxation,

\[V(s_{3})-V(s_{2})=\int_{0}^{\infty}\,{\rm d}t\,\left[\langle\dot{q}(\eta_{t})\,|\,\eta(0)=s_{3}\rangle-\langle\dot{q}(\eta_{t})\,|\,\eta(0)=s_{2}\rangle\right]\]

diverges to minus infinity as \(\beta\uparrow\infty\). That means that the barrier (which is between \(s_{2}\) and \(s_{3}\)) requires the heat bath to pump a lot of energy into the system to allow a decrease of energy. This paradoxical situation is at the origin of negative heat capacity, as we will show around (22). The specific heat (10) per QD becomes

\[c(T)=\frac{\beta^{2}\mu^{2}e^{-\beta\mu}}{(1+e^{-\beta\mu})^{2}}-\frac{\beta^{2}\mu\zeta\nu}{\tilde{\nu}}\frac{(e^{-5\beta\mu}+e^{-4\beta\mu}-e^{-3\beta\mu}-e^{\beta(\Delta-2\mu)})}{(1+e^{-\beta\mu})^{6}}\tanh(\beta\zeta/2) \tag{14}\]

The heat capacity is plotted in Fig. 5(a) for different values of \(\mu\). A new interesting feature appears: the heat capacity takes negative values. As for the zero-temperature limit \(\beta\uparrow\infty\), divergences appear: \(C\to-\infty\) for \(-\Delta/4<\mu<0\), and \(C\to\infty\) for \(0<\mu<\Delta/2\), when \(\mu\,\zeta\,\nu\,\Delta\neq 0\). The divergence for positive \(\mu\) is however much stronger; see the inset of Fig. 5(a). We discuss the deeper reasons for all that in Section V. The same can be done for a kinetic barrier obstructing birth and death at \(x\) when both neighbors are unoccupied, \(I[\eta_{x-1},\eta_{x+1}]=(1-\eta_{x+1})(1-\eta_{x-1})\).

Figure 4: (a)-(b) Quasipotentials when the kinetic barrier \(\Delta\) is active if the nearest neighbours are occupied. Here \(\Delta=0.8\), \(\zeta=1.0\) and \(\mu=-0.24\). There is \(\Delta\)-dependence in the birth and death rates when the nearest neighbors are occupied. This corresponds to the orange curve in Fig. 5(a).

Figure 5: Heat capacity for \(N=3\) with density-dependent kinetic barrier. (a) \(C(T)\) for different values of the chemical potential \(\mu\) at fixed driving \(\zeta=1.0\) and barrier \(\Delta=0.8\). The barrier at any dot is only there when both the neighboring sites are occupied. Inset: Heat capacity plotted as a function of \(\mu\) at fixed inverse temperature \(\beta=20\), \(\Delta=0.8\), and \(\zeta=1.0\), showing the low-temperature divergence of the heat capacity. (b) \(C(T)\) for different \(\mu\) at fixed \(\Delta=1.0\) and \(\zeta=1.0\). In this case, birth and death rates are obstructed when both the neighboring sites are empty. Inset: Heat capacity as a function of \(\mu\) at fixed inverse temperature \(\beta=20\), \(\Delta=1.0\) and \(\zeta=1.0\).
After the analogous calculation, the specific heat per QD now becomes

\[c(T)=\frac{\beta^{2}\mu^{2}e^{-\beta\mu}}{(1+e^{-\beta\mu})^{2}}-\frac{\beta^{2}\mu\zeta\nu}{\tilde{\nu}}\frac{(e^{\beta(\Delta-5\mu)}+e^{-4\beta\mu}-e^{-3\beta\mu}-e^{-2\beta\mu})}{(1+e^{-\beta\mu})^{6}}\tanh(\beta\zeta/2) \tag{15}\]

Again, we see the possibility of negative heat capacities; see Fig. 5(b). Concerning the Nernst postulate, for \(\beta\uparrow\infty\), \(C\rightarrow\infty\) for \(-\Delta<\mu<0\) and \(C\rightarrow-\infty\) for \(0<\mu<\Delta/5\), if \(\mu\,\zeta\,\nu\,\Delta\neq 0\). Now, the divergence is much stronger at negative values of the chemical potential; see the inset of Fig. 5(b). Again, we postpone the explanation to Section V. Note that the low-density regime, where the exclusion principle plays a minor role, does not differ from (13).

The same method can be followed for every ring size \(N\), but it becomes of course computationally challenging to solve \(2^{N}\) linear equations. We therefore introduce another method that appears more flexible, is applicable for all \(N\), and, so we believe, gives the most promising experimental setup. We refer to [10; 13; 28; 29] for the background of that AC-calorimetric method that we next apply.

## IV AC-calorimetric method

For simplicity, we put \(\tilde{\nu}=1\) and work with a uniform kinetic barrier. The point of departure is a sinusoidal modulation of the temperature, \(T_{t}=T+\tilde{\epsilon}\sin\omega t\), at frequency \(\omega\) and small amplitude \(\tilde{\epsilon}\). As a consequence, the birth and death rates become \(\alpha_{t}=\alpha\,(1+\epsilon_{a}\sin\omega t),\ \delta_{t}=\delta\,(1+\epsilon_{d}\sin\omega t)\) with

\[\epsilon_{a}=\frac{\tilde{\epsilon}\Delta}{T^{2}},\;\;\epsilon_{d}=\frac{\tilde{\epsilon}(\Delta-\mu)}{T^{2}},\;\;\alpha=e^{-\Delta/T},\;\;\delta=\alpha e^{\mu/T} \tag{16}\]

Instead of (7), we have a time-dependent heat flux to the bath,

\[\dot{Q}(\rho_{t})=\zeta\nu\rho_{t}(1-\rho_{t})\Gamma(\beta_{t}\zeta)-\mu\,[\rho_{t}(\alpha_{t}+\delta_{t})-\delta_{t}] \tag{17}\]

obtained from (6) but averaged over the time-dependent density \(\rho_{t}\), which solves

\[\frac{\mathrm{d}\rho_{t}}{\mathrm{d}t}=-(\alpha_{t}+\delta_{t})\rho_{t}+\delta_{t} \tag{18}\]

Working in the complex domain of temperatures, it is long but straightforward to calculate the large-time and small-frequency limit,

\[\rho(t)=\frac{\delta}{(\alpha+\delta)}-\tilde{\epsilon}\frac{\delta}{(\alpha+\delta)^{3}}\bigg(\frac{\alpha\delta}{T^{2}}\bigg)\left[\omega+i(\alpha+\delta)\right]e^{i\omega t} \tag{19}\]

to linear order in \(\tilde{\epsilon}\). Finally, from general results [10; 13; 29], the out-of-phase component in \(\dot{Q}(\rho_{t})\) gives the heat capacity. In these limits indeed, the heat flux (17) verifies

\[-\dot{Q}(t)=-\dot{q}^{s}+\tilde{\epsilon}\left[B(T)\,\sin(\omega t)+C(T)\,\omega\,\cos(\omega t)\right] \tag{20}\]

That can be calculated exactly, and we find that the specific heat coincides with the expression (12); it thus gives the correct result for all \(N\), and all conclusions (and Fig. 3) remain unaltered.

## V Interpretations

There are three aspects of the exact results for the specific heat that we wish to highlight. First, as we already mentioned, the nonequilibrium driving \(\zeta\neq 0\) brings forward a dependence on kinetic parameters. That is, for example, in the second term of (12).
In other words, out-of-equilibrium heat capacity measurements reveal information about kinetics, e.g. about the presence of a kinetic barrier and whether that barrier is more or less obstructive at high _versus_ low density, as in the second terms of (14)-(15). That information is visible starting at order \(\zeta^{2}\) in the driving, and most clearly at lower temperature.

Because of the driving, the heat capacity can become negative. There is a general interpretation. As explained in (10), the heat capacity \(C(T)=-\langle\frac{\mathrm{d}V}{\mathrm{d}T}\rangle^{s}\) is obtained by taking minus the steady variation of the quasipotential (9) under a temperature change. Looking at Fig. 2, we see the quasipotential as a function of temperature, as computed for the purpose of Section III. In the case of a uniform kinetic barrier (Fig. 2), the quasipotentials are all decreasing with temperature, \(\frac{\mathrm{d}V}{\mathrm{d}T}(s_{i})<0\), which makes for a positive heat capacity. On the other hand, in the right plot of Fig. 4, we see the situation corresponding to the negative low-temperature heat capacity in Fig. 5. There, at low temperatures, the quasipotential with the largest variation has a positive temperature derivative, \(\frac{{\rm d}V}{{\rm d}T}(s_{3})>0\). That configuration is exactly the one where the kinetic barrier is active (two occupied neighbors). The other quasipotentials are quasi-constant at low temperatures.

More generally, and as was observed in [14], the heat capacity is a steady covariance between the quasipotential (9) and the stationary occupation statistics (5):

\[C(T)=\langle V\,;\,\frac{{\rm d}\log P^{s}}{{\rm d}T}\rangle^{s} \tag{21}\]

Since we know the stationary distribution (5) and since \(\langle V\rangle^{s}=0\), we can write that out more explicitly as

\[C(T)=-\frac{\mu\beta^{2}}{(1+e^{\mu\beta})^{N}}\,\sum_{\eta}V(\eta)\,{\cal N}(\eta)\,e^{\mu\beta{\cal N}(\eta)}\]

where \({\cal N}(\eta)\) denotes the number of particles in the system for configuration \(\eta\). Let us take the system size \(N=3\), which is fine since the heat capacity simply scales with \(N\). Then, using the notation of Section III,

\[C(T)=-3\frac{\mu\beta^{2}}{(1+e^{\mu\beta})^{3}}\,[e^{\mu\beta}V(s_{1})+2\,e^{2\mu\beta}\,V(s_{2})+e^{3\mu\beta}\,V(s_{3})]\]

To be specific, let us take \(\mu<0\) and \(\beta|\mu|\gg 1\), look at Fig. 4(a-b), and observe the negativity of \(V(s_{3})\) with a strong divergence at low temperatures, while \(V(s_{1})\) and \(V(s_{2})\) remain of order one. Whenever we have

\[V(s_{3})\simeq-v\;e^{(-2\mu+b)\,\beta} \tag{22}\]

for constants \(v>0\) and \(0<b<-\mu\), it follows that, to most significant order,

\[C(T)\simeq-3\beta^{2}\mu\,e^{3\mu\beta}\,V(s_{3})\simeq 3v\,\beta^{2}\mu\,e^{(b+\mu)\beta}<0\]

In the case of Fig. 4(b) (with \(\mu=-0.24\)), we find \(v=0.99,\ b=0.08\). Hence, the low-temperature negativity of the heat capacity in Fig. 5(a) (orange curve) follows when the system in its highest energy state (here, \(s_{3}\)) still must absorb much heat (as in (22)) to make the transition to the lower energies (here, \(s_{2}\)). In other words, negative heat capacity indicates a negative correlation between the quasipotential (9) and the change in stationary occupation statistics (5). Indeed, under nonequilibrium conditions, there may be a negative correlation between a heat-related (Clausius-like) entropy and the configurational (Boltzmann-like) entropy.
This never happens in canonical equilibrium, where both are related to the energy degeneracy and density of states, yielding the variance of the energy in the equilibrium case of (21).

Finally, we explain the dynamical phase transition as we vary the chemical potential, where the heat capacity jumps from zero to infinity at zero temperature. It is due to the growth of relaxation times beyond the dissipative time scale. (We refer to [14] for a general mechanism.) The point can be made quantitatively. For \(\mu<0\) the empty state (\(\sum_{x}\eta_{x}=0\)), and for \(\mu>0\) the filled state (\(\sum_{x}\eta_{x}=N\)), has the largest stationary probability, and overwhelmingly so at absolute zero. The current-carrying configurations (e.g. around half-filling) are therefore suppressed, which implies that the current goes to zero as \(e^{-\beta|\mu|}\) for \(\beta\uparrow\infty\). This sets the dissipation time \(\tau_{d}\propto\nu^{-1}\,e^{|\mu|\beta}\). On the other hand, the current-carrying configurations are separated from those dominant configurations by the kinetic barrier: for \(\mu<0\), it takes a time of the order \(\tau\propto\tilde{\nu}^{-1}\,e^{\beta\Delta}\) to get empty, and for \(\mu>0\), the relaxation time is \(\tau\propto\tilde{\nu}^{-1}\,e^{\beta(\Delta-\mu)}\) to get filled. The Third Law (in the extension [14]) holds when \(\tau<\tau_{d}\), which is violated for small \(|\mu|/\Delta\) when \(\mu,\Delta,\nu,\tilde{\nu},\zeta\) are all nonzero. At the values \(\mu=-\Delta\) and \(\mu=\Delta/2\) of the chemical potential, a zero-temperature transition in thermal properties occurs, unseen in equilibrium. That is exactly confirmed by the exact results, as also seen in the inset of Fig. 3(a).

When the kinetic barrier is not uniform, the same physics applies. When births and deaths get facilitated at low density, the empty configuration gets more accessible (and the full configuration less accessible), and the extended Third Law gets most violated at positive \(\mu\); see the inset of Fig. 5(a). Alternatively, when births and deaths are obstructed at low density, the full configuration becomes more accessible and the Third Law is most violated when \(\mu<0\). That explains the inset of Fig. 5(b).

## VI Conclusions

Applications of nonequilibrium physics abound at low temperatures. Calorimetric considerations for many-body nonequilibria are, however, only starting. From the exact results in the present paper, it follows that heat capacities may reveal important kinetic information, such as the presence of density-dependent barriers, and that the low-temperature thermal behavior is strongly affected by the location of the Fermi energy. In particular, a zero-temperature phase transition is observed where the heat capacity jumps from zero to infinity. The transition as a function of the Fermi energy locates the energy barrier between quantum dots and loads, and indicates where the relaxation time starts to exceed the dissipation time. The methods developed in the paper, such as AC-calorimetry for nonequilibrium systems, for which we provide proof of applicability, are also available for experimental explorations. In particular, we believe that for driven lattice gases (e.g. on optical lattices), nonequilibrium phase transitions at finite temperature will show divergences of the heat capacity as defined here as well. As a final remark, our main contribution need not be seen solely as a result in low-temperature electronics or hard-condensed matter physics.
In fact, we expect that coherent coupling with quantum loads will modify the results, at least in some detail. The methodology and interest also have a wider scope, opening up for exploration the thermal properties of many-body nonequilibrium fermionic materials. We are optimistic that such studies will prove a valuable tool, next to others such as spectroscopy, to scan static and dynamical degrees of freedom that get excited in steady nonequilibrium as a function of the ambient temperature.

**Acknowledgment:** We are indebted to Faezeh Khodabandehlou and to Karel Netocny for many clarifying discussions.
2308.08615
Scalable Lattice Sampling using Factorized Generative Models
Boltzmann distributions over lattices are pervasive in Computational Physics. Sampling them becomes increasingly difficult with increasing lattice-size, especially near critical regions, e.g., phase transitions in statistical systems and continuum limits in lattice field theory. Machine learning-based methods, such as normalizing flows, have demonstrated the ability to alleviate these issues for small lattices. However, scaling to large lattices is a major challenge. We present a novel approach called Parallelizable Block Metropolis-within-Gibbs (PBMG) for generating samples for any lattice model. It factorizes the joint distribution of the lattice into local parametric kernels, thereby allowing efficient sampling of very large lattices. We validate our approach on the XY model and the Scalar {\phi}^4 theory. PBMG achieves high acceptance rates and less correlated samples, and the observable statistics estimated from the samples match the ground truth. Moreover, PBMG significantly speeds up inference for large lattices as compared to HMC and plain MCMC algorithms.
Ali Faraz, Ankur Singha, Dipankar Chakrabarti, Vipul Arora
2023-08-16T18:18:22Z
http://arxiv.org/abs/2308.08615v3
# Scalable Lattice Sampling using Factorized Generative Models

###### Abstract

Boltzmann distributions over lattices are pervasive in Computational Physics. Sampling them becomes increasingly difficult with the increase in the number of dimensions, especially near critical regions, e.g., phase transitions or continuum limits. Conditional generative models are emerging as promising tools for sampling in critical regions. When conditioned on the parameters to be varied, they can be efficaciously extended to critical regions, without the need for retraining. However, current approaches do not scale well to large lattices. We present a novel approach called Parallelizable Block Metropolis-within-Gibbs (PBMG) for generating samples for any local lattice model. It factorizes the joint distribution of the lattice into local parametric kernels, thereby allowing efficient sampling of very large lattices. We optimize the model with reverse Kullback-Leibler divergence (RKLD) to avoid the need for ground truth samples. Since the local distributions are simpler, the model is not affected by mode collapse, which generally occurs while training with RKLD. We validate our approach on the XY model and the Scalar \(\mathbf{\phi^{4}}\) theory. PBMG achieves high acceptance rates, and the observable statistics estimated from the samples match the ground truth.

**Keywords:** Markov Chain Monte Carlo, Metropolis-Hastings, Lattice models, Proposal distribution, Scalable and Efficient Sampling

Lattice models in Physics are mathematical models that contain information in the form of a lattice \(\mathbf{\phi}\), characterized by a Boltzmann distribution defined with the help of a Hamiltonian \(H(\mathbf{\phi})\) or an action \(S(\mathbf{\phi})\),

\[p(\mathbf{\phi};\mathbf{\theta})\propto e^{-H(\mathbf{\phi};\mathbf{\theta})} \tag{1}\]

Each lattice site is a random variable that could be binary as in an Ising model, an angle as in an XY model, or real-valued as in scalar \(\phi^{4}\) theory. The statistical properties of these lattices vary with parameters \(\mathbf{\theta}\), showing drastic changes near certain regions of \(\mathbf{\theta}\) called critical regions. Although these distributions are known only up to a normalizing constant, they can be sampled using statistical methods such as MCMC (Markov Chain Monte Carlo) [1; 2]. While these methods provide convergence guarantees that the samples represent the target distribution \(p(\mathbf{\phi};\mathbf{\theta})\), they can be quite inefficient, especially in the critical regions. In the critical regions, the correlation between successive samples of the Markov chain (characterized by the autocorrelation time of the chain) becomes very large. This phenomenon is called critical slowing down [3; 4]. Certain algorithms, such as Swendsen-Wang [5], Wolff [6], worm [7], loop [8], directed loop [9] and HMC (Hamiltonian Monte Carlo) [10], make global MCMC updates to tackle the problem of critical slowing down. However, these methods do not scale well to large lattices. Generative machine learning (ML) methods are emerging as promising tools to sample Boltzmann distributions. ML methods, such as Gaussian mixture models (GMM) and normalizing flows (NF), that give exact model probabilities can be used in the Metropolis-Hastings (MH) algorithm to propose samples which are accepted or rejected. Such algorithms provide convergence guarantees too. NF-based methods have been used for \(\phi^{4}\), the 2D Ising model etc.
in statistical physics [11; 12; 13; 14; 15; 16] and for general Monte Carlo sampling [17]. Self-learning Monte Carlo [18; 19] and Restricted Boltzmann Machines [20] also aim at efficient sampling of lattices using ML methods. On the other hand, there are ML approaches that do not give exact model probabilities, and hence, do not guarantee convergence. These include likelihood-free methods, such as generative adversarial networks [21; 22; 23], and likelihood-based methods, such as variational autoencoders (VAEs) [24] and Boltzmann generators [25]. These methods produce one-shot samples, as opposed to Markov chains. To address critical slowing down, some ML methods [11] train the generative model for every \(\mathbf{\theta}\). Since training samples are not available for every \(\mathbf{\theta}\), Reverse Kullback-Leibler divergence (RKLD) based learning is used, which is, however, prone to mode collapse (when \(p(\mathbf{\phi};\mathbf{\theta})\) is multi-modal but the ML model fails to cover all the modes). Moreover, this approach depends on on-the-fly learning, which could be unstable. As an alternative, the use of conditional generative models is a promising approach [21; 26; 27], where the model learns from samples in non-critical regions and extrapolates or interpolates to critical regions. However, none of the above ML models scale well with lattice size because they model the entire lattice jointly.

We propose factorizing the joint distribution of the lattice and sampling using a Metropolis-within-Gibbs algorithm, with proposals generated by an ML model conditioned on \(\mathbf{\theta}\). The Gibbs algorithm has previously been used with ML models to sample large graphs [28], albeit not for sampling from explicit distribution functions. We propose the Parallelizable Block Metropolis-within-Gibbs (PBMG) algorithm, which models a given target distribution by factorizing it into local distributions, allowing it to scale well to high-dimensional distributions. The model is used to propose samples for the Metropolis-Hastings (MH) algorithm. We present this method for sampling from conditional Boltzmann distributions over large lattices. We use conditional generative models, e.g., conditional Gaussian mixture models (GMM) and conditional normalizing flows, to efficiently sample in critical regions. The conditional generative models are generally trained from given samples. However, since generating ground truth samples for large lattices is very difficult, we use RKLD-based training. The problem of mode collapse is avoided in PBMG because the local distributions are much simpler than the joint distribution.

To validate the proposed approach, we apply it to 2-D lattices, namely, the XY model from statistical Physics and the scalar \(\phi^{4}\) model from lattice field theory. These two models are described in detail in the Appendix. In the XY model, each lattice site is a spin value (an angle) that is locally dependent on its immediate neighbors. The lattice distribution is conditioned on the temperature \(T\) and exhibits a phase transition with respect to \(T\), where the magnetic susceptibility diverges. We apply PBMG with conditional rational quadratic spline (RQS) flows for XY lattices. In the scalar \(\phi^{4}\) theory, each lattice site is a real number locally dependent on its immediate neighbors. The lattice distribution is conditioned on parameters \(\lambda\) and \(m^{2}\). We study the phase transition with respect to \(\lambda\), where the susceptibility diverges.
We apply PBMG with conditional GMMs, conditioned on \(\lambda\) and \(m^{2}\), for \(\phi^{4}\) theory. The contributions of this work are multifaceted:

* The introduction of the PBMG algorithm, offering a robust and efficient means to sample Boltzmann distributions over lattices.
* The pioneering factorization of the joint distribution into local components, laying the foundation for scalable sampling in high-dimensional lattice systems.
* The strategic utilization of RKLD optimization to alleviate the need for ground truth samples while mitigating the risk of mode collapse.

## 1 Parallelizable Block Metropolis-within-Gibbs (PBMG)

Consider an \(N\)-dimensional probability distribution \(p(\phi_{1},\phi_{2},\ldots,\phi_{N})\). For a lattice, \(N\) is the number of lattice sites and \(\phi_{i}\) is the random variable (field, dipole, etc.) at site \(i\). We partition these sites into \(G\) partitions such that the distribution of a site \(i\) in a partition \(g\), conditioned on all sites \(j\notin g\), is independent of all sites \(i^{\prime}\in g\setminus\{i\}\). For implementing MCMC in general state-spaces, one needs to construct a Markov chain transition kernel \(p(\phi_{i}|\phi_{j\neq i})\) that keeps the target distribution \(p(\phi_{1},\phi_{2},\ldots,\phi_{N})\) invariant and is ergodic for this distribution. Such kernels can also be combined via composition. Keeping this in mind, let \(K_{i}\) be a transition kernel that updates the site \(i\in g\), keeping all other sites of the lattice the same. Then the combined kernel that changes all the sites in the partition \(g\) is

\[K_{g}=\prod_{i\in g}K_{i} \tag{2}\]

and the overall kernel for updating all the sites in a lattice is

\[K=\prod_{g}K_{g}\,. \tag{3}\]

Here, \(\prod\) denotes iterated composition of kernels. Such a composition is commonplace in deterministic-scan Gibbs samplers. The advantage of partitioning is that all the sites in the same partition can be sampled simultaneously, thereby making the process faster. The term "**Parallelizable Block**" in our algorithm's nomenclature originates from this inherent capacity to facilitate parallel sampling. Moreover, each kernel \(K_{i}\) need not be conditioned on all the sites outside the partition \(g\), but only on a small number of sites in a local neighbourhood of the site \(i\). Every site-kernel \(K_{i}\), for each \(i\in g\) and for every partition \(g\), is a Metropolis-Gibbs kernel, i.e., a Gibbs kernel with a Metropolis-Hastings accept-reject step. Metropolis-within-Gibbs algorithms, also known as conditional Metropolis-Hastings, are well studied in MCMC [29]. Now, let \(\boldsymbol{\phi}_{-i}\) denote the set of random variables at all the lattice sites excluding site \(i\), and let \(\boldsymbol{\psi}\) denote the given lattice parameters (e.g., temperature, coupling parameters). Let us define \(q(\phi_{i}^{(t)};\boldsymbol{\phi}_{-i},\boldsymbol{\psi},\boldsymbol{\theta})\) as the parametric proposal distribution, parameterized by \(\boldsymbol{\theta}\).
Then, the acceptance probability \(\alpha_{K_{i}}\) for any site-kernel \(K_{i}\), \(\forall\,i\in g\), is

\[\alpha_{K_{i}}=\frac{p(\phi_{i}^{(t+1)}|\boldsymbol{\phi}_{-i},\boldsymbol{\psi})}{p(\phi_{i}^{(t)}|\boldsymbol{\phi}_{-i},\boldsymbol{\psi})}\cdot\frac{q(\phi_{i}^{(t)}|\phi_{i}^{(t+1)},\boldsymbol{\phi}_{-i},\boldsymbol{\psi};\boldsymbol{\theta})}{q(\phi_{i}^{(t+1)}|\phi_{i}^{(t)},\boldsymbol{\phi}_{-i},\boldsymbol{\psi};\boldsymbol{\theta})} \tag{4}\]

We note that if we design a proposal such that

\[q(\phi_{i}^{(t+1)}|\phi_{i}^{(t)},\boldsymbol{\phi}_{-i},\boldsymbol{\psi};\boldsymbol{\theta})=q(\phi_{i}^{(t+1)}|\boldsymbol{\phi}_{-i},\boldsymbol{\psi};\boldsymbol{\theta}) \tag{5}\]

i.e., a proposal such that the sampling of the corresponding component is independent of the previous value of that component, then the proposal becomes an independent proposal (w.r.t. a component). In such a case, the acceptance rate is a direct measure of the performance of the proposal. However, since the proposal is independent only w.r.t. a component and not completely independent, a high acceptance rate cannot guarantee that the integrated autocorrelation time of the observables will be low. This is because the integrated autocorrelation time involves all the lattice points, not just a single component. Nevertheless, a high acceptance rate ensures that consecutive samples differ substantially from each other, which indirectly lowers the autocorrelation within the chain. This reduces our goal to achieving the maximum possible acceptance rate, i.e., an acceptance rate equal to 1. An acceptance rate of 1 is achieved when \(q(\phi_{i}^{(t+1)}|\boldsymbol{\phi}_{-i},\boldsymbol{\psi};\boldsymbol{\theta})\) is exactly the same as \(p(\phi_{i}^{(t+1)}|\boldsymbol{\phi}_{-i},\boldsymbol{\psi})\) and \(q(\phi_{i}^{(t)}|\boldsymbol{\phi}_{-i},\boldsymbol{\psi};\boldsymbol{\theta})\) is exactly the same as \(p(\phi_{i}^{(t)}|\boldsymbol{\phi}_{-i},\boldsymbol{\psi})\). This, essentially, reduces our goal to designing (or learning) a proposal that samples from the true conditional distribution as closely as possible. In order to achieve this goal, we use methods from probabilistic machine learning, such as normalizing flows and Gaussian mixture models. To decide which method to use, we need to first analyze the true conditional distribution for a single component in the two PDFs. In the next two sections, we apply the PBMG method to the XY model and the Scalar \(\phi^{4}\) theory. In each section, we first analyze the true conditional distribution for a single component in the corresponding PDF. Subsequently, we discuss the structure of the corresponding proposal distribution, the details of the training procedure, and the inference procedure.

## 2 Application to XY Model

### Target distribution

The Hamiltonian for the XY model is

\[H(\mathbf{\phi})=-\frac{1}{2}\sum_{\langle i,j\rangle}\big{[}\cos(\phi_{i,j}-\phi_{i+1,j})+\cos(\phi_{i,j}-\phi_{i,j+1})+\cos(\phi_{i,j}-\phi_{i-1,j})+\cos(\phi_{i,j}-\phi_{i,j-1})\big{]} \tag{6}\]

Here, \(\phi_{i,j}\) is the angular random variable with range \([0,2\pi)\) at the lattice site with coordinates \((i,j)\). We use periodic boundary conditions for the lattice.
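For reference, (6) transcribes directly into vectorized numpy; a minimal sketch (the function name and test values are illustrative, not the paper's code):

```python
import numpy as np

def xy_hamiltonian(phi):
    """H of Eq. (6): cosine couplings of every site to its four
    neighbours (periodic BCs), halved to undo the double counting."""
    H = 0.0
    for axis in (0, 1):
        for shift in (1, -1):
            H += np.sum(np.cos(phi - np.roll(phi, shift, axis=axis)))
    return -0.5 * H

rng = np.random.default_rng(0)
print(xy_hamiltonian(rng.uniform(0, 2 * np.pi, size=(16, 16))))
```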
The lattice distribution at a given temperature \(T\in\mathbb{R}\) is

\[p(\mathbf{\phi};T)\propto e^{-\frac{H(\mathbf{\phi})}{T}} \tag{7}\]

The local Hamiltonian for the \((i,j)\)th component \(\phi_{i,j}\) of the lattice vector \(\mathbf{\phi}\) is

\[H(\phi_{i,j})=-\big{[}\cos(\phi_{i,j}-\phi_{i+1,j})+\cos(\phi_{i,j}-\phi_{i,j+1})+\cos(\phi_{i,j}-\phi_{i-1,j})+\cos(\phi_{i,j}-\phi_{i,j-1})\big{]} \tag{8}\]

We see that the Hamiltonian of the \((i,j)\)th component depends only on the components of the four nearest neighbours, denoted by \(n(i,j)=\{(i+1,j),(i,j+1),(i-1,j),(i,j-1)\}\). Therefore, the conditional distribution of \(\phi_{i,j}\) given the four nearest neighbour components and the temperature is

\[p\left(\phi_{i,j}|\{\phi_{l,m}:(l,m)\in n(i,j)\},T\right)=p(\phi_{i,j}|\mathbf{v}_{i,j})\propto e^{-\frac{H(\phi_{i,j})}{T}} \tag{9}\]

The above conditional distribution is our target distribution. Here, \(\mathbf{v}_{i,j}=(\phi_{i+1,j},\phi_{i,j+1},\phi_{i-1,j},\phi_{i,j-1},T)\) is the 5x1 condition vector corresponding to the site \((i,j)\), which consists of the four nearest neighbour components and the temperature. For this model, we have divided the lattice into two partitions \(g_{0}\) and \(g_{1}\),

\[g_{k}=\{(i,j):(i+j)\%2=k\};\quad k=0,1 \tag{10}\]

Figure 1: Partitioning used for a lattice in the XY model. Here, the color white represents partition-1 and the color black represents partition-2.

### Modeling the Proposal distribution

We use normalizing flows to model the proposal distribution \(q(\phi_{i,j}|\mathbf{v}_{i,j};\boldsymbol{\theta})\). Here, \(p_{Z}(z|\mathbf{v}_{i,j};\boldsymbol{\theta}_{B})\) is the base distribution and \(T(z;\boldsymbol{\theta}_{R})\) is the invertible transformation used in the normalizing flow, with \(\boldsymbol{\theta}=\{\boldsymbol{\theta}_{B},\boldsymbol{\theta}_{R}\}\). Using the change of variables formula,

\[q(\phi_{i,j}|\mathbf{v}_{i,j};\boldsymbol{\theta})=p_{Z}\left(T^{-1}(\phi_{i,j};\boldsymbol{\theta}_{R})|\mathbf{v}_{i,j};\boldsymbol{\theta}_{B}\right)\left|\det\left(\frac{\partial T^{-1}(\phi_{i,j};\boldsymbol{\theta}_{R})}{\partial\phi_{i,j}}\right)\right| \tag{11}\]

We use Rational Quadratic Splines (RQS) as the transform \(T\). An RQS flow maps a fixed interval onto another fixed interval. The intervals are partitioned into \(K\) bins, where each bin is characterized by a rational-quadratic function that increases monotonically. Furthermore, these functions are characterized by \(K+1\) coordinates (referred to as knots) representing boundary values, the \(K-1\) derivative values \(\mathbf{s}_{knots}\) at internal knots, and the widths \(\mathbf{w}_{bins}\) and heights \(\mathbf{h}_{bins}\) of the \(K\) bins. For details on the RQS flow, one can refer to [30]. The coefficients governing the RQS transformation's behaviour are adaptable and learned through neural networks. The base distribution is chosen to be uniform, and the interval of the uniform distribution \(w_{int}\) is also learnable. We use four different neural networks to learn these parameters, as shown in Figure 2.

Figure 2: Proposal distribution for the XY model

The parameters of the RQS flow are obtained from the three neural networks \(NN_{2},NN_{3},NN_{4}\), and the interval width of the base distribution is obtained from \(NN_{1}\). In eq. 11, the combined set of parameters of \(NN_{2},NN_{3}\) and \(NN_{4}\) is denoted as \(\boldsymbol{\theta}_{R}\) and the set of parameters of \(NN_{1}\) is denoted as \(\boldsymbol{\theta}_{B}\).
The condition vector that is input to all the neural networks is \(\mathbf{v}\). The outputs of \(NN_{1}\) and \(NN_{4}\) are passed through a Softplus activation function to give the interval width of the base distribution and the \(K-1\) slopes at the intermediate knots, respectively. The outputs of \(NN_{2}\) and \(NN_{3}\) are passed through a \(w_{int}\)-scaled Softmax activation and a \(2\pi\)-scaled Softmax activation function to give the widths and the heights of the \(K\) bins, respectively. We have taken \(K\) to be \(8\). The architecture of each of the four neural networks is the same except for the output dimension, which depends on the output parameter. Each neural network consists of three hidden layers with \(200\) neurons each. We also use a dropout of \(0.3\) and a ReLU activation function at each layer except the final one.

### Training and Inference Procedure

The loss function used in the training procedure is the expected value of the KL divergence between the proposal \(q(\phi_{i,j}|\mathbf{v}_{i,j};\boldsymbol{\theta})\) and the target \(p(\phi_{i,j}|\mathbf{v}_{i,j})\), i.e., the true conditional distribution, taken over all possible values of the condition vector \(\mathbf{v}_{i,j}\). The first four components of \(\mathbf{v}_{i,j}\) lie in the interval \([0,2\pi]\) and the last component \(T\in[0.05,2.05]\). We will, therefore, sample \(\mathbf{v}_{i,j}\) from \(f(\mathbf{v}_{i,j})=\text{Unif}([0,2\pi]^{4}\times[0.05,2.05])\) to calculate the expectation.

\[\mathcal{L}=\mathbb{E}_{\mathbf{v}_{i,j}\sim f(\mathbf{v}_{i,j})}\left[\mathbb{E}_{z\sim p_{Z}(z|\mathbf{v}_{i,j};\boldsymbol{\theta}_{B})}\left[\log q(\phi_{i,j}|\mathbf{v}_{i,j};\boldsymbol{\theta})-\log p(\phi_{i,j}|\mathbf{v}_{i,j})\right]\right] \tag{12}\]

\[=\mathbb{E}_{\mathbf{v}_{i,j}\sim f(\mathbf{v}_{i,j})}\left[\mathbb{E}_{z\sim p_{Z}(z|\mathbf{v}_{i,j};\boldsymbol{\theta}_{B})}\left[\log p_{Z}(z|\mathbf{v}_{i,j};\boldsymbol{\theta}_{B})+\log|\det J_{T}(z|\mathbf{v}_{i,j};\boldsymbol{\theta}_{R})|^{-1}-\log p\left(T(z;\boldsymbol{\theta}_{R})|\mathbf{v}_{i,j}\right)\right]\right] \tag{13}\]

The Monte Carlo approximation can be used to estimate the above expectation as follows

\[\mathcal{L}\approx\frac{1}{n}\cdot\frac{1}{N}\sum_{r=1}^{n}\sum_{k=1}^{N}\left[\log p_{Z}(z_{k}|(\mathbf{v}_{i,j})_{r};\boldsymbol{\theta}_{B})+\log|\det J_{T}(z_{k}|(\mathbf{v}_{i,j})_{r};\boldsymbol{\theta}_{R})|^{-1}-\log p\left(T(z_{k};\boldsymbol{\theta}_{R})|(\mathbf{v}_{i,j})_{r}\right)\right] \tag{14}\]

The procedure for MCMC sampling using PBMG-XY is briefed in Algorithm 1 (a sketch is given below). Here, \(\mathbf{V}_{g}=[\mathbf{v}_{i,j}]_{(i,j)\in g}\).

## 3 Application to \(\phi^{4}\) Theory

The lattice \(\phi^{4}\) theory on a 2D lattice has undergone extensive investigation in both statistical mechanics and quantum field theory due to its intricate phase structure and critical behaviour. It exhibits second-order phase transitions, characterized by changes in system properties, such as the magnetization or correlation length, as the coupling parameters vary. For a thorough comprehension of the lattice \(\phi^{4}\) model, one can refer to the works [31; 32; 33]. In the following section, we will discuss the proposed ML-based sampling approach for the lattice \(\phi^{4}\) theory.
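The PBMG sampling loop of Algorithm 1 is shared verbatim between PBMG-XY above and PBMG-\(\phi^{4}\) below; the following is a minimal sketch of it for the XY case. All helper names are illustrative, and the uniform proposal is only a runnable stand-in: in the actual method, `proposal_sample` and `proposal_logq` would come from the trained conditional RQS flow (or the conditional GMM for \(\phi^{4}\)), evaluated sitewise on the condition vectors \(\mathbf{V}_{g}\).

```python
import numpy as np

rng = np.random.default_rng(1)

def neighbour_trig(phi):
    """Sum of cos and sin of the four neighbours of every site (periodic)."""
    rolls = [np.roll(phi, s, axis=a) for a in (0, 1) for s in (1, -1)]
    return sum(np.cos(r) for r in rolls), sum(np.sin(r) for r in rolls)

def local_logp(values, cx, sx, T):
    """log p(phi_ij | neighbours, T) up to a constant, Eqs. (8)-(9):
    -H(phi_ij)/T with the cosine sum expanded via angle addition."""
    return (np.cos(values) * cx + np.sin(values) * sx) / T

def pbmg_sweep(phi, T, proposal_sample, proposal_logq):
    """One PBMG sweep: both checkerboard partitions of Eq. (10), each
    updated fully in parallel with the MH correction of Eq. (4)."""
    i, j = np.indices(phi.shape)
    for k in (0, 1):
        mask = (i + j) % 2 == k
        cx, sx = neighbour_trig(phi)   # neighbours lie in the other partition
        prop = proposal_sample(phi.shape)
        log_acc = (local_logp(prop, cx, sx, T) - local_logp(phi, cx, sx, T)
                   + proposal_logq(phi) - proposal_logq(prop))[mask]
        accept = np.log(rng.uniform(size=log_acc.shape)) < log_acc
        site = phi[mask]
        site[accept] = prop[mask][accept]
        phi[mask] = site
    return phi

# stand-in independent proposal: uniform on [0, 2*pi), so log q is constant
unif_sample = lambda shape: rng.uniform(0, 2 * np.pi, size=shape)
unif_logq = lambda v: np.zeros_like(v)

phi = unif_sample((32, 32))
for _ in range(200):
    phi = pbmg_sweep(phi, 1.0, unif_sample, unif_logq)
```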
### Target Distribution

The Euclidean action for scalar \(\phi^{4}\) theory in 2D can be written as

\[S(\mathbf{\phi},\lambda,m^{2})=\sum_{i,j}\Bigl{(}m^{2}+4\Bigr{)}\phi_{i,j}^{2}-\phi_{i,j}[\phi_{i+1,j}+\phi_{i,j+1}+\phi_{i-1,j}+\phi_{i,j-1}]+\lambda\phi_{i,j}^{4} \tag{15}\]

where \((i,j)\) represents the indices of a lattice site and \(\phi_{i,j}\) is a real-valued random variable defined for every lattice site. The lattice distribution is given by the Boltzmann distribution law,

\[p(\mathbf{\phi}|\lambda,m^{2})\propto e^{-S(\mathbf{\phi},\lambda,m^{2})} \tag{16}\]

The local action for lattice site \((i,j)\) can be written as

\[S_{loc}(\phi_{i,j},\lambda,m^{2},\{\phi_{l,m}:(l,m)\in n(i,j)\})=S_{loc}(\phi_{i,j},\lambda,m^{2},\kappa_{i,j}) \tag{17}\]
\[=\Bigl{(}m^{2}+4\Bigr{)}\phi_{i,j}^{2}+\lambda\phi_{i,j}^{4}-2\phi_{i,j}\kappa_{i,j} \tag{18}\]

where \(\kappa_{i,j}=\phi_{i+1,j}+\phi_{i,j+1}+\phi_{i-1,j}+\phi_{i,j-1}\). We see that the local action for a lattice site depends on its four nearest neighbours, i.e., the field at each lattice site only interacts with its nearby lattice points. Therefore, the conditional distribution of the lattice site \((i,j)\) can be written as

\[p\big{(}\phi_{i,j}|\lambda,m^{2}+4,\kappa_{i,j}\big{)}=p\big{(}\phi_{i,j}|\mathbf{v}_{i,j}\big{)}\propto e^{-S_{loc}(\phi_{i,j},\mathbf{v}_{i,j})} \tag{19}\]

where \(\mathbf{v}_{i,j}=(\lambda,m^{2}+4,\kappa_{i,j})\) is the condition vector for the distribution. Eq. 19 represents the target distribution, which we model using the proposed method. For \(\phi^{4}\) theory as well, we have divided the lattice into the same two partitions \(g_{0}\) and \(g_{1}\), where

\[g_{k}=\{(i,j):(i+j)\%2=k\},\quad k=0,1 \tag{20}\]

### Modeling the Proposal Distribution: PBMG-\(\phi^{4}\)

We construct the proposal distribution for the scalar \(\phi^{4}\) theory using a Gaussian Mixture Model with six Gaussian components. The proposal distribution, parameterized by \(\boldsymbol{\theta}\), can be written as

\[q(\phi_{i,j}|\mathbf{v}_{i,j};\boldsymbol{\theta})=\sum_{k=1}^{6}\pi_{k}(\mathbf{v}_{i,j})\mathcal{N}(\phi_{i,j}|\mu_{k}(\mathbf{v}_{i,j}),\sigma_{k}(\mathbf{v}_{i,j})) \tag{21}\]

where \(\mu_{k}\), \(\sigma_{k}\) and \(\pi_{k}\) are the mean, standard deviation and mixing coefficient of the \(k^{th}\) Gaussian distribution. These parameters are functions of the condition vector \(\mathbf{v}_{i,j}=(\lambda,m^{2}+4,\kappa_{i,j})\). Each such function could be a neural network whose parameters can be optimized using a suitable loss function. The input to the \(k^{th}\) neural network is the condition vector \(\mathbf{v}_{i,j}\), and the outputs are the parameters \((\mu_{k},\log(\sigma_{k}),\pi_{k})\). The architectures of all six neural networks are the same, with the only difference lying in the initialization of the network parameters. In each neural network, we use three dense layers with 150 neurons in each layer, and a ReLU activation function at each layer except the final one. The neurons in the final layer use linear activation. Values of \(\log(\sigma_{k})\) greater than \(1\) are clipped to \(1\). Since \(\sum_{k}\pi_{k}=1\), the networks output logit values that are converted to \(\pi_{k}\) by applying softmax.

### Training and Inference Procedure

The training procedure of PBMG-\(\phi^{4}\) is similar to that of PBMG-XY.
The loss function used in the training procedure is the expected value of the KL divergence between the proposal \(q(\phi_{i,j}|\mathbf{v}_{i,j};\boldsymbol{\theta})\) and the target \(p(\phi_{i,j}|\mathbf{v}_{i,j})\), i.e., the true conditional distribution, taken over all possible values of the condition vector \(\mathbf{v}_{i,j}\), along with an \(L_{2}\) regularization term. The effect of the regularization term is that the mixing coefficients remain close to each other, which in turn stabilizes the training. We train our model for the following ranges of the parameters: \(\lambda\in[2.5,15]\), \(m^{2}\in[-8,0]\) and \(\kappa_{i,j}\in[0,3]\). We will, therefore, sample \(\mathbf{v}_{i,j}\) from \(f(\mathbf{v}_{i,j})=\text{Unif}([2.5,15]\times[-4,4]\times[0,3])\) to calculate the expectation, the middle interval being the corresponding range of \(m^{2}+4\). Here, \(\boldsymbol{\pi}(\mathbf{v}_{i,j})=[\pi_{k}(\mathbf{v}_{i,j})]_{k=1}^{6}\) and \(\|.\|\) represents the \(L_{2}\) norm.

\[\mathcal{L}=\mathbb{E}_{\mathbf{v}_{i,j}\sim f(\mathbf{v}_{i,j})}\left[\mathbb{E}_{\phi_{i,j}\sim q(\phi_{i,j}|\mathbf{v}_{i,j};\boldsymbol{\theta})}\left[\log\frac{q(\phi_{i,j}|\mathbf{v}_{i,j};\boldsymbol{\theta})}{p\big{(}\phi_{i,j}|\mathbf{v}_{i,j}\big{)}}\right]+\|\boldsymbol{\pi}(\mathbf{v}_{i,j})\|\right] \tag{22}\]

The Monte Carlo approximation to the above expression is

\[\mathcal{L}\approx\frac{1}{n}\sum_{r=1}^{n}\left[\frac{1}{N}\sum_{k=1}^{N}\left[\log q((\phi_{i,j})_{k}|(\mathbf{v}_{i,j})_{r};\boldsymbol{\theta})-\log p\big{(}(\phi_{i,j})_{k}|(\mathbf{v}_{i,j})_{r}\big{)}\right]+\left\|\boldsymbol{\pi}\left((\mathbf{v}_{i,j})_{r}\right)\right\|\right] \tag{23}\]

The procedure for MCMC sampling using PBMG-\(\phi^{4}\) is exactly the same as that of PBMG-XY, given in Algorithm 1.

## 4 Experiments for PBMG-XY

In this section, we evaluate the PBMG-XY model against the ground truth, which we generate using plain MCMC. We carry out the experiments for five lattice sizes, i.e., L = 8, 16, 32, 64, 128. For each lattice size, we simulate 32 temperatures, evenly spaced within the range [0.05, 2.05]. We use the observables Mean Energy, Mean Magnetization and Mean Vorticity, and the metrics Earth Mover Distance (EMD) and Percentage Overlap (%OL), in our experiments to compare the performance of our model with the ground truth. A brief explanation of these is provided below; further details can be found in Appendix A.2 and Appendix C. Here, consider a set of \(N\) samples of \(L\times L\) lattices \(\{\boldsymbol{\phi}_{1},\boldsymbol{\phi}_{2},...,\boldsymbol{\phi}_{N}\}\).

**Mean Energy**: The Hamiltonian of a lattice sample divided by the number of lattice points is the mean energy \(E(\boldsymbol{\phi}_{i})\) of that lattice sample \(\boldsymbol{\phi}_{i}\). The observable mean energy is defined as the average mean energy per lattice sample.

\[\langle E\rangle=\frac{1}{N}\sum_{i=1}^{N}E(\boldsymbol{\phi}_{i})=\frac{1}{N}\cdot\frac{1}{L^{2}}\sum_{i=1}^{N}H(\boldsymbol{\phi}_{i}) \tag{24}\]

**Mean Magnetization**: The magnetization of a lattice sample is

\[M(\boldsymbol{\phi})=\left|\frac{1}{L^{2}}\left(\sum_{\langle i,j\rangle}\cos(\phi_{i,j})\right)\boldsymbol{\hat{x}}+\frac{1}{L^{2}}\left(\sum_{\langle i,j\rangle}\sin(\phi_{i,j})\right)\boldsymbol{\hat{y}}\right| \tag{25}\]

and the mean magnetization of a set of lattice samples is

\[\langle M\rangle=\frac{1}{N}\sum_{i=1}^{N}M(\boldsymbol{\phi}_{i}) \tag{26}\]

**Mean Vorticity**: The average vorticity across all the lattice samples is defined as the mean vorticity.
Here, \(V_{o}(\boldsymbol{\phi}_{i})\) is the vorticity of the lattice sample \(\boldsymbol{\phi}_{i}\); for details, refer to Appendix A.2.

\[\langle V_{o}\rangle=\frac{1}{N}\sum_{i=1}^{N}V_{o}(\boldsymbol{\phi}_{i}) \tag{27}\]

**Earth Mover Distance**: EMD (also known as the Wasserstein metric) is a distance metric between two probability distributions. Its mathematical definition is given below. Here, \(H_{M}\) and \(H_{G}\) are the normalized histograms generated from the model (PBMG-XY) samples and the ground truth samples, respectively.

\[\text{EMD}(H_{M},H_{G})=\sum_{x=-\infty}^{+\infty}\left|\sum_{t=-\infty}^{x}\left(H_{M}(t)-H_{G}(t)\right)\right| \tag{28}\]

**Percentage Overlap (%OL)**: %OL is a similarity metric and, as the name suggests, gives the overlap (in percentage) between the two normalized histograms \(H_{M}\) and \(H_{G}\). The formula for %OL is as follows.

\[\text{\%OL}(H_{M},H_{G})=\sum_{i}\min\left(H_{M}(i),H_{G}(i)\right) \tag{29}\]

The plain-MCMC algorithm is used for the generation of the ground truth. We generate 10,000 samples for each temperature by thinning, selecting every \(k\)th sample. \(k\) varies for each lattice size but remains constant across all temperatures for that size (shown in Table 1). Burn-in is unnecessary, as we initiate the Markov chain from the highest-probability point with all spins at zero.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline L & 8 & 16 & 32 & 64 & 128 \\ \hline Value of k & 120 & 400 & 1300 & 5000 & 20000 \\ \hline \end{tabular} \end{table} Table 1: Value of k for thinning for each lattice size

In our PBMG-XY simulations, different ensemble sizes were generated across the 32 temperatures for each lattice size. These ensemble sizes were determined based on anticipated auto-correlation levels: higher auto-correlation necessitated more samples to ensure accurate estimation of observables. Appendix D provides the details of the ensemble size generated for each temperature and lattice size.

### Results and Observations

We estimate the mean magnetization and the mean vorticity on a lattice of size \(L=16\), as shown in Figure 3. Similarly, we estimate the mean energy and the mean vorticity on a larger lattice of size \(L=128\), as shown in Figure 4. The remaining figures for the observables estimated on other lattice sizes can be found in Appendix E.

Figure 3: a) Mean Vorticity and b) Mean Energy plots superimposed for L=16.

Figure 4: a) Mean Energy and b) Mean Vorticity plots superimposed for L=128.

We observe that the graphs of the observables match to a great extent. Only in a few regions, and for a few lattice sizes and observables, is the discrepancy considerable; this is because the Markov chain generated from the model in those regions is short relative to the auto-correlation within the samples. Simulation time is the factor that restricts us from generating even longer chains. The acceptance rate for PBMG-XY varies considerably but lies mostly in the range of 80-95%. This variation could be due to the below-par learning of the target conditional distributions for a few sets of conditions. We speculate that the below-average learning results could be attributed to employing a training approach that was not conducive to achieving optimal learning outcomes in spline flows. Table 2 shows the comparison of EMD and %OL for the observable mean energy. The remaining two tables, for the observables mean magnetization and mean vorticity, are given in Appendix E.
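Both metrics are short computations once the two ensembles are binned into shared, normalized histograms. A sketch with dummy data follows; the bin edges are an illustrative choice, and Eq. (29) as written yields a fraction, which we scale by 100 to match the percentages quoted in Table 2:

```python
import numpy as np

def emd(h_m, h_g):
    """Earth Mover Distance between normalized 1-D histograms, Eq. (28)."""
    return np.abs(np.cumsum(h_m - h_g)).sum()

def pct_overlap(h_m, h_g):
    """Percentage Overlap of two normalized histograms, Eq. (29),
    scaled to percent as in Table 2."""
    return 100.0 * np.minimum(h_m, h_g).sum()

rng = np.random.default_rng(0)
a, b = rng.normal(0.0, 1.0, 10_000), rng.normal(0.1, 1.0, 10_000)
bins = np.linspace(-4, 4, 51)            # shared bin edges for both ensembles
h_a = np.histogram(a, bins)[0] / a.size
h_b = np.histogram(b, bins)[0] / b.size
print(emd(h_a, h_b), pct_overlap(h_a, h_b))
```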
In Table 2, we observe that the EMD between the ground truth and the model is quite low and the %OL between the two is quite high.

## 5 Experiments and Results for PBMG-\(\phi^{4}\)

In this section, we evaluate the PBMG-\(\phi^{4}\) model against the ground truth, which we generate using Hamiltonian Monte Carlo simulation. For the \(\phi^{4}\) theory, we conduct two kinds of experiments: one on the computation of the lattice observables, and the other on the computation of the integrated auto-correlation time.

### Experiment on Observables

We compute various observables on the lattice ensembles, such as the mean \(\phi^{2}\) value, the two-point susceptibility, and the two-point correlation function, from both HMC and PBMG-\(\phi^{4}\). One of the key observables estimated in lattice \(\phi^{4}\) theory is the "correlation function" or "two-point function". This observable measures the correlation or interaction between field operators at different lattice sites across space. The correlation function can be written as \[G_{c}(i,j)=\frac{1}{V}\sum_{l,m}[\langle\phi_{l,m}\phi_{i+l,j+m}\rangle-\langle\phi_{l,m}\rangle\langle\phi_{i+l,j+m}\rangle].\] and the zero momentum correlation function is defined as \[C(t)=\sum_{i}G_{c}(i,t),\] where \(t=j\) is the time axis. The pole mass and two-point susceptibility can be derived from \(C(t)\) and \(G_{c}(i,j)\), \[m_{p}=\log\left[\frac{C(t)}{C(t+1)}\right];\ \chi=\sum_{i,j}G_{c}(i,j).\]

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{EMD} & \multicolumn{2}{|c|}{\% OL} \\ \hline L & Mean & Std & Mean & Std \\ \hline \hline 8 & 0.009 & 0.012 & 68.138 & 13.638 \\ \hline 16 & 0.006 & 0.007 & 69.849 & 13.682 \\ \hline 32 & 0.003 & 0.002 & 75.619 & 11.469 \\ \hline 64 & 0.002 & 0.002 & 75.553 & 10.799 \\ \hline 128 & 0.001 & 0.001 & 75.819 & 9.260 \\ \hline \end{tabular} \end{table} Table 2: Metrics for Mean Energy

The mean magnetization can be estimated as \[\langle|\tilde{\phi}|\rangle:\tilde{\phi}=\frac{1}{V}\sum_{i,j}\phi_{i,j}.\]

The experiment on the observables was carried out for seven lattice sizes, i.e., \(L=8,16,24,32,48,64\), and \(128\). The parameter \(m^{2}\) is fixed at \(-4\) for all the ensembles. While preparing the ground truth, the number of MD steps \(N\) and the step size \(\epsilon\) in HMC are adjusted so that the acceptance rate and the auto-correlation time remain almost constant for all the lattice sizes in the experiment. The acceptance rate is between 70% and 85%, and the auto-correlation time is around 5 for all the lattice sizes. For every lattice size, 50K samples are generated. We consider every fifth lattice sample for the calculation of observables, i.e., 10K samples are used for the final estimation of each observable. In the PBMG-\(\phi^{4}\) model, different thinning and ensemble sizes are used for different lattice sizes. For \(L=8\), \(16\), and \(32\), we generate 250K samples, and every 25th sample is used for measurement, bringing the number of samples down to 10K. For \(L=64\) and 128, 100K samples are generated, and every 10th sample is used for measurement. When estimating uncertainties, we use bootstrap error analysis with a bin size of 100 for all the observables. In Figure 5(a), we plot the two-point susceptibility against the parameter \(\lambda\), keeping \(m^{2}=-4\), for \(L=8\). Similarly, in Figure 5(b), we plot the absolute mean \(\phi\) for a larger lattice size, i.e., \(L=64\).
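The estimators above translate directly into code. The sketch below computes the zero-momentum correlator, the two-point susceptibility, and the pole mass from an ensemble array; the array layout (axis 1 as the Euclidean-time direction) and periodic boundaries are our assumptions. With periodic boundaries, summing \(G_{c}\) over space reduces the correlator to correlations of the time-slice sums.

```python
import numpy as np

def zero_momentum_correlator(phis):
    """C(t) from an ensemble `phis` of shape (N, L, L), via time-slice sums
    S(t) = sum_x phi(t, x); C(t) = (1/V) sum_m [<S(m)S(m+t)> - <S(m)><S(m+t)>]."""
    n_samples, L, _ = phis.shape
    s = phis.sum(axis=2)                 # S(t) for every sample, shape (N, L)
    s_mean = s.mean(axis=0)
    c = np.empty(L)
    for t in range(L):
        conn = (s * np.roll(s, -t, axis=1)).mean(axis=0) - s_mean * np.roll(s_mean, -t)
        c[t] = conn.sum() / L**2         # divide by the volume V = L^2
    return c

phis = np.random.randn(1000, 8, 8)       # toy stand-in for a real ensemble
c = zero_momentum_correlator(phis)
chi = c.sum()                            # two-point susceptibility
# On a physical ensemble C(t) decays with t, and the pole mass follows as
# m_p = log(C(t) / C(t+1)) for neighbouring time slices.
```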
We can infer second-order phase-transition behaviour, which is characteristic of lattice \(\phi^{4}\) theory in 2D. We also calculate the zero momentum correlation function for L = 8 and 32 at \(\lambda=5.4\) and \(m^{2}=-4\) (Figure 6). Here, we observe the expected behaviour: the correlation between the sites decreases as we move from the origin toward the centre of the lattice, and it is symmetric due to the periodicity of the lattice. We also compute the pole mass from both the HMC and PBMG-\(\phi^{4}\) models by varying \(\lambda\) and the lattice size \(L\), as shown in Table 3. We tune \(\lambda\) and \(L\) such that the pole mass remains constant while, at the same time, we move towards the critical region.

Figure 5: a) Two-point susceptibility compared between HMC and PBMG-\(\phi^{4}\) for L=8 and b) Absolute mean \(\phi\) value for L=64

The pole mass for ensemble \(S3\), computed from both HMC- and PBMG-\(\phi^{4}\)-generated samples, is shown in Figure 7. We see significant agreement, within statistical uncertainty, between all the observables calculated from HMC and from the PBMG-\(\phi^{4}\) model.

### Experiment on Autocorrelation

The integrated auto-correlation time measures the total correlation length in a Markov chain. It roughly quantifies how many configurations, on average, need to be skipped to obtain effectively independent configurations. For an observable \(A\), the integrated auto-correlation time, denoted by \(\tau_{int}\), is defined as \[\tau_{int}=1+2\sum_{k=1}^{\infty}\frac{C(k)}{C(0)} \tag{30}\] where \(C(k)\) is the auto-correlation function of the observable \(A\) at lag \(k\) (the correlation between \(A\) at configuration \(i\) and \(A\) at configuration \(i+k\)), and \(C(0)\) is the auto-correlation at zero lag. We compute the autocorrelation of the two-point susceptibility \(\chi\). We perform these experiments for \(m^{2}=-4\) and \(\lambda=5.4\), such that we remain quite close to the critical region of the theory.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Ensemble & S1 & S2 & S3 & S4 & S5 \\ \hline \(m^{2}\) & -4 & -4 & -4 & -4 & -4 \\ \hline L & 16 & 24 & 32 & 48 & 64 \\ \hline \(\lambda\) & 8 & 6.3 & 5.6 & 5.0 & 4.8 \\ \hline \(m_{p}L\) & 12.80(2) & 12.79(4) & 12.82(5) & 12.87(8) & 12.81(3) \\ \hline \end{tabular} \end{table} Table 3: Set of ensembles studied where the parameters \(\lambda\) and L are varied while keeping \(m_{p}L\) fixed at \(\approx 12.8\).

Figure 6: Two-point correlation function compared between HMC and PBMG-\(\phi^{4}\) for lattice size a) \(8\times 8\) and b) \(32\times 32\)

We know that in HMC the simulation cost rapidly increases with the lattice volume. Hence, simulation near the critical region, where one needs finer lattices with large lattice volumes, becomes difficult, because the auto-correlation shoots up as we move towards the critical region. Therefore, to compare the efficiency of our model for simulation in the critical region, we computed the integrated auto-correlation time (\(\tau\)) for different lattice sizes, from \(8\times 8\) up to \(400\times 400\), with both the HMC and PBMG-\(\phi^{4}\) methods. For each lattice volume, 10k samples are generated from both the HMC and PBMG-\(\phi^{4}\) models. For HMC, we fixed the MD step size to 0.01 and the number of MD steps to 20 so that, for the smallest lattice, the acceptance rate and auto-correlation are comparable for both methods.
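For reference, Eq. (30) is typically estimated from a finite chain as sketched below; truncating the sum at the first non-positive autocorrelation value is a common windowing heuristic and an assumption on our part, not necessarily the authors' procedure.

```python
import numpy as np

def integrated_autocorr_time(series):
    """Integrated auto-correlation time of a 1D series of observable values,
    following Eq. (30) with a simple truncation window."""
    a = np.asarray(series, dtype=float)
    a = a - a.mean()
    n = len(a)
    c0 = np.dot(a, a) / n                 # C(0), the zero-lag term
    tau = 1.0
    for k in range(1, n // 2):
        ck = np.dot(a[:n - k], a[k:]) / (n - k)
        if ck <= 0:                       # stop once the estimate is pure noise
            break
        tau += 2.0 * ck / c0
    return tau

# e.g. tau of the two-point susceptibility measured along the Markov chain;
# for uncorrelated (white-noise) data the result is close to 1:
print(integrated_autocorr_time(np.random.randn(10000)))
```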
Figure 7: Effective mass computed for parameter \(\lambda=5.4\) and \(L=32\) from lattice ensembles generated by the HMC and PBMG-\(\phi^{4}\) models. We ignore points in the middle of the time axis due to high noise.

Figure 8: Integrated autocorrelation time \(\tau\) of the two-point susceptibility for lattice ensembles generated from the HMC and PBMG-\(\phi^{4}\) models

Figure 8 shows how the integrated auto-correlation time \(\tau\) varies as we increase the lattice size in both methods. As we sample larger lattice sizes, \(\tau\) increases rapidly in the case of HMC. On the other hand, for the PBMG-\(\phi^{4}\) model, we see a constant \(\tau\) as we sample different lattice sizes. Furthermore, we observe an almost constant acceptance rate of around 98% from our PBMG-\(\phi^{4}\) model, irrespective of the lattice size and the parameters for which the simulation is carried out. This is a significant gain in terms of sampling cost when simulating a larger lattice volume.

## 6 Conclusion and Future Scope

In this study, we have introduced a novel method, Parallelizable Block Metropolis-Gibbs (PBMG), designed to address the challenge of efficiently sampling Boltzmann distributions for large lattices. By decomposing the complicated joint distribution into simpler local conditional distributions, our approach leverages the power of generative machine learning, offering a scalable solution that transcends the limitations of traditional sampling methods. Through our experiments on the XY model and the scalar \(\phi^{4}\) theory, we have demonstrated the efficacy of our approach. The PBMG models are trained using the reverse KL objective, removing the need for existing simulated data. The trained models can be employed across various lattice sizes, eliminating the need for retraining. The design of the proposal ensures that the acceptance rate remains unaffected by the lattice size under simulation. Furthermore, the models are conditioned on the action parameters, enabling us to simulate at multiple parameter values. PBMG not only matches the performance of baseline methods for smaller lattices but also outperforms them in terms of simulation cost for larger lattices. In the context of the XY model, our results exhibit a significant match of observables and commendable values of the metrics, providing a solid foundation for the accuracy of our approach. For the scalar \(\phi^{4}\) theory, PBMG showcases even better results, with very high and consistent acceptance rates, indicating efficient sampling even near critical regions. The most remarkable observation from our experiments is the constant integrated autocorrelation time across different lattice sizes exhibited by PBMG-\(\phi^{4}\), a clear testament to its efficiency in sampling and capturing the physics near the critical region. While our accomplishments are noteworthy, there remain avenues for enhancement. Further optimization of training procedures, specially tailored for monotonic RQS flows, could lead to improved acceptance rates and reduced variation in PBMG-XY. Moreover, the successful extension of PBMG to 3D and 4D lattices holds tantalizing prospects for advancing research in this domain.

## 7 Acknowledgements

We gratefully acknowledge the invaluable assistance provided by Prof. Dootika Vats (Dept. of Mathematics and Statistics, IIT Kanpur) in guiding us through the mathematical intricacies involved in this research.
2303.15250
Azimuthal rotation induced by the Marangoni force makes small Leidenfrost droplets move in random zig-zag directions
We observed the zig-zag motion of small Leidenfrost water droplets (radii less than 0.6 mm) on a hot, flat substrate. To understand this motion, we conducted an experiment using a glass capillary to fix a droplet at its edge and control the droplet height. Thermographic and interferometric observations reveal that the droplets rotated both vertically and azimuthally. Based on the characteristic frequency of the azimuthal rotation depending on the substrate temperature and droplet height, we developed a semi-empirical model considering unsteady Marangoni convection and its relaxation. We confirmed that our model can predict the characteristic time of the zig-zag motion of unfixed Leidenfrost droplets.
Ken Yamamoto
2023-03-27T14:35:17Z
http://arxiv.org/abs/2303.15250v2
## Azimuthal rotation induced by the Marangoni force makes small Leidenfrost droplets dance ### Abstract We observe zig-zag motions of small Leidenfrost water droplets (radii less than 0.6 mm) on a hot, flat substrate. To understand the motion, we conduct an experiment using a glass capillary to fix a droplet at its edge and to control the droplet height. Our thermographic and interferometric observations show that the droplets rotate both vertically and azimuthally. Based on the finding that the azimuthal rotation has a characteristic frequency depending on the substrate temperature and droplet height, we develop a semi-empirical model based on the unsteady Marangoni convection and its relaxation. We confirm that our model can predict the characteristic time of the zig-zag motion of unfixed droplets.

A liquid droplet can levitate on a substrate whose temperature is higher than a certain threshold. This phenomenon is known as the Leidenfrost phenomenon [1], and the threshold is called the Leidenfrost temperature [2]. Above the Leidenfrost temperature, vapor generated from the droplet forms a thin layer beneath it, and the vapor pressure supports the droplet, preventing contact with the substrate. This distinct configuration gives rise to the rich and complicated dynamics of Leidenfrost droplets. It has been intensively investigated in the past two decades [2, 3, 4, 5, 6, 7, 8, 9], and the translational motion of the droplets is one of the current hot topics [10, 11, 12]. Because the droplets do not contact the substrate, friction is negligible and their motion is strongly affected by gravity. Nevertheless, Bouillant _et al._[10] showed that it is not always that simple for small droplets: on a flat substrate whose horizontality is carefully adjusted with a precision of \(\sim\)0.1 mrad, water droplets having radius \(R=1.0\) mm translate in random directions, while droplets having \(R=2.0\) mm sensitively feel gravity and translate in a definite direction. Bouillant _et al._ also visualized flows inside the droplets [10, 13] and found that the flow structure changes with the droplet size. The transition was explained by the droplet shape (aspect ratio), which is spherical for small droplets and flattened due to gravity for large droplets. In large droplets (\(R>1.5\) mm), the flow structure is axisymmetric (contains pairs of countercurrents), whereas in small droplets a symmetry-breaking occurs and the internal flow exhibits a rolling motion. The symmetry-breaking was extensively investigated by Yim _et al._[11] both experimentally and numerically, and the results imply that the rolling motion is likely to be induced by the thermobuoyant (Rayleigh-Benard) effect rather than the thermocapillary (Marangoni) effect. While the origin of the symmetry-breaking is still under discussion, the relevance of the rolling motion to the translational motion is better established. Bouillant _et al._[10] experimentally confirmed that the direction of the rolling motion corresponds to the translation direction, and hence the random translation direction results from the randomness of the rolling direction. Moreover, they also found that the base of the rolling droplet is slightly (a few mrad) inclined toward the translation direction. This stable inclination is induced by a lubricating vapor flow beneath the droplet [12]. As discussed above, the direction of motion is either stochastic or deterministic depending on the size of the droplet, but the moving droplets are expected to keep straight paths because of their frictionless nature.
Therefore, to change or control the translational direction of Leidenfrost droplets, physical or thermal inclination [14] or non-uniformities of the substrate [15, 16, 17, 18] are employed. However, we observe that small water droplets (\(R<0.6\) mm), which are generated by atomization through the contact boiling of large droplets, start _dancing_ on a flat horizontal (horizontality precision within 1 mrad) silicon wafer (FIG. 1 and Supplementary Movie 1; see also Supplementary Information for more details). The observation implies that the straight-running stability tends to decrease with droplet size, but the direction of each turn and the angle at which the turn finishes are random (for instance, the droplets occasionally turn \(270^{\circ}\) to the left and then turn \(90^{\circ}\) to the right). To understand this new finding, we perform time-resolved droplet surface temperature measurements combined with droplet-base-shape observations. We employ a glass capillary (outer diameter: 2.0 mm; inner diameter: 1.0 mm; its end manually polished) to generate and capture a water droplet at its edge. The capillary is vertically held above a hot substrate made of sapphire, with various distances \(H\) (1.0-2.5 mm) between the capillary end and the substrate to observe the contribution of the effective droplet height. The volume of the droplet for each \(H\) is kept constant by infusing water with a syringe pump at low flow rates (\(\sim\)10 mL min\({}^{-1}\)). The setup also enables us to pin and observe the droplet at the same location for a long time (more than one minute). The sapphire plate (thickness: 1 mm, diameter: 28.5 mm) is heated by a ring heater, and the substrate temperature \(T_{\mathrm{s}}\) is varied from 220\({}^{\circ}\)C to 320\({}^{\circ}\)C. We employ IR cameras to measure the surface temperature of the droplets. For selected cases, we also use a high-speed camera mounted on an inverted microscope to obtain interferometric images [6, 10, 19, 20], which provide information on the shape of the droplet base.

Figure 1: **Trajectory of small droplets translating on a flat horizontal silicon wafer (\(T_{\rm s}=250^{\circ}\)C). The droplets are generated by the atomization through the contact boiling of large droplets. The tracking is started at \(t_{\rm track}=0\), at which time the target droplet is regarded as unaffected by other droplets and shows no significant acceleration or deceleration (but has non-zero velocity). The droplet center is plotted with a 10-ms interval and the color bar indicates the radius of the droplet \(R(t_{\rm track})\). (\(X\), \(Y\)) = (0, 0) indicates the position at \(t_{\rm track}\) = 0 and \(R(0)\) for each droplet is shown in the figure. Selected trajectories of (a) \(R(0)=560\)–\(272\) μm, (b) \(R(0)=208\)–\(158\) μm, and (c) \(R(0)=145\)–\(112\) μm, are shown. Initial translation directions are taken arbitrarily for better visibility.**

Figure 2: **Observation result of a droplet at \(H=1.0\) mm, \(T_{\rm s}=250^{\circ}\)C. (a) Successive thermographic images. White dashed lines indicate the outline of the glass capillary (2.0 mm outer diameter). Image contrast is enhanced for better visibility. (b) Interference fringes of the droplet base (background subtracted). The angle \(\theta\) of the axis of line symmetry is defined. (c) Time evolution of \(\theta\). Orange lines are guides to the eye having the same inclination. The distance between the lines: 0.98 s.**
The IR images (recorded at 120 or 100 fps) and the interferometric images (recorded at 1000 fps) are captured in synchronization. Details of the experimental setup and procedure are provided in the Supplementary Information. We perform a simultaneous measurement of the droplet surface temperature and the interference fringes of the droplet base (\(H=1.0\) mm, \(T_{\mathrm{s}}=250^{\circ}\)C, see FIG. 2). The surface temperature measurement shows that the droplet has a hotspot on its side and that the hotspot rotates in the azimuthal direction (in the horizontal plane) with a certain characteristic time \(t\). Moreover, the hotspot always rotates in synchronization with a rotation of the interference fringes, which always show a line symmetry (FIG. 2b and Supplementary Movie 2). However, these rotational motions are not always observed, and the rotational direction (clockwise or counterclockwise), which seems random, sometimes switches. While the rotation direction is random and the rotation itself occasionally switches, starts, and ceases, the synchronization of the hotspot and the fringes is always consistent. Moreover, we find that the characteristic time of the rotation remains almost constant (FIG. 2c). As an unfixed droplet moves in a specific direction corresponding to the direction of the symmetry axis of its base shape [10], we interpret this (perfect or imperfect) rotation of the droplet base shape as the cause of the zig-zag motion of the droplet. To understand the rotation mechanism, we measure the rotation time \(t\) for different \(H\) and \(T_{\mathrm{s}}\) from the IR images by measuring the frequency of the temperature rise at the center of the droplet (FIG. 2a; in the case of FIG. 2, \(t=0.98\) s). Consequently, it is found that \(t\) has positive correlations with both \(H\) and \(T_{\mathrm{s}}\) for \(230^{\circ}\mathrm{C}\leq T_{\mathrm{s}}\leq 280^{\circ}\)C (FIG. 3a). Note that the Leidenfrost state becomes unstable (contact with the substrate occurs) at \(T_{\mathrm{s}}=220^{\circ}\mathrm{C}\), and the rotation ceases at \(T_{\mathrm{s}}>300^{\circ}\mathrm{C}\) and/or \(H>2.0\) mm. The positive correlations can be explained by considering the Marangoni flow and its relaxation. First, we assume that a vertically-rotating inner flow characterized by a velocity \(u_{\mathrm{in}}\) exists. The flow transports hot liquid (at the droplet base) toward the droplet apex, resulting in the formation of a vertical hot belt. Then, relatively cold regions appear at the droplet side, and a Marangoni flow is generated due to the temperature difference between the droplet base (assumed to be at the saturation temperature) and the cold region. In the next step, the Marangoni flow transports hot liquid toward the cold region and the flow relaxes. The Marangoni relaxation time \(\tau_{\mathrm{Ma}}\)[21] is calculated as \(\tau_{\mathrm{Ma}}=[\rho_{\mathrm{L}}\ L^{3}\ (\partial\gamma/\partial T)^{-1}\ \Delta T^{-1}]^{1/2}\), where \(\rho_{\mathrm{L}}\), \(L\), \((\partial\gamma/\partial T)\), and \(\Delta T\) denote the liquid density, the characteristic length, the surface-tension gradient with temperature, and the temperature difference between the starting and ending points of the flow, respectively.
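To get a feel for the magnitudes involved, the relaxation time just defined can be evaluated with rough literature values for water; the property values in this minimal sketch are our assumptions, not numbers taken from the paper.

```python
import math

rho_L   = 958.0    # kg m^-3: liquid water density near saturation (assumed)
dgam_dT = 1.9e-4   # N m^-1 K^-1: |dgamma/dT| of water (assumed)
L       = 1.0e-3   # m: characteristic length, of order the droplet height H
dT      = 10.0     # K: assumed temperature difference driving the flow

tau_Ma = math.sqrt(rho_L * L**3 / (dgam_dT * dT))   # Marangoni relaxation time
u_Ma   = L / tau_Ma                                 # characteristic Marangoni velocity
print(f"tau_Ma ~ {tau_Ma * 1e3:.0f} ms, u_Ma ~ {u_Ma * 1e3:.0f} mm/s")
# ~22 ms and ~44 mm/s: the same order as the measured internal flow (~50 mm/s)
```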
Because the inner flow and the Marangoni flow are mutually perpendicular at the droplet base, the net velocity is calculated as a vector sum, and its angle to the inner flow \(\theta_{\mathrm{rot}}\) (in the horizontal plane) is \(\theta_{\mathrm{rot}}\sim u_{\mathrm{Ma}}\ u_{\mathrm{in}}{}^{-1}\), where \(u_{\mathrm{Ma}}\) is the Marangoni flow velocity, characterized as \(\sim L\ \tau_{\mathrm{Ma}}{}^{-1}\). After a lapse of the relaxation time, a new cold region appears at an angle perpendicular to the direction of the net velocity in the last period, and the Marangoni flow is regenerated. The regeneration of the Marangoni flow repeats, and the hot belt exhibits the azimuthal rotation with a characteristic time \(\tau_{\mathrm{rot}}\sim 2\pi\ \theta_{\mathrm{rot}}{}^{-1}\ \tau_{\mathrm{Ma}}\). FIG. 3b shows that \(t\) has a linear relationship with \(\tau_{\mathrm{Ma}}\) (with \(H\) used for \(L\)), which implies that the proposed concept is plausible. FIG. 3b also implies that \(\theta_{\mathrm{rot}}\) is a function of \(T_{\mathrm{s}}\). As \(\theta_{\mathrm{rot}}\) is a product of \(u_{\mathrm{Ma}}\) (\(\sim L\ \tau_{\mathrm{Ma}}{}^{-1}\)) and \(u_{\mathrm{in}}{}^{-1}\) in the proposed model, we first examine the effects of \(L\) and \(u_{\mathrm{in}}\) and subsequently consider the dependence on \(T_{\mathrm{s}}\). Here we assume that the local temperature of the droplet decreases linearly with the vertical distance from the hot substrate, \(\Delta T=C_{1}H\), with \(C_{1}\) [K m\({}^{-1}\)] being a constant. With this assumption, we rewrite \(\tau_{\mathrm{Ma}}\) and \(\tau_{\mathrm{rot}}\) as \(\tau_{\mathrm{Ma}}{}^{*}\sim KH\) (\(K=[\rho_{\mathrm{L}}\ (C_{1}\ \partial\gamma/\partial T)^{-1}]^{1/2}\sim 32\)) and \(\tau_{\mathrm{rot}}{}^{*}\sim 2\pi\ u_{\mathrm{in}}\ K^{2}H\), which is a function of \(H\) and \(u_{\mathrm{in}}\). FIG. 3c shows the relationship between \(t\) normalized by \(\tau_{\mathrm{Ma}}{}^{*}\) and \(H\) / \(H_{0}\), where \(H_{0}\) denotes the maximum height at which the rotation is observed (= 2.0 mm for this study). The diagram indicates that \(t\) / \(\tau_{\mathrm{Ma}}{}^{*}\) is almost insensitive to \(H\), whereas it is sensitive to \(T_{\mathrm{s}}\). With regard to \(u_{\mathrm{in}}\), we measured it only in one interferometric case in which a few tracer particles were contained in the droplet (the measured \(u_{\mathrm{in}}\) is \(\sim\)50 mm s\({}^{-1}\), a typical internal-flow velocity of Leidenfrost droplets [10]; see also Supplementary Movie 3). Therefore, we estimate the effect of \(T_{\mathrm{s}}\) on \(u_{\mathrm{in}}\) from the relationship between
Although the origin of this \(T_{\rm s}\) effect is unknown, it could be explained by excess heat that is used for heating up the liquid rather than for phase change (see also Supplementary Information). Now we compare \(t\) and \({\tau_{\rm rot}}^{*}\) in FIG. 4a and find that the data collapsed on a 1:1 line. Finally, we measure a characteristic rotation time of small droplets \(t_{\rm rot}\) on the silicon wafer (\(T_{\rm s}\) = 250\({}^{\circ}\)C). We define \(t_{\rm rot}\) as 2\(\pi\) divided by a change in the angle of moving direction in a certain time. The measured \(t_{\rm rot}\) against the droplet diameter \(D\) is plotted on an inset of FIG. 4b. Although the data scatters, we can draw a fitting line of \(t_{\rm rot}\) = \(C_{3}\) [(\(T_{\rm s}\) - \(T_{\rm sat}\)) / \(T_{\rm sat}\)]\({}^{n}\)_KD_ with fixed \(T_{\rm s}\) = 250\({}^{\circ}\)C, \(K\) = 32, and the index \(n\) = 2.74, where \(D\) denotes the droplet diameter. Consequently, we obtain the numerical factor \(C_{3}\) = 23.15 \(\pm\) 0.97 and \(t_{\rm rot}\) can be estimated by the correlation of \({\tau_{\rm rot,D}}^{*}\) \(\sim\)\(C_{3}\) [(\(T_{\rm s}\) - \(T_{\rm sat}\)) / \(T_{\rm sat}\)]\({}^{n}\)_KD_ as shown in FIG. 4b, despite the fact that the droplet shape of our designed experiment (attached to the capillary) and the freely-moving small droplets (spherical) is different. The model also implies that higher \(T_{\rm s}\) leads higher \(u_{\rm in}\). It predicts that \(u_{\rm in}\) could be much higher than \(u_{\rm Ma}\) for high \(T_{\rm s}\) which results in the suppression of the azimuthal rotation as it is observed in the experiment. The size of the droplet is also important. For relatively large \(R\), \({\tau_{\rm rot}}^{*}\) is long and a large substrate will be required to observe the zig-zag motion. It is also remarkable that we merely observed circular motions of small spherical droplets whereas steady azimuthal rotation is frequently observed for the fixed droplets. It may be due to the difference in the boundary condition, where local temperature of the capillary edge is affected by that of neighboring liquid (Supplementary FIG. S1). This effect could act to stabilize the rotation. It could be an explanation of our observation that the stationarity of the rotation was decreased and the change in the rotational direction became frequent for large \(H\). The frequent direction change was also observed when the capillary is replaced with a thin needle (see Supplementary Information and Supplementary Movie 4). In summary, we observed the zig-zag motion of small (\(R\) < 0.6 mm) water Leidenfrost droplets moving on a flat horizontal silicon wafer heated at 250\({}^{\circ}\)C. While their choice of the moving direction seems random, their straight-running stability shows a droplet-size dependence. To understand the mechanism of the zig-zag motion, we conducted a designed experiment in which the droplet is fixed at the edge of a glass capillary and the effective droplet size can be controlled by varying the distance between the hot sapphire substrate and the capillary edge. We measured the droplet-surface temperature with an IR camera and observed a hot vertical belt which suggests the existence of a vertically-rotating inner flow. 
Moreover, the experiments with different substrate temperatures (\(T_{\rm s}\) = 220-320\({}^{\circ}\)C) and effective droplet sizes (\(H\) = 1.0-2.5 mm) revealed that the hot belt rotates in the azimuthal direction in a certain temperature range (230-280\({}^{\circ}\)C) and that its characteristic time depends on \(T_{\rm s}\) and \(H\). A physical model based on the Marangoni relaxation was proposed, and it successfully predicted the results after incorporating an empirical correlation that accounts for the \(T_{\rm s}\)-dependence of the inner-flow velocity \(u_{\rm in}\). Subsequently, the characteristic azimuthal-rotation time of small (unfixed) droplets observed on the silicon wafer was measured. The proposed semi-empirical model predicted this result as well as that of the fixed droplets, despite the difference in their shapes (an almost perfect sphere versus part of a sphere). These findings suggest that the dynamics of small Leidenfrost droplets is inherently asymmetric and more complicated than previously thought.

Figure 4: **Comparison of the predicted time with the measured characteristic time. Black dashed lines indicate the 1:1 line. (a) The result of the droplets fixed at the capillary. Inset: Temperature dependence (\(T_{\rm s}\) = 230–280\({}^{\circ}\)C) at \(H\) = 1.5 mm. (b) The result of the small droplets at \(T_{\rm s}\) = 250\({}^{\circ}\)C (log-log plot). Inset: Droplet-diameter dependence of \(t_{\rm rot}\). The red dashed line indicates a fitting curve of \(t_{\rm rot}\) = 23.15 [(\(T_{\rm s}\) - \(T_{\rm sat}\)) / \(T_{\rm sat}\)]\({}^{2.74}\)_KD_. The coefficient of determination is 0.702.**

## Acknowledgement The author would like to acknowledge Dr. Hiroaki Katsuragi (Osaka University), Dr. Yoshiyuki Tagawa (Tokyo University of Agriculture and Technology), Dr. Koji Hasegawa (Kogakuin University), Dr. Yutaku Kita (King's College London), Dr. Hideaki Teshima (Kyushu University), and Dr. Kota Fujiwara (Central Research Institute of Electric Power Industry) for fruitful discussion. The author would also like to acknowledge Dr. Masahiro Motosuke (Tokyo University of Science) for lending some experimental apparatus.
2305.17911
TotalDefMeme: A Multi-Attribute Meme dataset on Total Defence in Singapore
Total Defence is a defence policy combining and extending the concept of military defence and civil defence. While several countries have adopted total defence as their defence policy, very few studies have investigated its effectiveness. With the rapid proliferation of social media and digitalisation, many social studies have been focused on investigating policy effectiveness through specially curated surveys and questionnaires either through digital media or traditional forms. However, such references may not truly reflect the underlying sentiments about the target policies or initiatives of interest. People are more likely to express their sentiment using communication mediums such as starting topic thread on forums or sharing memes on social media. Using Singapore as a case reference, this study aims to address this research gap by proposing TotalDefMeme, a large-scale multi-modal and multi-attribute meme dataset that captures public sentiments toward Singapore's Total Defence policy. Besides supporting social informatics and public policy analysis of the Total Defence policy, TotalDefMeme can also support many downstream multi-modal machine learning tasks, such as aspect-based stance classification and multi-modal meme clustering. We perform baseline machine learning experiments on TotalDefMeme and evaluate its technical validity, and present possible future interdisciplinary research directions and application scenarios using the dataset as a baseline.
Nirmalendu Prakash, Ming Shan Hee, Roy Ka-Wei Lee
2023-05-29T06:43:37Z
http://arxiv.org/abs/2305.17911v1
# TotalDefMeme: A Multi-Attribute Meme dataset on Total Defence in Singapore ###### Abstract. _Total Defence_ is a defence policy combining and extending the concept of military defence and civil defence. While several countries have adopted total defence as their defence policy, very few studies have investigated its effectiveness. With the rapid proliferation of social media and digitalisation, many social studies have been focused on investigating policy effectiveness through specially curated surveys and questionnaires either through digital media or traditional forms. However, such references may not truly reflect the underlying sentiments about the target policies or initiatives of interest. People are more likely to express their sentiment using communication mediums such as starting topic threads on forums or sharing memes on social media. Using Singapore as a case reference, this study aims to address this research gap by proposing TotalDefMeme, a large-scale multi-modal and multi-attribute meme dataset that captures public sentiments toward Singapore's Total Defence policy. Besides supporting social informatics and public policy analysis of the Total Defence policy, TotalDefMeme can also support many downstream multi-modal machine learning tasks, such as aspect-based stance classification and multi-modal meme clustering. We perform baseline machine learning experiments on TotalDefMeme and evaluate its technical validity, and present possible future interdisciplinary research directions and application scenarios using the dataset as a baseline.

multimodal, meme, dataset, topic clustering, stance classification

For example, Figure 1 shows a meme of paramedics performing a good deed beyond their work duties,
expressing stances that are supportive of Singapore's civil defence, which is one of the country's six total defence pillars. Besides supporting social informatics and public policy analysis of total defence policy, TotalDefMeme can also support many downstream multimodal machine learning tasks. Existing meme research has extensively studied memes with malicious intent by analyzing and releasing large datasets on a few binary attributes, such as hate speech (Han et al., 2017; Krizhevsky et al., 2017), harmfulness (Krizhevsky et al., 2017), misinformation (Krizhevsky et al., 2017), offensiveness (Krizhevsky et al., 2017), information diffusion (Krizhevsky et al., 2017), etc. However, there has been limited focus on understanding the topics and stances expressed in memes. Furthermore, most of the existing meme datasets focus on the English language and Western cultural contexts. In contrast, our TotalDefMeme is a multimodal meme dataset based on the Southeast Asian and Singaporean context, where the language captured is predominantly a colloquial, contextualized variant of English known as Singlish3. Our TotalDefMeme dataset also aims to address the existing meme research gaps by providing a dataset with multiple attributes that could facilitate downstream machine learning tasks such as multimodal aspect-based stance analysis and multimodal meme clustering. Footnote 3: [https://en.wikipedia.org/wiki/Singlish](https://en.wikipedia.org/wiki/Singlish)

We summarize our contributions as follows:

* We construct TotalDefMeme, a large-scale multimodal and multi-attribute meme dataset that captures public sentiments toward Singapore's total defence policy.
* We discuss and suggest possible future interdisciplinary research directions and application scenarios using the dataset.
* We perform a set of baseline machine learning experiments on TotalDefMeme and evaluate its technical validity.

## 2. Related Works

**Meme Analysis.** The multimodal study of memes has gained traction in recent years, resulting in multiple datasets with different objectives. Recently, Facebook released the Hateful Memes Challenge (Han et al., 2017), where the task is to classify memes as hateful or non-hateful. Mathias et al. (Mathias et al., 2017) further extended the Hateful Memes Challenge dataset by adding the protected category (i.e., _race_, _disability_, _religion_, _nationality_, _sex_) attacked by the meme, and the type of attack (i.e., _contempt_, _mocking_, _inferiority_, _slurs_, _exclusion_, _dehumanizing_, _inciting violence_). These hateful meme datasets have encouraged researchers to develop hateful meme detection solutions (Bradbury et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). Suryavanshi et al. (Suryavanshi et al., 2017) constructed a meme dataset on the 2016 United States presidential elections with offensiveness labels. Similarly, Pramanik et al. (Pramanik et al., 2017) released a dataset called _HarMeme_ to study harmful memes. The researchers retrieved memes by querying Google Image Search with keywords related to COVID-19. As part of the SemEval 2020 tasks, Sharma et al. (Sharma et al., 2017) released a dataset called Memotion to capture fine-grained emotions expressed in memes. Pramanick et al.
(Pramanik et al., 2017) further proposed a multi-hop attention-based deep learning approach that leverages spatial-domain correspondence between visual and textual modalities to extract fine-grained feature representations for sentiment classification on the Memotion dataset. In a recent study, Qu et al. (Qu et al., 2017) examined memes that spread misinformation. They collected memes from Reddit on three topics, namely _Covid-19_, _BLM_, and _veganism_, and annotated binary misinformation labels. While more meme datasets are available to support downstream multimodal machine learning tasks, most are annotated with simple binary labels and can only support limited classification tasks. Furthermore, most existing meme datasets only contain English memes based on Western cultures. Our study adds value to existing meme research by providing a multimodal meme dataset that captures multiple attributes to support more multimodal machine learning tasks. Specifically, our proposed TotalDefMeme dataset contains memes annotated with multi-attribute labels, such as the type of meme, the topics discussed, the total defence pillars affected, and the stance towards the pillars. Furthermore, TotalDefMeme contains memes grounded in the Southeast Asian and Singaporean cultural context, where the language captured is _Singlish_. The multi-attribute nature of our TotalDefMeme dataset also facilitates machine learning tasks, such as aspect-based stance analysis and multimodal topic clustering, that existing datasets could not support.

**Total Defence.** The Total Defence policy is a defence policy that incorporates both military and civil defence strategies. It is adopted by countries such as Finland, Norway, Sweden, and Singapore. The policy emphasizes a high level of readiness against any potential dangers and catastrophes, including war, crisis, and natural disasters, for the state and its society. Hence, as a key defence policy, Total Defence serves as the primary guide to the needs of the government and its citizens. While various countries employ the Total Defence policy, their implementations are not globally uniform. Our research involved collecting and analyzing memes pertaining to Singapore's Total Defence policy, which comprises six pillars: _Military_, _Civil_, _Economic_, _Social_, _Psychological_, and _Digital_.

Figure 1. An example of a TotalDefMeme meme and its multi-attribute annotation. The meme praises Singapore's Civil Defence by showing the paramedics going beyond their work duties to help an elderly woman.

Through our examination of multiple sources45, we have summarized the definitions of each pillar as follows: Footnote 4: [https://www.mindel.gov/goms/imindle/mindel_websites/topics/totaldefence/about.html](https://www.mindel.gov/goms/imindle/mindel_websites/topics/totaldefence/about.html)

* **Military Defence**: Strong and formidable defence force made up of Regulars and National Servicemen, and supported by the entire nation.
* **Civil Defence**: Collective effort of the society to spot signs of threats, respond effectively, and recover quickly from crisis.
* **Economic Defence**: Strong and resilient economy that is globally competitive and able to bounce back from any crisis.
* **Social Defence**: Bonds that unite us, built on trust and understanding among people of different races and religions, living in harmony and looking out for one another.
* **Psychological Defence**: The will and resolve to defend our way of life and interests, and the fighting spirit to overcome challenges together.
* **Digital Defence**: Being secure, alert, and responsible online.

## 3. Dataset Construction

In this section, we describe the data annotation pipeline used for constructing the TotalDefMeme dataset and provide a preliminary analysis of the dataset. The pipeline, shown in Figure 2, comprises three main phases: _dataset collection_, _dataset processing_, and _dataset annotation_.

### Dataset Collection

To collect memes related to Singapore's Total Defence, we adopted a keyword-based approach using Google Search to obtain more relevant memes. We studied the Total Defence concepts across multiple sources and crafted appropriate keywords such as "_police force_", "_racism_", and "_phone scams_". Subsequently, these keywords were inserted into a template query: "Singapore <keyword> Memes". We further scraped various publicly available groups on popular social media platforms, such as Reddit and Instagram, to increase our coverage6. Including memes from social media platforms enables the examination of recent and viral Singapore-related memes. Through these methods, we obtained a dataset of 7,200 diverse memes. Footnote 6: [https://www.naidef.gov/goms/imindle/mindel_websites/topics/totaldefence/about.html](https://www.naidef.gov/goms/imindle/mindel_websites/topics/totaldefence/about.html)

### Dataset Processing

To align with our research on multimodal memes, we applied strict filtering criteria. First, we performed simple quality filtering, removing memes with an image resolution smaller than 224x224 pixels and memes with text exceeding 50 word tokens. Second, we applied a text extraction tool based on the EasyOCR7 algorithm to retrieve the text found within each meme and removed the ones without text. Third, we utilized the pHash algorithm to identify groups of duplicates and preserved, for each group, the meme with the highest resolution (a code sketch of this step is given at the end of this section). Finally, we retained 5,301 memes, which constitute the final TotalDefMeme dataset. Footnote 7: [https://www.jaided.ai/easycore/](https://www.jaided.ai/easycore/)

### Dataset Annotation

We recruited six annotators who are familiar with Singapore's culture and knowledgeable about Singapore's Total Defence concept from a pool of shortlisted Singaporeans who passed a short screening survey. The annotators are tasked with identifying the Singapore-related memes and annotating the pillars, topics, and stances of these memes. Screenshots of the user interface can be seen in Figure 3.

Figure 2. Data Collection & Annotation Pipeline. Figure 3. Snapshot of the user interface used for _Meme Identification_ and _Pillars, Topics & Stances annotation_

**Meme Region Identification** The annotators conduct a manual review and distinguish Singapore-related multimodal memes from the diverse collection of visuals, including infographics and posters. Each meme can be categorised into two types: generic memes and Singapore-related memes. For example, a meme that addresses the topic of working overtime is considered a generic meme, as this scenario can occur in the majority of countries, while a meme that addresses the Singapore Police Force is considered a Singapore-related meme.
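As referenced above, the pHash-based de-duplication step from Section 3.2 can be sketched as follows; this minimal example uses the third-party `imagehash` package, and the file paths and the exact-hash grouping rule are simplifying assumptions on our part.

```python
from collections import defaultdict
from PIL import Image
import imagehash

def deduplicate(paths):
    """Group images by perceptual hash and keep the highest-resolution
    file in each group of (near-)duplicates."""
    groups = defaultdict(list)
    for path in paths:
        with Image.open(path) as im:
            groups[str(imagehash.phash(im))].append((im.width * im.height, path))
    # keep the largest image (by pixel count) in each perceptual-hash group
    return [max(group)[1] for group in groups.values()]

# kept = deduplicate(["memes/0001.png", "memes/0002.png"])  # hypothetical paths
```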
**Pillars, Topics & Stances Annotation** Upon identifying a Singapore-related meme, the annotators annotate its pillars, topics, and stances. The annotators first assign the meme's defence pillars: _military_, _civil_, _economic_, _social_, _psychological_, _digital_, or _others_. We included the "_others_" pillar as an option because some memes discuss daily occurrences and current affairs that do not involve any of the six defence pillars. Next, they annotate the relevant topic tags associated with the meme (i.e., nouns, pronouns, and phrases) in a free-text format. The annotators enter the most appropriate tags and are encouraged to enter as many relevant tags as possible. Lastly, the annotators annotate the meme's stances towards the assigned pillars: _support_, _against_, or _neutral_.

**Quality Control Measures** To ensure the reliability of the dataset, each meme is annotated by two annotators. If the two annotators' labels partially agree, the overlapping annotations are taken as the correct labels. However, if there are disagreements with entirely different perspectives, a third annotator is brought in to provide an additional annotation for the meme. The annotations on which at least two annotators agree are then considered the correct labels. In the extreme case where all three annotators have different opinions, the meme is flagged and removed from the dataset. Finally, we held review discussions with the annotators on the annotations with disagreements, allowing our annotators to receive feedback and improve their annotations.

### Dataset Analysis

Figure 4 shows a statistical summary of the TotalDefMeme dataset. More than half of the memes in our dataset are Singapore-related. On the defence pillar annotation, we notice a class-label imbalance: _Military_ is the dominant class, with over a third of the memes (36%) labeled as related to military defence. Conversely, only 1.9% and 2.3% of the memes are labeled with the _Digital_ and _Social_ classes, respectively. Interestingly, we also noted a substantial number of memes annotated as _Others_, which captures other Singapore-related but non-total-defence topics, adding diversity to the TotalDefMeme dataset. We computed Krippendorff's Alpha to measure the inter-annotator agreement for the various annotation labels. Table 1 presents the scores for the memes' _types_, _pillars_, and _stances_. Although the task requires in-depth contextual knowledge of Singapore and total defence, the annotators achieved moderate agreement, with alpha scores of 0.65 and 0.55 for _types_ and _pillars_, respectively, indicating quality annotations. The agreement for _stances_ is weaker (alpha score of 0.21) due to the subjectivity of the task. Nevertheless, we will release the annotations and annotator IDs in our dataset to facilitate further analysis.

## 4. Interdisciplinary research and applications

The purpose of publishing this unique dataset is to motivate social science and computer science researchers to explore interdisciplinary research and propose novel machine learning tasks. As a starting point, we foresee several applications and usage scenarios for this dataset.
A few examples are as follows:

* Analysis and assessment of a country's total defence readiness
* Multimodal aspect-based stance classification
* Multimodal meme clustering
* Domain adaptation of meme analysis

\begin{table} \begin{tabular}{c c c} \hline Types & Pillars & Stances \\ \hline \hline 0.65 & 0.55 & 0.21 \\ \hline \end{tabular} \end{table} Table 1. Krippendorff's Alpha for each task: Meme Type (3 classes), Defence Pillar (7 classes) and Stance (3 classes). Figure 4. Statistics about the TotalDefMeme dataset

A clear use case for the TotalDefMeme dataset is to support interdisciplinary computational social science research. Computational social scientists can analyze the dataset to answer social informatics and policy-related questions. Furthermore, the annotation framework can also be applied to annotate and analyze total defence memes from other countries, such as Russia, Ukraine, etc. Existing studies have explored sentiment and emotion classification in memes (Krishnan et al., 2017). Our proposed dataset extends this line of research to support multimodal aspect-based stance classification, where the task aims to predict the stance towards an entity, concept, or event illustrated in a meme. Specifically, in TotalDefMeme, a possible task is to predict the stance towards the total defence pillar illustrated in a given meme. This is challenging, as the machine learning model needs to interpret multi-modality information in the meme to predict both the total defence pillars and the corresponding stances. We benchmark several baselines on this task using TotalDefMeme in Section 5. Existing studies have explored the clustering of memes using unimodal approaches (Bartos et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015), where the memes' visuals are first captioned with text before topic models are applied to cluster the memes, or clustering is performed on the memes' visual representations. However, such approaches neglect the rich information in the interaction between the textual and visual modalities. Furthermore, evaluating clustering algorithms is challenging, as the ground truth is often unavailable. We hope that TotalDefMeme can encourage researchers to propose multimodal meme clustering methods. TotalDefMeme memes are also annotated with topics and total defence pillars, which can serve as ground truth for evaluating multimodal meme clustering algorithms. We also benchmark several baselines on this task using TotalDefMeme. Another potential use case of TotalDefMeme is to support domain adaptation of meme analysis. Multimodal meme analysis techniques can be trained using TotalDefMeme and evaluated on other meme datasets. This is particularly useful for analyzing total defence memes collected from other countries.

## 5. Experiments

### Meme Clustering

To perform the clustering of memes, we first compute each meme's representation using various unimodal and multimodal models. Specifically, for unimodal models, we adopted pre-trained BERT (Bartos et al., 2015) to compute a meme's representation from its text, and VGG16 (Krizhevsky et al., 2015) to extract a meme's representation from its visual. For multimodal models, we use VisualBERT (Krizhevsky et al., 2015) to compute the meme's representation using both its visual and text.
We also utilize CLIP (Krizhevsky et al., 2015) to extract a meme's visual and text representations separately before concatenating them to obtain the final meme representation. The computed meme representations are subsequently input into the k-means clustering algorithm. We set the number of clusters to 7, aiming to obtain pillar-wise semantic clusters. Table 2 shows the evaluation of the k-means clustering results using the various unimodal and multimodal embeddings. Specifically, we computed the Silhouette and Normalized Mutual Information (NMI) scores for the learned clusters. We observe that using multimodal embeddings achieves better clustering performance: VisualBERT embeddings have the highest Silhouette score (0.094), while CLIP embeddings achieved the highest NMI (0.091).

\begin{table} \begin{tabular}{c|c|c} \hline Embedding & Silhouette Score & NMI \\ \hline \hline BERT & 0.030 & 0.020 \\ VGG16 & -0.013 & 0.010 \\ VisualBERT & **0.094** & 0.006 \\ CLIP & 0.012 & **0.091** \\ \hline \end{tabular} \end{table} Table 2. Silhouette Score & NMI of the k-means clusters using various embeddings.

\begin{table} \begin{tabular}{c|c|c} \hline Models & Pillar & Stance \\ \hline \hline BERT & 0.18 & 0.30 \\ VGG & 0.14 & 0.30 \\ VisualBERT & 0.05 & 0.22 \\ CLIP & **0.57** & **0.54** \\ \hline \end{tabular} \end{table} Table 3. Accuracy scores for pillar and stance classification tasks.

Figure 5. t-SNE visualization of (a) CLIP and (b) VisualBERT embeddings.

Conversely, using only the text (i.e., BERT) or visual (i.e., VGG16) embeddings results in poor clustering performance. To further examine the multimodal embeddings, we show the t-SNE visualization of the k-means clustering trained using CLIP and VisualBERT embeddings in Figure 5. The scatterplots are color-coded with the annotated total defence pillars. Interestingly, we observed that the CLIP embeddings seem to cluster together according to the total defence pillar. In contrast, the VisualBERT embeddings seem to be scattered with no clear pillar-based patterns. Nevertheless, we note that the Silhouette score and NMI of simple k-means clustering with multimodal embeddings are still very low, highlighting the challenge of this task and the need to develop better multimodal meme clustering methods.

### Aspect-based Stance Classification

To perform aspect-based stance classification, we first perform a stratified split of TotalDefMeme by pillar into train, validation, and test sets using a 60/20/20 ratio. Next, we train the baselines on the train set. For unimodal baselines, we obtain BERT (Berritt et al., 2017) embeddings from the memes' text, and VGG16 (Krizhevsky et al., 2016) embeddings from the memes' visual, and train an MLP layer on top of the embeddings. For multimodal baselines, we use VisualBERT (Krizhevsky et al., 2016) and CLIP (Krizhevsky et al., 2016) embeddings and train an MLP layer using both the memes' text and visual. As TotalDefMeme is an imbalanced dataset, we upsampled the minority classes in the training set when training the models. For this experiment, we formulate aspect-based stance classification as a multitask classification problem where the model predicts both the pillars and the pillars' stances as separate outputs. To achieve this, we modified the loss function of the linear output layer of the baseline classifiers to predict the memes' pillars and the corresponding pillars' stances. Table 3 shows the baselines' accuracy scores for the pillar and stance classification tasks.
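A minimal sketch of such a multitask head follows: a shared MLP over pre-computed meme embeddings (e.g., CLIP features) with one output layer for the seven pillar classes and one for the three stance classes. The dimensions and hyper-parameters are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class PillarStanceHead(nn.Module):
    """Shared MLP with two classification heads for pillars and stances."""
    def __init__(self, emb_dim=1024, hidden=256, n_pillars=7, n_stances=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.pillar_head = nn.Linear(hidden, n_pillars)
        self.stance_head = nn.Linear(hidden, n_stances)

    def forward(self, x):
        h = self.shared(x)
        return self.pillar_head(h), self.stance_head(h)

model, ce = PillarStanceHead(), nn.CrossEntropyLoss()
x = torch.randn(8, 1024)                  # a batch of pre-computed meme embeddings
y_pillar = torch.randint(0, 7, (8,))
y_stance = torch.randint(0, 3, (8,))
logits_p, logits_s = model(x)
loss = ce(logits_p, y_pillar) + ce(logits_s, y_stance)   # joint multitask loss
loss.backward()
```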
We observe that CLIP achieved the best performance on both tasks. To further examine the predictions, we show example predictions of the best-performing model, CLIP, in Table 4. In example memes (a) and (c), CLIP correctly predicted the pillars and stances. More interestingly, we also noted cases where the model made incorrect predictions in this task. For instance, in example meme (b), CLIP correctly predicted the "psychology" pillar but missed the "economic" pillar. For example meme (d), CLIP correctly predicted the pillar but an incorrect stance. The case studies and experimental results suggest room for improvement in the aspect-based stance classification task. We hope the release of TotalDefMeme can encourage more researchers to develop better aspect-based stance classification techniques.

## 6. Conclusion

In this paper, we have introduced TotalDefMeme, a novel multimodal, multi-attribute meme dataset based on _Total Defence_. The dataset extends beyond the traditional hateful/offensive meme datasets by incorporating an aspect of public policy analysis from a social community perspective. We believe that social media memes capture a raw take on the bread-and-butter issues relating to governmental policies and initiatives, thereby revealing underlying public sentiments that may not be accurately captured by public surveys and questionnaires. This dataset is collected from the web without any preset limit on the number or domain of topics, and is annotated with the relevant defence pillars, associated topics, and stance labels based on expert knowledge. Therefore, it provides the research community with a baseline for a comprehensive study of social media memes with respect to Total Defence or related governmental policies. We envisage that the dataset will continue to push the boundaries of multimodal meme understanding and further research on visual-language aspect-based stance classification and multimodal meme clustering.
2308.00249
Spin dynamics of the $E_8$ particles
In this article, we report on inelastic neutron scattering measurements on a quasi-1D antiferromagnet BaCo$_2$V$_2$O$_8$ under a transverse magnetic field applied along the (0,1,0) direction. Combining results of inelastic neutron scattering experiments, analytical analysis, and numerical simulations, we precisely studied the $E_8$ excitations appearing in the whole Brillouin zone at $B_c^{1D}\approx 4.7$ T. The energy scan at $Q=(0,0,2)$ reveals a match between the data and the theoretical prediction of energies of multiple $E_8$ excitations. Furthermore, dispersions of the lightest three $E_8$ particles have been clearly observed, confirming the existence of the $E_8$ particles in BaCo$_2$V$_2$O$_8$. Our results lay down a concrete ground to systematically study the physics of the exotic $E_8$ particles.
Xiao Wang, Konrad Puzniak, Karin Schmalzl, C. Balz, M. Matsuda, Akira Okutani, M. Hagiwara, Jie Ma, Jianda Wu, Bella Lake
2023-08-01T03:09:16Z
http://arxiv.org/abs/2308.00249v1
# Spin dynamics of the \(E_{8}\) particles ###### Abstract In this article, we report on inelastic neutron scattering measurements on a quasi-1D antiferromagnet BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) under a transverse magnetic field applied along the (0,1,0) direction. Combining results of inelastic neutron scattering experiments, analytical analysis, and numerical simulations, we precisely studied the \(E_{8}\) excitations appearing in the whole Brillouin zone at \(B_{c}^{1D}\approx 4.7\) T. The energy scan at \(Q=(0,0,2)\) reveals a match between the data and the theoretical prediction of energies of multiple \(E_{8}\) excitations. Furthermore, dispersions of the lightest three \(E_{8}\) particles have been clearly observed, confirming the existence of the \(E_{8}\) particles in BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). Our results lay down a concrete ground to systematically study the physics of the exotic \(E_{8}\) particles. Unlike the classical phase transition driven by thermal fluctuations, the quantum phase transition arises at zero temperature when the system is tuned by a non-thermal parameter [1]. For a continuous quantum phase transition, novel physics with higher symmetry may emerge at the quantum critical point (QCP), characterized by a set of critical exponents manifested in scaling forms [1]. Moreover, when the system is driven away from the QCP by a relevant perturbation, exotic physics may further emerge due to the strong renormalization of the almost infinite low-lying excitations, which is "emergence of emergence" [2; 3; 4]. One such paradigmatic model is the transverse-field Ising chain (TFIC) [1; 5]. When an Ising chain is tuned to its QCP by a magnetic field applied transverse to its Ising anisotropy, a central charge 1/2 conformal field theory emerges, with the corresponding scaling exponents falling into the Ising universality class (see Fig. 1) [5; 6]. Surprisingly, when it is further perturbed by a longitudinal field parallel to the Ising direction, the quantum \(E_{8}\) integrable model emerges: a massive relativistic quantum field theory containing eight massive \(E_{8}\) particles whose relative masses have precise values, as listed in Table 1. The physics of the model is described by the scattering of the \(E_{8}\) particles, which is characterized by the maximal exceptional Lie \(E_{8}\) algebra [2; 7; 8; 9; 10]. For the experimental realization of the exotic \(E_{8}\) physics, two conditions need to be satisfied: access to the Ising universality and the presence of a small perturbation field along the Ising direction. An early inelastic neutron scattering (INS) experiment on the ferromagnetic chain compound CoNb\({}_{2}\)O\({}_{6}\) provided evidence for the existence of the lightest two particles: the ratio of the energies of the lowest two peaks echoes the golden ratio of the two lightest \(E_{8}\) particles' masses [11]. However, there is an apparent deviation in the spectrum continuum region between the recent THz experiment on the material [9] and the analytical result [10], which implies that more effort [12; 13] is needed to confirm the existence of the \(E_{8}\) physics in the material CoNb\({}_{2}\)O\({}_{6}\).
Recently, the quasi-1D Heisenberg-Ising antiferromagnetic (AFM) material BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) (a member of a family of materials with formula \(AM_{2}\)V\({}_{2}\)O\({}_{8}\), in which \(A\) = Sr, Ba and the magnetic ions are \(M\) = Cu, Ni, Co, Mn) has attracted the attention of several experimental studies [14; 15]. When the materials are tuned by a transverse magnetic field, quantum criticality with Ising universality is observed [8]. The \(E_{8}\) excitations with zero momentum transfer were then sought at low temperatures, where the interchain coupling and long-range magnetic order provide the longitudinal field [8; 9]. These \(E_{8}\) excitations were measured by inelastic neutron scattering [8] (in the former case all the \(E_{8}\) particles were found) and terahertz spectroscopy [9] (in the latter case the excitations up to the fifth \(E_{8}\) particle were found) and compared successfully to theory. However, all these measurements were confined to _only_ the AFM zone center, where the precise \(E_{8}\) masses have already been calculated. Furthermore, other peaks due to combinations of the \(E_{8}\) particles and zone folding were also observed, which, while fully explainable, resulted in many overlapping excitations, decreasing the certainty of the results. Considering that the \(E_{8}\) model is a massive relativistic quantum field theory, a full investigation of the _relativistic dispersion_ of the \(E_{8}\) particles in the whole Brillouin zone is necessary for a complete realization of the \(E_{8}\) physics in the material BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). In this work, we combine and compare experimental, analytical, and numerical approaches for the dispersion of the lightest three branches of excitations and unambiguously demonstrate the existence of the exotic \(E_{8}\) particles in the AFM material BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) crystallizes in tetragonal symmetry (space group \(I4_{1}\)/acd, No. 142) with the lattice parameters \(a\) = \(b\) = 12.40 Å and \(c\) = 8.375 Å [15]. The magnetic Co\({}^{2+}\) ions have an effective spin of \(S=\frac{1}{2}\) and are arranged in edge-sharing CoO\({}_{6}\) octahedra forming 4-fold screw chains running along the **c**-axis (see inset of Fig. 1). There are four screw chains per unit cell, two of which rotate clockwise while the other two rotate anticlockwise [16]. The Co\({}^{2+}\) ions are coupled by strong AFM interactions within the screw chains, which have partial Ising (XXZ) anisotropy favouring spin directions parallel to the **c**-axis. In the absence of an external magnetic field, BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) develops long-range magnetic Néel order below \(T_{\rm N}=5.5\) K due to weak interchain coupling, where neighboring spins are aligned antiferromagnetically along the screw chains and ferromagnetically (antiferromagnetically) between chains along the **a** (**b**) direction, respectively. The spins point almost parallel to the **c**-axis. Under an external transverse magnetic field applied perpendicular to the spin direction (e.g. parallel to the **b**-axis), BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) undergoes a three-dimensional (3D) quantum phase transition at \(\mu_{0}H_{\perp}^{c,3D}\approx 10.3\) T (Fig. 1) that was identified, by a combination of field theory, numerical analysis, and neutron scattering experiments, as a spin-flop transition from the **c** to **a** directions [14].
This transition originates from the complex \(g\)-factor of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). A one-dimensional (1D) quantum phase transition was also discovered at the lower transverse magnetic field of \(\mu_{0}H_{\perp}^{c,1D}=4.7\) T [8; 17]. This transition lies hidden within the dome of the 3D magnetic order (see Fig. 1), which provides the (staggered) longitudinal magnetic field required for the emergence of the \(E_{8}\) quantum particles. The aim of this paper is to investigate BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) at its putative critical transverse field \(\mu_{0}H_{\perp}^{c,1D}\approx 4.7\) T [8] by combining an inelastic neutron scattering study of the dispersion of the \(E_{8}\) particles with a theoretical analysis based on the infinite time-evolving block decimation (iTEBD) technique [18]. The plan of this paper is as follows: firstly we describe our experimental and theoretical approaches, then the neutron measurements of the dispersion of the \(E_{8}\) particles are presented and compared to the numerical simulations based on the iTEBD technique. Excellent agreement is achieved as a function of wavevector and energy for the lowest three \(E_{8}\) particles. Two large, high-quality single crystals of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) were grown using the floating-zone technique at Osaka University, Japan, and at the Core Lab for Quantum Materials, Helmholtz-Zentrum Berlin für Materialien und Energie (HZB), Germany. Inelastic neutron scattering was performed to measure the magnetic excitations on the cold neutron multichopper spectrometer LET (at the ISIS Facility, Rutherford Appleton Laboratory, UK) using the HZB crystal (mass 4.13 g) [19]. INS experiments were also performed on the cold neutron triple-axis spectrometer IN12 of the Forschungszentrum Jülich Collaborating Research Group (FZJ-CRG), installed at the Institut Laue-Langevin (ILL), France, using the Osaka crystal (mass 3.66 g). For the LET experiment, the single crystal was aligned in the (0,K,L) horizontal scattering plane and a vertical field cryomagnet was used to apply a constant magnetic field of \(B=4.7\) T along the **a**-axis to reach the 1D QCP. These measurements were carried out at \(T=0.3\) K using a \({}^{3}\)He-insert. This temperature is well below the Néel temperature (\(T_{\rm N}=5.5\) K), ensuring the presence of the effective longitudinal perturbing field necessary to stabilize \(E_{8}\) physics. Using repetition rate multiplication and the chopper frequencies 280/140 Hz, incident neutron energies of \(E_{i}\) = 22.69, 13.21, 8.51, 6.00, 4.42, 3.42, 2.70 meV were achieved, with corresponding elastic energy resolutions of \(\Delta E\) = 0.91, 0.41, 0.22, 0.14, 0.094, 0.065, 0.048 meV. The INS data were processed using the MANTID and HORACE software packages and converted to absolute units. The spectrum for the incident energy of 6 meV is displayed in Fig. 2 as a function of energy and wavevector transfer along the chain direction, (0,0,L). For the IN12 experiment, the crystal was aligned with the **a**- and **c**-axes within the horizontal instrumental scattering plane and a vertical DC magnetic field of 4.7 T was applied parallel to the **b**-axis. A fixed final wavevector of \(k_{f}=1.15\) Å\({}^{-1}\) was used, giving an energy resolution of \(\Delta E\approx 0.114\) meV and a wavevector resolution of \(\approx 0.067\) r.l.u. A beryllium filter was used to suppress higher-order wavelengths and spurious scattering.
A series of energy scans at constant wavevector in the range from **Q** = (0,0,1.5) to (0,0,2.5) were performed over the energy range from \(E=0\) to \(5\) meV, with steps of at least 0.05 meV. Constant-energy scans were also performed within this range. These measurements took place at a temperature of \(T=1.5\) K (\(\ll T_{\rm N}\)). The constant-energy and constant-wavevector scans are combined together to make the energy-wavevector map in Fig. 3 (c). Figure 1: Schematic phase diagram of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) in a transverse magnetic field; light blue indicates the AFM ordered phase, while the deep blue area covers the location of the emergent exotic \(E_{8}\) particles around the putative 1D QCP. The inset shows one of the CoO\({}_{6}\) screw chains. When applying a transverse field along the (0,1,0) direction, the effective Hamiltonian for BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) is described by a 1D spin-1/2 Heisenberg-Ising model [8; 9; 17; 20; 21]: \[\begin{split}\mathcal{H}&=H_{XXZ}+H_{t}+H_{s}\\ H_{XXZ}&=J\sum_{n}[S_{n}^{z}S_{n+1}^{z}+\epsilon(S_{n}^{x}S_{n+1}^{x}+S_{n}^{y}S_{n+1}^{y})]\\ H_{t}&=-\mu_{B}g_{yy}H\sum_{n}[S_{n}^{y}+h_{x}(-1)^{n}S_{n}^{x}\\ &\quad+h_{z}\cos(\pi\frac{2n-1}{4})S_{n}^{z}]\\ H_{s}&=-\mu_{B}H^{\prime}\sum_{n}(-1)^{n}S_{n}^{z}\end{split} \tag{1}\] where \(S_{n}^{\alpha}=\frac{1}{2}\sigma_{n}^{\alpha}\) (\(\alpha=x,y,z\)) are spin-1/2 operators at site \(n\) with Pauli matrices \(\sigma^{\alpha}\), and \(J=5.8\) meV, \(\epsilon=0.46\), \(h_{x(z)}=0.4\,(0.14)\), \(g_{yy}=2.75\). The applied transverse field is set to \(\mu_{0}H=4.7\) T, which is the critical field of the putative 1D QCP [8; 21]. The effective staggered longitudinal field \(\mu_{B}H^{\prime}=0.018J\) comes from a mean-field treatment of the inter-chain coupling in the 3D ordering region below \(T_{\text{N}}\) [8; 14]. \(H_{s}\) provides the necessary relevant perturbation for realizing the quantum \(E_{8}\) physics [8]. Focusing on the parameter region around the putative 1D QCP, in the scaling limit, the effective Hamiltonian of the spin chain becomes [7; 8; 10] \[\mathcal{H}_{E_{8}}=\mathcal{H}_{c=1/2}+h\int dx\sigma(x). \tag{2}\] \(\mathcal{H}_{c=1/2}\) is the Hamiltonian for a central charge 1/2 conformal field theory, which describes the quantum critical physics of the TFIC. \(h\) and \(\sigma(x)\), corresponding to the scaling limits of \(\mu_{B}H^{\prime}\) and \(\sigma_{j}^{z}\), are the strength of the perturbation field and the relevant primary field, respectively. To determine the dispersion of the \(E_{8}\) particles and compare with the spectrum measured by INS, we calculate the spin dynamic structure factor (DSF) in the field theory frame, \(D^{\alpha\alpha}(\omega,q)=\sum_{n=1}^{\infty}\frac{(2\pi)^{2}}{\prod_{a_{i}=1}^{8}n_{a_{i}}!}\int_{-\infty}^{\infty}\prod_{j=1}^{n}\frac{d\theta_{j}}{2\pi}|\langle 0|\sigma^{\alpha}|A_{a_{1}}(\theta_{1})...A_{a_{n}}(\theta_{n})\rangle|^{2}\)\(\delta(\omega-\sum_{j=1}^{n}E_{j})\delta(q-\sum_{j=1}^{n}P_{j})\), where \(\alpha=x,z\), and \(a_{i}=1...8\) label the quasi-particles obtained from the quantum \(E_{8}\) integrable theory [2; 7; 8; 10]. \(n_{a_{i}}\) is the number of particles of species \(a_{i}\) involved in the corresponding channel. \(E_{j}=m_{a_{j}}\cosh\theta_{j}\) and \(P_{j}=m_{a_{j}}\sinh\theta_{j}\) are the energy and momentum of particle \(a_{j}\) in terms of the rapidity \(\theta\). The two Dirac \(\delta\)-functions reflect the energy and momentum conservation of the scattering.
The DSF of \(\sigma^{x,z}\) can be directly calculated from the quantum \(E_{8}\) integrable field theory [10], and the DSF of \(\sigma^{y}\) can be obtained from the DSF of \(\sigma^{z}\) [22]. The analytical result for the dispersion of the lightest three \(E_{8}\) particles is shown in Fig. 3(a). For a better comparison of the theoretical prediction from quantum \(E_{8}\) field theory with the INS experimental result, two subtle issues are worth noting. \begin{table} \begin{tabular}{l|c c c c c c c c c c c c} \hline \hline Single & \(m_{1}\) & \(m_{2}\) & \(m_{3}\) & & \(m_{4}\) & & \(m_{5}\) & & & \(m_{6}\) & & \(m_{7}\) \\ Multi & & & & \(2m_{1}\) & & \(m_{1}\)+\(m_{2}\) & & \(m_{1}\)+\(m_{3}\) & \(3m_{1}\) & & \(2m_{2}\) & \\ \hline Theoretical ratio \(m_{i}/m_{1}\) & 1 & 1.618 & 1.989 & 2 & 2.405 & 2.618 & 2.956 & 2.989 & 3 & 3.218 & 3.236 & 3.891 \\ \hline BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) theory [meV] & 1.26 & 2.04 & 2.50 & 2.52 & 3.03 & 3.30 & 3.72 & 3.73 & 3.75 & 4.05 & 4.08 & 4.90 \\ \hline LET (0,0,2) [meV] & 1.26 & 2.04 & 2.52 & - & - & - & - & - & - & - & - & - \\ \hline IN12 (0,0,2) [meV] & 1.26 & 2.05 & 2.49 & - & - & - & - & - & - & - & - & - \\ \hline iTEBD (0,0,2) [meV] & 1.26 & 2.06 & 2.50 & 2.52 & 3.06 & 3.32 & 3.64 & 3.70 & 3.78 & 4.04 & 4.10 & 4.84 \\ \hline \end{tabular} \end{table} Table 1: Predicted ratios of the energies of the single \(E_{8}\)-particle and multi-particle excitations, together with their expected values in BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). The experimental excitation energies at \(\mathbf{Q}\) = (0,0,2) from LET and IN12 are then listed, along with the values obtained from iTEBD. Figure 2: Magnetic intensity of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) measured in a transverse magnetic field of \(\mu_{0}H_{\perp}^{c,1D}=4.7\) T at \(T=0.3\) K using the LET spectrometer. The data is displayed in absolute units as a function of wavevector \(\mathbf{Q}\) = (0,0,L) and energy, for neutron incident energy \(E_{i}\) = 6 meV (integration range: \(-1.0\leq H\leq 1.0\) & \(-2\leq K\leq 2\)). 1. In the above field theory frame calculation, the speed of light is set as \(c=1\). For the quantum \(E_{8}\) model, as a massive relativistic quantum field theory, the dispersion of the \(E_{8}\) particles follows the massive relativistic dispersion \(E_{i}^{2}=\Delta_{i}^{2}+p_{i}^{2}c^{2}\), where \(\Delta_{i}=m_{i}c^{2}\) and \(p_{i}=m_{i}c\), with \(m_{i}\) the "rest mass" of the \(i^{\rm th}\) \(E_{8}\) particle and \(c\) the "speed of light". When coming to a real material, which is actually a lattice discrete in space, we need to re-scale the dispersion of the \(E_{8}\) particles with a proper energy scale and length (momentum) scale serving as IR cutoffs. The theoretically expected energy peak corresponding to the lightest \(E_{8}\) particle \(m_{1}\) can be estimated from \(E_{m_{1}}^{\text{theory}}=C_{\text{lattice}}H^{\prime 8/15}\approx 1.2\,\)meV [23], where \(C_{\text{lattice}}=4.010\cdots\) is a modified constant for the lattice which originally comes from quantum \(E_{8}\) field theory [24]. The value of \(E_{m_{1}}^{\text{iTEBD}}\) matches the minimum gap \(\Delta_{1}=1.26\,\)meV observed at the zone center (corresponding to zero momentum transfer), thus \(\Delta_{1}\) can naturally serve as the IR cutoff of the energy scale for the experimental data. Since \(\Delta_{1}=m_{1}c^{2}\), we can then pick the corresponding IR momentum cutoff \(p_{1}=m_{1}c\).
By applying these two IR cutoff scales we arrive at \[\frac{E_{i}^{2}}{\Delta_{1}^{2}}=\frac{\Delta_{i}^{2}}{\Delta_{1}^{2}}+\frac{p_{i}^{2}}{(\Delta_{1}/c)^{2}}=\frac{\Delta_{i}^{2}}{{\Delta_{1}}^{2}}+\left(\frac{\hbar(L-2)\pi/2d}{m_{1}c}\right)^{2}, \tag{3}\] where \(p_{i}=\hbar(L-2)\pi/2d\) is the momentum transfer with respect to **Q** = (0,0,2) and \(d=8.4192/4=2.105\) Å is the nearest neighbor distance between Co\({}^{2+}\) ions projected onto the chain direction. We need to determine the value of \(c\) to obtain the IR cutoff for the momentum; its value cannot be uniquely determined by the analytical theory but actually depends on the microscopic details of the material. 2. The four-fold periodicity of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) leads to a sizable zone-folding effect in the experimental measurement, which causes the \(E_{8}\) particles' dispersion to be shadowed by additional spectra. Such an effect cannot be obtained from the field theory calculation; instead, we need to go back to the original effective lattice model. By comparing spectra obtained from the lattice model and the field theory, the \(E_{8}\) particles' dispersion can be extracted. Figure 3: (a) Analytical calculation of the dispersion of the three lightest \(E_{8}\) particles. (b) Numerical simulation of the dispersion using the iTEBD method, based on Eq. (4). The dashed lines illustrate the dispersions of the three lightest \(E_{8}\) particles given by Eq. (3). (c) Energy-wavevector map of the magnetic excitations constructed from constant-energy and constant-wavevector scans measured on the IN12 spectrometer at \(B_{c}^{1D}\) = 4.7 T and \(T=1.5\) K. The black symbols are the peak positions extracted from fitting the individual scans, whereas the dashed lines are fits of the dispersions of the lowest three \(E_{8}\) particles using Eq. (3). (d) shows an energy scan measured at the wavevector \(Q\) = (0,0,2) on LET and IN12, with the theoretically predicted energies of the first \(E_{8}\) excitations indicated by the red solid lines. Identified peaks are labelled \(m_{n}\) (single \(E_{8}\) excitations), \(m_{n}+m_{m}\) (multi-\(E_{8}\) excitations) and \(F_{n}\) (zone-folding peaks). To make these two subtle issues clear, we carry out an iTEBD simulation for the effective Hamiltonian Eq. (1) with \(J=1\), \(\epsilon=0.47\), and critical field \(\mu_{B}g_{yy}H=0.15\) [8; 21], \[\begin{split} D^{\alpha\alpha}_{\rm lat}(\omega,q)=\frac{1}{N}&\sum_{j,j^{\prime}=1}^{N}\exp\{-iq(j-j^{\prime})\}\\ &\times\int_{-\infty}^{\infty}dt\exp(i\omega t)\langle S^{\alpha}_{j}(t)S^{\alpha}_{j^{\prime}}(0)\rangle,\end{split} \tag{4}\] with total number of lattice sites \(N\) (\(N\rightarrow\infty\) in iTEBD), and spin-1/2 operators \(S^{\alpha},\alpha=x,y,z\). The iTEBD simulation result is shown in Fig. 3 (b), where the value of the "speed of light" is found to be \(c\approx(1.441\pm 0.096)\times 10^{3}\) m/s and the zone-folding effect can be identified as well. The procedure of the iTEBD calculation is as follows: 1. Generate a four-periodic ground state wave function of the effective Hamiltonian Eq. (1) with the above parameters. The imaginary time evolution is done with a fifth-order Trotter-Suzuki decomposition [25], where the imaginary time step is set as \(d\tau=0.01\). The convergence condition is chosen as the difference of the norm of singular values in the matrix product states being smaller than \(10^{-12}\). The truncated dimension is chosen as \(\chi=45\) [18; 26]. 2. Calculate the DSF [Eq.
(4)] for \(S^{x}\) and \(S^{z}\); the DSF of \(S^{y}\) can be obtained from the DSF of \(S^{z}\) by using \(D^{yy}(\omega,q)=\omega^{2}D^{zz}(\omega,q)/(4J^{2})\) [22]. For calculating this DSF with the iTEBD algorithm, we first perform real-time and real-space propagation in the Heisenberg picture; then, using a Fourier transformation, we transform the obtained result into momentum and energy space to obtain the final spectrum. The real-time evolution is done by a second-order Trotter-Suzuki decomposition with \(t=200,\ dt=0.02\) to obtain a result of relatively high accuracy near the Brillouin zone center. 3. The zone-folding effect must be taken into account when obtaining the final spectrum for comparison with the INS experimental results. The magnetic excitations of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), measured using the LET time-of-flight spectrometer at \(T\) = 0.3 K and \(\mu_{0}H=\mu_{0}H_{\perp}^{c,1D}\) (= 4.7 T) with an incident neutron energy of 6 meV, are shown in Fig. 2 as a function of energy and wavevector transfer along the chain direction. To complement this, Fig. 3 (c) shows the low-energy excitations built from the energy- and wavevector-scans measured at \(T=1.5\) K and \(\mu_{0}H=4.7\) T on the IN12 spectrometer. An incredibly rich series of modes is found, with complex dispersions and intensity modulations. First of all, the 4-fold screw-chain structure about the **c**-axis gives rise to four independent chain spectra shifted consecutively in wavevector along the chain by \(\Delta L=1\) r.l.u. Each individual spectrum is periodic over an interval of \(\Delta L=4\) r.l.u. due to the fact that there are four Co\({}^{2+}\) ions along each chain per unit cell (because of the screw-chain structure). Together these four spectra ensure that an antiferromagnetic zone center, where the excitations are at a minimum, is found at every integer value of \(L\). At low energies, each spectrum is expected to consist of the \(E_{8}\) particles observed as sharp (resolution-limited) gapped modes. In addition, multi-particle excitations are expected, due to the simultaneous creation of two or more \(E_{8}\) particles, such as \(m_{1}+m_{2}\). These excitations form a continuum with a sharp lower boundary. Finally, zone-folding modes are expected, which are a consequence of the screw-chain structure of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). Unlike the other excitations, these can be incommensurate, with minima that do not always occur at integer values of \(L\). According to theory, the ratios of the energies of the eight \(E_{8}\) particles have precise values [27]. These values are given in Table 1, along with the expected values of the multi-particle excitations. The excitations were indeed observed previously at these energies at the antiferromagnetic zone centers (integer \(L\)-wavevectors) of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [8]. Here we reconfirm this observation using our higher-resolution (but lower-intensity) data. The energy scans through our LET and IN12 datasets are shown in Fig. 3(d). For both datasets, the first peak is observed at 1.26 meV. Multiplying the \(E_{8}\) particle ratios by this value gives the theoretically expected peak positions for BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) listed in Table 1, which are also indicated by the solid vertical red lines in Fig. 3(d).
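As a concrete check of this arithmetic, the short Python sketch below evaluates the closed-form \(E_{8}\) mass ratios (the standard values from Zamolodchikov's integrable theory, not expressions quoted from this paper) and multiplies them by the observed \(m_{1}=1.26\) meV, reproducing the predicted single- and multi-particle peak energies of Table 1 up to rounding.

```python
import numpy as np

cos = np.cos
# Exact E8 mass ratios (Zamolodchikov 1989), with m1 = 1:
m1 = 1.0
m2 = 2 * cos(np.pi / 5)                      # the golden ratio, ~1.618
m3 = 2 * cos(np.pi / 30)                     # ~1.989
m4 = 2 * m2 * cos(7 * np.pi / 30)            # ~2.405
m5 = 2 * m2 * cos(2 * np.pi / 15)            # ~2.956
m6 = 2 * m2 * cos(np.pi / 30)                # ~3.218
m7 = 4 * m2 * cos(np.pi / 5) * cos(7 * np.pi / 30)   # ~3.891
m8 = 4 * m2 * cos(np.pi / 5) * cos(2 * np.pi / 15)   # ~4.783

m1_obs_mev = 1.26  # first observed peak at Q = (0,0,2)

peaks = {"m1": m1, "m2": m2, "m3": m3, "2m1": 2 * m1, "m4": m4,
         "m1+m2": m1 + m2, "m5": m5, "m1+m3": m1 + m3, "3m1": 3 * m1,
         "m6": m6, "2m2": 2 * m2, "m7": m7}
for name, ratio in peaks.items():
    print(f"{name:>5s}: ratio {ratio:5.3f} -> {ratio * m1_obs_mev:4.2f} meV")
# e.g. m2 -> 2.04 meV and m1+m2 -> 3.30 meV, in line with Table 1
```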
The first three peaks in the LET and IN12 experimental data (at \(m_{1}=1.26\), \(m_{2}=2.04\), \(m_{3}=2.52\) meV and at \(m_{1}=1.26\), \(m_{2}=2.05\), \(m_{3}=2.49\) meV, respectively) clearly lie at the expected positions of the first three \(E_{8}\) peaks (it should be noted that the multi-particle peak \(2m_{1}\) coincides with \(m_{3}\)). It should also be noted that close to \(m_{2}\) there is a zone-folding peak at 1.98 meV (indicated by \(F_{0}\)). There is also another zone-folding peak at 2.75 meV (indicated by \(F_{1}\)). The fourth \(E_{8}\) excitation at \(m_{4}=3.04\) meV is very weak, while the feature at 3.30 meV is the lower boundary of the \(m_{1}+m_{2}\) multi-particle continuum. At higher energies, it is difficult to distinguish the \(E_{8}\) peaks due to overlapping continua and zone-folding modes. The positions of the peaks were found by fitting Gaussians. They are listed in Table 1 and are in agreement with the results of H. Zou _et al._ [8]. Because we collected high-precision data over a wide range of wavevectors, rather than at just the antiferromagnetic zone centers, we now have the opportunity to observe the behavior of the \(E_{8}\) particles, as well as the other excitations, as a function of wavevector as well as energy. Returning to the energy-wavevector plots in Figs. 2 and 3(c), it is clear that the three lowest \(E_{8}\) excitations actually form dispersive modes with parabolic curvature and a minimum at **Q** = (0,0,2). The zone-folding modes are now easily identified, such as the dispersive excitations which have minima at the incommensurate wavevectors (0,0,1.875) and (0,0,2.15) at \(E\) = 1.9 meV and overlap with \(m_{2}\) at (0,0,2). Finally, above 3 meV, broad diffuse scattering is observed due to the multi-particle continua and overlapping modes. The dispersions of the \(E_{8}\) particles are expected to follow the theoretical expression given by Eq. (3), which can be rewritten as \[E_{i}=\sqrt{\Delta_{i}^{2}+\left(\gamma\cdot\left(L-2\right)\right)^{2}}, \tag{5}\] from which the single parameter \(\gamma=\frac{\hbar\pi c}{2d}\) can be extracted. We simultaneously fit the lowest three \(E_{8}\) dispersions and obtain the value \(\gamma=8.07\) meV. The fitted dispersions are given by the red lines in Fig. 3 (c) and show good agreement with the data. The 'speed of light' was extracted from \(\gamma\) and found to be \(c\approx(1.643\pm 0.041)\times 10^{3}\) m/s (using \(d=2.105\) Å, the projection of the nearest neighbor Co\({}^{2+}\)-Co\({}^{2+}\) distance onto the **c**-axis). The value of \(\gamma\) found from fitting the lowest three \(E_{8}\) dispersions in the case of the LET data is \(\gamma=8.81\) meV, and the 'speed of light' is found to be \(c\approx(1.794\pm 0.008)\times 10^{3}\) m/s. This value agrees well with the value of \(c\approx(1.441\pm 0.096)\times 10^{3}\) m/s found from iTEBD. We further compare experimental, analytical, and numerical data with constant-momentum scans [Fig. 4(a)] and constant-energy scans [Fig. 4(b)]. For the former comparison, in Fig. 4(a), the analytical data are not included due to the presence of zone-folding effects for energies above 2 meV. The iTEBD data agree very well with the experimental data. For the latter comparison, in Fig. 4(b), in order to avoid contamination from the zone-folding effect, the energy window is chosen to match the dispersion of the lightest \(E_{8}\) particle. The analytical and iTEBD data show excellent agreement with each other, and both show good agreement with the experimental data.
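The single-parameter dispersion fit of Eq. (5) is straightforward to reproduce numerically. A minimal sketch, assuming a small list of fitted peak positions (branch index, L, E) has been read off the scans (the numbers below are illustrative placeholders consistent with \(\gamma\approx 8.07\) meV, not the actual fitted peak list): \(\gamma\) is recovered by least squares and then converted to the 'speed of light' via \(c=2d\gamma/(\hbar\pi)\).

```python
import numpy as np
from scipy.optimize import curve_fit

HBAR_EV_S = 6.582119569e-16      # hbar in eV*s
D_M = 2.105e-10                  # Co-Co distance projected on the c-axis (m)
GAPS_MEV = [1.26, 2.05, 2.49]    # Delta_i at Q = (0,0,2) from IN12

def dispersion(x, gamma):
    """E_i(L) = sqrt(Delta_i^2 + (gamma*(L-2))^2); x = (branch index, L)."""
    idx, L = x
    delta = np.asarray(GAPS_MEV)[idx.astype(int)]
    return np.sqrt(delta**2 + (gamma * (L - 2.0))**2)

# (branch, L, E_meV) peak positions -- illustrative placeholder values
peaks = np.array([(0, 2.00, 1.26), (0, 2.10, 1.50), (0, 2.20, 2.05),
                  (1, 2.00, 2.05), (1, 2.10, 2.20),
                  (2, 2.00, 2.49), (2, 2.10, 2.62)])
idx, L, E = peaks.T
(gamma,), _ = curve_fit(dispersion, (idx, L), E, p0=[8.0])
c = 2 * D_M * (gamma * 1e-3) / (HBAR_EV_S * np.pi)   # meV -> eV, then m/s
print(f"gamma = {gamma:.2f} meV, speed of light c = {c:.0f} m/s")
# with gamma ~ 8.07 meV this gives c ~ 1.64e3 m/s, as quoted in the text
```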
We note that there is an approximately \(1\%\) deviation in momentum for the peak position, corresponding to a \(1\%\) shift in the \(m_{1}\) energy. This is possibly because the transverse field applied during the experiment is slightly smaller than the exact critical field, which can result in slightly heavier \(E_{8}\) particles and hence a slightly larger minimum gap \(m_{1}\). A \(1\%\) shift in \(m_{1}\) corresponds to a shift of about 0.1 T to 0.2 T from the exact critical field, whose value lies within the range of the identified putative QCP, \(4.7\pm 0.3\) T [8]. To conclude, the quasi-1D antiferromagnet BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) is a very important material in the field of quantum magnetism. Among its unique properties are: Ising-like anisotropy, large intrachain versus weak but non-negligible interchain interactions, an anisotropic \(g\) tensor producing easy-axis anisotropy, and effective staggered fields under the application of an external magnetic field. Combining the results of INS experiments and theoretical and numerical iTEBD simulations, we have precisely studied the \(E_{8}\) excitation spectrum appearing at the one-dimensional quantum critical point of \(B_{c}^{1D}\) = 4.7 T. The observation of dynamical spectra through INS, together with excellent agreement with the analytical analysis and iTEBD numerical simulations, enabled us to observe the dispersion of the first three \(E_{8}\) particles and several multi-particle modes, paving the way toward possible manipulation of the \(E_{8}\) particles. Figure 4: (a) Comparison of DSF results between the iTEBD and the LET experimental data for constant-momentum cuts. (b) Comparison of DSF results between the iTEBD data, the analytical data, and the experimental IN12 and LET data for constant-energy cuts. The presented LET experimental constant-momentum and constant-energy cuts are shifted by an offset of 0.004 r.l.u. (the offset was caused by experimental conditions). All DSF intensities are normalized to the maximum intensity of the experimental data. The crystal growth and characterization took place at the Core Laboratory Quantum Materials, Helmholtz-Zentrum Berlin für Materialien und Energie, Germany, and at the Center for Advanced High Magnetic Field Science, Graduate School of Science, Osaka University. The authors would like to thank the ISIS and ILL facilities for the allocation of neutron beam time. We would also like to thank C. Fritsche for his help in the preparation of the sample holder. K. P. is also very grateful to Dr. C. Rohr for all his important suggestions about the general structure of the article. The LET data ([https://doi.org/10.5286/ISIS.E.RB2210086](https://doi.org/10.5286/ISIS.E.RB2210086)) were reduced using Mantid and were analyzed using the Horace-MATLAB software package. This work has, in part, been supported by the National Natural Science Foundation of China Nos. U2032213 (J. M.) and 12274288 (X. W. and J. W.), the Innovation Program for Quantum Science and Technology Grant Nos. 2021ZD0301900 (X. W. and J. W.) and 2022YFA1402702 (J. M.), the Natural Science Foundation of Shanghai with grant No. 20ZR1428400 and the Shanghai Pujiang Program with grant No. 20PJ1408100 (X. W. and J. W.), and Grants-in-Aid for Scientific Research (Nos. 25220803 and 24244059) from MEXT. X. W. and K. P. contributed equally to this study. J. W., J. M., and B. L. conceived and coordinated the project. J. M., C. B., and B. L. designed the experiment. X. W. and J. W. carried out the analytical and iTEBD calculations and provided the theoretical analysis. X. W., K. P., J. M., J. W., and B. L. wrote the manuscript.
## References * Sachdev (2011) S. Sachdev, _Quantum Phase Transitions_ (Cambridge University Press, Cambridge, England, 2011) pp. 1-521. * Zamolodchikov (1989) A. B. Zamolodchikov, Int. J. Mod. Phys. A **4**, 4235 (1989). * Dorey (1997) P. Dorey, Lect. Notes Phys. **498**, 85 (1997). * Braden et al. (1990) H. Braden, E. Corrigan, P. Dorey, and R. Sasaki, Nucl. Phys. B **338**, 689 (1990). * Pfeuty (1970) P. Pfeuty, Ann. Phys. **59**, 79 (1970). * Boyanovsky (1989) D. Boyanovsky, Phys. Rev. B **39**, 6744 (1989). * Delfino and Mussardo (1995) G. Delfino and G. Mussardo, Nucl. Phys. B **455**, 724 (1995). * Zou et al. (2021) H. Zou, Y. Cui, X. Wang, Z. Zhang, J. Yang, G. Xu, A. Okutani, M. Hagiwara, M. Matsuda, G. Wang, and et al., Phys. Rev. Lett. **127** (2021). * Zhang et al. (2020) Z. Zhang, K. Amelin, X. Wang, H. Zou, J. Yang, U. Nagel, T. Rõõm, T. Dey, A. A. Nugroho, T. Lorenz, J. Wu, and Z. Wang, Phys. Rev. B **101**, 220411 (2020). * Wang et al. (2021) X. Wang, H. Zou, K. Hódsági, M. Kormos, G. Takács, and J. Wu, Phys. Rev. B **103**, 235117 (2021). * Coldea et al. (2010) R. Coldea, D. A. Tennant, E. M. Wheeler, E. Wawrzyńska, D. Prabhakaran, M. Telling, K. Habicht, P. Smeibidl, and K. Kiefer, Science **327**, 177-180 (2010). * Morris et al. (2021) C. M. Morris, N. Desai, J. Viirok, D. Hüvonen, U. Nagel, T. Rõõm, J. W. Krizan, R. J. Cava, T. M. McQueen, S. M. Koohpayeh, R. K. Kaul, and N. P. Armitage, Nature Phys. **17**, 832-836 (2021). * Fava et al. (2020) M. Fava, R. Coldea, and S. A. Parameswaran, Proc. Nat. Acad. Sci. **117**, 25219 (2020). * Faure et al. (2018) Q. Faure _et al._, Nature Phys. **14**, 716 (2018). * Niesen et al. (2013) S. K. Niesen, G. Kolland, M. Seher, O. Breunig, M. Valldor, M. Braden, B. Grenier, and T. Lorenz, Phys. Rev. B **87**, 224413 (2013). * Canévet et al. (2013) E. Canévet, B. Grenier, M. Klanjšek, C. Berthier, M. Horvatić, V. Simonet, and P. Lejay, Phys. Rev. B **87**, 054408 (2013). * Cui et al. (2019) Y. Cui, H. Zou, N. Xi, Z. He, Y. X. Yang, L. Shu, G. H. Zhang, Z. Hu, T. Chen, R. Yu, J. Wu, and W. Yu, Phys. Rev. Lett. **123**, 067203 (2019). * Vidal (2007) G. Vidal, Phys. Rev. Lett. **98**, 070201 (2007). * Lake et al. (2022) B. Lake, K. Puzniak, J. Ma, and C. Balz, _Dispersion of \(E_{8}\) particles in the spin-1/2 antiferromagnetic XXZ chain BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) in a transverse magnetic field_, Ph.D. thesis (2022). * Kimura et al. (2013) S. Kimura, K. Okunishi, M. Hagiwara, K. Kindo, Z. He, T. Taniyama, M. Itoh, K. Koyama, and K. Watanabe, J. Phys. Soc. Jpn. **82**, 033706 (2013). * Zou et al. (2019) H. Zou, R. Yu, and J. Wu, J. Phys.: Condens. Matt. **32**, 045602 (2019). * Wu et al. (2014) J. Wu, M. Kormos, and Q. Si, Phys. Rev. Lett. **113**, 247201 (2014). * Yang et al. (2023) J. Yang, X. Wang, and J. Wu, J. Phys. A: Math. Theor. **56**, 013001 (2023). * Caselle and Hasenbusch (2000) M. Caselle and M. Hasenbusch, Nucl. Phys. B **579**, 667 (2000). * Hatano and Suzuki (2005) N. Hatano and M. Suzuki, _Quantum Annealing and Other Optimization Methods_ (Springer Berlin Heidelberg, Berlin, Heidelberg, 2005) pp. 37-68. * Danshita and Naidon (2009) I. Danshita and P. Naidon, Phys. Rev. A **79**, 043601 (2009). * Borthwick and Garibaldi (2011) D. Borthwick and S. Garibaldi, Not. Amer. Math. Soc. **58**, 1055 (2011).
**Supplemental Material--Spin dynamics of the \(E_{8}\) particles** ## Details of neutron experiments Two large, high-quality single crystals of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) were grown using the floating-zone technique at Osaka University, Japan, and at the Core Lab for Quantum Materials, Helmholtz-Zentrum Berlin für Materialien und Energie (HZB), Germany. Inelastic neutron scattering was performed to measure the magnetic excitations on the cold neutron multichopper spectrometer LET (at the ISIS Facility, Rutherford Appleton Laboratory, UK) using the HZB crystal (mass 4.13 g) [19]. INS experiments were also performed on the cold neutron triple-axis spectrometer IN12 of the Forschungszentrum Jülich Collaborating Research Group (FZJ-CRG), installed at the Institut Laue-Langevin (ILL), France, using the Osaka crystal (mass 3.66 g). For the LET experiment, the single crystal was aligned in the (0,K,L) horizontal scattering plane and a vertical field cryomagnet was used to apply a constant magnetic field of \(B=4.7\) T along the **a**-axis to reach the 1D QCP. These measurements were carried out at \(T=0.3\) K using a \({}^{3}\)He-insert. This temperature is well below the Néel temperature (\(T_{\rm N}=5.5\) K), ensuring the presence of the effective longitudinal perturbing field necessary to stabilize \(E_{8}\) physics. Using repetition rate multiplication and the chopper frequencies 280/140 Hz, incident neutron energies of \(E_{i}\) = 22.69, 13.21, 8.51, 6.00, 4.42, 3.42, 2.70 meV were achieved, with corresponding elastic energy resolutions of \(\Delta E\) = 0.91, 0.41, 0.22, 0.14, 0.094, 0.065, 0.048 meV. The INS data were processed using the MANTID and HORACE software packages and converted to absolute units. For the IN12 experiment, the crystal was aligned with the **a**- and **c**-axes within the horizontal instrumental scattering plane and a vertical DC magnetic field of 4.7 T was applied parallel to the **b**-axis. A fixed final wavevector of \(k_{f}=1.15\) Å\({}^{-1}\) was used, giving an energy resolution of \(\Delta E\approx 0.114\) meV and a wavevector resolution of \(\approx 0.067\) r.l.u. A beryllium filter was used to suppress higher-order wavelengths and spurious scattering. ## Theoretical model and dispersion of \(E_{8}\) particles When applying a transverse field along the (0,1,0) direction, the effective Hamiltonian for BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) is described by a 1D spin-1/2 Heisenberg-Ising model [8; 17; 20; 21]: \[\begin{split}\mathcal{H}&=H_{XXZ}+H_{t}+H_{s}\\ H_{XXZ}&=J\sum_{n}[S_{n}^{z}S_{n+1}^{z}+\epsilon(S_{n}^{x}S_{n+1}^{x}+S_{n}^{y}S_{n+1}^{y})]\\ H_{t}&=-\mu_{B}g_{yy}H\sum_{n}[S_{n}^{y}+h_{x}(-1)^{n}S_{n}^{x}\\ &\quad+h_{z}\cos(\pi\frac{2n-1}{4})S_{n}^{z}]\\ H_{s}&=-\mu_{B}H^{\prime}\sum_{n}(-1)^{n}S_{n}^{z}\end{split} \tag{6}\] where \(S_{n}^{\alpha}=\frac{1}{2}\sigma_{n}^{\alpha}\) (\(\alpha=x,y,z\)) are spin-1/2 operators at site \(n\) with Pauli matrices \(\sigma^{\alpha}\), and \(J=5.8\) meV, \(\epsilon=0.46\), \(h_{x(z)}=0.4\,(0.14)\), \(g_{yy}=2.75\). The applied transverse field is set to \(\mu_{0}H=4.7\) T, which is the critical field of the putative 1D QCP [8; 21]. The effective staggered longitudinal field \(\mu_{B}H^{\prime}=0.018J\) comes from a mean-field treatment of the inter-chain coupling in the 3D ordering region below \(T_{\rm N}\) [8; 14]. \(H_{s}\) provides the necessary relevant perturbation for realizing the quantum \(E_{8}\) physics [8].
Focusing on the parameter region around the putative 1D QCP, in the scaling limit, the effective Hamiltonian of the spin chain becomes [7; 8; 10] \[\mathcal{H}_{E_{8}}=\mathcal{H}_{c=1/2}+h\int dx\sigma(x). \tag{7}\] \(\mathcal{H}_{c=1/2}\) is the Hamiltonian for a central charge 1/2 conformal field theory, which describes the quantum critical physics of the TFIC. \(h\) and \(\sigma(x)\), corresponding to the scaling limits of \(\mu_{B}H^{\prime}\) and \(\sigma_{j}^{z}\), are the strength of the perturbation field and the relevant primary field, respectively. To determine the dispersion of the \(E_{8}\) particles and compare with the spectrum measured by INS, we calculate the DSF in the field theory frame, \(D^{\alpha\alpha}(\omega,q)=\sum_{n=1}^{\infty}\frac{(2\pi)^{2}}{\prod_{a_{i}=1}^{8}n_{a_{i}}!}\int_{-\infty}^{\infty}\prod_{j=1}^{n}\frac{d\theta_{j}}{2\pi}|\langle 0|\sigma^{\alpha}|A_{a_{1}}(\theta_{1})...A_{a_{n}}(\theta_{n})\rangle|^{2}\) \(\delta(\omega-\sum_{j=1}^{n}E_{j})\delta(q-\sum_{j=1}^{n}P_{j})\), where \(\alpha=x,z\), and \(a_{i}=1...8\) label the quasi-particles obtained from the quantum \(E_{8}\) integrable theory [2; 7; 8; 10]. \(n_{a_{i}}\) is the number of particles of species \(a_{i}\) involved in the corresponding channel. \(E_{j}=m_{a_{j}}\cosh\theta_{j}\) and \(P_{j}=m_{a_{j}}\sinh\theta_{j}\) are the energy and momentum of particle \(a_{j}\) in terms of the rapidity \(\theta\), respectively. The two Dirac \(\delta\)-functions reflect the energy and momentum conservation of the scattering. The DSF of \(\sigma^{x,z}\) can be directly calculated from the quantum integrable field theory [10], and the DSF of \(\sigma^{y}\) can be obtained from the DSF of \(\sigma^{z}\) [22]. For a better comparison of the theoretical prediction from quantum \(E_{8}\) field theory with the INS experimental result, two subtle issues are worth noting. 1. In the above field theory frame calculation, the speed of light is set as \(c=1\). For the quantum \(E_{8}\) model, as a massive relativistic quantum field theory, the dispersion of the \(E_{8}\) particles follows the massive relativistic dispersion \(E_{i}^{2}=\Delta_{i}^{2}+p_{i}^{2}c^{2}\), where \(\Delta_{i}=m_{i}c^{2}\) and \(p_{i}=m_{i}c\), with \(m_{i}\) the "rest mass" of the \(i^{\rm th}\) \(E_{8}\) particle and \(c\) the "speed of light". When coming to a real material, which is actually a lattice discrete in space, we need to re-scale the dispersion of the \(E_{8}\) particles with a proper energy scale and length (momentum) scale serving as IR cutoffs. The theoretically expected energy peak corresponding to the lightest \(E_{8}\) particle \(m_{1}\) can be estimated from \(E_{m_{1}}^{\text{theory}}=C_{\text{lattice}}H^{\prime 8/15}\approx 1.2\,\text{meV}\) [23], where \(C_{\text{lattice}}=4.010\cdots\) is a modified constant for the lattice which originally comes from quantum \(E_{8}\) field theory [24]. The value of \(E_{m_{1}}^{\text{iTEBD}}\) matches the minimum gap \(\Delta_{1}=1.26\,\text{meV}\) observed at the zone center (corresponding to zero momentum transfer), thus \(\Delta_{1}\) can naturally serve as the IR cutoff of the energy scale for the experimental data. Since \(\Delta_{1}=m_{1}c^{2}\), we can then pick the corresponding IR momentum cutoff \(p_{1}=m_{1}c\).
By applying these two IR cutoff scales we arrive at \[\frac{E_{i}^{2}}{\Delta_{1}^{2}}=\frac{\Delta_{i}^{2}}{\Delta_{1}^{2}}+\frac{p_{i}^{2}}{(\Delta_{1}/c)^{2}}=\frac{\Delta_{i}^{2}}{{\Delta_{1}}^{2}}+\left(\frac{\hbar(L-2)\pi/2d}{m_{1}c}\right)^{2}, \tag{8}\] where \(p_{i}=\hbar(L-2)\pi/2d\) is the momentum transfer with respect to \(\mathbf{Q}\) = (0,0,2) and \(d=8.4192/4=2.105\) Å is the nearest neighbor distance between Co\({}^{2+}\) ions projected onto the chain direction. We need to determine the value of \(c\) to obtain the IR cutoff for the momentum; its value cannot be uniquely determined by the analytical theory but actually depends on the microscopic details of the material. 2. The four-fold periodicity of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) leads to a sizable zone-folding effect in the experimental measurement, which causes the \(E_{8}\) particles' dispersion to be shadowed by additional spectra. Such an effect cannot be obtained from the field theory calculation; instead, we need to go back to the original effective lattice model. By comparing spectra obtained from the lattice model and the field theory, the \(E_{8}\) particles' dispersion can be extracted. To make these two subtle issues clear, we carry out an iTEBD simulation for the effective Hamiltonian Eq. (6) with \(J=1\), \(\epsilon=0.47\), and critical field \(\mu_{B}g_{yy}H=0.15\) [8; 21], \[\begin{split} D_{\text{lat}}^{\alpha\alpha}(\omega,q)=\frac{1}{N}\sum_{j,j^{\prime}=1}^{N}\exp\{-iq(j-j^{\prime})\}\\ \times\int_{-\infty}^{\infty}dt\exp(i\omega t)\langle S_{j}^{\alpha}(t)S_{j^{\prime}}^{\alpha}(0)\rangle,\end{split} \tag{9}\] with total number of lattice sites \(N\) (\(N\rightarrow\infty\) in iTEBD), and spin-1/2 operators \(S^{\alpha},\alpha=x,y,z\). The procedure of the iTEBD calculation is as follows: 1. Generate a four-periodic ground state wave function of the effective Hamiltonian Eq. (6) with the above parameters. The imaginary time evolution is done with a fifth-order Trotter-Suzuki decomposition [25], where the imaginary time step is set as \(d\tau=0.01\). The convergence condition is chosen as the difference of the norm of singular values in the matrix product states being smaller than \(10^{-12}\). The truncated dimension is chosen as \(\chi=45\) [18; 26]. 2. Calculate the DSF [Eq. (9)] for \(S^{x}\) and \(S^{z}\); the DSF of \(S^{y}\) can be obtained from the DSF of \(S^{z}\) by using \(D^{yy}(\omega,q)=\omega^{2}D^{zz}(\omega,q)/(4J^{2})\) [22]. For calculating this DSF with the iTEBD algorithm, we first perform real-time and real-space propagation in the Heisenberg picture; then, using a Fourier transformation, we transform the obtained result into momentum and energy space to obtain the final spectrum. The real-time evolution is done by a second-order Trotter-Suzuki decomposition with \(t=200,\ dt=0.02\) to obtain a result of relatively high accuracy near the Brillouin zone center. 3. The zone-folding effect must be taken into account when obtaining the final spectrum for comparison with the INS experimental results.
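The final step of the procedure above, turning the real-space, real-time correlators into \(D(\omega,q)\), is a double Fourier transform. A minimal numpy sketch, assuming the correlator array \(\langle S_{j}^{\alpha}(t)S_{0}^{\alpha}(0)\rangle\) has already been produced by the iTEBD time evolution; signs and normalization match Eq. (9) only up to FFT conventions:

```python
import numpy as np

def dsf_from_correlators(corr, dt):
    """
    corr[t, j] = <S_j^a(t) S_0^a(0)> on an N-site chain, t = 0..T-1.
    Returns (omega, q, D) with D(omega, q) as in Eq. (9), up to FFT
    sign/normalization conventions.
    """
    T, N = corr.shape
    # Gaussian window in time to suppress ringing from the finite time cutoff
    window = np.exp(-0.5 * (np.arange(T) / (0.4 * T)) ** 2)
    dsf = np.fft.fftshift(np.fft.fft2(corr * window[:, None])) * dt / N
    omega = np.fft.fftshift(np.fft.fftfreq(T, d=dt)) * 2 * np.pi
    q = np.fft.fftshift(np.fft.fftfreq(N)) * 2 * np.pi
    return omega, q, dsf.real  # the physical intensity is the real part

# D^{yy} then follows from D^{zz} via D^{yy}(w, q) = w^2 D^{zz}(w, q) / (4 J^2)
```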
2308.03740
A Cost Analysis of Generative Language Models and Influence Operations
Despite speculation that recent large language models (LLMs) are likely to be used maliciously to improve the quality or scale of influence operations, uncertainty persists regarding the economic value that LLMs offer propagandists. This research constructs a model of costs facing propagandists for content generation at scale and analyzes (1) the potential savings that LLMs could offer propagandists, (2) the potential deterrent effect of monitoring controls on API-accessible LLMs, and (3) the optimal strategy for propagandists choosing between multiple private and/or open source LLMs when conducting influence operations. Primary results suggest that LLMs need only produce usable outputs with relatively low reliability (roughly 25%) to offer cost savings to propagandists, that the potential reduction in content generation costs can be quite high (up to 70% for a highly reliable model), and that monitoring capabilities have sharply limited cost imposition effects when alternative open source models are available. In addition, these results suggest that nation-states -- even those conducting many large-scale influence operations per year -- are unlikely to benefit economically from training custom LLMs specifically for use in influence operations.
Micah Musser
2023-08-07T17:38:41Z
http://arxiv.org/abs/2308.03740v1
# A Cost Analysis of Generative Language Models and Influence Operations ###### Abstract Despite speculation that recent large language models (LLMs) are likely to be used maliciously to improve the quality or scale of influence operations, uncertainty persists regarding the economic value that LLMs offer propagandists. This research constructs a model of costs facing propagandists for content generation at scale and analyzes (1) the potential savings that LLMs could offer propagandists, (2) the potential deterrent effect of monitoring controls on API-accessible LLMs, and (3) the optimal strategy for propagandists choosing between multiple private and/or open source LLMs when conducting influence operations. Primary results suggest that LLMs need only produce usable outputs with relatively low reliability (roughly 25%) to offer cost savings to propagandists, that the potential reduction in content generation costs can be quite high (up to 70% for a highly reliable model), and that monitoring capabilities have sharply limited cost imposition effects when alternative open source models are available. In addition, these results suggest that nation-states--even those conducting many large-scale influence operations per year--are unlikely to benefit economically from training custom LLMs specifically for use in influence operations. Influence Operations Language Models Cost Modeling ## 1 Introduction For the past several years, experts have speculated that newly emerging large language models (LLMs) may be used by malicious actors to generate divisive, misleading, or false information for the purposes of social manipulation. [3, 4, 9, 21, 32, 39, 51, 58] Organizations releasing such large language models have explicitly acknowledged this as a misuse risk [53, 69], and some major players advocate for "best practices" on limiting access to large language models that include "publish[ing] usage guidelines" and "build[ing] systems and infrastructure to enforce usage guidelines." [14] However, other organizations and commentators have expressed skepticism that influence operations would benefit from using language models to produce content, potentially because they still require human curation or because the costs of generating disinformation content are already extremely low. [7, 33, 34, 43, 62] Such uncertainty has resulted in calls to explicitly evaluate the costs of conventional influence operations as compared to AI-enabled ones, as illustrated in the following quote from a 2020 workshop: [M]odels like GPT-3 can be used to create false, misleading, or propagandistic essays, tweets, and news stories _de novo_... [W]hile automated generation of disinformation may be feasible in principle, human labor may still be more cost-effective for such purposes. Others disagreed, and saw automated generation as much more cost-effective than training and paying humans to generate disinformation. Participants agreed that empirically investigating the economics of automated vs human generated disinformation is important. [65] Despite this interest in the economics of influence operations, the topic remains underexplored, no doubt due in large part to the difficulty of assessing the economics of presently existing and highly secretive influence operations. To this author's knowledge, only one public attempt has been made to actually model the costs and tradeoffs facing influence operators deciding whether or not to make use of AI systems.
[25] This research primarily addresses the economics of deepfaked visual and audio content, with a focus on whether or not a technological "arms race" between detection systems and malicious actors is likely to happen.1 Footnote 1: For some discussion of the “arms race” dynamic in synthetic content, see [37]. Other research, such as [35], has attempted to model the impact of AI-enabled phishing campaigns, but without examining whether the use of such AI tools is therefore cost-effective for malicious actors. This paper attempts to explore two related but different questions. First, what are the potential economic benefits of using language models to produce disinformation content, relative to human authorship? And second, what is the economic value of one possible policy intervention designed to reduce the risk of automated disinformation generation, namely, the use of monitoring controls on API-accessible models? To make progress on these questions, this paper attempts to model the costs of _content generation_ for an influence operator in various situations, including the use of a public (open source) language model, a private, API-accessible language model, or a manual campaign.2 I emphasize that this analysis focuses very specifically on the costs of content generation, which is only one part of the disinformation pipeline and may be--for some operations--less costly than other requirements facing operators, such as maintaining an infrastructure of inauthentic accounts or identifying the appropriate channels for distributing content. [20, 57] Footnote 2: Note that I examine primarily two types of model access for LLMs: I discuss private, API-accessible models, in which a private entity owns and operates a model which users can query via an API but where the training data, code, and final model parameters are not available to actors outside of the entity; and I discuss public (or open source) models, by which I mean models where the model itself is hosted somewhere and can be downloaded and used on local computing infrastructure by any third party. Note that it is possible to further differentiate forms of structured model access, see [59, 60, 61], and to consider other possible release decisions such as staged release or making models available only to researchers instead of all third parties. From the perspective of any given third party at a particular point in time, however, AI models can generally be classified as either inaccessible, accessible through an API, or downloadable. The paper is organized as follows: Section 2 discusses a simple base scenario: how much could the use of a language model save if the language model required no human curation and could be deployed fully autonomously? Section 3 then analyzes the more likely scenario of human-machine teams, where human curators review and approve model outputs instead of writing content themselves. In Section 4, I then consider the cost imposition that could be generated by the use of monitoring controls on an API-accessible LLM available to would-be influence operators. Section 5 considers the value of monitoring controls under circumstances which permit operators to choose between the use of multiple models, including open source ones. While all analyses through Section 5 focus only on marginal costs, Section 6 expands this to include an analysis of the fixed costs associated with different methods for accessing a language model.
Section 7 further analyzes the robustness of the results from the preceding sections, and Section 8 concludes with a discussion of the implications of this research. This analysis is strictly focused on the use of **text-based** language models to generate short social media posts (which I will often refer to as "tweets" for the sake of focusing attention on a specific use case with relatively constant output lengths, though there is nothing platform-specific about this analysis). The model can generalize to other types of language content as well, such as news articles or blog posts. Newer text-to-image models have sparked analogous concerns about disinformation uses [5, 25, 67]; this model in principle can also generalize to non-text-based forms of content generation, though the interpretation of some key parameters must change.3 I emphasize that the model and its usage here are meant to be "first attempts" at explicitly modeling the cost decision facing a malicious actor who is deciding whether to make use of a large language model; it has several key limitations (see Section 8), but nonetheless may hopefully serve to inspire further refinements. In addition, further work may focus on explicitly analyzing the economics of producing image- or video-based content using generative AI systems, as well as the use of audio-based AI systems for impersonating target individuals, all of which have recently seen a sharp rise in prevalence. [10, 11, 55] Footnote 3: For instance, the cost to review an output might, when considering a text-to-image model, include touch-up work done by a human designer to finalize model outputs for posting. ## 2 Fully Automated Content Generation For an "ordinary" campaign, where all content is manually authored by humans, let \(L\) represent the labor productivity of human authors (measured as outputs per hour) and \(w\) represent the hourly wage of human authors. For simplicity, I treat both of these variables as constant over the full course of an arbitrarily long influence operation, which implies that the marginal cost of an additional output is constant and equal to \(w/L\).4 The only real difficulty when framing a manual campaign in these terms is to estimate these two values. Footnote 4: One objection to this framing is that in real influence operations, the cost of a marginal output declines over time due to the widespread use of time-saving tactics such as “copypasta,” as reported for instance in [15]. There are two reasons to think that this objection need not require a non-linear estimation of manual costs. First, \(w/L\) can be thought of as an amortized cost per output, and not as an extrapolated cost per output based on assumptions about how long it would take to write posts _de novo_. The method Estimates of either the wages paid to disinformation authors or the productivity of such workers are hard to find, though some scattered pieces of information do exist, primarily in the context of Russian influence operations. (Because wages and labor productivity likely vary widely across campaigns conducted in different regions of the world, the specific estimates produced by this model can be thought of as reflective of the value of LLMs specifically for Russian influence operations, though see footnote 8.) In 2018, _BuzzFeed News_ reported that the Internet Research Agency (IRA) had posted job ads in 2014 and 2015 for "social media specialists" and "content managers" paying roughly $2.86-9.53 per hour.5[36] Reporting from the independent St.
Petersburg-based publication _Fontanka_ in 2022 surfaced more information: a reporter who successfully interviewed for a job with "Cyber Front Z" (which appears to be linked in some way to the IRA [19]) was offered a job that would pay $1.41-2.78 per hour.6 [30] In addition, another (older) article from 2014 in _BuzzFeed News_ implies an average hourly wage for IRA employees in the range $3.62-5.44.7 [56] Though this figure is only an indirect estimate, the fact that it falls within the overall range suggested by the other two sources is encouraging.

Footnote 5: The specific figure from [36] is for 40,000 rubles per month for two different jobs posted sometime in either 2014 or 2015. The lower bound of this estimate comes from converting rubles to dollars at the lowest conversion rate within those years, converting to 2022 USD, and then assuming 240 hours of work per month (10 hours of work, 6 days a week, for 4 weeks). The upper bound comes from converting rubles to dollars at the highest conversion rate within those years, converting to 2022 USD, and then assuming 160 hours of work per month (8 hours of work, 5 days a week, for 4 weeks). The reason for the wide spread is primarily that the value of the ruble fell dramatically in late 2014.

Footnote 6: The level of variation is again due to the use of 160 hours of work per month as a high-end estimate and 240 hours of work per month as a low-end estimate, as well as the fact that the value of the ruble fluctuated significantly in the 10 days between Cyber Front Z's hiring call and the publication of _Fontanka_'s report.

Footnote 7: The _BuzzFeed_ article places the total estimated budget of the IRA in 2014 at $10 million, with half "earmarked to be paid in cash" (likely for employee salaries, of which the organization had 600 at the time). If we assume that these employees worked between 160 and 240 hours per month, and that all cash-earmarked funds were paid out as salaries, then the average hourly wage for IRA employees in 2014 would have fallen in the range $3.62-5.44 after adjusting for inflation. This may slightly overstate the figure for employees who were tasked with content generation, who may have earned lower wages overall than other types of employees.

Some of these sources also contain information about the expected output of content generators working for the IRA. The _Fontanka_ report noted that employees at Cyber Front Z were expected to write 200 comments on social media posts per shift, or somewhere in the range of 20-25 comments per hour, depending on the length of a shift. But the 2014 _BuzzFeed_ article about older IRA campaigns suggests that operators managing Twitter accounts were only expected to tweet 5-6.25 times per hour. In the following models, I use Monte Carlo sampling to estimate both \(w\) and \(L\), treating both as uniform random distributions over the full range given by the above estimates. The smallest possible cost of a marginal output \(w/L\) given these parameter ranges is therefore $0.06, the largest possible cost is $1.91, and the expectation of the marginal cost is $0.44.8

Footnote 8: Conveniently, although these numbers were taken from a variety of sources related to Russian propaganda efforts, an expectation of $0.44/post happens to align nicely with the notion of the "50 cent army," the traditional term used for Chinese propagandists who were assumed to be paid roughly $0.50 for each post they wrote.
Although [28] questions this estimate and suggests that most Chinese propagandists are salaried bureaucrats, the authors do not provide an alternate way of estimating the effective cost to the government of each post produced by these bureaucrats.

In the alternate case where an influence operation employs a language model to generate content fully autonomously, the only marginal costs associated with content generation are those required to run inference on a model. For its largest, most powerful language model,9 OpenAI currently charges $0.00006 per token, while Cohere charges an even lower $0.000015 per token for generation tasks. [13, 49] Since I am generally considering tweets or comments on social media as the standard type of content in this threat model, I estimate the average token length of outputs at around 40 tokens, in which case the marginal cost for an additional output from these models would fall in the range $0.0006-0.0024. If, alternatively, a threat actor uses an open source model which requires them to set up and maintain their own compute infrastructure, these costs may be higher, but it seems reasonable to estimate that, for any reasonably large operation, an operator could keep inference costs within an order of magnitude of the costs offered by major companies.10 The estimated values for the marginal inference cost \(IC\) of an additional AI output therefore fall roughly in the range of $0.0006-0.024.11

Footnote 10: While major companies like OpenAI and Cohere benefit from very large economies of scale, they also deploy much larger models than a propagandist would be likely to run on local equipment; see Section 6.

Footnote 11: It is worth noting that these infrastructure costs could vary substantially depending on factors like the size and capability of the model used by a propagandist. For instance, token generation with GPT-3.5 is already 30 times less expensive than with GPT-4, and running a small model on a single local GPU may be much cheaper still. In this work, I do not attempt to describe how model performance relates to per-token infrastructure costs in a way that would allow propagandists to identify the optimal choice of model for a given operation; instead, I simply treat the per-token infrastructure cost of **all** models as some constant but unknown value drawn from the above range. This is a significant oversimplification. However, final estimates for most values of interest using this model do not depend heavily on values of \(IC\) (see Section 7), and instead suggest that content generation costs with human-machine teams remain dominated by labor costs, not the infrastructure costs of running LLMs. Because relatively little depends on the precise estimation of \(IC\), it seems adequate to draw a single value from a wide range of possibilities and treat it as describing infrastructure costs for all models.

Given these estimates, a threat model of **pure** automation will always have lower marginal costs than that of a manual influence operation. This is not surprising. In addition, if an operator must expend fixed costs \(FC\) to acquire a working model (whether that means training it from scratch, stealing it, fine-tuning it for an operation, or even just familiarizing one's staff with the model's capabilities), then the model pays for itself after \(\frac{FC}{w/L-IC}\) outputs. With expected values \(E(\frac{w}{L})\approx 0.44\) and \(E(IC)\approx 0.01\), the use of an AI model would pay for itself after a campaign of size \(2.33FC\).
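To make these figures concrete, the following is a minimal sketch (my own illustration, not code from the paper's accompanying repository) of the Monte Carlo setup just described; the sampling ranges are those given above, and the commented output values are approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

w = rng.uniform(1.41, 9.53, N)      # hourly wage of human authors (USD)
L = rng.uniform(5.0, 25.0, N)       # human-written outputs per hour
IC = rng.uniform(0.0006, 0.024, N)  # marginal inference cost per output (USD)

manual_cost = w / L  # marginal cost of one human-written output

print(f"E[w/L] = ${manual_cost.mean():.2f}")  # ~ $0.44 per output
print(f"range  = ${manual_cost.min():.2f} to ${manual_cost.max():.2f}")  # ~ $0.06 to $1.91

# Break-even campaign size for a fully automated model with fixed costs FC,
# evaluated at the expected parameter values: FC / (E[w/L] - E[IC]).
multiplier = 1.0 / (manual_cost.mean() - IC.mean())
print(f"model pays for itself after ~{multiplier:.2f} * FC outputs")  # ~ 2.33
```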
## 3 Human-Machine Teams with Unrestricted AI Access

With current models, it is unlikely that in **most** cases, an operator would choose to run a purely automated campaign.12 For most campaigns, especially those where the consistency and quality of posts matters heavily to the campaign's overall success, a human-machine team is a more realistic scenario. For the purposes of this paper, I imagine that a human-machine team operates in the following way: a language model is tasked with outputting content which is subsequently reviewed by a human prior to posting. The human must approve an output (perhaps with some light editing) before it is posted online.

Footnote 12: However, note that if the goal of a campaign is distraction, such that the quality of individual posts does not matter to the operator, pure automation may be a perfectly workable strategy for existing language models. See [28] for an analysis along these lines in the context of Chinese influence operations.

To incorporate an operation along these lines into this model, I introduce two additional parameters. First, let \(\alpha\) represent some constant proportion indicating how much faster a human can review outputs rather than writing them from scratch, such that \(\alpha L\) represents the total number of posts a human can generate and review in an hour.13 And second, let \(p\) represent the proportion of outputs from a language model that are usable for an operator's campaign (or that will be usable after a light edit during the review process; \(p\) then falls in the range [0,1]).14 Then the cost of producing a marginal output using a human-machine team can be modeled as a constant, with this strategy being cheaper than paying a human to write a marginal output whenever the inequality

Footnote 13: "Reviewing outputs" here includes the time necessary to prompt an LLM to generate candidate outputs for review.

Footnote 14: Note that \(\alpha\) and \(p\) will be inversely related if the actual underlying capability of a given model remains constant: reviewers can choose to spend more time editing potential AI outputs or engaging in careful prompt engineering, thereby increasing the proportion of outputs that are considered "usable" at the cost of reducing \(\alpha\), or they can simply make binary yes-no rulings on potential outputs, which increases \(\alpha\) at the cost of reducing \(p\). For any given combination of model and campaign, there is likely some optimal level of investment that reviewers should make in each output, but this would be hard to predict _a priori_. This model attempts to handle this ambiguity by sampling from a relatively wide range of values for \(\alpha\) while treating \(p\) in most places as an entirely free-floating variable. But it is important to emphasize that readers trying to imagine plausible values of \(p\) for existing models should **not** interpret this parameter as corresponding only to the proportion of outputs from language models that are perfectly suited for use in an influence operation with no editing or prompt engineering whatsoever.

\[\left(\frac{w}{\alpha L}+IC\right)\frac{1}{p}<\frac{w}{L} \tag{1}\]

obtains.
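As a quick numeric illustration of inequality 1 (a sketch of my own, using midpoint values of the sampling ranges used throughout this paper, which are assumptions rather than measured quantities):

```python
# Midpoint assumptions: w = $5.47/hr, L = 15 outputs/hr, alpha = 6, IC = $0.01.
w, L, alpha, IC = 5.47, 15.0, 6.0, 0.01

def team_is_cheaper(p: float) -> bool:
    """Inequality 1: marginal cost of a human-machine team vs. manual writing."""
    return (w / (alpha * L) + IC) / p < w / L

for p in (0.10, 0.25, 0.50, 0.75):
    print(f"p = {p:.2f}: human-machine team cheaper? {team_is_cheaper(p)}")
```

At these midpoints the team strategy becomes cheaper somewhere between \(p=0.10\) and \(p=0.25\), consistent with the thresholds discussed below.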
Note that, because the inference costs of running a model are generally dwarfed by labor costs, this inequality loosely approximates to the inequality \(\alpha>1/p\), which states that whenever the speedup in a human's ability to produce and review AI generations (compared to writing them manually) is greater than the number of AI generations necessary to find a "usable" output, we expect the marginal cost of an output from a human-machine team to be cheaper than the marginal cost of a human writing an additional output.

Choosing an appropriate value for \(\alpha\) is one of the more difficult tasks associated with parameter estimation in this model. Although some economic studies on the labor impacts of large language models have begun to emerge [8, 12, 17, 27, 31, 46, 64], they are mostly of limited usefulness. [12] and [31] speculate that large language models will enable efficiency gains for human workers but do not measure such gains, while [17] analyzes worker exposure to large language models but not the efficiency impacts of the models. [8] estimates a 14% efficiency improvement among call center workers using large language models, while [46] estimates a 37% reduction in time spent on various tasks among college-educated professionals. However, these papers do not provide information about the rate at which workers reject LLM suggestions, which is necessary to calculate the costs of generating outputs but is not necessary to analyze impacts on worker productivity.15 [27] analyzes efficiency gains for a specific code completion task and provides an absolute minimum estimate of \(\alpha\approx 2.27\), while [64] estimates \(\alpha\) at 4.26.16

Footnote 15: In this model, the efficiency speedup associated with the use of an LLM is disaggregated into an increase in the rate at which humans can generate and review outputs, compared to manually writing them (\(\alpha\)), offset by the percentage of outputs that are actually usable (\(p\)). This disaggregation is necessary when evaluating operator costs, because inefficiencies caused by lowering \(p\) generate increased inference costs, while inefficiencies caused by lowering \(\alpha\) do not. In addition, the disaggregation separates efficiency gains into a parameter specific to a given human-task pair (which I estimate using Monte Carlo sampling), and a parameter specific to a given model-task pair (which I primarily treat as a free-floating variable).

Footnote 16: [27] measures only the observed efficiency improvement on a given coding task and does not estimate \(p\), similarly to [8]. [64], however, estimates a total efficiency gain of 6% from the use of AI **and** finds that 25% of AI suggestions were accepted by coders, implying a value of \(\alpha=4.26\). [42] does not estimate the total efficiency gain of using AI to assist with coding tasks but does observe that roughly 22% of AI-generated suggestions were accepted by coders in a large-scale deployment. Because this is consistent with the acceptance rate observed by [64], it is likely more accurate to say that [27] implies a value of \(\alpha\) between 9 and 10.5 when assuming a corresponding value for \(p\) of roughly 0.22–0.25.
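Solving inequality 1 for \(p\) gives a per-draw break-even threshold, \(\hat{p}=1/\alpha+IC\cdot L/w\). The sketch below (again my own illustration, anticipating the sampling ranges introduced in the next paragraph) shows how a Monte Carlo over all four parameters yields the distribution of break-even thresholds plotted in Figure 1(b):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
alpha = rng.uniform(2.0, 10.0, N)   # review-vs.-write speedup
w = rng.uniform(1.41, 9.53, N)      # hourly wage (USD)
L = rng.uniform(5.0, 25.0, N)       # manual outputs per hour
IC = rng.uniform(0.0006, 0.024, N)  # inference cost per output (USD)

p_hat = 1.0 / alpha + IC * L / w    # break-even usable-output rate per draw

print(f"mean break-even p: {p_hat.mean():.2f}")  # ~ 0.25
print(f"middle 95%: {np.percentile(p_hat, [2.5, 97.5]).round(2)}")
```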
For the purposes of this paper, I randomly sample values for \(\alpha\) uniformly from the range \([2,10]\), indicating a wide range of uncertainty about how much faster it would be for an operator to generate and review outputs instead of manually writing them.17 Figure 1(a) draws 10,000 possible parameter estimates for each parameter except \(p\) and plots the cost savings of a marginal output that could be gained from switching from manual authorship to a human-machine team as a function of \(p\). In Figure 1(a), each thin blue line represents this cost savings as a function of \(p\) for one particular choice of parameter estimates, and the thick blue line represents, for each value of \(p\), the mean cost savings for a marginal output. For each of these individual sets of parameter estimates, the value of \(p\) at which it becomes cost-effective to use a human-machine team instead of manual authorship is different; Figure 1(b) shows the distribution of break-even performance thresholds across all 10,000 parameter estimates. On average, \(p=0.25\) is the point at which an LLM is expected to become cost-effective, relative to fully manual authorship.

Figure 1: Predicted Per-Output Cost Savings as a Function of \(p\)

Over a sufficiently long campaign, small per-output savings can add up to relatively significant amounts. Figure 2 shows cumulative savings as a function of both \(p\) and campaign length, up to 10 million tweets. (Solid lines designate savings in one-million-dollar increments, with dashed lines designating increments of $500,000; savings are calculated as the mean savings for each combination of \(p\) and campaign size over 10,000 parameter samples.) It is worth emphasizing that for multiple nation-states, the posting of several million tweets to Twitter is an entirely realistic goal in the medium term. Based solely on publicly released datasets of coordinated inauthentic activity on Twitter, actors affiliated with the following countries all appear to have posted multiple millions of inauthentic tweets prior to December 2021: Serbia (17M), Saudi Arabia (17M), Turkey (15M), Egypt (7M), Iran (5.5M), Russia (5M), the United Arab Emirates (4.9M), China (3.9M), Venezuela (3.8M), and Cuba (2M).18 These estimates are based only on infrequently released Twitter data partially covering October 2018-December 2021 (with long gaps between some releases), and are therefore likely a major undercount not only of inauthentic state-affiliated activity on Twitter specifically, but even more so of state-affiliated influence operations generally.19

Footnote 18: See [68]. These figures were calculated based on the file sizes of "Tweet Information" files for each campaign, which contain metadata about tweets. As a baseline, the 353MB file corresponding to the Russian campaign released in June of 2020 contains 1.04 million tweets, suggesting that roughly 340MB of data corresponds to 1 million tweets. Note that one 4.2GB file was attributed to a joint campaign between Saudi Arabia, the United Arab Emirates, and Egypt; for simplicity, I simply divided the (imputed) number of tweets in this campaign evenly across all three countries, though it is likely given the objectives of the campaign that a disproportionately larger number of tweets came from Saudi Arabia.

Footnote 19: However, note that some of these campaigns used heavy automation; for instance, a large quantity of the posts associated with Egypt and Saudi Arabia were automated postings from the Quran.
The use of LLMs to generate content, compared to such heavily automated activity, would likely not similarly lower costs of content generation, though it would significantly improve quality. I thank Renee DiResta for this point.

If influence operators had unrestricted access to an LLM capable of producing usable text at least 75% of the time, this model predicts that an operator could save upwards of $3 million over the course of a 10-million-tweet campaign, with an expected reduction in per-output content generation costs of over 67%. Moreover, based on public information about Twitter takedowns, there is a meaningful number of nation-state actors who appear likely to produce >10 million tweets (or the equivalent amount of text on other platforms) in the near- to medium-term.

Figure 2: Cumulative Savings as a Function of Campaign Length and \(p\)

## 4 Monitoring Controls on AI Models

Not all language models can be accessed by potential propagandists without restrictions. In the case of ChatGPT, for instance, the model itself continues to be held privately by OpenAI, with users of ChatGPT being required to make an account in order to access the model.20 Early beta users of GPT-3 were required to provide a description of their intended uses of the model prior to being granted access, but roughly eighteen months after the model's announcement, OpenAI removed the waitlist and allowed more immediate access to the model; OpenAI followed a similar trajectory with its text-to-image model DALL-E 2 after only five months. [47, 48] While the model has become increasingly available for anyone to use, the fact that it remains behind a closed API makes it possible for OpenAI to monitor user interactions with ChatGPT. This monitoring is optional for a company that controls an API-accessible model, but is likely to primarily consist of using automated systems or spot checks to analyze the API requests made by users in order to assess whether a user is deliberately generating a large quantity of harmful content, where the monitoring is performed either in-house or by contractors. Users who are deemed to be deliberately generating such content could have their access to the model revoked via a revocation of API access tokens, IP address blocking, or some other measure.21

Footnote 20: Note that there are many downstream applications of ChatGPT that may not require end-users to sign up for API access. However, the creator of the downstream application must themselves maintain API access to ChatGPT, and could potentially have such access revoked if their users appear to be abusing their indirect access to the original model.

Footnote 21: Monitoring controls may also entail behavioral analysis, though this is likely to play a smaller role as compared to the detection of malicious behavior on social media platforms themselves due to the absence of user networks. In addition, companies with API-accessible models can adopt other restrictions, such as limiting the volume of allowed generations in a given time period or blocking access from users in certain countries. I do not directly analyze the cost imposition of such controls here, though it would be possible to do so using this model: all that would be necessary is to estimate the penalty required to evade such restrictions (e.g. by creating a second account or using a VPN to avoid country-based controls) and the frequency with which such a penalty is imposed.
Note also that some forms of monitoring for open source models are also possible; for instance, it may be feasible to track information about who downloads various models hosted by third-party entities, or to restrict the ability to download such models to trusted individuals. However, once a model has been successfully downloaded, monitoring of how it is used becomes impossible, and it becomes similarly impossible to impose penalties for misuse. As such, I do not consider these other forms of monitoring on open source models in this paper, and simply assume that malicious actors would be able to download an open source model and use it without interference if desired.

Can such monitoring controls impose meaningful costs on propagandists attempting to use language models to conduct large-scale influence operations? While blocking a user account or IP address imposes a penalty on a malicious actor, propagandists can generally create a new account or use a new IP address to continue accessing the same model, at which point the detection process must restart. In other words, if there is a roughly constant rate of detection per output \(\lambda\), and each detection \(D\) incurs a penalty \(P\) and resets the clock for the next detection, then the costs imposed by monitoring controls over a given campaign length can be modeled as a random draw from a Poisson distribution of detections, multiplied by the penalty for each detection. Then the costs \(C\) of a campaign of size \(n\) will be equal to the minimum of either the manual cost of producing content, or the cost of using a language model to generate content plus the costs of evading detection:22

Footnote 22: Note that \(n\) here refers to the number of _usable_ outputs that have been produced by the model, but since all outputs (usable or not) contribute to the model owner's ability to detect malicious use, \(n\) must be divided by \(p\) in equation 2 to account for unusable outputs that nonetheless contribute to the eventual detection of the malicious use.

\[C(n)=\min\left(\frac{nw}{L},\quad\frac{n}{p}\left(\frac{w}{\alpha L}+IC\right)+P*D\sim\text{Pois}\left(\lambda\frac{n}{p}\right)\right) \tag{2}\]

The penalty paid for a detection could conceivably be quite low, if a human must simply generate a new email account and sign up for the API again. Even so, doing so may generate friction costs as the human switches from reviewing outputs to creating a new account. Companies controlling access to an API-accessible model may also adopt relatively more stringent deterrence methods, for instance by requiring proof-of-humanness to sign up for an API. I--perhaps generously--imagine that each detection could require between 0.5 and 2 hours of a worker's time to evade before an operation can resume. This means that \(P\sim U(0.5w,2w)\), where \(w\) itself is sampled from a uniform random distribution.23

Footnote 23: In dollar terms, \(E[P]=6.84\).

Figure 3 shows, for four possible values of \(p\), how improvements in detection capabilities alter the costs imposed by monitoring controls. The figure suggests that improving detection capabilities has different effects over three general phases:

1. If the probability of detection per output is less than roughly 0.1%, the monitoring controls impose minimal costs. Improvements in the ability to monitor the model do not substantially alter the cost calculus that propagandists perform.
2. As capabilities improve from a 0.1% probability of detection per output to roughly a 1% probability of detection, costs imposed on propagandists increase by roughly similar dollar values regardless of the underlying model's capabilities.

3. Somewhere between a 1% probability of detection per output and a 10% probability of detection per output, the monitoring controls impose costs equivalent to the difference between a manual campaign and an AI-augmented one. At or above this detection rate, the propagandist prefers to use a manual campaign, and further improvements in detection capabilities impose no additional costs.

The total costs imposed by monitoring controls are significantly greater for better-performing models (as the potential savings from the use of the AI model were originally much larger), but better detection capabilities are required to fully impose such costs. The shaded regions in Figure 3 represent the interquartile range of possible outcomes across 10,000 parameter estimates. There is clearly enormous variation in estimates for the dollar value of costs imposed by monitoring controls, which is further analyzed in Section 7. The general transition phases, however, are consistent across nearly all samples: a detection capability of a 10% probability of detection per output completely eliminates the incentive to use a model, while detection capabilities in the range 0.1-1% still impose meaningful costs.

Figure 3: Penalties Imposed as a Function of Monitoring Efficacy, for Varying Levels of \(p\)

## 5 The Value of Monitoring when Public Models Are Accessible

The preceding section imagines that a propagandist must decide whether to produce content using a monitored language model or a manual process of human authorship. This might be plausible if there were a single (API-accessible) language model available to propagandists; in the real world, however, many language models have proliferated rapidly. [21, 61] In this section, I instead imagine that a propagandist can choose between the use of two different language models, where model 1 is an API-accessible model with monitoring controls in place, and model 2 is an open source model. For now, I examine only the **variable costs** associated with either generation strategy, though in the next section I briefly discuss the fixed costs associated with downloading, fine-tuning, and running an open source model. Assuming that both the API-accessible and the open source model satisfy inequality 1, the propagandist would prefer to use the private model 1 so long as the condition

\[\left(\frac{w}{\alpha L}+IC_{1}\right)\frac{n}{p_{1}}+P*D\sim\text{Pois}\left(\lambda\frac{n}{p_{1}}\right)<\left(\frac{w}{\alpha L}+IC_{2}\right)\frac{n}{p_{2}} \tag{3}\]

is satisfied. To make this equation slightly more manageable, we can assume that the inference costs are the same regardless of model.24 From the propagandist's perspective, where \(D\) is unknown at the start of the campaign, we may also substitute \(D\sim\text{Pois}\left(\lambda\frac{n}{p_{1}}\right)\) with \(E[D]\), which is just \(\lambda\frac{n}{p_{1}}\). Then, with some rearranging, inequality 3 becomes:

Footnote 24: This decision is justified by the fact that inference costs are generally dwarfed by labor costs in this model; see Section 7.
\[P\lambda<\left(\frac{w}{\alpha L}+IC\right)\left(\frac{p_{1}-p_{2}}{p_{2}}\right) \tag{4}\]

Inequality 4 states that the propagandist's expected marginal costs from relying on an API-accessible LLM are lower than those of the open source LLM only if the penalty per detection times the detection rate is lower than the marginal cost of reviewing an output, times the percentage performance improvement that the private model offers relative to the public one.25

Footnote 25: Note that if the open source model is actually better-performing than the API-accessible model, the right-hand side of inequality 4 will be negative. Since the left-hand side is necessarily positive, this means that the API-accessible model is never preferred to the open source model when considering only **variable** costs. However, as Section 6 briefly discusses, it is possible for a propagandist to prefer a worse-performing API-accessible model over a better open source one if the fixed costs associated with the open source model are sufficiently high.

Let \(\hat{p}\) represent the threshold performance of an AI model at which it becomes cost-effective to use the model, relative to a manual campaign. Then there are four relevant scenarios that determine the propagandist's cost-optimal strategy:

1. If \(p_{1}\leq\hat{p}\wedge p_{2}\leq\hat{p}\), the propagandist prefers to use a manual campaign regardless of any monitoring controls on the API-accessible model;
2. If \(p_{2}>\hat{p}\wedge p_{2}>p_{1}\), the propagandist prefers to use the better-performing open source model regardless of any monitoring controls on the API-accessible model;
3. If \(p_{1}>\hat{p}\wedge p_{2}\leq\hat{p}\), the propagandist prefers to use the API-accessible model, but will fall back to the use of a manual campaign if monitoring controls impose sufficient costs; and
4. If \(p_{2}>\hat{p}\wedge p_{1}>p_{2}\), the propagandist prefers to use the API-accessible model, but will fall back to the use of the open source model if monitoring controls impose sufficient costs.

For each pair \((p_{1},p_{2})\) that satisfies either condition 3 or 4 above, it is possible to estimate the minimum detection capability \(\hat{\lambda}\) that imposes sufficient costs to deter the propagandist from using the API-accessible model. For condition 3, this value can be estimated using equation 2, and for condition 4, it can be estimated using inequality 4.26 Since further improvements in detection impose no additional costs after a propagandist has resorted to their fallback strategy, the maximum cost imposition of monitoring controls over a campaign of length \(n\) can further be estimated as \(P\hat{\lambda}\frac{n}{p_{1}}\). Figure 4 shows, for all values of \((p_{1},p_{2})\), the optimal strategy pursued by the propagandist, the detection capability needed to make the propagandist indifferent between the use of the API-accessible model and the relevant fallback option (inset (a)), and the costs imposed by such a detection capability over a ten-million-tweet campaign (inset (b)).

Footnote 26: See Appendix A for the equations used to calculate \(\hat{\lambda}\), including when the propagandist must pay fixed costs to access and/or fine-tune a public model.

The lower right-hand rectangle of Figure 4(b) shows similar information to Figure 3: with ideal detection capabilities, the maximum cost imposition of monitoring controls ranges from under $1,000,000 to roughly $3,500,000 over the course of a ten-million-tweet campaign, depending on the value of \(p_{1}\).
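To illustrate how these deterrence thresholds behave, the following sketch evaluates \(\hat{\lambda}\) for conditions 3 and 4 at midpoint parameter values, using the closed-form solutions reproduced in Appendix A with fixed costs set to zero; the specific \(p\) values are illustrative assumptions, not estimates from the paper.

```python
# Midpoint assumptions used throughout: w = $5.47/hr, L = 15/hr, alpha = 6,
# IC = $0.01, and P = 1.25 * w (the expected evasion penalty in dollars).
w, L, alpha, IC = 5.47, 15.0, 6.0, 0.01
P = 1.25 * w
c = w / (alpha * L) + IC  # marginal cost of generating and reviewing one output

def lambda_hat_manual(p1: float) -> float:
    """Condition 3 (fallback is manual authorship); equation 8 with FC = 0."""
    return (p1 * w / L - c) / P

def lambda_hat_open(p1: float, p2: float) -> float:
    """Condition 4 (fallback is an open source model); inequality 4 at equality."""
    return c * (p1 - p2) / (p2 * P)

print(f"vs. manual,      p1 = 0.75:           {lambda_hat_manual(0.75):.4f}")
print(f"vs. open source, p2 = 0.90 * p1:      {lambda_hat_open(0.75, 0.675):.4f}")
```

As expected, the detection capability needed to deter use of the API-accessible model is orders of magnitude smaller when a nearly-as-good open source fallback exists.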
However, Figure 4(b) further shows that these cost impositions are dramatically reduced when operators can instead switch to alternative open source models, even if those models perform less well. For instance, if both models perform with \(p>0.5\), but the best available open source model consistently performs only 90% as well as the private model, then optimal detection capabilities (roughly, a 0.2% probability of detection per output) will impose under $250,000 in additional costs.

## 6 Fixed Costs Associated with Running and Training Local Language Models

The previous discussion focuses entirely on variable costs associated with a manual campaign, the use of an open source model, or the use of an API-accessible model. For simplicity, I have treated the fixed costs associated with each type of campaign as negligible.27 However, a propagandist need not treat the performance of an open source model as fixed; instead, they can choose to expend some additional up-front resources on fine-tuning the model to improve it. Let \(FR\) represent the feasibility region consisting of all points \((FC,p_{2})\) for which it is feasible to reach a given performance by expending \(FC\) in fixed costs.28 Then the total expected costs facing the operator are given by:29

Footnote 29: Note that equation 5 can be expanded to include relevant comparisons of multiple API-accessible models or multiple locally-running open source models. For instance, the feasibility region of an existing, relatively small model may not include high values of \(p\) which could be achieved were a propagandist to train a larger model from scratch, though such training might require much higher fixed costs. Similarly, multiple API-accessible models may exist with different detection capabilities and performances.

Figure 4: Optimal Strategies and Maximum Costs Imposed as a Function of \(p_{1}\) and \(p_{2}\)

\[C(n)=\min\left(\begin{aligned} \text{Manual}&=\frac{nw}{L}\\ \text{API-Accessible LLM}&=\frac{n}{p_{1}}\left(\frac{w}{\alpha L}+IC+P\lambda\right)\\ \text{Open Source LLM}&=\frac{n}{p_{2}}\left(\frac{w}{\alpha L}+IC\right)+FC,\quad(FC,p_{2})\in FR\end{aligned}\right) \tag{5}\]

The question of defining the feasibility region, that is, of articulating what types of capabilities are possible at various levels of investment, is both task-specific and requires advanced technical knowledge that goes well beyond the scope of this paper. Nonetheless, existing knowledge can be used to make some general estimates, as in the following scenarios:

* Suppose that ChatGPT-3.5 is capable of producing "usable" outputs for an operator at a rate of 0.85, but that ChatGPT-4 has a higher success rate of 1.0.30 Because ChatGPT-4 requires a $20 monthly fee to access, while ChatGPT-3.5 does not, we can treat the penalty for detection from ChatGPT-4 as $20 higher than the penalty for detection from ChatGPT-3.5 (assuming that the propagandist will be detected at least once per month, and thus effectively pays this as a one-time signup fee after each detection). Plugging these values into the line for the API-accessible LLM in equation 5, using our Monte Carlo sampling for the other parameters, and rearranging, we can estimate that the propagandist will prefer to use ChatGPT-3.5 as long as \(\lambda>0.0009\) (i.e., as long as there is at least a 0.09% probability of detection for each output).

Footnote 30: For this scenario, I imagine that the performance of GPT-4 could be replicated with a GPT-3-sized model plus fine-tuning, but that it could not be replicated with a smaller model.
* Suppose further that OpenAI has in fact implemented monitoring controls sufficient to detect malicious action at this rate. However, the propagandist can also reach performance on par with ChatGPT-3.5 by expending only $600 to download and fine-tune an existing open source model, i.e. the point \((\$600,0.85)\in FR\).31 Again plugging the relevant values into equation 5 and using Monte Carlo sampling for the remaining parameters, we can estimate that the propagandist will prefer to use the open source model if they anticipate using it for more than roughly 250,000 outputs.

* Finally, suppose that the propagandist cannot further improve any existing open source language models beyond this threshold with additional fine-tuning. However, the propagandist can choose to train a more advanced model of their own for $4,600,000 which could perform as well as ChatGPT-4 (but without any usage monitoring).32 If the OpenAI models were the only ones available, training this model would be cost-effective if the propagandist planned to engage in influence operations requiring roughly 310 million outputs or more. But at that scale, using the $600-fine-tuned open source model is more cost effective than either OpenAI model, and compared to **that** alternative, the propagandist only finds it cost-effective to train a model from scratch if they intend to conduct campaigns larger than 410 million outputs in size.33 (These comparisons are worked through numerically in the sketch below.)

Footnote 31: In [66], researchers were able to achieve GPT-3.5-level performance for less than $600 in fine-tuning expenses. Note that the cost required to both train and fine-tune open source models in order to reach arbitrary levels of capability appears to be continuously declining; see also [52]. For this reason, the estimates provided here if anything underestimate the ease and economic value of quickly pretraining or fine-tuning existing open source models for use in specialized influence operations, as opposed to relying on API-accessible models.

Footnote 32: [22] estimates the total cost of training GPT-3 at $4,600,000.
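The sketch below works through the first two options by minimizing equation 5 directly at midpoint parameter values (the \(p\), \(FC\), and \(\lambda\) values are the ones assumed in the bullets above; the exact crossover scales quoted in the text come from averaging over the full Monte Carlo ranges, so the single-midpoint crossovers here differ somewhat):

```python
w, L, alpha, IC = 5.47, 15.0, 6.0, 0.01  # midpoint parameter assumptions
P = 1.25 * w                             # expected evasion penalty (USD)
c = w / (alpha * L) + IC                 # cost to generate and review one output

def campaign_costs(n: int) -> dict:
    """Equation 5 for the three options considered in the scenarios above."""
    return {
        "manual": n * w / L,
        "API (p=0.85, lambda=0.0009)": (n / 0.85) * (c + P * 0.0009),
        "open source (p=0.85, FC=$600)": (n / 0.85) * c + 600,
    }

for n in (10_000, 100_000, 1_000_000):
    d = campaign_costs(n)
    best = min(d, key=d.get)
    print(f"n = {n:>9,}: cheapest = {best} (${d[best]:,.0f})")
```

Even at these midpoints, the monitored API model wins only for small campaigns, with the fine-tuned open source model overtaking it well before one million outputs.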
Although they require a lot of suppositions, these scenarios are useful for illustrating some general points: for very small campaigns, propagandists are likely to prefer using API-accessible models, even if those models have monitoring controls that impose significant costs. But given only moderate assumptions about the payoffs of fine-tuning small, lightweight models to perform propaganda-specific tasks, it very quickly becomes more cost-effective for operators to rely on such models. And even when those models still have relatively large limitations that necessitate continued and careful human curation of outputs, training a large language model from scratch is almost never economically worthwhile except at extremely large scales.

## 7 Sensitivity Analysis

The previous results include the following specific estimates:

1. **Threshold Performance**: A marginal output produced by a human-AI team is expected to become cheaper than a marginal output written by a human author whenever a language model is able to produce usable outputs at a rate higher than 0.25 (95% CI: [0.12, 0.51]).

2. **Maximum Savings**: Over the course of a 10-million-tweet campaign, with a language model that produces usable outputs at a rate of 75%, a propagandist could expect to save $3 million in content generation costs, on average (assuming no fixed costs to using the model and no monitoring controls in place on the model; 95% CI: [$430,000, $9.4 million]).

3. **Optimal Detection Rate (API Only)**: If an operator can access an API-accessible model that produces usable outputs at a rate of 75%, and if their only fallback in response to costly monitoring controls is to resort to human authorship, then monitoring controls that can detect misuse with a probability of at least 4% per output would be required to fully deter the propagandist from using the API-accessible model (95% CI: [0.9%, 12%]).

4. **Maximum Cost Imposition (Public Option)**: However, if an open source but slightly worse-performing model (say, one that produces usable outputs at a rate of 70%) exists, then the maximum cost imposition generated by monitoring controls is $740,000 (95% CI: [$44,000, $3.0 million]).

5. **Minimum Viable Size (Fine-tuning vs. API)**: If a propagandist can fine-tune an existing open source model for $600 to produce usable outputs at a rate of 85%, and if a similarly capable API-accessible model exists with a 0.1% probability of detection per output, then the fine-tuned model is preferred for any campaign larger than roughly 130,000 outputs (95% CI: [38,000 outputs, 420,000 outputs]).

6. **Minimum Viable Size (Training vs. Fine-tuning)**: However, if reaching a performance reliability of 100% requires training an LLM from scratch at roughly the cost of GPT-3's original training run ($4.6 million), then training from scratch is only cost-effective for campaigns larger than roughly 410 million outputs (95% CI: [82 million, 1.1 billion]).
These estimates require a number of parameters to be manually specified, primarily models' performance rates, detection capabilities, and fixed costs (at least when these parameters themselves are not the object of analysis). However, the estimates also rely on Monte Carlo sampling for five key variables: \(\alpha\), \(w\), \(L\), \(IC\), and \(P\). Table 1 provides summaries of the ranges over which values for each variable were (uniformly) sampled, and reiterates the general source(s) from which each of these ranges was extrapolated.

While all of these sampling ranges span large regions of uncertainty, uncertainty in some parameters drives variation in the estimates of the above numerical results more strongly than uncertainty in others. Figure 5 attempts to visualize the way that uncertainty in each parameter contributes to variation in estimates for the six numerical results listed above. Each boxplot represents 10,000 samples of a particular parameter of interest where only the parameter on the x-axis is allowed to vary; all other parameters are held constant at their midpoint value. Effectively, this samples from the marginal distribution for the estimate with respect to a single parameter of interest. The final, yellow boxplot on the right-hand side of each subplot shows the variation in the final estimate when all parameters listed along the x-axis are allowed to vary on the ranges shown in Table 1; that is, the yellow boxplot represents the overall probability distribution for the estimate as given by this model.

Subplot (a), for instance, shows that the vast majority of uncertainty in the predicted threshold performance at which it becomes cost-effective to use a language model depends on \(\alpha\), and not on inference costs, wages, or labor productivity. By contrast, other parameters of interest are more heavily determined by values of \(w\), \(L\), and--for estimates where monitoring controls on an API-accessible model are relevant--\(P\). No parameter estimate depends heavily on variation in \(IC\), which further justifies treating the inference costs of different models as equivalent (see footnote 24). Interestingly, most estimates for the potential savings that unrestricted access to a reasonably reliable LLM could generate (subplot (b)) span roughly two orders of magnitude. But the estimates for the maximum potential cost imposition of monitoring controls when a slightly-less-reliable open source model exists (subplot (d)) span a wider three orders of magnitude. There is also relatively large uncertainty regarding the detection capability needed to fully deter propagandists from using an API-accessible model (when public options do not exist, subplot (c)). There is less variation, by contrast, in the estimates regarding the scale of operation at which a fine-tuned open source model becomes cheaper than a similarly-performing API-accessible model (subplot (e)). Note that, since such a comparison assumes that the two models perform similarly, the only differences between them when minimizing equation 5 depend on \(FC\), \(\lambda\), and \(P\) (which is itself partially a function of \(w\))--thus, variation in \(\alpha\), \(L\), and \(IC\) does not cause variation in the estimated scale necessary for a fine-tuned open source model to become preferred to a private, API-accessible one.
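A compact sketch of this one-at-a-time procedure (my own reconstruction of the method, not the paper's plotting code, using the break-even threshold from estimate 1 as the example statistic):

```python
import numpy as np

rng = np.random.default_rng(0)
ranges = {"alpha": (2.0, 10.0), "w": (1.41, 9.53),
          "L": (5.0, 25.0), "IC": (0.0006, 0.024)}
midpoints = {k: (lo + hi) / 2 for k, (lo, hi) in ranges.items()}

def break_even_p(alpha, w, L, IC):
    # Threshold usable-output rate at which a human-machine team breaks even.
    return 1.0 / alpha + IC * L / w

for name, (lo, hi) in ranges.items():
    params = dict(midpoints)
    params[name] = rng.uniform(lo, hi, 10_000)  # vary only this parameter
    estimate = break_even_p(**params)
    iqr = np.percentile(estimate, [25, 75]).round(3)
    print(f"{name:>5}: IQR of break-even p = {iqr}")
```

Run on this statistic, the sweep reproduces the qualitative pattern of subplot (a): almost all of the spread comes from \(\alpha\), with \(IC\) contributing essentially none.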
Table 1: Sampling Ranges for Key Parameters

| Parameter | Lower Bound | Upper Bound | Midpoint | Justification |
|---|---|---|---|---|
| \(\alpha\) | 2 | 10 | 6 | Observed Values from Code Generation Tasks |
| \(w\) | $1.41 | $9.53 | $5.47 | Historical IRA Job Postings and Operations |
| \(L\) | 5 | 25 | 15 | Historical IRA Job Postings and Operations |
| \(IC\) | $0.0006 | $0.024 | $0.01 (approx.) | API Fees for Existing Models |
| \(P\) | \(0.5w\) | \(2w\) | \(1.25w\) | Optimistic Range of Impact |

Figure 5: Contributions from Uncertainty in Key Parameters to Variation in Overall Results

## 8 Discussion

The preceding analysis suggests that, under a relatively wide range of potential scenarios, the use of language models to produce misinformation content is highly cost-effective, relative to the use of purely manual content generation. While it is not surprising that a fully automated campaign would be cheaper than paying humans to write content for influence operations, these models suggest that the use of even relatively unreliable models can substantially reduce propagandists' costs via human-machine teams, as long as models produce "usable" outputs more often than one in four times. With human-machine teams, labor costs still dominate an operation's overall content generation costs, but savings can quickly approach the millions of dollars. Before concluding, it is worth discussing a few general points about the implications and limitations of this work.

### Model limitations

This model of influence operations is limited in a number of key ways. First, and most notably, parameter estimates regarding worker productivity and wages in existing influence operations are based on a small number of investigative reports or job postings, almost exclusively in the context of Russian influence operations. These figures may or may not generalize to other propaganda operations, but an absence of public data about the organization and economic structures of propaganda campaigns makes further precision difficult. Additionally, while some research has begun to emerge regarding the impact of LLM usage on worker productivity [8, 12, 17, 46, 64], there is still large uncertainty regarding how effective disinformation operators will be at incorporating LLMs into their workflows. Future economic research in other domains may significantly help to narrow the uncertainties in this model.

Relatedly, the model analyzes the economics of using LLMs for discrete tasks insofar as it uses a single value--\(p\)--to describe a model's capability. But \(p\) is not meant to be a description of a model's abstract capabilities, but rather its reliability at producing usable outputs on a specific task. For instance, a model may perform reliably enough to save money on the task of tweet generation, but struggle more with longer-form content, making it cost-ineffective for use on the task of fake news article generation. To **some** extent, it may be reasonable to interpret a finding that "use of such and such a model becomes cost effective given certain assumptions at \(x\) outputs" as meaning that the model becomes cost effective when used across multiple tasks to produce any type of written content equivalent in volume to \(x\) tweets. But such an inference relies on the assumption that performance across tasks is relatively consistent, which may or may not be true.
In other words, while the models used here can give a rough sense of the scale at which certain content production strategies become cost-saving, they do not fully capture the multiplicity of content generation tasks for which actual propagandists would be likely to use LLMs.

This model is also strictly focused on the cost savings associated with producing content at scale, and not with quality improvements that LLMs could offer propagandists. But major quality improvements may be possible, providing an additional incentive to make use of LLMs in propaganda campaigns. The use of copypasta, stilted language from non-native speakers, or transliterations of idioms that do not make sense in a propagandist's target language have often provided important clues regarding inauthentic behavior. [15, 24, 63] These errors in human cultural awareness and translation ability are more prevalent in influence operations conducted by some countries than others, and countries whose propagandists frequently make such errors may be forced to prioritize volume in output instead of taking time to carefully craft believable--and more persuasive--personas for their fake accounts. [23, 38] Language models, by contrast, are unlikely to make such easily-noticeable mistakes, though they may still struggle to effectively mimic discursive norms common in fine-grained target populations. [7, 9, 58] Other research has noted that content produced from GPT-3 can change readers' opinions on sensitive political issues, and can do so even better than existing examples of propaganda news articles with only light copyediting. [22] These improvements in quality may permit propagandists to meaningfully alter the strategies that they employ in influence operations. [21]

### Factors other than cost may disincentivize the use of LLMs in influence operations

Even if it is strictly cost-effective for malicious actors to use LLMs to produce disinformation content, organizational, bureaucratic, or cultural barriers may cause propaganda outlets to nonetheless avoid doing so. Propaganda outlets can take a number of forms. [18] describes the proliferation of private "disinformation-for-hire" firms that are contracted to generate and promote disinformation, whereas [28] argues that large quantities of Chinese-origin propaganda on social media are produced by a diffuse set of government bureaucrats who are paid per output to produce propaganda content but without any centralized direction or oversight. Propaganda outlets organized around the first model will likely have much stronger incentives to adopt cost-saving technologies than those operating on the second model. In the extreme, we can even speculate that bureaucratic structures which reward departments on the basis of personnel size may actively disincentivize the adoption of LLMs for propaganda purposes.34 [6] notes that many countries continue to use decentralized organizational models to produce content for influence operations, but that the current trend is increasingly towards centralized, third-party firms, which are much more likely to have the coordination and desire necessary to adopt new cost-saving technologies. According to [6], between January 2019 and November 2020, public government contracts suggest that states have paid at least $60 million (and likely much more) to private firms to conduct influence operations.

Footnote 34: It is not clear whether any major propaganda outlets face this set of incentives.
However, one common issue facing propagandists is that it is remarkably difficult to evaluate the impact of disinformation on actual political attitudes and behaviors, with some research indicating that the concrete effects of exposure to influence operations are relatively small. [16] The difficulty of evaluating the political import of specific operations, combined with the insulation from cost-cutting pressures that exists for in-house propaganda authors (contrasted with specialized firms), could largely nullify the incentives to adopt LLMs for content generation.

More generally, it is unclear to what extent propagandists are optimizers or satisficers. The development of "deepfake" technology over the 2010s led some analysts to speculate that Russia would unleash a "wave" of deepfaked disinformation against the West. [41] However, despite some high-profile examples [1], deepfaked images and videos remained an apparently minor component of Russian influence operations for relatively long after the technology to produce them existed. Only in recent months, with the rise of text-to-image generative AI systems, have AI-generated images become more commonly observed as a tool for disinformation (though not necessarily as a tool of Russian disinformation specifically). [54, 45] This potentially suggests that technical barriers to adoption can meaningfully deter propagandists from using new technologies, and that improvements in user-friendliness affect propagandists' decision-making more than improvements in underlying capabilities. Similarly, although the use of LLMs to produce disinformation was largely speculative until recently, recent months have seen networks of Twitter accounts posting tweets containing ChatGPT's default response for refusing a user request, likely suggesting attempts to use the model to generate content to post on social media. [44] While the models presented here suggest that for even relatively small campaigns, a propagandist's cost-optimal solution is to fine-tune an open source model, propagandists may instead prefer to rely on solutions with lower adoption costs even when doing so is economically irrational.

### Nation-states do not have strong incentives to secretly train LLMs for influence operations

The maximum length of a campaign evaluated in the preceding sections was 10 million tweets. This volume of coordinated inauthentic activity on Twitter between October 2018 and December 2021 was exceeded by only three countries (see Section 3). However, this is true only when considering (1) publicly attributed activity that was (2) posted to and removed by Twitter (3) over a three-year period with significant gaps in reporting. It seems reasonably likely that for a small but significant number of nation-state actors, the amount of content generated for use in influence operations over the near- to medium-term could substantially exceed the equivalent of 10 million tweets. In fact, [28] estimates that the Chinese government fabricates and posts "about 448 million social media comments a year."

However, even at this scale, the value of training an LLM from scratch to produce disinformation content--as opposed to simply fine-tuning an existing open source model--is dubious, even if the fine-tuned model is not as capable (see Section 6). If the best attainable performance of any open source model, even after careful fine-tuning, was still very low for a given task, it **might** become economically viable to train an LLM from scratch.
But the propagandist must not only believe this to be true of existing open source models, but also of any future models that may be released between the time the operator begins training their model and the time they generate enough posts to fully recoup their expenses. Given the rapid rate of public model releases and the tendency of access to privately-held models to become easier and less restricted over time, this is unlikely to be a reasonable bet.35

Footnote 35: Note also that, even if the Chinese government produces roughly 448 million inauthentic social media posts per year, this volume of content is likely not produced by a single propaganda agency that could pay the initial fixed costs of model training and then recoup their expenses over time; rather, it is produced (at least in part) by a diffuse set of bureaucrats, no one member of which stands to gain from paying large upfront costs for the sake of increasing their individual efficiency at generating misinformation. [28]

It is possible that the use of LLMs may itself enable much larger campaigns, such that although training a model from scratch would be a poor economic decision under **current** scales of operation, doing so would enable much larger scales of operation that **do** justify such an investment. But there are checks on the scale at which propagandists can operate that go beyond the costs of content generation, including the difficulty of maintaining large networks of inauthentic accounts without being detected by platforms. [20, 57] It seems reasonably likely, then, that even nation-states may find it difficult to justify secretive large-scale training runs of LLMs intended primarily for use in influence operations.

### The comparative value of technical mitigations against LLM misuse

There are three broad technical interventions which LLM developers can pursue to reduce the likelihood of their models being abused: they can train or fine-tune the model itself in ways that reduce its propensity to comply with malicious user requests (thereby reducing \(p\)), they can invest in capabilities to detect misuse or impose greater penalties on identified malicious actors (thereby increasing \(P\lambda\)),36 or they can embed watermarks into model outputs or pursue other strategies that increase the potential for detection of synthetic content online.37 The model and analysis presented here indicate that all three strategies can be valuable, though in different ways.

Footnote 36: "Increasing penalties" here can mean anything that imposes additional friction upon propagandists once identified. For instance, requiring a CAPTCHA in addition to an email address for users signing up for model access imposes additional costs, though not very large ones.

Footnote 37: The model presented here does not readily include a way of analyzing this strategy. Watermarks reduce the value of LLM outputs, but they do not make it more costly to generate the outputs, meaning that a strict cost comparison does not capture the relevant differences.

Model alignment efforts that reduce \(p\) and monitoring controls that increase \(P\lambda\) are primarily useful when rival open source models do not exist, or if propagandists are particularly "sticky" and unlikely to switch to such rival models. Model alignment efforts can also be pursued by groups who develop and release open source models, though to date, this is less common than among businesses that seek to monetize their models.
However, monitoring controls are not, in principle, applicable to open source models (at least not in the form analyzed here, though see footnote 21).38

Footnote 38: Note that even if these interventions do not carry major benefits from a security perspective, they may still be valuable from a safety perspective.

In addition, if propagandists are satisficers, it is possible that a failed attempt to make use of an API-accessible model (whether due to the model's refusal to produce the desired outputs or a quick detection) may dissuade them from seriously pursuing the use of LLMs by other means as well.

The development of watermarks for LLMs is an active area of research. Existing proposals for technical methods of watermarking LLMs, however, are "shallow" in the sense that they are added on top of a pretrained LLM and can easily be removed by a user or eliminated via fine-tuning. [30] However, a growing number of researchers are exploring the feasibility of "deep" watermarks or other methods that persist and allow for attribution even after fine-tuning. [3, 41, 42] Whether or not such deep watermarks will prove to be feasible is an open question--but if they are, and if propagandists looking to use LLMs for content generation primarily rely on fine-tuned versions of open source models, then embedding such watermarks into open source models may be a valuable intervention.

It is important to emphasize, however, that while the model presented here can shed some light on the expected value of some types of technical interventions, no combination of strictly technical safeguards is likely to fully address the issues posed by LLM-enabled influence operations. To more comprehensively address these risks, it will be necessary to combine technical, policy, and legal interventions. For more discussion of the variety of technical, policy, and legal tools available to various stakeholders, see (_inter alia_) [22, 58, 59, 61, 62, 69].

## Acknowledgements and Supplemental Materials

For their comments, discussion, careful analysis, and general support of this work, I would like to thank Renee DiResta, Irene Solaiman, Katherine Quinn, Girish Sastry, Josh Goldstein, Andrew Lohn, Mia Hoffmann, and John Bansemer. All errors remain my own.

A GitHub repository for this work, which contains code to produce all associated figures and results via Monte Carlo estimation, is available at [https://github.com/georgetown-cset/disinfo-costs](https://github.com/georgetown-cset/disinfo-costs). A blog-style summary of the work for policymakers can be found at [https://cset.georgetown.edu/article/how-much-money-could-large-language-models-save-propagandists/](https://cset.georgetown.edu/article/how-much-money-could-large-language-models-save-propagandists/).

## Supplemental Equations

The model presented in this paper requires only algebraic manipulation, primarily of equation 5, in order to calculate any of the final variables of interest discussed in Section 7. However, some of this algebraic manipulation can be tedious, so I reproduce here some useful solutions for various parameters of interest. First, let \(\hat{p}_{1H}\) represent the threshold performance value at which the use of the private model is preferred to reliance on a manual campaign (assuming the detection rate \(\lambda\) is already fixed).
Then this value is given by:

\[\hat{p}_{1H}=\frac{1}{\alpha}+\frac{L}{w}\left(IC+P\lambda\right) \tag{6}\]

Let \(\hat{p}_{1AI}\) represent the threshold performance value at which the use of the private model is preferred to an alternative open source option. This value is given by:

\[\hat{p}_{1AI}=p_{2}+\frac{p_{2}P\lambda-\frac{FC}{n}}{\frac{w}{\alpha L}+IC} \tag{7}\]

Note that, if \(p_{1}=p_{2}\), the propagandist is only indifferent between the API-accessible and open source models if \(p_{2}P\lambda=\frac{FC}{n}\). In other words, the API-accessible model can perform worse than the open source model and face meaningful monitoring risks, and yet still be preferred if \(\frac{FC}{n}\) is sufficiently large.

If, alternatively, values of \(p_{1}\) are already known, we may instead want to calculate the minimum detection capability at which a propagandist is deterred from using the API-accessible model. If the propagandist's fallback from the use of the API-accessible model is a manual campaign, then the minimum deterrent detection capability \(\hat{\lambda}_{H}\) is given by:

\[\hat{\lambda}_{H}=\frac{1}{P}\left(\frac{wp_{1}}{L}-\frac{w}{\alpha L}-IC\right) \tag{8}\]

Alternatively, if the public model is sufficiently well-performing that the propagandist will use it instead as a fallback, then the minimum deterrent detection capability \(\hat{\lambda}_{AI}\) is instead given by:

\[\hat{\lambda}_{AI}=\frac{1}{P}\left(\frac{w}{\alpha L}+IC\right)\left(\frac{p_{1}-p_{2}}{p_{2}}\right)+\frac{p_{1}FC}{nP} \tag{9}\]

Note that in the two preceding equations, a detection penalty of $0 causes \(\hat{\lambda}\) to be undefined, because no detection capability could possibly deter malicious use if the detection does not itself impose some form of penalty. In addition, note that equation 9 is the general case of equation 4, where fixed costs associated with running an open source model are no longer assumed to be $0.

Finally, let equation 5 represent the three choices of operation facing the propagandist, but let it also be possible for the propagandist to spend \(FC\) (or \(\Delta FC\) more than was spent for the use of the current open source model option) to create or fine-tune a new model with target capability \(\hat{p}\). Then, for each of the three campaign styles, we can set \(\frac{n}{\hat{p}}\left(\frac{w}{\alpha L}+IC\right)+FC\) equal to the corresponding portion of equation 5 and solve for \(n\) to calculate the minimum campaign size at which expending \(FC\) becomes cost effective relative to an existing choice of model (including manual authorship as a "choice" of model). The overall minimum viable scale for a model with fixed cost \(FC\) and target capability \(\hat{p}\) is then the maximum solution of \(n\) across all alternative model choices (because the minimum viable scale is the scale at which a model becomes cost effective relative to the next-most-cost-effective option). This is given by:

\[\hat{n}=\max\left(\begin{array}{c}\text{Manual}=\frac{\hat{p}FC}{\frac{\hat{p}w}{L}-\left(\frac{w}{\alpha L}+IC\right)}\\ \text{API-Accessible LLM}=\frac{(\hat{p}\cdot p_{1})FC}{\hat{p}P\lambda+(\hat{p}-p_{1})\left(\frac{w}{\alpha L}+IC\right)}\\ \text{Open Source LLM}=\frac{(\hat{p}\cdot p_{2})\Delta FC}{(\hat{p}-p_{2})\left(\frac{w}{\alpha L}+IC\right)}\end{array}\right) \tag{10}\]
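For readers who prefer to compute these quantities directly, the following is a minimal Python sketch of equations (6)-(10). It is my own illustration, not code from the paper's repository; the symbols mirror those in the equations above, and every numeric value below is a placeholder assumption (the formulas also assume positive denominators, i.e., that the new model is actually cheaper per usable post).

```python
# Minimal sketch of supplemental equations (6)-(10). Not from the paper's
# repository; all numeric values below are placeholder assumptions.

def recurring_cost(w, alpha, L, IC):
    # The per-post term w/(alpha*L) + IC that recurs throughout (6)-(10).
    return w / (alpha * L) + IC

def p1_threshold_vs_manual(w, alpha, L, IC, P, lam):
    # Equation (6): performance at which the API model beats a manual campaign.
    return 1 / alpha + (L / w) * (IC + P * lam)

def min_deterrent_lambda_manual(w, alpha, L, IC, P, p1):
    # Equation (8): detection capability that pushes the propagandist back
    # to a manual campaign.
    return (w * p1 / L - w / (alpha * L) - IC) / P

def min_deterrent_lambda_open_source(w, alpha, L, IC, P, p1, p2, FC, n):
    # Equation (9): detection capability that pushes the propagandist to an
    # open source fallback whose fixed cost FC is spread over n posts.
    X = recurring_cost(w, alpha, L, IC)
    return X * (p1 - p2) / (p2 * P) + p1 * FC / (n * P)

def min_viable_scale(w, alpha, L, IC, P, lam, p1, p2, FC_new, dFC, p_hat):
    # Equation (10): smallest campaign size n at which spending FC_new on a
    # new model of capability p_hat beats every alternative (the max branch).
    X = recurring_cost(w, alpha, L, IC)
    n_manual = p_hat * FC_new / (p_hat * w / L - X)
    n_api = p_hat * p1 * FC_new / (p_hat * P * lam + (p_hat - p1) * X)
    n_open = p_hat * p2 * dFC / ((p_hat - p2) * X)
    return max(n_manual, n_api, n_open)

if __name__ == "__main__":
    # Placeholder parameter values for illustration only.
    params = dict(w=20.0, alpha=50.0, L=10.0, IC=0.01, P=10_000.0)
    print(p1_threshold_vs_manual(lam=1e-4, **params))
    print(min_deterrent_lambda_manual(p1=0.8, **params))
    print(min_viable_scale(lam=1e-4, p1=0.8, p2=0.6,
                           FC_new=100_000.0, dFC=50_000.0, p_hat=0.9, **params))
```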
2302.08896
Modelling and Kron reduction of power flow networks in directed graphs
Electrical grids are large-sized complex systems that require strong computing power for monitoring and analysis. Kron reduction is a general reduction method in graph theory and is often used for electrical circuit simplification. In this paper, we propose a novel formulation of the weighted Laplacian matrix for directed graphs. The proposed matrix is proved to be strictly equivalent to the conventionally formulated Laplacian matrix and is verified to accurately model a lossless DC power flow network in directed graphs. We also present significant properties of the proposed weighted Laplacian and conditions for Kron reduction in directed graphs and in lossless DC power flow networks. The reduction method is verified via simulation models of the IEEE-3, IEEE-5, IEEE-9, IEEE-14, and IEEE RTS-96 test systems.
Ruohan Wang, Zhiyong Sun
2023-02-17T14:27:21Z
http://arxiv.org/abs/2302.08896v1
# Modelling and Kron reduction of power flow networks in directed graphs

###### Abstract

Electrical grids are large-sized complex systems that require strong computing power for monitoring and analysis. Kron reduction is a general reduction method in graph theory and is often used for electrical circuit simplification. In this paper, we propose a novel formulation of the weighted Laplacian matrix for directed graphs. The proposed matrix is proved to be strictly equivalent to the conventionally formulated Laplacian matrix and is verified to accurately model a lossless DC power flow network in directed graphs. We also present significant properties of the proposed weighted Laplacian and conditions for Kron reduction in directed graphs and in lossless DC power flow networks. The reduction method is verified via simulation models of the IEEE-3, IEEE-5, IEEE-9, IEEE-14, and IEEE RTS-96 test systems.

_Keywords:_ Directed graphs, Laplacian matrix, Incidence matrix, Kron reduction, Schur complement, DC power flow

## 1 Introduction

### Background and motivations

Large-scale systems such as electrical grids require a heavy computing workload due to their size and complexity. It is only natural to think of applying model reduction techniques to ease the workload. Kron reduction is a ubiquitous reduction method in electrical circuit analysis, widely used in control theory and engineering to simplify and analyze large-scale systems, particularly in the design of control systems for electric power grids, aircraft, and other complex systems. It is also used in other fields, such as biology and economics, where it can reduce the complexity of models and make them more tractable for analysis and simulation. Originally proposed in [1] as a purely algebraic Gaussian elimination of certain vertices in electrical circuits, Kron reduction can also be viewed from the standpoint of graph theory. By the nature of electrical circuit modelling, most of the existing model reduction work in the field of control theory is based on undirected graphs. However, in many applications, including networked control systems, directed graphs arise just as naturally as undirected ones; hence, before any reduction is performed, it is of interest to consider directed graphs for electrical power network modelling.

### Literature review

In this subsection, we review some existing research work on the analysis and model reduction of electrical networks. An algorithm for maximizing power flows within a power network to prevent catastrophic power outages was proposed and verified in [2]. A parallel, distributed-memory, structure-exploiting framework that accelerates the solution of Security Constrained Optimal Power Flow (SCOPF) problems was proposed in [3]. Basic graph theory was used to lay the foundation for further discussion in [2] and [3]. However, in both papers, the main focus was the development of the proposed algorithms for a full-sized network, and model reduction was not taken into consideration. A novel notion termed _cutset angle_ was proposed by Dobson in [4] for monitoring power flow network stress. Dobson's formulation of the _cutset angle_ can be viewed as a two-stage treatment: first, add to the network a synthetic vertex defined as the algebraically weighted sum of all other vertices; then, apply Kron reduction to the network, eliminating all vertices except for the synthetic vertex. Undirected graphs were used by Dobson for modelling electrical circuits in [4].
Similarly, the terminal voltage/current behavior of a purely linear resistive circuit was derived in [5] by J. C. Willems and E. I. Verriest, followed by [6], where A. van der Schaft characterized the input-output behaviors of a linear resistive circuit before and after the removal of certain vertices. The heavy usage of the symmetric weighted Laplacian of a graph was the highlight of [6]. Meanwhile, Dorfler et al. provided a detailed graph-theoretic analysis of the Kron reduction process in [7], which was followed by the application of Kron reduction to resistive circuits in [8]. Purely algebraic conditions that relate synchronization and transient stability of a power network were derived by Dorfler et al. in [9]. Then, in [10], Dorfler et al. further proposed analytical approaches to phase and frequency synchronization in Kron-reduced networks. In [11], Dorfler et al. surveyed both historic and recent results on electrical network analysis based on algebraic graph theory, concluding with a series of open questions at the intersection of algebraic graph theory and electrical networks. Based on Dorfler's work, the Kron-reduced model was used to analyze both the transient and steady-state behavior of unreduced electrical networks in [12]. Also based on Dorfler's work, a time-domain generalization of Kron reduction for purely resistive and inductive networks was put forth in [13]. However, despite the fact that this series of work on Kron reduction and its application to electrical networks was comprehensive and enlightening, all of it still targeted undirected graphs exclusively.

Young et al. introduced in [14] a pairwise property of vertices that depends only on the connections between the vertices: a novel, generalized notion of effective resistance that applies to both undirected and directed graphs. The focus of that paper was the development of the foundations of effective resistances for applications involving directed graphs. In [15], Sugiyama et al. extensively elaborated on Kron reduction for directed graphs. Although modelling electrical networks as directed graphs was briefly mentioned in [14] and [15], unfortunately little physical interpretation of electrical networks was covered.

### Contributions

Contributions of this paper are summarized as follows:

* Modelling power flow networks in directed graphs is justified. A novel expression for the weighted directed Laplacian matrix using the graph's incidence matrix is proposed and proved to be strictly equivalent to the conventional weighted Laplacian.
* A number of properties of the proposed weighted Laplacian matrix are analyzed, including its eigenvalues, entry values, and the existence of Schur complements. These properties are significant for modelling and characterizing power flow networks in directed graphs.
* Input and output behaviors of a lossless power flow network are characterized by the proposed weighted Laplacian. I/O behaviors of the reduced network are characterized by the reduced Laplacian matrix.
* Implementations of Kron reduction on the IEEE-3, IEEE-5, IEEE-9, and IEEE-14 systems are successfully delivered. Numerical results of network reduction on the IEEE-14 test feeder and IEEE RTS-96 are presented, showing that the proposed approach can be applied to power networks of considerable size.

### Organization

Section 2 gives a summary of the problem formulation of this paper. Section 3 recalls some preliminaries in matrix analysis and algebraic graph theory.
Section 4 presents the formulation of the weighted Laplacian in the context of a DC power flow network and a graph-theoretic analysis of Schur complements. Section 5 presents the graph-theoretic analysis of the Kron reduction process on DC power flow networks. Section 6 presents numerical results of the proposed Kron reduction on an IEEE-14 test feeder and the modified IEEE RTS-96 test system. Finally, Section 7 concludes the paper and suggests future research directions.

## 2 Problem formulation

We seek answers to the following questions:

* How can a lossless DC power flow network be modelled using a directed weighted graph?
* What are the properties of the proposed weighted Laplacian matrix?
* How is the proposed weighted Laplacian matrix related to the conventionally defined Laplacian matrix?
* Does Kron reduction always exist for a directed graph?
* Can Kron reduction always be performed on a lossless power flow network?
* How are the input-output behaviors of the original network and the reduced one related?

These are the major problems that motivate this work. Some were formulated during the literature review phase, and others arose inevitably during the model reduction process, which in turn complemented the problem formulation.

## 3 Preliminaries

### Schur complement

The Schur complement is introduced in this subsection since it is the core of Kron reduction. Consider a partitioned matrix \(\mathcal{M}=\left(\begin{array}{cc}\mathcal{P}&\mathcal{Q}\\ \mathcal{R}&\mathcal{X}\end{array}\right)\), where \(\mathcal{P},\mathcal{Q},\mathcal{R},\mathcal{X}\) are respectively \(p\times p,p\times q,q\times p,q\times q\) sized matrices and the non-singular matrix \(\mathcal{P}\) is called the leading principal sub-matrix of \(\mathcal{M}\) [16]. The term 'Schur complement' of \(\mathcal{P}\) was introduced by Schur: \(\mathcal{M}/\mathcal{P}\triangleq\mathcal{X}-\mathcal{R}\mathcal{P}^{-1}\mathcal{Q}\). Note that a Schur complement exists with respect to any non-singular sub-matrix formed with columns and rows from the original matrix. Let \(\alpha,\beta\) be given index sets, which are subsets of \(\{1,2,...,p+q\}\). We denote the cardinality of an index set \(\alpha\) by the notation \(\left|\alpha\right|\) and its complement by the notation \(\alpha^{c}=\{1,2,...,p+q\}\setminus\alpha\). Let \(\mathcal{M}\left[\alpha,\beta\right]\) denote the sub-matrix of \(\mathcal{M}\) formed with rows indexed by \(\alpha\) and columns indexed by \(\beta\). If \(\mathcal{M}\left[\alpha^{c},\beta^{c}\right]\) is non-singular, we denote the Schur complement of \(\mathcal{M}\left[\alpha^{c},\beta^{c}\right]\) by \(\mathcal{M}/\mathcal{M}\left[\alpha^{c},\beta^{c}\right]\triangleq\mathcal{M}\left[\alpha,\beta\right]-\mathcal{M}\left[\alpha,\beta^{c}\right](\mathcal{M}\left[\alpha^{c},\beta^{c}\right])^{-1}\mathcal{M}\left[\alpha^{c},\beta\right]\).
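As an illustration of this general definition (my own sketch, not from the paper), the following NumPy snippet evaluates \(\mathcal{M}/\mathcal{M}[\alpha^{c},\beta^{c}]\) for a hypothetical \(4\times 4\) matrix with arbitrary row and column index sets \(\alpha,\beta\); the values are assumptions chosen only so that \(\mathcal{M}[\alpha^{c},\beta^{c}]\) is non-singular.

```python
import numpy as np

# Hypothetical 4x4 matrix; values chosen so M[alpha_c, beta_c] is non-singular.
M = np.array([[4., 1., 0., 2.],
              [1., 3., 1., 0.],
              [0., 1., 2., 1.],
              [2., 0., 1., 5.]])

alpha = np.array([0, 1])                 # retained row indices
beta = np.array([0, 2])                  # retained column indices
alpha_c = np.setdiff1d(np.arange(M.shape[0]), alpha)
beta_c = np.setdiff1d(np.arange(M.shape[1]), beta)

# M / M[alpha^c, beta^c] = M[a,b] - M[a,b^c] (M[a^c,b^c])^{-1} M[a^c,b]
S = M[np.ix_(alpha, beta)] - M[np.ix_(alpha, beta_c)] @ np.linalg.solve(
    M[np.ix_(alpha_c, beta_c)], M[np.ix_(alpha_c, beta)])
print(S)  # the 2x2 Schur complement
```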
### Kron reduction

Kron reduction is a general method in graph theory for reducing the size of an electrical network by removing unimportant vertices and edges. It was first introduced by Kron as 'Reduction Formulas' in [1], where it was obtained through a pure Gaussian elimination procedure. Consider **a linear resistive circuit** with \(n\) vertices, vertex voltages \(V\in\mathbb{R}^{n\times 1}\), vertex current injections \(I\in\mathbb{R}^{n\times 1}\), branch impedances \(z_{ij}\geq 0\) connecting vertex \(i\) and vertex \(j\), and the impedance matrix \(Z\in\mathbb{R}^{n\times n}\), which is a Laplacian matrix. By partitioning vertices following Kirchhoff's laws into two subsets, border vertices \(\alpha\subset\left\{1,...,n\right\},\left|\alpha\right|\geq 2\), and inner vertices \(\alpha^{c}=\{1,...,n\}\setminus\alpha\), the current-balance equations for the network can be partitioned as

\[\left[\begin{array}{c}V_{\alpha}\\ V_{\alpha^{c}}\end{array}\right]=\left[\begin{array}{cc}Z_{\alpha\alpha}&Z_{\alpha\alpha^{c}}\\ Z_{\alpha^{c}\alpha}&Z_{\alpha^{c}\alpha^{c}}\end{array}\right]\left[\begin{array}{c}I_{\alpha}\\ I_{\alpha^{c}}\end{array}\right]. \tag{1}\]

Gaussian elimination of inner current injections \(I_{\alpha^{c}}\) in (1) gives an electrically-equivalent reduced network with border vertices \(\alpha\) obeying the reduced current-balance equations

\[V_{\alpha}+Z_{ac}V_{\alpha^{c}}=Z_{red}I_{\alpha} \tag{2}\]

where the reduced impedance matrix \(Z_{red}\) is given by the Schur complement of \(Z\) with respect to inner vertices, that is \(Z_{red}=Z_{\alpha\alpha}-Z_{\alpha\alpha^{c}}Z_{\alpha^{c}\alpha^{c}}^{-1}Z_{\alpha^{c}\alpha}\), and the accompanying matrix \(Z_{ac}=-Z_{\alpha\alpha^{c}}Z_{\alpha^{c}\alpha^{c}}^{-1}\) maps inner voltages to border voltages in the reduced network.

Figure 1: Gaussian elimination on resistive circuits

_Example_ A linear resistive circuit with 6 vertices is presented in Fig. 1. Vertices \(3,4,5,6\) are _inner vertices_ that are only connected to other vertices within the network. Vertices \(1,2\) are _border vertices_ that are connected to other vertices within the network **and** voltage/current sources outside the network. The current-balance equations for this network can be partitioned in the form of (1). Gaussian elimination of inner vertices gives an electrically-equivalent reduced network, whose border vertices obey the reduced current-balance equations given by (2). This example illustrates that both a linear resistive circuit and the reduced network obtained by eliminating all inner vertices can be described by the matrix-form current-balance equations. \(\square\)

Similarly, consider **a lossless DC power flow network** with \(n\) vertices, vertex active powers \(P\in\mathbb{R}^{n\times 1}\), vertex angles \(\theta\in\mathbb{R}^{n\times 1}\), branch susceptances \(b_{ij}\geq 0\) connecting vertex \(i\) and vertex \(j\), and the susceptance matrix \(S\in\mathbb{R}^{n\times n}\), which is a Laplacian matrix. By partitioning vertices into two subsets, border vertices \(\beta\subset\left\{1,...,n\right\},\left|\beta\right|\geq 2\), and inner vertices \(\beta^{c}=\left\{1,...,n\right\}\setminus\beta\), the power-angle equation for the network can be partitioned as

\[\left[\begin{array}{c}P_{\beta}\\ P_{\beta^{c}}\end{array}\right]=\left[\begin{array}{cc}S_{\beta\beta}&S_{\beta\beta^{c}}\\ S_{\beta^{c}\beta}&S_{\beta^{c}\beta^{c}}\end{array}\right]\left[\begin{array}{c}\theta_{\beta}\\ \theta_{\beta^{c}}\end{array}\right]. \tag{3}\]
Gaussian elimination of inner angles \(\theta_{\beta^{c}}\) in (3) gives an electrically-equivalent reduced network with border vertices \(\beta\) obeying the reduced power-angle equation:

\[P_{\beta}+S_{ac}P_{\beta^{c}}=S_{red}\theta_{\beta} \tag{4}\]

where the reduced susceptance matrix \(S_{red}\) is given by the Schur complement of \(S\) with respect to inner vertices, that is \(S_{red}=S_{\beta\beta}-S_{\beta\beta^{c}}S_{\beta^{c}\beta^{c}}^{-1}S_{\beta^{c}\beta}\), and the accompanying matrix \(S_{ac}=-S_{\beta\beta^{c}}S_{\beta^{c}\beta^{c}}^{-1}\) maps inner active powers to border active powers in the reduced network. Kron reduction will be performed mainly on **lossless DC power flow networks** in this paper.

_Example_ A lossless DC power flow network with 6 buses/vertices is presented in Fig. 2. Vertices \(3,4,5,6\) are _inner vertices_ that are only connected to other vertices within the network. Vertices \(1,2\) are _border vertices_ that are connected to other vertices within the network and generators/loads outside the network. The power-angle equation for this network can be partitioned in the form of (3). Gaussian elimination of inner vertices gives an electrically-equivalent reduced network, whose border vertices obey the reduced power-angle equation given by (4). Similarly, this example illustrates that both a lossless DC power flow network and the reduced network obtained by eliminating all inner vertices can be described by the matrix-form power-angle equation, and that the reduction process essentially performs the Schur complement. \(\square\)

Figure 2: Gaussian elimination on lossless DC power flow networks

Before continuing to the next subsection, we would like to distinguish between _block-by-block Kron reduction_ and _iterative Kron reduction_. First, we recall the definition of _iterative Kron reduction_.

**Definition 1** (Iterative Kron reduction, T. Sugiyama and K. Sato [15]): _Iterative Kron reduction associated to a weighted Laplacian matrix \(\mathcal{L}\in\mathbb{R}^{n\times n}\) and indices \(\left\{1,...,\left|\alpha\right|\right\}\), is a sequence of matrices \(\mathcal{L}^{l}\in\mathbb{R}^{(n-l)\times(n-l)}\), \(l\in\left\{1,...,n-\left|\alpha\right|\right\}\), which is defined as_

\[\mathcal{L}^{l}=\mathcal{L}^{l-1}/\mathcal{L}^{l-1}[\left\{k_{l}\right\},\left\{k_{l}\right\}] \tag{5}\]

_where \(\mathcal{L}^{0}=\mathcal{L}\) and \(k_{l}=n+1-l\)._

**Remark 1**: _Block-by-block Kron reduction eliminates more than one vertex during each reduction process. We adopt block-by-block Kron reduction as the main reduction method in this paper; it essentially performs the Schur complement with block sub-matrices. In contrast, iterative Kron reduction eliminates one vertex during each reduction process. The result of block-by-block Kron reduction has been proved in [15] to be strictly equivalent to that of iterative Kron reduction when the same vertex subset is eliminated._
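As a quick numerical check of this equivalence (my own sketch, not the authors' code), the following NumPy snippet reduces a small hypothetical directed weighted Laplacian with zero row sums, once in a single block step and once one vertex at a time; the matrix values are assumptions.

```python
import numpy as np

def schur_reduce(L, keep):
    # Schur complement of L with respect to the eliminated rows/columns.
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(L.shape[0]), keep)
    return L[np.ix_(keep, keep)] - L[np.ix_(keep, elim)] @ np.linalg.solve(
        L[np.ix_(elim, elim)], L[np.ix_(elim, keep)])

# Hypothetical 4x4 directed weighted Laplacian (zero row sums; assumed values).
L = np.array([[ 2., -1., -1.,  0.],
              [ 0.,  1.,  0., -1.],
              [-1.,  0.,  2., -1.],
              [ 0., -1.,  0.,  1.]])

# Block-by-block: eliminate the vertices with indices 2 and 3 in one step.
block = schur_reduce(L, keep=[0, 1])

# Iterative (Definition 1): eliminate index 3 first, then index 2.
step1 = schur_reduce(L, keep=[0, 1, 2])
step2 = schur_reduce(step1, keep=[0, 1])
assert np.allclose(block, step2)  # Remark 1: both routes give the same result
print(block)
```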
### Directed graph, the incidence matrix and its variation

Consider a directed and **unweighted** graph \(\mathcal{G}_{d}=(\mathcal{V},\varepsilon_{d},\mathcal{H})\), where \(\mathcal{V}\) denotes the finite vertex set, \(\varepsilon_{d}\) denotes the directed edge set and \(\mathcal{H}\in\mathbb{R}^{|\mathcal{V}|\times|\varepsilon_{d}|}\) is the corresponding unique incidence matrix. \(|\mathcal{V}|\) is the number of vertices, and \(|\varepsilon_{d}|\) is the number of edges. The \((i,j)\)th element \([\mathcal{H}]_{ij}\) of the incidence matrix \(\mathcal{H}\) is equal to \(1\) if vertex \(i\) is the head of edge \(j\), equal to \(-1\) if vertex \(i\) is the tail of edge \(j\), and \(0\) otherwise. The head/tail specification in the context of _a DC power flow network graph_ is determined by the positioning of diode-like-functional reactive components on transmission lines, which will be elaborated on in Section 4.1. Thus the incidence matrix functions as a mapping from \(\varepsilon_{d}\) to the set of ordered pairs \((v,w)\in\mathcal{V}^{2}\), with no self-loops allowed in the graph under consideration.

For a given graph \(\mathcal{G}_{d}\), we identify a subset \(\mathcal{V}_{b}\subset\mathcal{V}\) as _boundary vertices_. Vertices that are tails of all edges connected to them are _sink_ vertices. Vertices that are heads of all edges connected to them are _source_ vertices. _Sink_ and _source_ vertices are _boundary vertices_. _Boundary vertices_ **cannot** be eliminated. The subset \(\mathcal{V}_{i}=\mathcal{V}\setminus\mathcal{V}_{b}\) contains all the other vertices of the graph, called _interior vertices_. _Interior vertices_ **can** be eliminated. A power flow network usually includes both _sink_ and _source_ vertices. By identifying them as _boundary vertices_, we ensure the integrity of the reduced graphs.

Consider a directed and **weighted** graph \(\mathcal{G}_{d}=(\mathcal{V},\varepsilon_{d},\mathcal{A})\). Entries \([\mathcal{A}]_{ij}\) of the adjacency matrix \(\mathcal{A}\) can be expressed as:

\[\left[\mathcal{A}\right]_{ij}\triangleq\left\{\begin{array}{ll}b_{k},&\text{if }(v_{i},v_{j})=e_{ij}\in\varepsilon_{d},\ b_{k}\text{ is the weight on edge }e_{ij}\\ 0,&\text{otherwise}.\end{array}\right. \tag{6}\]

A diagonal degree matrix \(\mathcal{D}\) corresponding to the directed and weighted graph \(\mathcal{G}_{d}\) can be derived from the introduced adjacency matrix \(\mathcal{A}\). Diagonal entries \([\mathcal{D}]_{ii}\) of the diagonal degree matrix \(\mathcal{D}\) are defined as \([\mathcal{D}]_{ii}\triangleq\sum_{j=1}^{n}[\mathcal{A}]_{ij}\). Before continuing to the formulation of the weighted Laplacian matrix for a lossless power flow network, definitions of different classes of directed graphs and _walk products_ are given below for future reference.

**Definition 2** (Strongly-connected graph): _A directed graph is said to be strongly-connected if every vertex can be reached from every other vertex._

**Definition 3** (Quasi-strongly-connected graph): _A directed graph is said to be quasi-strongly-connected if there exists one vertex that can reach all the other vertices in the graph. This vertex is called the root vertex._

**Definition 4** (Walk products \((\mathcal{A}^{k})_{0k}\), [17]): _Let \(\mathcal{A}\) be the \(n\times n\) adjacency matrix for a given weighted directed graph \(\mathcal{G}_{d}\). Let \((\mathcal{A}^{k})_{0k}\) given by \((v_{0},v_{1})\), \((v_{1},v_{2})\),..., \((v_{k-1},v_{k})\) be a walk in \(\mathcal{G}_{d}\). The walk product for the walk \((\mathcal{A}^{k})_{0k}\) is_

\[\prod_{j=1}^{k}[\mathcal{A}]_{j-1,j}. \tag{7}\]

**Remark 2**: _The product given by the expression in (7) is a generic term in the expansion of the \((v_{0},v_{k})\)-entry of \(\mathcal{A}^{k}\). The walk product \((\mathcal{A}^{k})_{0k}\) is non-zero only when all of its factors \([\mathcal{A}]_{j-1,j}\), \(j=1,2,...,k\), are non-zero.
A non-zero walk product \((\mathcal{A}^{k})_{0k}\) indicates that there exists a directed path in \(\mathcal{G}_{d}\) from \(v_{0}\) to \(v_{k}\)._

Next, we continue to formulate the modelling of a lossless power flow network via the weighted Laplacian. Consider a graph \(\mathcal{G}_{d}\). In the context of a DC power flow network, \(\theta\) is the vector of angles _at_ vertices/buses, which can be expressed as \(\theta=[\theta_{1},\theta_{2},\ldots,\theta_{n}]\), where \(\theta_{i}\) is the angle at vertex \(v_{i}\). The notation \(\varphi\) denotes the vector of angle differences _across_ edges (between the head and the tail of each edge), whose entries \(\varphi_{k}\) can be expressed as:

\[\varphi_{k}=\theta_{i}-\theta_{j} \tag{8}\]

where \(v_{i}\) is the head of edge \(k\) and \(v_{j}\) is the tail of edge \(k\). \(P_{edge}\) is the vector of active power flowing _through_ edges, whose entries \(P_{edge_{k}}\) can be expressed as:

\[P_{edge_{k}}=b_{k}\varphi_{k} \tag{9}\]

where \(b_{k}\) is the weight of edge \(k\). \(P_{v}\) is the vector of active power **extractions** _at_ vertices/buses, whose entries \(P_{v_{i}}\) can be expressed as:

\[P_{v_{i}}=\sum_{k=1}^{l}P_{\text{edge}_{k}} \tag{10}\]

where vertex \(v_{i}\) is the head of each edge \(k\) and the number of edges out of \(v_{i}\) is \(l\).

Define a matrix \(\mathcal{H}_{o}\) as the variation of the incidence matrix \(\mathcal{H}\) obtained by replacing all \(-1\) entries with \(0\). In order to have a symmetric notation, define another matrix \(\mathcal{H}_{i}\) as the variation of the incidence matrix \(\mathcal{H}\) obtained by replacing all \(1\) entries with \(0\). Since \(\mathcal{H}=\mathcal{H}_{o}+\mathcal{H}_{i}\), \(\mathcal{H}\) maps \(P_{edge}\) to active power summations considering both **extractions and injections** _at_ vertices. \(\mathcal{H}_{i}\) maps \(P_{edge}\) to active power **injections** alone _at_ vertices. \(\mathcal{H}_{o}\) maps \(P_{edge}\) to active power **extractions** \(P_{v}\) alone _at_ vertices, which will be the focus of this paper.

Kirchhoff's treatment of circuit graphs considers external currents entering/leaving certain vertices of the graph. The motivation of the treatment in this paper is analogous to Kirchhoff's treatment of circuit graphs: external active power injected into/extracted from certain vertices of the graph. The hybrid treatment considering both power injection and extraction is indispensable in conventional power flow analysis, but it requires the articulation of \(\mathcal{H}\) as the composition of \(\mathcal{H}_{o}\) and \(\mathcal{H}_{i}\). This conventional treatment inevitably results in operations on undirected graphs, which runs against our intention of performing reduction on directed graphs. Hence, in this paper, we intentionally distinguish between \(\mathcal{H}_{o}\) and \(\mathcal{H}_{i}\), and we emphasize \(\mathcal{H}_{o}\). To the authors' knowledge, this treatment has not been studied or proposed before. Hereby we introduce our treatment of the formulation of the _vertex power balance law_ and the _angle difference law_ using the incidence matrix \(\mathcal{H}\) and its variation \(\mathcal{H}_{o}\). The _vertex power balance law_ can be given as:

\[\mathcal{H}_{o}P_{edge}=-P_{v}. \tag{11}\]

Correspondingly, the _angle difference law_ can be written as:

\[\varphi=\mathcal{H}^{T}\theta. \tag{12}\]
The formula (11) describes that the _active power extraction_ at one vertex is the summation of the active powers on all edges that have their heads at that vertex. The formula (12) illustrates that the _angle differences_ across edges can be derived from the product of the transpose of the incidence matrix and the vector of bus angles \(\theta\).

## 4 Modelling of power flow networks and weighted Laplacian properties

### Weighted Laplacian matrix

In this subsection, for a given lossless power flow network, we present the formulation of the corresponding weighting matrix \(B\), the formulation of the corresponding incidence matrix \(\mathcal{H}\), and subsequently, the formulation of the corresponding weighted Laplacian matrix \(\mathcal{L}\).

In order to streamline the formulation of problems in the context of directed graphs, we assume that the reactances of all reactive components in the network are strictly negative. Whenever a line has a reactance with a non-negative value, we remove the directed edge corresponding to this reactance. Thus we may define the susceptance \(b_{i}\) of each reactive component as the negative reciprocal of its reactance \(x_{i}\), that is, \(b_{i}=-\frac{1}{x_{i}}>0\), for every edge \(e_{i}\) of the network graph. The positioning of diode-like-functional reactive components determines the _orientation_ of each edge (i.e. active power is only allowed to flow through 'diodes' forwardly). Define the diagonal matrix \(B\triangleq diag\left\{b_{1},...,b_{n}\right\}\). So far we have defined the diagonal weighting matrix \(B\) and the incidence matrix \(\mathcal{H}\) for the directed graph \(\mathcal{G}_{d}\) in the context of a DC power flow network. Furthermore, we will throughout assume that the network graph under consideration contains at least two vertices. The following example illustrates the formulations of \(\mathcal{H}\), \(\mathcal{H}_{o}\), and \(B\).

_Example_ Consider a lossless 4-bus power flow network; see Fig. 3 (upper). See Fig. 3 (bottom) for the corresponding graph representation of the 4-bus lossless network. Edge weights are labeled next to edges accordingly. Assume all edge susceptances are 1. The incidence matrix \(\mathcal{H}\), its variation \(\mathcal{H}_{o}\), and the weighting diagonal matrix \(B\) are:

\[\mathcal{H}=\left[\begin{array}{ccccc}1&1&0&0&0\\ 0&0&0&-1&-1\\ -1&0&-1&1&0\\ 0&-1&1&0&1\end{array}\right],\]

\[\mathcal{H}_{o}=\left[\begin{array}{ccccc}1&1&0&0&0\\ 0&0&0&0&0\\ 0&0&0&1&0\\ 0&0&1&0&1\end{array}\right],\]

\[B=\operatorname{diag}\left\{1,1,1,1,1\right\}.\]

Figure 3: A lossless 4-bus power flow network (upper) and its graph representation (bottom)

This example illustrates the specification process of head/tail for every edge in the context of a DC power flow network. The specification corresponds to the formulation of the incidence matrix \(\mathcal{H}\), and edge susceptances correspond to the diagonal entries of the diagonal weighting matrix \(B\).

To characterize the relation between vertex angles \(\theta\) and vertex active power extractions \(P_{v}\) of a lossless network, we consider a distribution of angles over its vertices, such that the corresponding angles and active power flows satisfy the _vertex power balance_ and _angle difference laws_. Hence we again present the two laws given in (11) and (12), together with the relationship between the active power _through_ an edge and the angle difference _across_ the edge:

\[\varphi=\mathcal{H}^{T}\theta, \tag{13}\]

\[P_{edge}=-B\varphi, \tag{14}\]

\[-P_{v}=\mathcal{H}_{o}P_{edge}. \tag{15}\]
Replacing \(P_{edge}\) in (15) with its expression (14), and replacing \(\varphi\) with its expression in (13), we have:

\[P_{v}=\mathcal{H}_{o}B\mathcal{H}^{T}\theta=\mathcal{L}\theta \tag{16}\]

where \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\). We now formally introduce our definition of the weighted Laplacian matrix.

**Definition 5** (Weighted Laplacian matrix): _For any directed graph with the incidence matrix \(\mathcal{H}\) and the weighting diagonal matrix \(B\), the square matrix \(\mathcal{H}_{o}B\mathcal{H}^{T}\) is defined as the weighted Laplacian matrix \(\mathcal{L}\) of the graph._

The weighted Laplacian matrix features many important properties, which will be elaborated on in Section 4 and which lay the foundation for the characterization of the input-output behavior of _any_ lossless network in Section 5. Every theorem and lemma introduced in the sequel will be equipped with a proof and an example for readers' understanding.

### Weighted Laplacians of directed graphs

This subsection will present several important properties of the weighted Laplacian matrix \(\mathcal{L}\) of a given directed graph \(\mathcal{G}_{d}\). First, we present a theorem for generalized directed graphs.

**Theorem 4.1**: _Consider a directed graph \(\mathcal{G}_{d}\) with incidence matrix \(\mathcal{H}\) and its variation \(\mathcal{H}_{o}\). Let \(B\) be a positive definite diagonal weighting matrix, of which the dimension is equal to the number of edges. Then the weighted Laplacian matrix \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\) has the following properties_

1. _The weighted Laplacian_ \(\mathcal{L}\) _is asymmetric, having all eigenvalues with non-negative real parts, and dependent on the orientation of the graph._
2. _The weighted Laplacian_ \(\mathcal{L}\) _has non-negative diagonal entries, and non-positive off-diagonal entries._
3. _The weighted Laplacian_ \(\mathcal{L}\) _has zero row sums. The vector_ \(\mathbf{1}\) _is in the right nullspace of_ \(\mathcal{L}\)_._

_Proof:_ For the proof of _Theorem_ 4.1, we aim to prove that our definition of the weighted Laplacian matrix is strictly equivalent to the conventional definition: \(\mathcal{L}_{conv}\triangleq\mathcal{D}-\mathcal{A}\). Consider a directed graph \(\mathcal{G}_{d}\) with its corresponding incidence matrix \(\mathcal{H}\) and weighting diagonal matrix \(B\). Entries \([\mathcal{L}]_{ij}\) of our novel definition of the weighted Laplacian matrix \(\mathcal{L}\) can be expressed as:

\[i\neq j:[\mathcal{L}]_{ij}\triangleq\left\{\begin{array}{cl}-b_{k},&\text{if }(v_{i},v_{j})=e_{ij}\in\varepsilon_{d},\\ &b_{k}\text{ is the weight on edge }e_{ij}\\ 0,&\text{if }(v_{i},v_{j})=e_{ij}\notin\varepsilon_{d}\end{array}\right.\qquad i=j:[\mathcal{L}]_{ii}\triangleq-\sum_{p=1,p\neq i}^{n}[\mathcal{L}]_{ip}. \tag{17}\]

Now recall the conventional Laplacian matrix definition for a directed graph: \(\mathcal{L}_{conv}\triangleq\mathcal{D}-\mathcal{A}\), where \(\mathcal{A}\) is the adjacency matrix, entries of which have been declared in Section 3.3, and \(\mathcal{D}\) is the diagonal degree matrix. Hence, entries \([\mathcal{L}_{conv}]_{ij}\) of the conventional Laplacian matrix \(\mathcal{L}_{conv}\) can be expressed as:
\[i\neq j:[\mathcal{L}_{conv}]_{ij}\triangleq\left\{\begin{array}{cl}-b_{k},&\text{if }(v_{i},v_{j})=e_{ij}\in\varepsilon_{d},\\ &b_{k}\text{ is the weight on edge }e_{ij}\\ 0,&\text{if }(v_{i},v_{j})=e_{ij}\notin\varepsilon_{d}\end{array}\right.\qquad i=j:[\mathcal{L}_{conv}]_{ii}\triangleq-\sum_{p=1,p\neq i}^{n}[\mathcal{L}_{conv}]_{ip}. \tag{18}\]

Observing (17) and (18), it is evident that for a directed weighted graph \(\mathcal{G}_{d}\), our definition of the weighted Laplacian, \(\mathcal{H}_{o}B\mathcal{H}^{T}\), is strictly equivalent to the conventional definition, \(\mathcal{D}-\mathcal{A}\). Since \(\mathcal{L}_{conv}\) is known to be asymmetric, having all eigenvalues with non-negative real parts, we conclude the proof of _Theorem_ 4.1.1. Since \(b_{k}\) is always positive as defined in Section 4.1, off-diagonal entries of \(\mathcal{L}\) are always non-positive. Observing the definition of the diagonal entries \([\mathcal{L}]_{ii}\) in (17), it is evident that \(\mathcal{L}\) has non-negative diagonal entries and zero row sums. Hence the proofs of _Theorem_ 4.1.2 and _Theorem_ 4.1.3 are concluded.

_Example_ For the directed graph in Fig. 3 (bottom), we assign edge weights \(\{b_{1},b_{2},b_{3},b_{4},b_{5}\}\) as \(\{1,2,3,4,5\}\). Then the incidence matrix \(\mathcal{H}\), its variation \(\mathcal{H}_{o}\), the diagonal matrix \(B\), the weighted Laplacian \(\mathcal{L}\), and its eigenvalues are:

\[\mathcal{H}=\left[\begin{array}{ccccc}1&1&0&0&0\\ 0&0&0&-1&-1\\ -1&0&-1&1&0\\ 0&-1&1&0&1\end{array}\right],\quad\mathcal{H}_{o}=\left[\begin{array}{ccccc}1&1&0&0&0\\ 0&0&0&0&0\\ 0&0&0&1&0\\ 0&0&1&0&1\end{array}\right],\]

\[B=\left[\begin{array}{ccccc}1&0&0&0&0\\ 0&2&0&0&0\\ 0&0&3&0&0\\ 0&0&0&4&0\\ 0&0&0&0&5\end{array}\right],\qquad\mathcal{L}=\left[\begin{array}{cccc}3&0&-1&-2\\ 0&0&0&0\\ 0&-4&4&0\\ 0&-5&-3&8\end{array}\right],\]

\[\text{eig}(\mathcal{L})=\{3,8,4,0\}.\]

Correspondingly, the adjacency matrix \(\mathcal{A}\), the degree matrix \(\mathcal{D}\), and the conventional Laplacian matrix \(\mathcal{L}_{conv}\) are:

\[\mathcal{A}=\left[\begin{array}{cccc}0&0&1&2\\ 0&0&0&0\\ 0&4&0&0\\ 0&5&3&0\end{array}\right],\quad\mathcal{D}=\left[\begin{array}{cccc}3&0&0&0\\ 0&0&0&0\\ 0&0&4&0\\ 0&0&0&8\end{array}\right],\]

\[\mathcal{L}_{conv}=\left[\begin{array}{cccc}3&0&-1&-2\\ 0&0&0&0\\ 0&-4&4&0\\ 0&-5&-3&8\end{array}\right].\]

In this example, it holds that \(\mathcal{D}-\mathcal{A}=\mathcal{H}_{o}B\mathcal{H}^{T}\). This example shows that for a given directed graph, our definition of the Laplacian matrix is strictly equivalent to the conventional definition and that the Laplacian possesses all properties stated in _Theorem_ 4.1. \(\square\)

We then introduce our lemmas on the properties of the weighted Laplacians of different classes of directed graphs. Before continuing, we recall the definition of a _reachable subset_ and _Lemma_ 3.2 in [15] on the existence of the Schur complement (with notations changed to match this paper) for future reference:

**Definition 6** (Reachable subset, T. Sugiyama and K. Sato [15]): _Let \(\mathcal{G}_{d}=(\mathcal{V},\varepsilon_{d},\mathcal{H})\) be a directed and weighted graph with diagonal weighting matrix \(B\) and \(\mathcal{V}_{\alpha}\subset\mathcal{V}\) be a proper subset of vertices with \(|\mathcal{V}_{\alpha}|\geq 2\). \(\mathcal{V}_{\alpha^{c}}=\mathcal{V}\setminus\mathcal{V}_{\alpha}\). We refer to \(\mathcal{V}_{\alpha}\subset\mathcal{V}\) as a reachable subset of \(\mathcal{G}_{d}\) if for **any** \(v_{i}\in\mathcal{V}_{\alpha^{c}}\), there exist a vertex \(v_{j}\in\mathcal{V}_{\alpha}\) and a path in \(\mathcal{G}_{d}\) from \(v_{i}\) to \(v_{j}\)._

**Lemma 4.2** (Existence of Schur complement, T. Sugiyama and K. Sato [15]):
_Let \(\mathcal{G}_{d}=(\mathcal{V},\varepsilon_{d},\mathcal{H})\) be a directed and weighted graph with diagonal weighting matrix \(B\) and \(\mathcal{V}_{\alpha}\subset\mathcal{V}\) be a proper subset of vertices with \(|\mathcal{V}_{\alpha}|\geq 2\). \(\mathcal{V}_{\alpha^{c}}=\mathcal{V}\setminus\mathcal{V}_{\alpha}\). Then, the Schur complement of \(\mathcal{L}\) with respect to the sub-matrix consisting of columns and rows corresponding to vertices \(\mathcal{V}_{\alpha}\) exists **if and only if** \(\mathcal{V}_{\alpha}\) is a reachable subset of \(\mathcal{G}_{d}\)._

**Lemma 4.3**: _If the graph \(\mathcal{G}_{d}\) is strongly-connected, then_

1. _All diagonal entries of_ \(\mathcal{L}\) _are positive._
2. _All Schur complements of_ \(\mathcal{L}\) _exist._

_Proof:_ 1. For every vertex \(v_{i}\in\mathcal{V}\) there exists at least one edge of which \(v_{i}\) is the head, featuring a negative \([\mathcal{L}]_{ij},i\neq j\); therefore all diagonal entries \([\mathcal{L}]_{ii}\) are positive. Hence the proof of _Lemma_ 4.3.1 is concluded.

2. Schur complements of \(\mathcal{L}\) with respect to sub-matrices consisting of rows and columns corresponding to \(\mathcal{V}_{\alpha}\) exist for any vertex subset \(\mathcal{V}_{\alpha}\). In the case of a strongly-connected graph, any vertex subset \(\mathcal{V}_{\alpha}\) is always a _reachable subset_ of \(\mathcal{G}_{d}\) for \(\mathcal{V}_{\alpha^{c}}\). Hence, by referring to _Lemma_ 4.2, we conclude the proof of _Lemma_ 4.3.2.

_Example_ Consider the strongly-connected graph in Fig. 4. Assume all edge weights are 1 for simplicity. The weighted Laplacian of the original graph is:

\[\mathcal{L}=\left[\begin{array}{ccccc}2&0&-1&0&-1\\ 0&1&0&-1&0\\ -1&0&2&-1&0\\ 0&-1&0&2&-1\\ 0&0&-1&-1&2\end{array}\right].\]

All diagonal entries of \(\mathcal{L}\) are positive, and all Schur complements of \(\mathcal{L}\) with respect to any sub-matrix corresponding to \(\mathcal{V}_{\alpha}\) exist. In Fig. 4, vertices \(4,5\) are eliminated during the reduction. The corresponding Schur complement is \(\mathcal{L}_{red}\):

\[\mathcal{L}_{\text{red}}=\left[\begin{array}{ccc}2&-0.333&-1.667\\ 0&0.333&-0.333\\ -1&-0.667&1.667\end{array}\right].\]

Figure 4: Illustration of Kron reduction to a strongly-connected graph (Left: original graph. Right: reduced graph with vertices \(4,5\) eliminated). Edge weights are omitted for simplicity.

The reduced weighted Laplacian \(\mathcal{L}_{red}\) corresponds to the reduced graph in Fig. 4 (right). This example illustrates that all diagonal entries of the weighted Laplacian matrix \(\mathcal{L}\) corresponding to a strongly-connected graph \(\mathcal{G}_{d}\) are positive. Furthermore, with respect to the sub-matrix consisting of rows and columns corresponding to any chosen vertex subset, the Schur complement of the weighted Laplacian exists. \(\square\)

Next we introduce the properties of the weighted Laplacian of a quasi-strongly-connected graph.

**Remark 3**: _For readers' understanding, we use the notations retained vertices \(\mathcal{V}_{\alpha}\) and eliminated vertices \(\mathcal{V}_{\alpha^{c}}\), which shall be formally introduced in Section 5.1, here in Lemma 4.4. The Schur complement of \(\mathcal{L}\) stated in Lemma 4.4 is with respect to the sub-matrix consisting of rows and columns corresponding to retained vertices._

**Lemma 4.4**: _If the graph \(\mathcal{G}_{d}\) is quasi-strongly-connected, and \(\mathcal{G}_{d}\) contains sink vertices, then the following statements hold._

1. _The diagonal entries of_ \(\mathcal{L}\) _corresponding to sink vertices are_ \(0\)_, and all other diagonal entries are positive._
2. _Consider the Schur complement of_ \(\mathcal{L}\) _with respect to the sub-matrix consisting of rows and columns corresponding to_ retained vertices _\(\mathcal{V}_{\alpha}\). The Schur complement exists if and only if the subset of_ retained vertices _includes all sink vertices._

_Proof:_ 1. Since _sink_ vertices are vertices that are tails of all edges connected to them, every other vertex not being a _sink_ has a positive out-degree. The diagonal entries of \(\mathcal{L}\) indicate the out-degrees of the corresponding vertices. Hence the proof of _Lemma_ 4.4.1 is concluded.

2. For the proof of _Lemma_ 4.4.2, we first prove that the Schur complement exists if the subset of _retained vertices_ includes **all _sink_ vertices**. Since **all _sink_ vertices** are included in the subset \(\mathcal{V}_{\alpha}\), there always exists a directed path in \(\mathcal{G}_{d}\) starting at any vertex in \(\mathcal{V}_{\alpha^{c}}\) and ending at a _sink_ vertex in \(\mathcal{V}_{\alpha}\). Therefore \(\mathcal{V}_{\alpha}\) is always a _reachable subset_ of \(\mathcal{G}_{d}\) for \(\mathcal{V}_{\alpha^{c}}\). According to _Lemma_ 4.2, the Schur complement exists if the subset of _retained vertices_ includes **all _sink_ vertices**. Hence we conclude the proof of the first part of _Lemma_ 4.4.2.

We then prove that the Schur complement does not exist if the subset of _retained vertices_ does not include **all _sink_ vertices**. In that case, some _sink_ vertex belongs to \(\mathcal{V}_{\alpha^{c}}\); since _sink_ vertices are tails of all edges connected to them, there exists no directed path in \(\mathcal{G}_{d}\) starting at that _sink_ vertex and ending at any vertex of \(\mathcal{V}_{\alpha}\). Therefore, \(\mathcal{V}_{\alpha}\) is **never** a _reachable subset_ of \(\mathcal{G}_{d}\) for \(\mathcal{V}_{\alpha^{c}}\). Referring to _Lemma_ 4.2, the Schur complement does not exist if the subset of _retained vertices_ does not include **all _sink_ vertices**. Hence the proof of _Lemma_ 4.4.2 is concluded. Hereby we can claim that the Schur complement of \(\mathcal{L}\) with respect to the sub-matrix consisting of rows and columns corresponding to _retained vertices_ \(\mathcal{V}_{\alpha}\) exists if and only if the subset of _retained vertices_ includes **all _sink_ vertices**.

_Example_ For the quasi-strongly-connected graphs in Fig. 5 (left), assuming all edge weights are \(1\) for simplicity, the weighted Laplacian \(\mathcal{L}_{acy}\) of the acyclic graph and the weighted Laplacian \(\mathcal{L}_{cyc}\) of the cyclic graph are:

\[\mathcal{L}_{acy}=\left[\begin{array}{cccccc}2&0&-1&0&-1&0\\ 0&0&0&0&0&0\\ 0&0&1&-1&0&0\\ 0&-1&0&1&0&0\\ 0&0&-1&0&2&-1\\ 0&-1&0&0&0&1\end{array}\right],\]

\[\mathcal{L}_{cyc}=\left[\begin{array}{ccccc}2&0&-1&0&-1\\ 0&0&0&0&0\\ 0&0&1&-1&0\\ 0&-1&0&2&-1\\ 0&0&-1&0&1\end{array}\right].\]

All diagonal entries of both \(\mathcal{L}_{acy}\) and \(\mathcal{L}_{cyc}\), except the entries corresponding to the _sink_ vertex \(2\), are positive. Except for Schur complements whose retained subset excludes the _boundary_ (sink) vertex \(2\), all the other Schur complements of \(\mathcal{L}\) exist. In Fig. 5, all _interior vertices_ are eliminated during the reduction (\(3,4,5,6\) for the upper graph, \(3,4,5\) for the bottom graph).
The corresponding Schur complements are \(\mathcal{L}_{acy\ red}\) and \(\mathcal{L}_{cyc\ red}\):

\[\mathcal{L}_{acy\ red}=\left[\begin{array}{cc}2&-2\\ 0&0\end{array}\right],\quad\mathcal{L}_{cyc\ red}=\left[\begin{array}{cc}2&-2\\ 0&0\end{array}\right].\]

Figure 5: Illustration of Kron reduction to an acyclic quasi-strongly-connected graph (upper) and a cyclic quasi-strongly-connected graph (bottom). Edge weights are omitted for simplicity. _Boundary vertices_ are marked in red.

This example illustrates that, except for the diagonal entries corresponding to _sink_ vertices, all other diagonal entries of the weighted Laplacian \(\mathcal{L}\) of a quasi-strongly-connected graph \(\mathcal{G}_{d}\) are positive (both for cyclic and acyclic graphs). It also illustrates that the Schur complement of \(\mathcal{L}\) with respect to any sub-matrix consisting of rows and columns corresponding to a vertex subset that includes **all _sink_ vertices** exists (both for cyclic and acyclic graphs).

### Directed graphs corresponding to weighted Laplacian matrices

In this subsection, we show that there exist directed graphs corresponding to given weighted Laplacians and reduced weighted Laplacians. We also show that certain properties of the corresponding graph are preserved during the reduction process.

**Theorem 4.5**: _Consider an asymmetric matrix \(\mathcal{L}\) having all eigenvalues with non-negative real parts, non-negative diagonal entries, non-positive off-diagonal entries, and zero row sums. Then_

1. \(\mathcal{L}\) _corresponds to the Laplacian matrix of a directed weighted graph._
2. \(\mathcal{L}\) _can be written as_ \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\)_, with_ \(\mathcal{H}\) _the incidence matrix of the corresponding graph,_ \(\mathcal{H}_{o}\) _being the appointed variation of_ \(\mathcal{H}\)_, and_ \(B\) _a positive definite diagonal matrix of the corresponding graph._

_Proof:_ 1. Consider _Theorem_ 4.5 as the reverse statement of _Theorem_ 4.1. Then for every asymmetric matrix \(\mathcal{L}\) with the properties stated in _Theorem_ 4.5, there exists a weighted directed graph corresponding to it.

2. The matrix \(\mathcal{L}\) can be written as \(\mathcal{L}=\mathcal{D}-\mathcal{A}\), where \(\mathcal{D}\) is the graph's degree matrix and \(\mathcal{A}\) is the graph's adjacency matrix. Recall that in the proof of _Theorem_ 4.1, we proved that \(\mathcal{D}-\mathcal{A}=\mathcal{H}_{o}B\mathcal{H}^{T}\). Hence we can declare that \(\mathcal{L}\) can be written as \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\), with \(\mathcal{H}\) the incidence matrix of the corresponding graph, \(\mathcal{H}_{o}\) being the appointed variation of \(\mathcal{H}\), and \(B\) a positive definite diagonal matrix of the corresponding graph.
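As an illustration of this reverse construction (my own sketch, not the authors' code), the following NumPy snippet recovers \(\mathcal{H}\), \(\mathcal{H}_{o}\), and \(B\) from a weighted directed Laplacian by reading one edge off each negative off-diagonal entry, following (17); the input here is the weighted 4-bus Laplacian from the earlier example.

```python
import numpy as np

# Weighted 4-bus Laplacian from the example in Section 4.2.
L = np.array([[3.,  0., -1., -2.],
              [0.,  0.,  0.,  0.],
              [0., -4.,  4.,  0.],
              [0., -5., -3.,  8.]])

# Each negative off-diagonal entry [L]_ij = -b_k encodes an edge (v_i -> v_j).
edges = [(i, j, -L[i, j])
         for i in range(L.shape[0])
         for j in range(L.shape[1])
         if i != j and L[i, j] < 0]

n, m = L.shape[0], len(edges)
H = np.zeros((n, m))
for k, (head, tail, _) in enumerate(edges):
    H[head, k], H[tail, k] = 1.0, -1.0   # +1 at the head, -1 at the tail
H_o = np.where(H > 0, H, 0.0)            # variation of H: -1 entries zeroed
B = np.diag([b for (_, _, b) in edges])

assert np.allclose(H_o @ B @ H.T, L)     # L = H_o B H^T, as Theorem 4.5 claims
```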
_Example_ For the following weighted asymmetric Laplacian matrix \(\mathcal{L}\), there exists a directed graph corresponding to it; see Fig. 6.

Figure 6: Example of a directed graph corresponding to the Laplacian matrix given in _Theorem_ 4.5 (edge weights marked next to edges)

The corresponding incidence matrix \(\mathcal{H}\) and the diagonal matrix \(B\) are:

\[\mathcal{H}=\left[\begin{array}{cccccccccc}1&1&0&0&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0&0&0\\ 0&0&0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&-1&-1\\ -1&0&1&-1&0&1&0&0&0&0\\ 0&-1&-1&0&-1&0&1&0&0&0\\ 0&0&0&0&0&-1&0&-1&1&0\\ 0&0&0&0&0&0&-1&1&0&1\end{array}\right],\]

\[B=\operatorname{diag}\left\{1,2,5,3,4,6,7,10,8,9\right\}.\]

This example illustrates that an asymmetric matrix with the properties stated in _Theorem_ 4.5 always corresponds to a weighted directed graph and can be written as \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\), with edge directions encoded in the incidence matrix \(\mathcal{H}\) and edge weights encoded in the weighting diagonal matrix \(B\). \(\square\)

Next we present theorems corresponding to _Theorem_ 4.5 but in the case of graphs being strongly-connected and quasi-strongly-connected.

**Theorem 4.6**: _Suppose the corresponding graph \(\mathcal{G}_{d}\) of the Laplacian matrix \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\) is strongly-connected. Then every Schur complement (if existing) of \(\mathcal{L}\) can be written as \(\bar{\mathcal{H}}_{o}\bar{B}\bar{\mathcal{H}}^{T}\), with \(\bar{B}\) a positive definite diagonal matrix, and \(\bar{\mathcal{H}}\) the incidence matrix of a strongly-connected directed graph \(\bar{\mathcal{G}}_{d}\)._

_Proof:_ From _Remark_ 1 we know that _block-by-block Kron reduction_ is strictly equivalent to _iterative Kron reduction_ regarding reduction results. For the proof of _Theorem_ 4.6, we first consider our Kron reduction as a sequence of _iterative Kron reduction_ steps. For each iterative step, we focus on the sub-graph consisting of the eliminated vertex \(v_{k}\) and all of its adjacent vertices. For a better illustration, without loss of generality, consider the sub-graph given in Fig. 8 as an example. The adjacent vertices of \(v_{k}\) are \(v_{i}\), \(v_{j}\), and \(v_{m}\).

Figure 7: Strongly-connected graphs corresponding to the weighted Laplacians before and after reduction (edge weights are omitted for simplicity)

Figure 8: Sub-graph consisting of the eliminated vertex \(v_{k}\) and all of its adjacent vertices

The sub-graph before reduction is associated with the adjacency matrix \(\mathcal{A}_{sub}\) and the corresponding weighted Laplacian \(\mathcal{L}_{sub}\):

\[\mathcal{A}_{sub}=\left[\begin{array}{cccc}0&b_{ik}&0&0\\ 0&0&b_{kj}&0\\ 0&0&0&0\\ 0&b_{mk}&0&0\end{array}\right],\quad\mathcal{L}_{sub}=\left[\begin{array}{cccc}b_{ik}&-b_{ik}&0&0\\ 0&b_{kj}&-b_{kj}&0\\ 0&0&0&0\\ 0&-b_{mk}&0&b_{mk}\end{array}\right]\]

where the non-zero entries in the adjacency matrix \(\mathcal{A}_{sub}\) denote the weights of the corresponding directed edges.
In the unreduced sub-graph in Fig. 8, there are two nonzero walk products, \((\mathcal{A}_{sub}^{2})_{ij}\) and \((\mathcal{A}_{sub}^{2})_{mj}\), which are expressed as:

\[\left(\mathcal{A}_{sub}^{2}\right)_{ij}=\left[\mathcal{A}_{sub}\right]_{ik}\left[\mathcal{A}_{sub}\right]_{kj}=b_{ik}b_{kj}\neq 0,\]
\[\left(\mathcal{A}_{sub}^{2}\right)_{mj}=\left[\mathcal{A}_{sub}\right]_{mk}\left[\mathcal{A}_{sub}\right]_{kj}=b_{mk}b_{kj}\neq 0.\]

By decomposing the weighted Laplacian \(\mathcal{L}_{sub}\) as \(\left[\begin{array}{cc}\mathcal{L}_{sub\{i,j,m\},\{i,j,m\}}&\mathcal{L}_{sub\{i,j,m\},\{k\}}\\ \mathcal{L}_{sub\{k\},\{i,j,m\}}&\left[\mathcal{L}_{sub}\right]_{kk}\end{array}\right]\), where

\[\mathcal{L}_{sub\{i,j,m\},\{i,j,m\}}=\left[\begin{array}{ccc}b_{ik}&0&0\\ 0&0&0\\ 0&0&b_{mk}\end{array}\right],\quad\mathcal{L}_{sub\{i,j,m\},\{k\}}=\left[\begin{array}{c}-b_{ik}\\ 0\\ -b_{mk}\end{array}\right],\]

\[\left[\mathcal{L}_{sub}\right]_{kk}=b_{kj},\quad\mathcal{L}_{sub\{k\},\{i,j,m\}}=\left[\begin{array}{ccc}0&-b_{kj}&0\end{array}\right],\]

the iterative Kron reduction eliminating vertex \(v_{k}\) can be formulated as the Schur complement of \(\mathcal{L}_{sub}\) with respect to \(\left[\mathcal{L}_{sub}\right]_{kk}\):

\[\mathcal{L}_{sub-red}=\mathcal{L}_{sub\{i,j,m\},\{i,j,m\}}-\mathcal{L}_{sub\{i,j,m\},\{k\}}(\left[\mathcal{L}_{sub}\right]_{kk})^{-1}\mathcal{L}_{sub\{k\},\{i,j,m\}}=\left[\begin{array}{ccc}b_{ik}&-b_{ik}&0\\ 0&0&0\\ 0&-b_{mk}&b_{mk}\end{array}\right].\]

The reduced adjacency matrix \(\mathcal{A}_{sub-red}\) corresponding to the reduced weighted Laplacian is:

\[\mathcal{A}_{\text{sub-red}}=\left[\begin{array}{ccc}0&b_{ik}&0\\ 0&0&0\\ 0&b_{mk}&0\end{array}\right].\]

It is obvious that in the reduced adjacency matrix there are two nonzero entries, \([\mathcal{A}_{sub-red}]_{ij}\) and \([\mathcal{A}_{sub-red}]_{mj}\), which means that there exists a directed path from \(v_{i}\) to \(v_{j}\) and a directed path from \(v_{m}\) to \(v_{j}\). Non-zero walk products remain non-zero during each step of the iterative Kron reduction; hence non-zero walk products remain non-zero after the iterative Kron reduction. Therefore we can claim that non-zero walk products remain non-zero after the block-by-block Kron reduction. In other words, there exists a directed path from \(v_{i}\) to \(v_{j}\) in the reduced graph if there exists a directed path from \(v_{i}\) to \(v_{j}\) in the original graph. In a strongly-connected graph there exists at least one directed path from every vertex to any other vertex in the graph. Hence there exists at least one directed path from every vertex to any other vertex in the reduced graph. Therefore, the reduced graph is again a strongly-connected graph.

_Example_ Consider the corresponding graph \(\mathcal{G}_{d}\) of the Laplacian in Fig. 7 (left).

Figure 7 shows the strongly-connected graphs corresponding to the weighted Laplacians before and after reduction.
_Example_ Consider the corresponding graph \(\mathcal{G}_{d}\) of the Laplacian in Fig. 7 (left). The weighted Laplacian \(\mathcal{L}\) of the graph is:
\[\mathcal{L}=\left[\begin{array}{cccc}1&-1&0&0\\ 0&1&-1&0\\ 0&0&1&-1\\ -1&0&0&1\end{array}\right].\]
The reduced Laplacian \(\mathcal{L}_{red}\) and its corresponding incidence matrix \(\bar{\mathcal{H}}\), variation of the incidence matrix \(\bar{\mathcal{H}}_{o}\) and diagonal weighting matrix \(\bar{B}\) are:
\[\mathcal{L}_{\text{red}} =\left[\begin{array}{rrr}1&-1&0\\ 0&1&-1\\ -1&0&1\end{array}\right],\quad\bar{\mathcal{H}}=\left[\begin{array}{rrr}1&0&-1\\ -1&1&0\\ 0&-1&1\end{array}\right],\]
\[\bar{\mathcal{H}}_{o} =\left[\begin{array}{rrr}1&0&0\\ 0&1&0\\ 0&0&1\end{array}\right],\qquad\bar{B}=\operatorname{diag}\{1,1,1\}.\]
The reduced graph in Fig. 7 (right) corresponding to the reduced incidence matrix \(\bar{\mathcal{H}}\) is again a strongly-connected graph. This example illustrates that every Schur complement of the weighted Laplacian \(\mathcal{L}\) corresponding to a strongly-connected graph \(\mathcal{G}_{d}\) is again a weighted Laplacian matrix \(\mathcal{L}_{red}\), which again corresponds to a strongly-connected graph. \(\square\)

**Theorem 4.7**: _Suppose the corresponding graph \(\mathcal{G}_{d}\) of the Laplacian matrix \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\) is quasi-strongly-connected. Then every Schur complement (if existing) of \(\mathcal{L}\) can be written as \(\bar{\mathcal{H}}_{o}\bar{B}\bar{\mathcal{H}}^{T}\), with \(\bar{B}\) a positive definite diagonal matrix, and \(\bar{\mathcal{H}}\) the incidence matrix of a quasi-strongly-connected directed graph \(\bar{\mathcal{G}}_{d}\)._

_Proof:_ In the proof of _Theorem_ 4.6 we have shown that, for any directed graph \(\mathcal{G}_{d}\), there exists a directed path from \(v_{i}\) to \(v_{j}\) in the reduced graph if there exists a directed path from \(v_{i}\) to \(v_{j}\) in the original graph. In a quasi-strongly-connected graph there exists at least one _source_ (root) vertex in the unreduced graph. Since _source_ vertices are _boundary vertices_, which are not eliminated during Kron reduction, there exists in the reduced graph a directed path starting at the source vertex and ending at every other retained vertex. Hence the reduced graph is again quasi-strongly-connected, which concludes the proof of _Theorem_ 4.7. \(\blacksquare\)
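Both the cycle example above and the quasi-strongly-connected examples below can be checked with a short routine; a minimal sketch (assuming numpy) computing the Schur complement of a weighted Laplacian with respect to an eliminated index set, applied to the cycle example (assuming vertex 4 is the eliminated vertex, consistent with the printed \(\mathcal{L}_{red}\)):

```python
import numpy as np

def kron_reduce(L, elim):
    """Schur complement of L eliminating the index set `elim`."""
    n = L.shape[0]
    keep = [v for v in range(n) if v not in set(elim)]
    Laa = L[np.ix_(keep, keep)]
    Lab = L[np.ix_(keep, elim)]
    Lba = L[np.ix_(elim, keep)]
    Lbb = L[np.ix_(elim, elim)]
    return Laa - Lab @ np.linalg.inv(Lbb) @ Lba

# Cycle example: eliminating vertex 4 of the 4-cycle yields the 3-cycle.
L = np.array([[1., -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1], [-1, 0, 0, 1]])
print(kron_reduce(L, [3]))   # -> [[1,-1,0],[0,1,-1],[-1,0,1]]
```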
_Example_ Consider the corresponding quasi-strongly-connected graphs in Fig. 9 of two given Laplacians \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\):
\[\mathcal{L}_{1} =\left[\begin{array}{rrrrrr}2&0&-1&0&-1&0\\ 0&0&0&0&0&0\\ 0&0&1&-1&0&0\\ 0&-1&0&1&0&0\\ 0&0&-1&0&2&-1\\ 0&-1&0&0&0&1\end{array}\right],\]
\[\mathcal{L}_{2} =\left[\begin{array}{rrrrr}2&0&-1&0&-1\\ 0&0&0&0&0\\ 0&0&1&-1&0\\ 0&-1&0&2&-1\\ 0&0&-1&0&1\end{array}\right].\]
The reduced Laplacians \(\mathcal{L}_{1\text{ red}}\) and \(\mathcal{L}_{2\text{ red}}\), the corresponding incidence matrices \(\bar{\mathcal{H}}_{1}\) and \(\bar{\mathcal{H}}_{2}\), their variations \(\bar{\mathcal{H}}_{1o}\) and \(\bar{\mathcal{H}}_{2o}\) and the corresponding diagonal weighting matrices \(\bar{B}_{1}\) and \(\bar{B}_{2}\) are:
\[\mathcal{L}_{1\text{ red}} =\left[\begin{array}{rrrr}2&-0.5&-1.5&0\\ 0&0&0&0\\ 0&0&1&-1\\ 0&-1&0&1\end{array}\right],\quad\mathcal{L}_{2\text{ red}}=\left[\begin{array}{rrrr}2&0&-2&0\\ 0&0&0&0\\ 0&0&1&-1\\ 0&-1&-1&2\end{array}\right],\]
\[\bar{\mathcal{H}}_{1} =\left[\begin{array}{rrrr}1&0&0&1\\ 0&0&-1&-1\\ -1&1&0&0\\ 0&-1&1&0\end{array}\right],\quad\bar{\mathcal{H}}_{2}=\left[\begin{array}{rrrr}1&0&0&0\\ 0&0&-1&0\\ -1&1&0&-1\\ 0&-1&1&1\end{array}\right],\]
\[\bar{\mathcal{H}}_{1o} =\left[\begin{array}{rrrr}1&0&0&1\\ 0&0&0&0\\ 0&1&0&0\\ 0&0&1&0\end{array}\right],\quad\bar{\mathcal{H}}_{2o}=\left[\begin{array}{rrrr}1&0&0&0\\ 0&0&0&0\\ 0&1&0&0\\ 0&0&1&1\end{array}\right],\]
\[\bar{B}_{1} =\operatorname{diag}\{1.5,1,1,0.5\},\qquad\bar{B}_{2}=\operatorname{diag}\{2,1,1,1\}.\]
The reduced graphs (Fig. 9, right) corresponding to the reduced Laplacians are still quasi-strongly-connected. This example illustrates that every existing Schur complement of the weighted Laplacian \(\mathcal{L}\) corresponding to a quasi-strongly-connected graph \(\mathcal{G}_{d}\) is again a weighted Laplacian matrix \(\mathcal{L}_{red}\), which again corresponds to a quasi-strongly-connected graph. \(\square\)

## 5 Kron reduction to power flow networks
In this section, we present the graph-theoretic analysis of the Kron reduction process on DC power flow networks.

### Vertex classification
In this subsection we identify the set of vertices that are actually eliminated and the set of vertices that are actually retained during the Kron reduction process. First, recall that for a given graph \(\mathcal{G}_{d}\), we identified a subset \(\mathcal{V}_{b}\subset\mathcal{V}\) as _boundary vertices_ and a subset \(\mathcal{V}_{i}=\mathcal{V}\setminus\mathcal{V}_{b}\) as _interior vertices_ in Section 4.1. Boundary vertices are vertices that **cannot** be eliminated. Interior vertices are vertices that **can** be eliminated. Although all _interior vertices_ **can** be eliminated, there are times during Kron reduction when some of the interior vertices are to be retained. We therefore further identify a subset termed _eliminated vertices_ \(\mathcal{V}_{\alpha^{c}}\subseteq\mathcal{V}_{i}\), the vertices that are **actually eliminated** during the Kron reduction process, and the subset termed _retained vertices_ \(\mathcal{V}_{\alpha}=\mathcal{V}\setminus\mathcal{V}_{\alpha^{c}}\), the vertices that are **actually retained** during reduction. See Fig. 10 for a diagrammatic illustration of the vertex classification.
Recall the power-angle equation (16) in Section 4.1. Decompose \(P_{v}\) as \(\left[\begin{array}{c}P_{v\alpha}\\ P_{v\alpha^{c}}\end{array}\right]\), with \(P_{v\alpha}\) corresponding to the active power extractions at retained vertices and \(P_{v\alpha^{c}}\) to the active power extractions at eliminated vertices. Decompose \(\theta\) as \(\left[\begin{array}{c}\theta_{\alpha}\\ \theta_{\alpha^{c}}\end{array}\right]\), with \(\theta_{\alpha}\) corresponding to the angles at retained vertices and \(\theta_{\alpha^{c}}\) to the angles at eliminated vertices. Further decompose \(\mathcal{L}\) as \(\left[\begin{array}{cc}\mathcal{L}_{\alpha\alpha}&\mathcal{L}_{\alpha\alpha^{c}}\\ \mathcal{L}_{\alpha^{c}\alpha}&\mathcal{L}_{\alpha^{c}\alpha^{c}}\end{array}\right]\), with sub-blocks composed of the rows and columns corresponding to retained and eliminated vertices, respectively. Then (16) can be partitioned as
\[\left[\begin{array}{c}P_{v\alpha}\\ P_{v\alpha^{c}}\end{array}\right]=\left[\begin{array}{cc}\mathcal{L}_{\alpha\alpha}&\mathcal{L}_{\alpha\alpha^{c}}\\ \mathcal{L}_{\alpha^{c}\alpha}&\mathcal{L}_{\alpha^{c}\alpha^{c}}\end{array}\right]\left[\begin{array}{c}\theta_{\alpha}\\ \theta_{\alpha^{c}}\end{array}\right]. \tag{19}\]
Gaussian elimination of the _eliminated angles_ \(\theta_{\alpha^{c}}\) in (19) gives a reduced network with _retained vertices_ obeying the reduced power flow equation
\[P_{v\alpha}+\mathcal{L}_{ac}P_{v\alpha^{c}}=\mathcal{L}_{red}\theta_{\alpha}, \tag{20}\]
where the reduced Laplacian matrix is given by the Schur complement of \(\mathcal{L}\) with respect to the _retained vertices_ \(\mathcal{V}_{\alpha}\), that is, \(\mathcal{L}_{red}=\mathcal{L}_{\alpha\alpha}-\mathcal{L}_{\alpha\alpha^{c}}\mathcal{L}_{\alpha^{c}\alpha^{c}}^{-1}\mathcal{L}_{\alpha^{c}\alpha}\), and the accompanying matrix \(\mathcal{L}_{ac}=-\mathcal{L}_{\alpha\alpha^{c}}\mathcal{L}_{\alpha^{c}\alpha^{c}}^{-1}\) maps the _eliminated active power extractions_ \(P_{v\alpha^{c}}\) to the _retained active power extractions_ \(P_{vred}=P_{v\alpha}+\mathcal{L}_{ac}P_{v\alpha^{c}}\) in the reduced network.
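The consistency of (19)-(21) can be checked numerically; a minimal sketch (assuming numpy; the 4-cycle Laplacian of Fig. 7 serves as a toy network and the angle profile is illustrative):

```python
import numpy as np

L = np.array([[1., -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1], [-1, 0, 0, 1]])
theta = np.array([0.4, 0.1, -0.2, -0.3])   # illustrative small angle profile
P_v = L @ theta                             # power-angle equation (16)

keep, elim = [0, 1, 2], [3]                 # retained / eliminated vertices
Laa, Lab = L[np.ix_(keep, keep)], L[np.ix_(keep, elim)]
Lba, Lbb = L[np.ix_(elim, keep)], L[np.ix_(elim, elim)]

L_red = Laa - Lab @ np.linalg.inv(Lbb) @ Lba   # eq. (21)
L_ac = -Lab @ np.linalg.inv(Lbb)               # accompanying matrix

# Reduced power flow equation (20): P_va + L_ac P_vac = L_red theta_a
lhs = P_v[keep] + L_ac @ P_v[elim]
assert np.allclose(lhs, L_red @ theta[keep])
```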
### Kron reduction to power flow networks
Following the identification of _retained vertices_ and _eliminated vertices_ in the last section, we formally give the definition of Kron reduction to power flow networks.

**Definition 7** (Kron reduction to power flow networks): _Consider a power flow network corresponding to the graph representation \(\mathcal{G}_{d}=(\mathcal{V},\varepsilon_{d},\mathcal{H},B)\). Let \(\mathcal{L}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\) denote the weighted Laplacian matrix \(\mathcal{H}_{o}B\mathcal{H}^{T}\) of the graph. Let \(\mathcal{V}_{\alpha}\subset\mathcal{V}\), the_ retained vertices_, be a **proper** subset of vertices with \(|\mathcal{V}_{\alpha}|\geq 2\). ('**Proper**' means that boundary vertices are always included in \(\mathcal{V}_{\alpha}\), following the classification in Section 5.1.) Then the \(|\mathcal{V}_{\alpha}|\times|\mathcal{V}_{\alpha}|\)-dimensional Kron reduced matrix \(\mathcal{L}_{red}\) is defined by the Schur complement of \(\mathcal{L}\) with respect to the sub-matrix consisting of the rows and columns corresponding to retained vertices:_
\[\mathcal{L}_{red}=\mathcal{L}_{\alpha\alpha}-\mathcal{L}_{\alpha\alpha^{c}}\mathcal{L}_{\alpha^{c}\alpha^{c}}^{-1}\mathcal{L}_{\alpha^{c}\alpha}, \tag{21}\]
_which gives the reduced power flow network with the reduced graph representation \(\bar{\mathcal{G}}_{d}=(\mathcal{V}_{\alpha},\bar{\varepsilon}_{d},\bar{\mathcal{H}},\bar{B})\)._

**Remark 4**: _In most cases (most IEEE test feeders) power flow networks are neither strongly-connected nor quasi-strongly-connected. However, there are several cases where power flow networks are relatively simple and are quasi-strongly-connected. See the example of an IEEE-3 test feeder in Fig. 11. Vertex \(1\) is the root vertex of this quasi-strongly-connected graph._

Figure 11: IEEE-3 test feeder and its directed graph representation (boundary vertices marked in red)

Next, we discuss sufficient conditions for the existence of Kron reduction to power flow networks.

**Lemma 5.1** (Existence of Kron reduction to power flow networks with quasi-strongly-connected graph representations).: _Consider a power flow network corresponding to the quasi-strongly-connected graph representation \(\mathcal{G}_{d}=(\mathcal{V},\varepsilon_{d},\mathcal{H},B)\) with the weighted Laplacian \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\). Let \(\mathcal{V}_{\alpha}\subset\mathcal{V}\), the retained vertices, be a proper subset of vertices with \(|\mathcal{V}_{\alpha}|\geq 2\). Then Kron reduction always exists for this network._

_Proof:_ For a given power flow network corresponding to the quasi-strongly-connected graph \(\mathcal{G}_{d}\) with the weighted Laplacian \(\mathcal{L}\), since \(\mathcal{V}_{sink}\subseteq\mathcal{V}_{b}\subset\mathcal{V}_{\alpha}\), Schur complements of \(\mathcal{L}\) with respect to sub-matrices consisting of the rows and columns corresponding to \(\mathcal{V}_{\alpha}\) always exist by referring to _Lemma_ 4.4.3. Therefore, Kron reduction always exists for this network. The Laplacian matrix of the reduced network is given by (21). \(\blacksquare\)

_Example_ For the quasi-strongly-connected corresponding graph representation of an IEEE-5 test feeder in Fig. 12, vertex \(1\) and vertex \(5\) are boundary vertices, which are included in \(\mathcal{V}_{\alpha}\). Kron reduction of this network eliminates vertices \(3\) and \(4\); see Fig. 12. Assume all edge susceptances are \(1\). The graph of the reduced network is quasi-strongly-connected, which conforms to _Theorem_ 4.7. The weighted Laplacian \(\mathcal{L}\) of the original network and the weighted Laplacian \(\mathcal{L}_{red}\) of the reduced network are:
\[\mathcal{L}=\left[\begin{array}{ccccc}2&-1&-1&0&0\\ 0&3&-1&-1&-1\\ 0&0&1&-1&0\\ 0&0&0&1&-1\\ 0&0&0&0&0\end{array}\right],\quad\mathcal{L}_{\text{red}}=\left[\begin{array}{ccc}2&-1&-1\\ 0&3&-3\\ 0&0&0\end{array}\right].\]
This example illustrates that Kron reduction exists for a lossless DC power flow network that corresponds to a quasi-strongly-connected graph. \(\square\)
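The printed \(\mathcal{L}_{\text{red}}\) is reproduced by the Schur complement with respect to vertices 3 and 4; a minimal sketch (assuming numpy):

```python
import numpy as np

L = np.array([
    [2., -1, -1,  0,  0],
    [0.,  3, -1, -1, -1],
    [0.,  0,  1, -1,  0],
    [0.,  0,  0,  1, -1],
    [0.,  0,  0,  0,  0],
])
keep, elim = [0, 1, 4], [2, 3]   # retain vertices 1, 2, 5; eliminate 3, 4
L_red = (L[np.ix_(keep, keep)]
         - L[np.ix_(keep, elim)]
         @ np.linalg.inv(L[np.ix_(elim, elim)])
         @ L[np.ix_(elim, keep)])
print(L_red)   # -> [[2,-1,-1],[0,3,-3],[0,0,0]]
```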
**Lemma 5.2** (Existence of Kron reduction to generalized power flow networks).: _Consider a generalized power flow network with the graph representation \(\mathcal{G}_{d}=(\mathcal{V},\varepsilon_{d},\mathcal{H},B)\) that consists of sink vertices and source vertices. Let \(\mathcal{V}_{\alpha}\subset\mathcal{V}\), the retained vertices, be a proper subset of vertices with \(|\mathcal{V}_{\alpha}|\geq 2\). Then Kron reduction always exists for this network._

_Proof:_ For a given power flow network of which the graph is not quasi-strongly-connected but still consists of _sink_ vertices and _source_ vertices, since \(\mathcal{V}_{sink}\subseteq\mathcal{V}_{b}\subset\mathcal{V}_{\alpha}\), there exists a directed path in \(\mathcal{G}_{d}\) starting at any vertex in \(\mathcal{V}_{\alpha^{c}}\) and ending at a _sink_ vertex in \(\mathcal{V}_{\alpha}\). Therefore \(\mathcal{V}_{\alpha}\) is always a _reachable subset_ of \(\mathcal{G}_{d}\) for \(\mathcal{V}_{\alpha^{c}}\). By referring to _Lemma_ 4.2, we conclude that Schur complements of \(\mathcal{L}\) with respect to sub-matrices consisting of the rows and columns corresponding to \(\mathcal{V}_{\alpha}\) always exist. This concludes the proof of _Lemma_ 5.2. \(\blacksquare\)

Figure 12: IEEE-5 test feeder, its directed graph representation (boundary vertices marked in red), the reduced graph representation, and the restored reduced network

_Example_ For the corresponding graph representation of an IEEE-9 test feeder in Fig. 13, vertices \(1,2,3,5,6,8\) are boundary vertices, which are included in \(\mathcal{V}_{\alpha}\). The graph representation of this network is not quasi-strongly-connected. Kron reduction of this network eliminates vertices \(4\) and \(7\); see Fig. 13. Assume all edge susceptances are 1. The weighted Laplacian \(\mathcal{L}\) of the original network and the weighted Laplacian \(\mathcal{L}_{red}\) of the reduced network are:
\[\mathcal{L}=\left[\begin{array}{ccccccccc}1&0&0&-1&0&0&0&0&0\\ 0&1&0&0&0&0&-1&0&0\\ 0&0&1&0&0&0&0&0&-1\\ 0&0&0&2&-1&-1&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&-1&0&2&-1&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&-1&0&-1&2\end{array}\right],\]
\[\mathcal{L}_{\text{red}}=\left[\begin{array}{ccccccc}1&0&0&-0.5&-0.5&0&0\\ 0&1&0&-0.5&0&-0.5&0\\ 0&0&1&0&0&0&-1\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&-1&-1&2\end{array}\right].\]
This example illustrates that Kron reduction exists for a lossless DC power flow network that corresponds to a weighted directed graph consisting of _sink_ and _source_ vertices. \(\square\)

### Input-output behaviors of lossless DC power flow networks
In this subsection, we present how the weighted Laplacian matrix \(\mathcal{L}\) and its Kron-reduced form \(\mathcal{L}_{red}\) function as I/O mappings for a lossless power flow system.

Figure 13: IEEE-9 test feeder, its directed graph representation (boundary vertices marked in red), the reduced graph representation, and the restored reduced network

**Theorem 5.3**: _Consider a lossless DC power flow network with the graph representation \(\mathcal{G}_{d}=(\mathcal{V},\varepsilon_{d},\mathcal{H},B)\) of which the boundary vertices consist of both sink and source vertices. The corresponding weighted Laplacian is \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\). Then_
1. _The Laplacian_ \(\mathcal{L}\) _maps the vertex angle vector_ \(\theta\) _(input) to the vertex power extraction vector_ \(P_{v}\) _(output)._
2. _For any retained vertex angle vector_ \(\theta_{\alpha}\)_, there exists a unique_ \(\mathcal{L}_{red}\) _such that (20) is satisfied._
3. _To any weighted directed Laplacian matrix_ \(\mathcal{L}_{red}\) _there corresponds a lossless DC power flow network of which the input-output behavior is given by the linear map:_
\[P_{vred}=\mathcal{L}_{red}\theta_{\alpha}. \tag{22}\]
Proof.:
1. For the proof of _Theorem_ 5.3.1, we aim to show that \(\mathcal{L}\) indeed functions as a mapping from \(\theta\) to \(P_{v}\). Recalling the expression (16), the statement in _Theorem_ 5.3.1 is evident.
2. For the proof of _Theorem_ 5.3.2, we first prove that for any retained vertex subset, the Schur complement of the reduction always exists. Consider the directed graph representation \(\mathcal{G}_{d}\) corresponding to the given lossless DC power flow network. Since both _sink_ and _source_ vertices are boundary vertices, they are not eliminated. Therefore, for any vertex \(v_{i}\in\mathcal{V}_{\alpha^{c}}\), there exists a directed path in \(\mathcal{G}_{d}\) starting at that vertex and ending at a _sink_ vertex. Hence \(\mathcal{V}_{\alpha}\) is a reachable subset for \(\mathcal{V}_{\alpha^{c}}\). By recalling _Lemma_ 4.2, we conclude that for any retained vertex subset the Schur complement of the reduction always exists, which also means that \(\mathcal{L}_{\alpha^{c}\alpha^{c}}\) is non-singular and \(\mathcal{L}_{red}=\mathcal{L}_{\alpha\alpha}-\mathcal{L}_{\alpha\alpha^{c}}\mathcal{L}_{\alpha^{c}\alpha^{c}}^{-1}\mathcal{L}_{\alpha^{c}\alpha}\) exists for (20). This concludes the proof of _Theorem_ 5.3.2.
3. Following the proof of _Theorem_ 5.3.2, the left-hand side of (20) is precisely the expression for \(P_{vred}\). Hence we have \(P_{vred}=\mathcal{L}_{red}\theta_{\alpha}\). According to our proof of _Theorem_ 4.5, the reduced Laplacian \(\mathcal{L}_{red}\) corresponds to a weighted directed graph, from which the reduced lossless DC power flow network can be restored. \(\blacksquare\)

**Remark 5**: _An example illustrating Theorem 5.3 will be given in Section 6. In Section 4 we presented several important properties of the weighted Laplacians of different types of directed graphs. In Section 5 we presented the methodology of using directed graphs to model lossless power flow networks and the physical interpretation of the weighted Laplacian. This work can be viewed as an extension of the work in [6] by A. van der Schaft._

## 6 Numerical results
In this section, the IEEE-14 test feeder will be used as a detailed example for numerical testing; see the weighted graph representation of the IEEE-14 power flow network in Fig. 14. A two-stage reduction process will be adopted. During the first stage, boundary vertices and vertices that are connected to boundary vertices via an edge are retained. During Stage II, where the reduction is performed on the reduced result of Stage I, all interior vertices are to be eliminated. Testing on a modified IEEE RTS-96 test system will also be presented in order to show the scalability of the proposed reduction method.

### IEEE-14 test feeder
#### 6.1.1 Reduction process
The reduction process is detailed as follows:
1. Vertex classification: Each bus of the IEEE-14 network corresponds to a vertex of the graph. Buses that are connected to generators (outside the IEEE-14 network) correspond to _source vertices_. Correspondingly, buses that are connected to loadings (outside the IEEE-14 network) correspond to _sink vertices_.
Buses that are connected to both generators and loadings (outside the IEEE-14 network) correspond to _source vertices_, where for simulation simplicity we assume the dominant power flow pattern is active power flowing out of these buses. All the other buses correspond to _interior vertices_. Sink and source vertices are marked with red squares, and interior vertices are marked with black circles. So far we have applied the vertex classification method proposed in Section 5.1 to the IEEE-14 test feeder.
2. Edge direction specification: Each transmission line between two buses corresponds to a directed edge. Edge directions are indicated by arrows, and edge weights are marked next to edges. Edge directions are determined by the positioning of the 'diodes' on the transmission lines, which is dictated by the attributes of the buses at the two ends of each transmission line, i.e.,
 1. In the case of connecting one sink vertex and one interior vertex, the diode faces toward the sink vertex.
 2. In the case of connecting one source vertex and one interior vertex, the diode faces toward the interior vertex.
 3. In the case of connecting two interior vertices, the diode faces toward the interior vertex that is connected to the sink vertex.
So far we have defined the incidence matrix \(\mathcal{H}\) of the corresponding weighted directed graph using the specification proposed in Section 4.1; the numerical results are omitted due to the page limit.
3. Derivation of the weighted Laplacian: For simplicity of the weighting diagonal matrix \(B\), we assume all edge weights are \(1\). We can then derive the weighted Laplacian matrix \(\mathcal{L}=\mathcal{H}_{o}B\mathcal{H}^{T}\) of the corresponding graph based on _Definition_ 5.
4. Input profile: Since active power flows from buses with high voltage angles to buses with low voltage angles, we choose the angle profile conforming to the incidence matrix. We deliberately set each vertex angle to be a small shift from the reference angle \(\alpha\). Phase shifts are in \([-0.6,0.6]\). The motivation for this treatment is that DC power flow is essentially a linearization of AC power flow, which takes small angle differences as one of its prerequisites. By limiting phase shifts to \([-0.6,0.6]\), we keep angle differences smaller than 1.2. See the angle profile \(\theta\) for the unreduced graph in the \(3^{rd}\) column of Table 1.
5. Output profile: We derive the vertex active power \(P_{v}\) based on the expression given in (16). See the vertex power \(P_{v}\) in the \(2^{nd}\) column of Table 1.
6. Reduction Stage I: We calculate the reduced weighted Laplacian matrix of the reduced graph preserving boundary vertices and vertices that are connected to boundary vertices via an edge, using the expression given in (21). We then calculate the reduced power profile based on the expression given in (22). See the reduced power \(P_{v}^{\prime}\) in the \(4^{th}\) column of Table 1. The numerical results conform to _Theorem_ 5.3. See the reduced graph corresponding to the reduced weighted Laplacian matrix of Stage I in Fig. 15. The successful delivery of the reduced directed graph conforms to _Lemma_ 5.2.
7. Reduction Stage II: We calculate the reduced weighted Laplacian matrix of the reduced graph eliminating all interior vertices based on the expression given in (21). We then calculate the reduced power profile based on the expression given in (22).
See the reduced power \(P_{v}^{\prime\prime}\) in the \(6^{th}\) column of Table 1, and the reduced graph corresponding to the reduced weighted Laplacian matrix of Stage II in Fig. 16. The reduction results conform to _Lemma_ 5.2 and _Theorem_ 5.3.

Figure 14: Graph representation of IEEE-14 test feeder, boundary vertices marked in red squares

#### 6.1.2 Results
1. The lossless power flow network model of the IEEE-14 test feeder as a directed graph is successfully delivered.
2. The proposed weighted Laplacian matrix has been successfully derived from the directed graph, conforming to _Definition_ 5.
3. The vertex angle input profile is suitably chosen to meet the linearization requirement.
4. The active power output profile is derived, showing that the proposed weighted Laplacian matrix functions as a mapping of system input to output.
5. Kron reduction is performed on the built lossless power flow network, and the reduced results conform to _Theorem_ 5.3 (a numerical sketch of the staged reduction is given at the end of this subsection).
6. Notice that during Stage I, \(P_{v}^{\prime}\) of boundary vertices also remains unchanged after the transformation \(P_{v}^{\prime}=P_{v\alpha}+\mathcal{L}_{ac}P_{v\alpha^{c}}\). It will be interesting for future work to look into this matter.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Vertex \(i\) & \(P_{vi}\left(p.u.\right)\) & \(\theta_{i}\left({}^{\circ}\right)\) & \(P_{vi}^{\prime}\left(p.u.\right)\) & \(\theta_{i}^{\prime}\left({}^{\circ}\right)\) & \(P_{vi}^{\prime\prime}\left(p.u.\right)\) & \(\theta_{i}^{\prime\prime}\left({}^{\circ}\right)\) \\ \hline \(1\) & \(0.58\) & \(\alpha+0.5271\) & \(0.58\) & \(\alpha+0.5271\) & \(1.98\) & \(\alpha+0.5271\) \\ \hline \(2\) & \(1.1\) & \(\alpha+0.3371\) & \(1.6\) & \(\alpha+0.3371\) & \(\times\) & \(\times\) \\ \hline \(3\) & \(0.1\) & \(\alpha-0.0629\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \(4\) & \(0.1\) & \(\alpha-0.1629\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \(5\) & \(0.4\) & \(\alpha+0.1371\) & \(0.6\) & \(\alpha+0.1371\) & \(\times\) & \(\times\) \\ \hline \(6\) & \(0.8\) & \(\alpha+0.0371\) & \(1.1\) & \(\alpha+0.0371\) & \(\times\) & \(\times\) \\ \hline \(7\) & \(0.7\) & \(\alpha+0.1371\) & \(1\) & \(\alpha+0.1371\) & \(\times\) & \(\times\) \\ \hline \(8\) & \(0.39\) & \(\alpha+0.5271\) & \(0.39\) & \(\alpha+0.5271\) & \(0.99\) & \(\alpha+0.5271\) \\ \hline \(9\) & \(0.1\) & \(\alpha-0.2629\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \(10\) & \(0.1\) & \(\alpha-0.1629\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \(11\) & \(0.1\) & \(\alpha-0.0629\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \(12\) & \(0.3\) & \(\alpha-0.1629\) & \(0.3\) & \(\alpha-0.1629\) & \(\times\) & \(\times\) \\ \hline \(13\) & \(0\) & \(\alpha-0.4629\) & \(0\) & \(\alpha-0.4629\) & \(0\) & \(\alpha-0.4629\) \\ \hline \(14\) & \(0.1\) & \(\alpha-0.3629\) & \(0.1\) & \(\alpha-0.3629\) & \(\times\) & \(\times\) \\ \hline \end{tabular}
\end{table}
Table 1: Vertex parameters of IEEE-14 test feeder before reduction (columns 2, 3), after Stage I reduction (vertices \(3,4,9,10,11\) eliminated) (columns 4, 5), and after Stage II reduction (all interior vertices eliminated) (columns 6, 7)

Figure 15: Reduced IEEE-14 test feeder with vertices 3,4,9,10,11 eliminated
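The two-stage procedure relies on the fact that staged Schur complements agree with a single direct reduction (_Remark_ 1). Since the IEEE-14 matrices themselves are omitted, a minimal sketch (assuming numpy) illustrates this on the IEEE-9 Laplacian printed in Section 5:

```python
import numpy as np

def kron_reduce(L, keep):
    """Schur complement of L retaining the index set `keep`."""
    elim = [v for v in range(L.shape[0]) if v not in keep]
    return (L[np.ix_(keep, keep)]
            - L[np.ix_(keep, elim)]
            @ np.linalg.inv(L[np.ix_(elim, elim)])
            @ L[np.ix_(elim, keep)])

# IEEE-9 Laplacian from Section 5 (all susceptances 1); vertices 4 and 7
# (indices 3 and 6) are the interior vertices.
L9 = np.zeros((9, 9))
edges = [(0, 3), (1, 6), (2, 8), (3, 4), (3, 5), (6, 4), (6, 7), (8, 5), (8, 7)]
for tail, head in edges:
    L9[tail, tail] += 1.0
    L9[tail, head] -= 1.0

# Stage I: eliminate vertex 4 only; Stage II: then eliminate vertex 7.
stage1 = kron_reduce(L9, [0, 1, 2, 4, 5, 6, 7, 8])
stage2 = kron_reduce(stage1, [0, 1, 2, 3, 4, 6, 7])  # old vertex 7 is now index 5
direct = kron_reduce(L9, [0, 1, 2, 4, 5, 7, 8])
assert np.allclose(stage2, direct)
```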
### Modified IEEE RTS-96 test system
#### 6.2.1 Reduction process
In this example, we take Area \(4\) of the modified IEEE RTS-96 test system from [18] in Fig. 17 as the reduction object. Buses connected to generators, loadings, and buses in Area 3 are boundary vertices. The remaining vertices are interior vertices. The corresponding weighted Laplacian matrices of the original and the reduced graph are omitted due to the page limit. Bus angles and active power extractions are omitted as well. All interior vertices are eliminated during the reduction process; see Fig. 18.

#### 6.2.2 Results
1. The directed graph corresponding to Area \(4\) of the IEEE RTS-96 test system is successfully derived.
2. The proposed weighted Laplacian matrix is derived based on the directed graph and is strictly equivalent to the conventionally defined Laplacian matrix, conforming to _Definition_ 5 and _Theorem_ 4.1.
3. The Kron reduced network is successfully derived by computing the Schur complement of the weighted Laplacian matrix.
4. The successful delivery of the Kron reduced network validates the scalability of the proposed reduction method.

## 7 Conclusions and recommendations
We have studied Kron reduction on directed graphs and on directed power flow networks. Our work was motivated by the gap in existing research between Kron reduction and its application to directed graphs, and between Kron reduction and its application to electrical networks. We have proposed a novel formulation of the weighted Laplacian matrix for a directed graph that is strictly equivalent to the conventional definition of the weighted Laplacian. We presented a comprehensive graph-theoretic analysis of Kron reduction of directed graphs. This analysis led to new physical insights regarding the application to power flow networks. Our analysis raises further questions, such as effective resistance and sensitivity analysis of Kron reduction for directed DC power flow networks, and Kron reduction of other characteristic electrical networks. Undirected and directed graphs with complex-valued weights for modelling power networks would be another interesting topic for future research.

## Acknowledgment
The first author would like to thank EIT InnoEnergy and SENSE for providing access to Europe's largest innovation community, including top partners in business, research, and higher education. The paper presents research outcomes from the first author's MSc graduation project. The first author would like to thank the supervisor team (Dr. Zhiyong Sun and Prof. Siep Weiland) for stimulating conversations on this topic and for their guidance and effort in this project.

Figure 16: Reduced IEEE-14 test feeder with all interior vertices eliminated

Figure 17: Wiring diagram of the modified IEEE RTS-96 test system [18]

Figure 18: Kron reduction on Area 4 of the modified IEEE RTS-96 test system
2310.16661
The role of atomic interactions in cavity-induced continuous time crystals
We consider continuous time-crystalline phases in dissipative many-body systems of atoms in cavities, focusing on the role of short-range interatomic interactions. First, we show that the latter can alter the nature of the time crystal by changing the type of the underlying critical bifurcation. Second, we characterize the heating mechanism and dynamics resulting from the short-range interactions and demonstrate that they make the time crystal inherently metastable. We argue that this is generic for the broader class of dissipative time crystals in atom-cavity systems whenever the cavity loss rate is comparable to the atomic recoil energy. We observe that such a scenario for heating resembles the one proposed for preheating of the early universe, where the oscillating coherent inflation field decays into a cascade of exponentially growing fluctuations. By extending approaches for dissipative dynamical systems to our many-body problem, we obtain analytical predictions for the parameters describing the phase transition and the heating rate inside the time-crystalline phase. We underpin and extend the analytical predictions of the heating rates with numerical simulations.
Christian H. Johansen, Johannes Lang, Francesco Piazza
2023-10-25T14:21:54Z
http://arxiv.org/abs/2310.16661v1
# The role of atomic interactions in cavity-induced continuous time crystals ###### Abstract We consider continuous time-crystalline phases in dissipative many-body systems of atoms in cavities, focusing on the role of short-range interatomic interactions. First, we show that the latter can alter the nature of the time crystal by changing the type of the underlying critical bifurcation. Second, we characterize the heating mechanism and dynamics resulting from the short-range interactions and demonstrate that they make the time crystal inherently metastable. We argue that this is generic for the broader class of dissipative time crystals in atom-cavity systems whenever the cavity loss rate is comparable to the atomic recoil energy. We observe that such a scenario for heating resembles the one proposed for preheating of the early universe, where the oscillating coherent inflation field decays into a cascade of exponentially growing fluctuations. By extending approaches for dissipative dynamical systems to our many-body problem, we obtain analytical predictions for the parameters describing the phase transition and the heating rate inside the time-crystalline phase. We underpin and extend the analytical predictions of the heating rates with numerical simulations. _Introduction.--_ Following the first conceptualization of time-crystalline phases of matter [1; 2], it was quickly proven that such phases cannot appear in thermal equilibrium [3; 4; 5]. However, it turned out to be possible to realize such phases in periodically driven systems, both closed [6; 7; 8; 9; 10; 11] and dissipative [12; 13]. Among the latter, systems of atoms in optical cavities have emerged as an ideal platform to realize continuous time-crystalline phases [14; 15; 16], where an effectively time-independent drive of the atomic system is counterbalanced by the loss of photons out of the cavity mirrors. In these phases, continuous time-translation invariance is spontaneously broken, and oscillations persist even though the system possesses a macroscopic number of degrees of freedom, among which energy can be redistributed via interactions. Since the phase space of scattering by cavity-mediated interactions between atoms is limited, due to their long range, redistribution of energy through these processes is inefficient [17; 18; 19]. However, the intrinsic atomic short-range interactions allow for efficient redistribution of energy among the atoms. Indeed, experiments show strong indications that these interactions are one of the main fundamental limiting factors for the measured lifetime of the time crystal [12]. Despite their crucial role, short-range atomic interactions have not been theoretically investigated so far in a systematic way for continuous time crystals in atom-cavity setups. In this work, we undertake this task. Not only do we provide a full picture of the possible destabilization processes, but we also show that short-range interactions can alter the nature of the time crystal itself. We consider a simple and experimentally realizable mechanism for the appearance of time-crystalline phases for an interacting BEC coupled to two cavity modes [20]. By extending approaches for classical non-linear dissipative systems to our many-body problem, we obtain an analytical description of the time crystal in terms of cavity-induced critical bifurcations and show how inter-atomic interactions can modify the nature of the latter.
Within this approach, we also compute the dependence of the energy-redistribution rates on external parameters and identify the scattering processes responsible for making the time crystal metastable. The analytical understanding of the results, which we also underpin with numerical analysis, allows for a deep insight into the generic features of the phenomenology beyond the specific model considered and provides orientation for future investigations both in theory and experiment. _Model.--_ The system considered is an ultracold gas of bosonic atoms in a BEC state, dispersively coupled with equal strength to two modes of an optical cavity. In this regime, a photon imparts a recoil momentum of \(Q=2\pi/\lambda\) to an atom, with \(\lambda\) being the wavelength of the photon in a given mode. In the thermodynamic limit, the atomic BEC at momentum \(k\) is described by a complex field \(\psi_{k}\) satisfying the Gross-Pitaevskii mean-field equations. Furthermore, in the limit of a small transverse extent of the BEC compared to the cavity waist, we can simplify the model to one spatial dimension [19; 20] \[\begin{split} i\partial_{t}\psi_{k}=& k^{2}\psi_{k}+U\sum_{q,q^{\prime}}\psi_{q}\psi_{q^{\prime}}\bar{\psi}_{q+q^{\prime}-k}\\ &+\frac{\tilde{\eta}}{\sqrt{2}}\sum_{j=1,2}\operatorname{Re}\left(\phi_{j}\right)\left(\psi_{k+Q}+\psi_{k-Q}\right)\,,\end{split} \tag{1}\] where the bar denotes complex conjugation. This equation has been written in units of the recoil energy \(E_{R}=\hbar^{2}Q^{2}/2m\) and in the rotating frame of the laser. The time-dependence of the fields is kept implicit and the atom field has been normalized to 1. The cavity-mode wavelengths have been chosen to be equal, as we assume the modes differ in the transverse direction [20]. The coupling strength \(\tilde{\eta}\) can experimentally be tuned by the strength of the transverse pump, while the atoms interact with each other through a contact interaction of strength \(U\). The complex field \(\phi_{j}\) corresponds to the coherent cavity-field amplitude, which satisfies the equation \[\begin{split} i\partial_{t}\phi_{j}=&\left(\Delta_{j}-i\kappa\right)\phi_{j}\\ &+\frac{\tilde{\eta}}{2\sqrt{2}}\sum_{k=-\infty}^{\infty}\bar{\psi}_{k}\left(\psi_{k+Q}+\psi_{k-Q}\right),\end{split} \tag{2}\] where the cavity field has been normalized by the square of the atom number. The cavity linewidths, \(\kappa\), have been assumed to be identical for both modes. In the following we will consider \(\kappa\) on an energy scale similar to the recoil energy, as realized for instance in [21]. In the actual implementation of the dispersive atom-cavity coupling, the characteristic frequency of each cavity mode \(\Delta_{j}\) corresponds to the detuning of the mode frequency with respect to laser-driven two-photon transitions [20]. The steady-state of this model can break time-translation invariance when the two detunings have opposite signs. With this in mind, the detunings are parametrized as \(\Delta_{1}=-\left(\Delta-\frac{\delta}{2}\right)\) and \(\Delta_{2}=\Delta+\frac{\delta}{2}\). By choosing \(0<\delta<2\Delta\), the negative detuning has the smaller amplitude, \(|\Delta_{1}|<|\Delta_{2}|\). _Nature of the time crystal.--_ Below a critical coupling strength \(\eta_{c}\), all atoms are in the homogeneous state \(\psi_{0}\), and the coherent part of the cavity fields is empty. This configuration is denoted as the normal phase (NP), and it is always a fixed point of the equations of motion eqs. (1) and (2).
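Eqs. (1) and (2) can be integrated directly once the momentum ladder is truncated; a minimal sketch (assuming numpy, a five-mode truncation \(k\in\{-2Q,\dots,2Q\}\), and illustrative parameter values, not those used for the figures):

```python
import numpy as np

# Momentum ladder k in {-2,...,2} (units of Q), energies in units of E_R.
ks = np.arange(-2, 3)
U, eta, kappa = 0.01, 0.5, 0.4     # eta plays the role of the bare coupling
Delta, delta = 0.6, 0.2
Deltas = np.array([-(Delta - delta / 2), Delta + delta / 2])

psi = np.zeros(5, complex); psi[2] = 1.0   # BEC initially in k = 0
phi = 1e-3 * np.ones(2, complex)           # small seed in the cavity fields

def rhs(psi, phi):
    drive = (eta / np.sqrt(2)) * phi.real.sum()
    dpsi = np.zeros_like(psi)
    for a in range(5):                     # eq. (1), momentum conservation
        coll = sum(psi[b] * psi[c] * np.conj(psi[d])
                   for b in range(5) for c in range(5)
                   for d in [b + c - a] if 0 <= d < 5)
        hop = (psi[a + 1] if a + 1 < 5 else 0) + (psi[a - 1] if a - 1 >= 0 else 0)
        dpsi[a] = -1j * (ks[a]**2 * psi[a] + U * coll + drive * hop)
    pol = sum(np.conj(psi[a]) * ((psi[a + 1] if a + 1 < 5 else 0)
                                 + (psi[a - 1] if a - 1 >= 0 else 0))
              for a in range(5))
    dphi = -1j * ((Deltas - 1j * kappa) * phi + (eta / (2 * np.sqrt(2))) * pol)
    return dpsi, dphi

dt = 1e-3
for _ in range(50000):                     # crude Euler integration
    dpsi, dphi = rhs(psi, phi)
    psi, phi = psi + dt * dpsi, phi + dt * dphi
print(np.abs(psi)**2, np.abs(phi))
```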
As \(\tilde{\eta}\) is increased beyond \(\eta_{c}\) the NP fixed point becomes unstable and the system enters a state where a fraction of the atom population is transferred to \(\psi_{\pm Q}\) and the coherent fields of the cavity become finite. This symmetry-broken state is often referred to as the superradiant (SR) or self-organized state [22; 23]. The frequency \(\omega_{c}\) of the excitation that becomes undamped above \(\eta_{c}\) can be derived through a linear expansion around the NP fixed point [24] (see [25] for an alternative approach). One finds three non-negative real solutions for the frequency of the critical mode. These three solutions are \(\omega_{c}=0\), a resonance at the energy of the Bogoliubov excitation of the BEC at the recoil momentum, \(\omega_{c}=\omega_{a}=\sqrt{E_{R}\left(E_{R}+2U\right)}\), and a solution given by \[\omega_{c}=\sqrt{\frac{\delta^{2}}{4}+\sqrt{\left(4\Delta^{2}-\delta^{2}\right)\left(\Delta^{2}+\kappa^{2}\right)}-\Delta^{2}-\kappa^{2}}, \tag{3}\] which is solely determined by cavity parameters, that is, it does not depend on \(U\) and \(E_{R}\). This feature, which can be attributed to the fact that the cavity is the only dissipation channel, implies a robustness of this self-sustained periodic signal to perturbations of the nonlinear medium that causes this signal to appear in the first place. Out of the three modes, the critical one is identified as the one with the smallest real critical coupling. Differently from the frequency, the critical coupling always depends on both cavity and atom parameters (see supplementary), such that the phase diagram depends on all parameters of the theory. In fig. 1(a) the frequency of the critical mode at \(\eta_{c}\) is plotted as a function of \(\kappa\) and \(\Delta\); it is a good order parameter for distinguishing the three different phases of the system. For \(\Delta<\delta/2\) both cavity modes have a positive detuning and the system always exhibits static superradiance (SSR), characterized by a critical mode with zero frequency. SSR requires a finite critical atom-cavity coupling, such that the critical mode is a polariton. For \(\Delta>\delta/2\) one of the modes acquires a negative detuning. Differently from a positively-detuned mode, a negatively-detuned one disfavors a superradiant density modulation. The competition between the two cavity modes induces an oscillating superradiant phase (OSR) [26; 20], which also requires a finite coupling strength, such that the critical mode is again a polariton. Instead, when \(\omega_{c}\) equals \(\omega_{a}\), the critical coupling \(\eta_{c}\) vanishes (see supplementary), making the critical mode purely atomic; we refer to this instability as the atomic instability (AI). Both the OSR and AI critical modes break continuous time-translation invariance and can thus potentially signal a continuous time-crystal phase. However, whether the latter is stable is determined by non-linear effects not included so far.

Figure 1: The critical frequency of the instability is shown in the lower plot of a) as a function of \(\Delta\) and \(\kappa\). By tuning \(\Delta\), the critical mode changes from exhibiting static to oscillating superradiance and a purely atomic instability over a large range of cavity loss rates. The upper plot shows the critical frequency and coupling along the white dashed line in the lower plot. In b) the sign of the cubic interaction as a function of \(\Delta\) and \(U\) is plotted for \(\kappa=0.4E_{R}\); this determines the stability of the symmetry-broken state beyond the linear analysis. For the entire figure, \(\delta=0.2E_{R}\).
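The linear phase boundaries of fig. 1(a) can be traced directly from eq. (3); a minimal sketch (assuming numpy, all energies in units of \(E_{R}\), and \(\delta=0.2\) as in the figure):

```python
import numpy as np

delta = 0.2                            # cavity-mode splitting, units of E_R

def omega_c(Delta, kappa):
    """Critical frequency from eq. (3); NaN if no oscillating solution."""
    inner = (4 * Delta**2 - delta**2) * (Delta**2 + kappa**2)
    if inner < 0:                      # Delta < delta/2: both detunings positive
        return float('nan')
    rad = delta**2 / 4 + np.sqrt(inner) - Delta**2 - kappa**2
    return np.sqrt(rad) if rad >= 0 else float('nan')

for Delta in [0.05, 0.3, 0.6]:         # below and above delta/2
    print(Delta, omega_c(Delta, kappa=0.4))
```

For \(\Delta=0.6\) and \(\kappa=0.4\) this yields \(\omega_{c}\approx 0.586\), matching the value quoted for fig. 3.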
In order to capture these non-linear effects in the present interacting many-body system, we perform a systematic perturbative expansion in the relative distance from the critical point, \(\eta=\left(\tilde{\eta}-\eta_{c}\right)/\eta_{c}\). The resulting effective non-linear equation is of the Stuart-Landau form (see e.g. [27]) and is an equation of motion for the collective degrees of freedom which are excited in the SR phases. These degrees of freedom constitute the so-called center manifold and are defined by the critical mode, which is composed of both cavity modes as well as of the zero and recoil momentum components of the BEC, \(\psi_{0}\) and \(\psi_{\pm Q}\). Within the center manifold and to leading order in \(\eta\), the recoil momentum component is given by \[\psi_{\pm Q}(t)=\sqrt{\eta}R\left(c_{+}\mathrm{e}^{i\omega_{c}t}+c_{-}\mathrm{e}^{-i\omega_{c}t}\right), \tag{4}\] with \(c_{\pm}\) being the atomic components of the critical-mode eigenvector obtained from the linear analysis [24]. The cavity fields have the same form, with \(c_{\pm}\) replaced by the cavity components of the critical mode. Finally, since to leading order the only occupied atom components are \(\psi_{0}\) and \(\psi_{\pm Q}\), these are linked by normalization such that \[\psi_{0}=\sqrt{1-\left|\psi_{Q}\right|^{2}-\left|\psi_{-Q}\right|^{2}}\sim b_{0}+b_{+}\mathrm{e}^{i2\omega_{c}t}+\bar{b}_{+}\mathrm{e}^{-i2\omega_{c}t}, \tag{5}\] with \(b_{0}=1-\eta R^{2}\left(\left|c_{+}\right|^{2}+\left|c_{-}\right|^{2}\right)\) and \(b_{+}=-\eta R^{2}c_{+}\bar{c}_{-}\). The perturbative approach yields an equation of motion for the SR amplitude \(R\): \[\dot{R}=\gamma R-g^{r}R^{3}, \tag{6}\] where \(\gamma\) is the exponential growth rate of the critical mode obtained from the linear analysis, which in this case can be shown to be positive (see supplementary). The non-linearity of the center manifold, or in other words the strength of the self-interaction of the excitations present in the critical mode, is quantified by \(g^{r}\) (see supplementary for closed expressions for these quantities). For stable time-crystalline and static solutions, \(R\) must be time-independent, real, and positive: \[R=\sqrt{\frac{\gamma}{g^{r}}}>0. \tag{7}\] As \(\gamma>0\), our analytic solutions can only be stable if \(g^{r}>0\). This is physically clear, since otherwise the attractive self-interaction would lead to a first-order transition into a phase that requires higher-order non-linearities for stabilization.
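The saturation implied by eqs. (6) and (7) is easy to check by direct integration; a minimal sketch (assuming numpy and illustrative values \(\gamma=0.1\), \(g^{r}=1\)):

```python
import numpy as np

gamma, g_r = 0.1, 1.0           # illustrative growth rate and nonlinearity
R, dt = 1e-3, 1e-2              # small seed amplitude, time step

for _ in range(20000):          # forward-Euler integration of eq. (6)
    R += dt * (gamma * R - g_r * R**3)

print(R, np.sqrt(gamma / g_r))  # R saturates at sqrt(gamma / g_r)
```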
The sign of \(g^{r}\) is shown in fig. 1(b). If \(\omega_{c}\) is pushed to \(\omega_{a}\), \(g^{r}=0\), i.e. the self-interaction vanishes as the critical mode is purely atomic, which corresponds to the white region in fig. 1(b). As the fraction \(\gamma/g^{r}\) goes to zero when \(\omega_{c}\) approaches \(\omega_{a}\) (see supplementary), the AI phase has no stable time-crystalline solution. Short-range interactions between the atoms qualitatively modify \(g^{r}\) and lead to two separatrices in fig. 1(b). The expression for the separatrix \(U_{\mathrm{c2}}(\Delta)\), drawn with a solid line, is given in the supplementary material, while the separatrix \(U_{\mathrm{c1}}(\Delta)\), between the white and the blue region, is defined by the condition that the energy cost of a Bogoliubov excitation, \(\omega_{a}\), equals \(\Delta\). When \(U>U_{\mathrm{c1}}\) the self-interactions of the critical mode become finite and repulsive as \(\omega_{c}<\omega_{a}\), leading to a finite cavity component of the critical mode. It is further remarkable that the sign of the self-interactions can be changed via \(U\). Indeed, within the blue region in fig. 1(b), that is, for \(U_{\mathrm{c1}}<U<U_{\mathrm{c2}}\), the self-interactions of the critical mode are attractive: \(g^{r}<0\). This is due to the fact that the short-range repulsion \(U\), which penalizes density modulations and in particular the excitation of the recoil component \(\psi_{\pm Q}\), is not sufficient to counteract the decrease of energy due to the coupling to the negatively detuned cavity mode. The resulting instability of the stationary OSR solution corresponds to a subcritical Hopf bifurcation [28] of eq. (6). On the other hand, when \(U>U_{\mathrm{c2}}\) (green region in the figure), the short-range repulsion penalizes density modulations enough to change the sign of the self-interaction of the critical mode and thus stabilize the OSR phase. This corresponds to a transition from a subcritical to a supercritical Hopf bifurcation. _Energy redistribution and melting of the time crystal.--_ The OSR time crystal is thus, up to this point, found to exist in a stable fashion as a supercritical Hopf bifurcation. Still, to fully assess its stability, one must allow for energy redistribution between all degrees of freedom, including those not belonging to the critical polariton mode defining the center manifold of the bifurcation. We will refer to those as the not-center-manifold (NCM) modes. Hence, one needs to treat the many-body problem of scattering between quasi-particles and a time-dependent coherent field.

Figure 2: a) The dynamic nature of the OSR phases combined with finite atom interactions leads to the occupation of atom modes out of the center manifold, through the symmetric and asymmetric processes illustrated here. b) The scaling of the growth rates, computed from the Floquet quasi-energies of the linearized equations: the asymmetric channel is marked with orange stars, with a square-root fit (orange line), and the symmetric channel is marked with red stars, with a linear fit (red line). The same parameters as in fig. 3 have been used.

Let us first predict which NCM modes initially participate in the scattering process, assuming we are only slightly into the OSR phase. In this regime, we can exploit our analytical knowledge from eqs. (4) and (5). The fastest-growing NCM mode results from scattering between the atomic components \(b_{0}\) and \(c_{\pm}\) of the center manifold, as illustrated in fig. 2(a). For this process, the outgoing NCM modes with energies \(\epsilon_{q},\epsilon_{q^{\prime}}\) have to satisfy \(q+q^{\prime}=Q,\ \epsilon_{q}+\epsilon_{q^{\prime}}=\omega_{c}\). Since here \(q\neq-q^{\prime}\), we call this the asymmetric channel. Near the critical point, we can approximate \(\epsilon_{q}\) with the Bogoliubov dispersion of the BEC excitations in the absence of the cavity field, which for small \(U\) reads \(\omega_{B}(k)\approx E_{R}k^{2}+U\). This yields \(q=Q/2+\sqrt{\omega_{c}-E_{R}/2-2U}/\sqrt{2}\) and \(q^{\prime}=Q-q\).
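With the fig. 3 parameters (\(\omega_{c}=0.586E_{R}\), \(U=0.01E_{R}\)), the two resonant momenta of this channel can be evaluated directly; a minimal sketch (assuming numpy, momenta in units of \(Q\) and energies in units of \(E_{R}\)):

```python
import numpy as np

E_R, U, omega_c = 1.0, 0.01, 0.586   # fig. 3 parameters, units of E_R

# Asymmetric channel: q + q' = Q and eps_q + eps_q' = omega_c,
# with eps_k ~ E_R k^2 + U for small U (momenta in units of Q).
q = 0.5 + np.sqrt(omega_c - E_R / 2 - 2 * U) / np.sqrt(2)
print(q, 1.0 - q)                    # the two resonant momenta q and q' = Q - q
```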
From the solution of eqs. (4) and (5), we predict an exponential growth of these two Bogoliubov modes with a rate proportional to \(U\sqrt{\eta}\). This asymmetric channel can be closed off if \(\omega_{c}<E_{R}/2+2U\), which leaves us with a different channel where the component \(b_{0}\) scatters with \(b_{+}\), or \(c_{+}\) with \(c_{-}\). Both these processes produce a symmetric NCM pair with \(q=-q^{\prime}=\sqrt{\omega_{c}-U}\). One representative process of this symmetric channel is shown in fig. 2(a). In contrast to the asymmetric counterpart, we predict an exponential growth rate proportional to \(U\eta\). In order to further verify the above predictions, we have linearized eqs. (1) and (2) around the OSR phase and extracted the rates by computing the Floquet quasi-energies. The result of this calculation is shown in the lower panel of fig. 3. It is seen that the predicted momenta (orange marks for the asymmetric channel and red mark for the symmetric channel) are only reliable close to the phase transition, as the dispersion of the NCM modes is quickly modified by the growing oscillating density modulation. We also find an additional momentum component that grows (marked in green), which arises from the scattering between a negative-momentum NCM mode in the symmetric channel and the recoil component of the center manifold. The computed growth rates for the symmetric and asymmetric modes are shown in fig. 2(b), and in both cases an excellent agreement with our simple predictions based on fig. 2(a) is demonstrated. Finally, in order to fully confirm our predictions, we performed a full numerical integration using a Runge-Kutta-4 routine, starting from the OSR phase at \(\eta=0.06\), corresponding to the white dashed line in the lower panel of fig. 3. After evolving the system for 200 periods we compared the momentum distribution with the predictions based on the Floquet quasi-energies and found excellent agreement, as shown in the upper panel of fig. 3. An important outcome of our analysis is that the time crystal is always metastable due to energy redistribution caused by scattering out of the center manifold. Its lifetime, however, increases significantly by choosing \(\omega_{c}<E_{R}/2\), which prohibits the asymmetric scattering processes that lead to much higher growth rates.
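The Floquet procedure used above can be sketched generically: integrate the linearized, periodic system over one period and read off the growth rates from the monodromy matrix. A minimal sketch (assuming numpy), on a toy parametrically driven oscillator rather than the actual linearized atom-cavity equations:

```python
import numpy as np

# Toy 2x2 Mathieu-like system x' = A(t) x with period T = pi.
omega, eps, T = 1.0, 0.3, np.pi
def A(t):
    return np.array([[0.0, 1.0],
                     [-(omega**2) * (1 + eps * np.cos(2 * t)), 0.0]])

M, n_steps = np.eye(2), 10000
dt = T / n_steps
for n in range(n_steps):         # crude forward-Euler propagation of M
    M += dt * A(n * dt) @ M

mu = np.linalg.eigvals(M)        # Floquet multipliers
print(np.log(np.abs(mu)) / T)    # exponential growth rates per unit time
```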
_Conclusions.--_ We have provided a systematic analysis of the role of short-range interactions on the nature and stability of continuous time crystals in dissipative many-body systems of ultracold bosonic atoms in cavities. First, we have shown that short-range interatomic interactions can alter the nature of the time crystal by transforming the underlying classical bifurcation from sub- to supercritical. Second, we have studied the effect of short-range interactions on heating and melting of the time crystal. The heating mechanism we have discussed arises due to the oscillating nature of the atomic fields \(\psi_{0}\) and \(\psi_{|Q|}\). As shown in the supplementary material, the amplitude of these fields does not depend on the details of the underlying critical polaritonic mode, but rather only on the frequency of the oscillations and the proper dimensionless distance from the critical point. Furthermore, we find that the cavity losses cannot efficiently cool the system [29; 30; 31] (NCM modes can be de-excited only at higher order in our expansion; see supplementary material). This suggests that the heating mechanism we identified is generic for these cavity systems [32], as long as the cavity linewidth is comparable to the recoil energy. We note that time-dependent Hartree-Fock approximations would miss this heating [33], as they lack collisions and thus redistribution [34]. As we show, it is precisely these effects that eventually lead to the metastable nature of the time-crystalline state, consistent with numerical predictions in related models [35; 36].

Figure 3: The lower plot shows the exponential growth rates of the atomic modes outside of the center manifold. The parameters are equivalent to those in fig. 1 with \(\Delta=0.6E_{R}\), resulting in \(\omega_{c}=0.586E_{R}\), and we choose \(U=0.01E_{R}\). The orange ticks indicate the predicted momenta based on the asymmetric channel, while the red tick signifies the symmetric-channel momentum. The green tick is the atom mode coupled to the symmetric channel through the cavity. The upper plot shows the resulting atom distribution after 200 periods at the dashed line in the lower plot, both from numerical integration of eqs. (1) and (2) (blue) and from the linearized prediction (dashed green line).

Finally, we point out that the heating mechanism described here is analogous to preheating in the early universe [37; 38], where the weakly interacting and oscillating coherent inflation field decays into a cascade of exponentially growing fluctuations, leading to extreme non-equilibrium conditions inaccessible to perturbative methods and finally to prethermalization [39]. The analytic discussion presented here corresponds to the linearized classical regime [40], which at later times will be superseded by increasingly non-linear effects, leading to a cascade of even more quickly growing fluctuations that eventually thermalize [41] and thus destroy the time-crystalline phase. It will be interesting to pursue this analogy deeper into the highly excited regime using appropriate atom-photon diagrammatic approaches [42; 43]. CHJ would like to thank Johnathan Dubois for many helpful and insightful discussions.
2302.06663
Mergers of neutron stars and black holes with cores of giant stars: a population synthesis study
We perform population synthesis of massive binaries to study the mergers of neutron stars (NSs) and black holes (BHs) with the cores of their giant secondaries during common envelope evolution (CEE). We use different values of the efficiency parameter $\alpha_{\rm CE}$ in the framework of the energy formalism for traditional CEE ($\alpha_{\rm CE} \leq 1$) and including additional energy sources to unbind the envelope ($\alpha_{\rm CE} > 1$). We constrain the possible values of $\alpha_{\rm CE}$ by comparing the results of our simulations with local rate densities of binary compact object mergers as inferred from gravitational-wave observations. We find two main evolutionary pathways of binary systems that result in NS-core mergers, while only one of them can also lead to the merger of a BH with the core of the giant star. We explore the zero age main sequence (ZAMS) statistical properties of systems that result in NS/BH-core mergers and find that the two evolutionary channels correspond to a bimodal distribution of orbital separations. We estimate the percentage of the mergers' event rates relative to core collapse supernovae (CCSNe). We include the effect of mass accreted by the NS/BH during CEE in a separate set of simulations and find it does not affect the mergers' event rates.
Aldana Grichener
2023-02-13T19:48:47Z
http://arxiv.org/abs/2302.06663v2
# Mergers of neutron stars and black holes with cores of giant stars: a population synthesis study ###### Abstract We perform population synthesis of massive binaries to study the mergers of neutron stars (NSs) and black holes (BHs) with the cores of their giant secondaries during common envelope evolution (CEE). We use different values of the efficiency parameter \(\alpha_{\rm CE}\) in the framework of the energy formalism for traditional CEE (\(\alpha_{\rm CE}\leq 1\)) and including additional energy sources to unbind the envelope (\(\alpha_{\rm CE}>1\)). We constrain the possible values of \(\alpha_{\rm CE}\) by comparing the results of our simulations with local rate densities of binary compact object mergers as inferred from gravitational-wave observations. We find two primary evolutionary pathways of binary systems that result in NS-core mergers, while only one of them can also lead to the merger of a BH with the core of the giant star. We explore the zero age main sequence (ZAMS) statistical properties of systems that result in NS/BH-core mergers and find that the two evolutionary channels correspond to a bimodal distribution of orbital separations. We estimate the percentage of the mergers' event rates relative to core collapse supernovae (CCSNe). We include the effect of mass accreted by the NS/BH during CEE in a separate set of simulations and find it does not affect the mergers' event rates. binaries: general - stars: neutron stars - stars: black holes - stars: massive - methods: numerical Aldana Grichener ## 1 Introduction Most massive stars are in close multiple systems consisting of two or more stars in orbit around their common center of mass. The immense swelling of one or both stars in a binary system at late evolutionary phases might lead to a substantial decrease in the orbital separation of the system, to the point where mass transfer takes place. If the mass transfer becomes unstable then the system might enter a common envelope evolution (CEE) phase (e.g., Paczynski, 1976; Iben and Livio, 1993; Taam and Sandquist, 2000; Izzard et al., 2012; Ivanova et al., 2013; Roepke and De Marco, 2022) in which the envelopes of both stars merge. In cases where at the onset of CEE the stars in the binary are a neutron star (NS) or a black hole (BH) and a massive giant, the compact object1 is immersed within the envelope of the giant star and spirals closer to its core. This can lead to either the ejection of the envelope and a surviving compact object-core pair or to the merger of the core with the NS/BH. Another proposed alternative is the formation of a stable Thorne-Zytkow object (Thorne and Zytkow, 1977). The fate of massive binaries is of significant interest due to the various astrophysical phenomena it engenders. Footnote 1: Throughout the manuscript "compact object" refers only to a NS or a BH. Mergers of NSs/BHs with cores of giant stars are a topic of ongoing research in a variety of contexts (e.g., Fryer and Woosley, 1998; Zhang and Fryer, 2001; Barkov and Komissarov, 2011; Chevalier, 2012; Schroder et al., 2020; Metzger, 2022; Guarini et al., 2022). In particular, such mergers can lead to transient events known as common envelope jet supernovae (CEJSNe; Soker and Gilkis, 2018). In a CEJSN event a NS/BH is engulfed by a giant star and spirals-in inside its envelope. The compact object accretes mass via an accretion disk and launches two opposite jets that propagate through the giant's envelope and expel mass.
Eventually, the NS/BH reaches the dense core of the giant star and launches more energetic jets as they merge. The effects of the jets that the compact object launches on the envelope and on the core of the giant star are broadly studied in one dimensional and three dimensional simulations (e.g., Moreno Mendez et al., 2017; Moriya, 2018; Gilkis et al., 2019; Lopez-Camara et al., 2019; Grichener et al., 2021; Ragoler et al., 2022; Hillel et al., 2022; Schreier et al., 2022). Unfortunately, the high opacity of the giant's envelope does not allow for direct observations of core-NS/BH mergers with existing facilities. Moreover, even though the merger of a compact object with the core of a giant star emits gravitational waves (e.g., Ginat et al., 2020), their signal is much lower than in the case of binary compact object mergers, and is undetectable at the moment. A future detection by next-generation gravitational waves detectors can serve as an observational signature of a merger event. Meanwhile, we can study the effect that NS/BH-core mergers would have on their surroundings. Due to the high energies of the jets in their correlated CEJSN transient, NS/BH-core mergers might account for several astrophysical phenomena, such as heavy r-process nucleosynthesis that occurs inside jets that a NS launches as it merges with the core of a giant star (Papish et al., 2015; Grichener and Soker, 2019; Grichener et al., 2022; Grichener and Soker, 2022) and high energy neutrinos emission from jets that a BH launches while spiraling-in inside the envelope of a giant prior to the merger with its core (Grichener and Soker, 2021). Moreover, some peculiar transient events such as fast blue optical transients and other rare supernovae (SNe) might hint at NS/BH-core mergers and the CEJSN mechanism as well (e.g., Thone et al., 2011; Soker and Gilkis, 2018; Soker et al., 2019; Dong et al., 2021; Soker, 2022; Soker, 2022). The recently proposed hypernebula transient that is powered by jets accompanying hyper-Eddington mass transfer from an evolved post-main sequence (MS) star onto a NS/BH shortly before CEE (Sridhar and Metzger, 2022; Sridhar et al., 2022) might serve as a precursor of a NS/BH-core merger. Estimating the mergers' event rates is crucial to understand whether they can account for said phenomena. In this study we use the population synthesis code COMPAS (Team COMPAS: Riley et al., 2022) to find the merger rates of NSs/BHs with cores of giant stars. Many population synthesis studies of double compact object binaries formation and merger were performed using COMPAS (e.g., Stevenson et al., 2017; Vigna-Gomez et al., 2018; Neijssel et al., 2019; Vigna-Gomez et al., 2020; Broekgaarden et al., 2021; Raveh et al., 2022). To our knowledge, the only study which performed population synthesis of NS/BH-core mergers is Schroder et al. (2020). In the present work we include in the simulations the effects of additional energy sources that are required to unbind the envelope and estimate the merger rates in these scenarios. Moreover, we find different evolutionary routes that lead to NS/BH-core mergers and study the structure of the core during the merger event. Our study is organized as follows. We begin by describing the initial parameters and prescriptions we use in our population models (section 2). We then present the main evolutionary channels towards NS/BH-core mergers, the properties of binary systems that result in these mergers and their event rates (section 3). 
We summarize our findings and discuss their relevance to previous works in section 4. ## 2 Population model We use the population synthesis code Compact Object Mergers: Population, Astrophysics and Statistics (COMPAS; Stevenson et al., 2017; Vigna-Gomez et al., 2018; Team COMPAS: Riley et al., 2022) to find the evolutionary pathways of binary systems that result in the merger of a NS/BH and the core of a giant star, explore their statistical properties and estimate the mergers' event rate. COMPAS generates populations of isolated stellar binary systems and evolves the stars in the binary using the analytical fits in Hurley et al. (2000) based on the stellar models of Pols et al. (1998). We sample the initial distribution of binary properties at the ZAMS (zero age main sequence) according to the "Fiducial model" of Vigna-Gomez et al. (2018) as described below. Henceforth we refer to the initially heavier star as the primary star, and the initially lighter star as the secondary star. We draw the mass of the primary star from the Kroupa initial mass function (IMF) in the form \(dN/dM_{\rm 1,ZAMS}\propto M_{\rm 1,ZAMS}^{-2.3}\)(Kroupa, 2001) with masses in the range \(5\leq M_{\rm 1,ZAMS}/M_{\odot}\leq 100\), and the mass of the secondary star from a flat distribution in the mass ratio with \(0.1\leq q_{\rm ZAMS}\equiv M_{\rm 2,ZAMS}/M_{\rm 1,ZAMS}\leq 1\)(Sana et al., 2012). The initial separations follow the flat-in-the-log distribution between \(0.1\leq a_{\rm ZAMS}/{\rm AU}\leq 1000\)(Sana et al., 2012). We take all of the orbits to be circular at ZAMS (i.e., \(e_{\rm ZAMS}=0\)), and all the stars in our sample have solar metallicity \(Z=0.0142\)(e.g., Asplund et al., 2009). In general, massive stars end their lives in SN explosions. COMPAS differentiates between three SN scenarios: regular hydrogen-rich core collapse supernovae (CCSNe), electron capture supernovae (ECSNe) and ultra-stripped supernovae (USSNe), according to the core and envelope masses prior to the explosion. For NS remnants, we take a bimodal natal-kick velocity distribution where CCSNe constitute the higher mode of \(\sigma_{\rm high}=265\) km s\({}^{-1}\) and ECSNe together with USSNe contribute to the lower mode of \(\sigma_{\rm low}=30\) km s\({}^{-1}\). We draw the BH natal kicks from the same bimodal distribution reduced according to the fallback model of Fryer et al. (2012). We set the maximum allowed NS mass to be \(M_{\rm NS,max}=2M_{\odot}\), as according to Ozel et al. (2010) this value reproduces the observational mass gap between NSs and BHs. To determine the orbital separation between both stars in the binary system after a CEE event, COMPAS uses the energy formalism in which the energy difference between the orbital energies before and after the CEE phase is compared with the binding energy of the envelope (van den Heuvel, 1976; Webbink, 1984; Livio and Soker, 1988; Iben and Livio, 1993; see Ivanova et al., 2013 for a review). 
In the case of a NS/BH that is swallowed by a giant star \[\begin{split} E_{\rm bind}&=\frac{\alpha_{\rm CE}GM_{\rm giant,post-CE}M_{\rm NS/BH}}{2a_{\rm post-CE}}\\ &-\frac{\alpha_{\rm CE}GM_{\rm giant,pre-CE}M_{\rm NS/BH}}{2a_{\rm pre-CE}}\end{split} \tag{1}\] where \(\alpha_{\rm CE}\) is the common envelope efficiency parameter, \(M_{\rm NS/BH}\) is the mass of the compact object, \(M_{\rm giant,pre-CE}\) and \(M_{\rm giant,post-CE}\) are the masses of the giant star before and after the CEE phase respectively, and \(a_{\rm pre-CE}\) and \(a_{\rm post-CE}\) are the orbital separations before and after CEE respectively. The binding energy is calculated using the lambda formalism of de Kool (1990) as implemented by Xu and Li (2010). To find whether the NS/BH merges with the giant star's core, we compare the orbital separation of the systems after the CEE phase with the radius of the core, which is estimated using approximate analytical relations between the mass and radius of the core in different evolutionary phases from Hall and Tout (2014). If the orbital separation is smaller than the radius of the core, we assume that a NS/BH-core merger has occurred. Traditionally, the maximal value of the common envelope efficiency parameter \(\alpha_{\rm CE}\) is one, representing a case where the entire difference between orbital energies goes to unbind the envelope of the giant star and is precisely sufficient for this purpose. However, many hydrodynamical simulations do not find full envelope ejection, in contradiction to observations of post common envelope systems (e.g., Passy et al., 2012; Ricker and Taam, 2012; Kuruwita et al., 2016; Ohlmann et al., 2016; Iaconi et al., 2017; Glanz and Perets, 2021; Glanz and Perets, 2021), implying that additional energy sources are required to unbind the envelope. Several physical processes have been suggested as possible mechanisms for the ejection of the common envelope, such as the core-companion system's interaction with a circumbinary disk in the final CEE stages (e.g., Kashi and Soker, 2011; Kuruwita et al., 2016), jets launched from the compact object while it is inside the envelope of the secondary star (e.g., Sabach et al., 2017; Soker, 2017), the recombination energy of hydrogen and helium (e.g., Ivanova et al., 2015), envelope inflation followed by long period pulsations (e.g., Clayton et al., 2017) and dust driven winds (Glanz and Perets, 2018). In the energy formalism an additional energy source can be represented by \(\alpha_{\rm CE}>1\) regardless of its nature. We perform simulations for \(0.1\leq\alpha_{\rm CE}\leq 5\) to allow for both traditional CEE and scenarios with additional energy sources. For each value of \(\alpha_{\rm CE}\), we evolve \(10^{7}\) isolated binaries with the initial parameter distribution described above. 
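As a rough illustration of the ingredients of this section, the following minimal Python sketch (not part of COMPAS; the helper names, the toy binding energy and core radius, and the extension of the Kroupa IMF down to \(0.01M_{\odot}\) are our assumptions) draws ZAMS binaries from the distributions described above, inverts equation (1) for the post-CE separation, and evaluates the IMF averages that enter the rate normalization of section 3.1.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_zams_binaries(n):
    """Draw ZAMS binaries following the 'Fiducial model' distributions:
    Kroupa-slope primary mass (dN/dM ~ M^-2.3) in [5, 100] Msun, flat
    mass ratio in [0.1, 1], log-flat separation in [0.1, 1000] AU, e = 0."""
    g = -2.3
    u = rng.random(n)
    m1 = (5.0**(g + 1) + u * (100.0**(g + 1) - 5.0**(g + 1)))**(1.0 / (g + 1))
    q = rng.uniform(0.1, 1.0, n)
    a = 10.0 ** rng.uniform(-1.0, 3.0, n)   # AU, flat in log10
    return m1, q * m1, a

def post_ce_separation(alpha_ce, m_pre, m_post, m_co, a_pre, e_bind):
    """Invert equation (1) for the post-CE separation (AU, Msun, yr units).
    e_bind is the envelope binding energy, e.g. from the lambda formalism."""
    G = 4.0 * np.pi**2                      # AU^3 Msun^-1 yr^-2
    return (alpha_ce * G * m_post * m_co / 2.0) / (
        e_bind + alpha_ce * G * m_pre * m_co / (2.0 * a_pre))

def kroupa_xi(m):
    """Unnormalized three-segment Kroupa (2001) IMF (assumed range 0.01-100 Msun)."""
    m = np.asarray(m, dtype=float)
    return np.where(m < 0.08, (m / 0.08)**-0.3,
           np.where(m < 0.5, (m / 0.08)**-1.3,
                    (0.5 / 0.08)**-1.3 * (m / 0.5)**-2.3))

# IMF averages entering the rate normalization of section 3.1:
m = np.logspace(np.log10(0.01), 2.0, 200_001)
xi = kroupa_xi(m)
n_tot = np.trapz(xi, m)
mean_mass = np.trapz(m * xi, m) / n_tot                  # ~0.38-0.39 Msun
f_pop = np.trapz(xi[m >= 5.0], m[m >= 5.0]) / n_tot      # ~0.007
print(f"<M> = {mean_mass:.2f} Msun, f_pop = {f_pop:.4f}")
print(f"rate prefactor = {1e7 * f_pop / mean_mass:.2e} Gpc^-3 yr^-1")

# Merger criterion of this section: the NS/BH merges with the core if a_post < R_core.
a_post = post_ce_separation(1.0, m_pre=15.0, m_post=4.0, m_co=1.4,
                            a_pre=2.0, e_bind=5.0e2)     # toy binding energy
print("merger" if a_post < 0.005 else "no merger")       # toy core radius in AU
```

With these assumptions the script reproduces \(f_{\rm pop}\simeq 0.007\), \(\langle M\rangle\simeq 0.38\)-\(0.39M_{\odot}\), and a rate prefactor close to the \(1.79\times 10^{5}\ {\rm Gpc^{-3}\ yr^{-1}}\) quoted in section 3.1.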
## 3 Results ### Constraining the values of \(\alpha_{\rm CE}\) Due to the lack of ability to observe systems in the relatively short and low luminosity CEE phase we cannot compare the results of our population synthesis simulations to direct observations. However, since roughly \(80-90\%\) of progenitors of binary compact object mergers in our simulations go through the same evolutionary stages as progenitors of NS/BH-core mergers until the CEE of the compact objects with the giant star (Fig. 2), we find their merger rates for different values of \(\alpha_{\rm CE}\) and compare these rates with observations to determine the possible values of the efficiency parameter. We crudely estimate the local transient rate density of different events in our COMPAS simulations using \[R_{\rm event}=f_{\rm event}\frac{n_{\rm SFR}f_{\rm pop}}{\langle M\rangle}, \tag{2}\] where \(R_{\rm event}\) is the local transient rate density in \({\rm Gpc}^{-3}\ {\rm yr}^{-1}\), \(f_{\rm event}=N_{\rm event}/N_{\rm total}\) is the ratio between the number of systems that result in this type of event and the total number of systems simulated in our COMPAS simulation, \(n_{\rm SFR}\sim 10^{7}{\rm M}_{\odot}{\rm Gpc}^{-3}\ {\rm yr}^{-1}\) is the local star formation rate (Madau and Dickinson, 2014), \(f_{\rm pop}\) is the fraction of the Kroupa IMF (Kroupa, 2001) we simulate and \(\langle M\rangle\) is the average stellar mass. Integrating over the Kroupa IMF between \(M_{\rm min}=5M_{\odot}\) and \(M_{\rm max}=100M_{\odot}\) and dividing by the total mass we find that \(f_{\rm pop}\simeq 0.007\). The average stellar mass according to the Kroupa IMF is \(\langle M\rangle\simeq 0.39M_{\odot}\). Substituting all of the above into equation (2) gives \[R_{\rm event}=1.79\times 10^{5}f_{\rm event}\ {\rm Gpc}^{-3}\ {\rm yr}^{-1}. \tag{3}\] In Fig. 1 we present the local merger rate density of NS-NS binaries (upper panel; blue dots), BH-BH binaries (middle panel; green dots) and NS-BH binaries (lower panel; red dots) in our COMPAS simulations as computed by using equation (3). The rectangular filled areas in each panel are the possible observational ranges of these rates as inferred from gravitational waves emission according to The LIGO Scientific Collaboration et al. (2021). We conclude that the local merger rate densities are well within the observational error margin for \(\alpha_{\rm CE}\gtrsim 0.5\). This is in accordance with previous studies which favour \(\alpha_{\rm CE}\simeq 2\) compared to small \(\alpha_{\rm CE}\) to explain the observed merger rates of double compact object binaries (e.g., Garcia et al., 2021; Zevin et al., 2021; Broekgaarden and Berger, 2021). ### Evolutionary channels towards NS/BH-core mergers The two main evolutionary channels of binary systems that result in the merger of a NS/BH with the core of a giant star are presented in Fig. 2. Both scenarios begin with two massive stars on the MS. The initially heavier star \(M_{1}\), to which we will refer as the primary star, evolves faster and becomes a giant while the lighter secondary star \(M_{2}\) keeps burning hydrogen at its core. In the main channel, denoted as _channel I_ (left panel of Fig. 2), a stable mass transfer episode from the giant primary to its MS secondary occurs at this stage. The giant star eventually loses its envelope through mass transfer and the remaining naked core keeps evolving until it explodes in a stripped-envelope SN event in the same manner as Wolf-Rayet stars\({}^{2}\) (e.g., Wheeler and Levreault, 1985; Woosley et al., 1995; Eldridge and Tout, 2004; McClelland and Eldridge, 2016), leaving a NS/BH remnant, depending on the mass of the stripped core. The natal kick might drive the compact object away from the SN location, but a low enough kick allows for a bound NS/BH-MS binary system (e.g., Kochanek et al., 2019). The secondary star continues to evolve and when the hydrogen in its core is depleted it enters the Hertzsprung Gap (HG) phase where it expands until the ignition of helium in its core. From the HG phase and onward, mass transfer occurs in the opposite direction, i.e. from the secondary star to the primary. 
The expanding secondary becomes a giant and it can engulf the NS/BH initiating a CEE phase where the compact object spirals-in inside the envelope of the giant star. The compact object might be swallowed by the giant star later in the evolution as well, e.g., during core helium burning, as we show in table 2. Tidal forces can bring the NS/BH into the envelope even if the binary separation is several times larger than the radius of the giant star (e.g., Soker, 1996). Currently the COMPAS code does not include tidal evolution, implying that it underestimates the number of systems that go through NS/BH-giant star CEE, and therefore the rates of NS/BH core mergers and their resultant transients. Footnote 2: We note that the stripped core in our scenario is not necessarily a Wolf-Rayet star according to the observational definition (Shenar et al., 2020). The giant star-NS/BH CEE has two possible outcomes. If the envelope is not ejected before the NS/BH enters the core then the NS/BH merges with the core of the giant star as shown on the left in the left panel of Fig. 2. However, if the envelope is entirely unbound before the NS/BH reaches the core, then the core will keep evolving and eventually ends its life in a stripped-envelope SN explosion resulting in another NS/BH (right side in the left panel of Fig. 2). If the binary remains bound after the second SN explosion, and the two compact objects are close enough, they might merge on a timescale shorter than the Hubble time emitting potentially detectable gravitational waves (the majority of systems shown in Fig. 1). Figure 1: Local rate density of NS-NS mergers (blue dots), BH-BH mergers (green dots) and NS-BH mergers (red dots) for different values of common envelope efficiencies. The rectangular areas represent the error margins of the merger observations taken from The LIGO Scientific Collaboration et al. (2021). Figure 2: A schematic illustration of the evolution of a massive binary system. Two massive MS stars evolve towards a giant star-NS/BH CEE. The NS/BH spirals-in inside the envelope of the giant star. The CEE can lead either to the merger of the NS/BH with the core of the giant star potentially powering a bright transient event, or to the formation of a double compact object binary that might merge in the future by emission of gravitational waves. Left panel: main evolution channel, where dynamically stable mass transfer from the post MS primary star to its MS secondary companion takes place. Right panel: secondary evolution channel in which a dynamically unstable mass transfer leads to the formation of a common envelope that surrounds the secondary MS star and the core of its post MS primary, reducing their orbital separation, and allowing only for NS-core mergers. Less massive and wider binaries evolve through a secondary channel (_channel II_; right panel of Fig. 2), where dynamically unstable mass transfer between the post MS primary star and its MS secondary leads to an early CEE phase that brings the core of the giant and the MS star closer together. At the end of CEE the envelope is ejected and the evolution continues as in _channel I_. We note that this formation channel can only involve a NS compact object due to the relatively low masses of the SN progenitor. A massive BH progenitor always leads to stable mass transfer from the post-MS primary to its MS secondary (second stage of channel I in Fig. 2), implying that all BH-core mergers evolve through _channel I_. 
This evolution channel always ends with the merger of the NS with the core of the giant star due to the small orbital separation at the end of the first CEE (see equation 1). In both formation channels presented in Fig. 2, several short dynamically stable mass transfer episodes might occur throughout the evolution besides the mass transfer events mentioned above. Table 1 presents the percentage of systems that evolve through _channel I_ (second column; left panel of Fig. 2) and through _channel II_ (third column; right panel of Fig. 2) for different values of \(\alpha_{\rm CE}\) in our simulations (first column). A remaining small percentage of the systems evolved through various other channels, including binary evolution where there is no mass transfer between the stars in the binary system when the primary star is a giant. We note a non-monotonic behaviour with \(\alpha_{\rm CE}\) as a result of two separate CEE phases that are affected by this parameter. A larger value of \(\alpha_{\rm CE}\) implies a larger orbital separation at the end of CEE. In _channel II_ this means that on the one hand more systems survive the first CEE phase (between the giant primary and the MS secondary) and therefore can lead to NS-core mergers, but on the other hand in the second CEE (NS primary-giant secondary) more NSs do not manage to enter the core of the giant star. ### Properties of binary systems that result in NS/BH-core mergers As mentioned in section 2, we use COMPAS to explore the properties of NS/BH-core merger progenitors. In Figs. 3-6 we show different distributions of these progenitors at the ZAMS for representative values of \(\alpha_{\rm CE}\). Several interesting trends emerge from these figures. While NS-core mergers occur for \(M_{\rm 1,ZAMS}\gtrsim 6.5M_{\odot}\) up to the heaviest stars in our mass distribution, BH-core mergers tend to involve much heavier stars in the initial binary system, with \(M_{\rm 1,ZAMS}\gtrsim 37.7M_{\odot}\) (left panels of Figs. 3-6). This can be explained by the heavier masses required to produce a BH in a SN event under the assumptions of COMPAS. Binary systems with higher mass ratios at the ZAMS are more likely to result in NS/BH-core mergers. For both mergers of NSs and BHs with cores of giant stars most of the progenitor binaries have initial mass ratios of \(q_{\rm ZAMS}\equiv M_{\rm 2,ZAMS}/M_{\rm 1,ZAMS}\gtrsim 0.5\) (middle panels of Figs. 3-6), while NS-core mergers can also originate from \(0.25\lesssim q_{\rm ZAMS}\lesssim 0.5\). The percentage of systems of the latter mass ratio strongly decreases with \(\alpha_{\rm CE}\). We note a bimodal distribution in the initial orbital separation of systems that result in a core-NS merger (right upper panels of Figs. 3-6) where each peak corresponds to one of the evolution channels presented in Fig. 2. We find that the higher/lower peak in the initial separation coincides with the percentage of systems that evolve through channel I/II (table 1). The larger population is in small separations and has a peak at \(a_{\rm ZAMS}\simeq 0.65\rm AU\simeq 130R_{\odot}\) for most values of \(\alpha_{\rm CE}\) we simulated. It corresponds to _channel I_, which is characterized by stable mass transfer from the post MS primary to the MS secondary. One might expect that small initial separation would typically lead to unstable mass transfer and CEE, and this is indeed a common outcome in our population models. 
However, in most such systems the secondary star merges with the core of the giant primary while still on the MS and hence cannot lead to a NS/BH-core merger, i.e. the prevalence of stable mass transfer we find at small separations is a selection effect. \begin{table} \begin{tabular}{||c c c||} \hline \(\alpha_{\rm CE}\) & Channel I & Channel II \\ \hline 0.5 & 73.3 \% & 23.3 \% \\ \hline 0.6 & 71.1 \% & 25.7 \% \\ \hline 0.75 & 73.1 \% & 25.2 \% \\ \hline 1 & 66.7 \% & 32.2 \% \\ \hline 1.25 & 56.7\% & 42.7 \% \\ \hline 1.5 & 54.9\% & 45.0 \% \\ \hline 1.75 & 73.2 \% & 26.8 \% \\ \hline 2 & 85.8 \% & 14.2 \% \\ \hline 2.5 & 99.6 \% & 0.3\% \\ \hline 3 & 99.8\% & 0\% \\ \hline 4 & 99.6\% & 0\% \\ \hline 5 & 99.7\% & 0\% \\ \hline \end{tabular} \end{table} Table 1: Percentage of NS-core mergers that evolved through _channel I_ and _channel II_ (Fig. 2) from all NS-core mergers for values of \(\alpha_{\rm CE}\) that coincide with observations of double compact object mergers (Fig. 1). The stars in binary systems whose initial orbital separation is sufficiently small remain close enough so that the resulting NS can be swallowed by the secondary at its giant phase and eventually merge with its core, while for systems that begin further apart the NS and the giant star do not engage in CEE. BH-core mergers exhibit only the first peak in the initial orbital separation, i.e., evolve only through _channel I_\({}^{3}\). Footnote 3: We made sure the single peak is not due to a small sample of BH-core mergers by performing simulations of \(10^{7}\) binaries drawing the mass of the primary from the Kroupa IMF with masses in the range \(35\leq M_{1}/M_{\odot}\leq 100\). This is equivalent to running \(1.6\times 10^{8}\) systems in the range \(5\leq M_{1}/M_{\odot}\leq 100\). The second population of binaries that end their lives in a merger of a NS with the core of a giant star during CEE is smaller and has a peak around \(a_{\rm ZAMS}\simeq 4.65{\rm AU}\simeq 1000{\rm R}_{\odot}\). This population corresponds to _channel II_. Primaries with masses in the range \(6.5M_{\odot}\lesssim M_{1,{\rm ZAMS}}\lesssim 20M_{\odot}\) evolve through this channel. In this case a common envelope is formed around the core of the post MS primary star and the MS secondary, and dynamical friction between the core-MS system and the gas of the envelope reduces the orbital separation significantly\({}^{4}\). At some point the common envelope is ejected and a bound, more compact system of a MS star and the naked core remains behind, and keeps evolving towards a NS-core merger as in _channel I_. Even though stable mass transfer can occur between the post MS primary and the MS secondary in this regime of initial orbital separations, the stars in such binaries are 
not close enough to lead to a CEE between the NS and the giant at a later stage, and therefore cannot lead to a NS-core merger event. For even larger initial orbital separations stars in the binary systems would mainly evolve separately. Figure 3: Distributions of binary properties at the ZAMS of systems that result in NS-core mergers (upper panels; orange) or in BH-core mergers (lower panels; turquoise) for a common envelope efficiency parameter \(\alpha_{\rm CE}=0.5\). Left panels: initial mass of the NS/BH (orange dots/turquoise dots) progenitors (primary star, \(M_{1,{\rm ZAMS}}\)) vs the initial mass of the star that swallows the NS/BH (secondary star, \(M_{2,{\rm ZAMS}}\)). Middle panels: initial mass ratio \(q_{\rm ZAMS}\equiv M_{2,{\rm ZAMS}}/M_{1,{\rm ZAMS}}\) that leads to NS/BH-core mergers. The orange bins represent the percentage of binary systems that begin with a certain mass ratio and result in a NS-core merger from all systems where the compact object merges with the core of the giant star during the CEE phase. The turquoise bins depict this percentage for BH-core mergers. Right panels: Similar to the middle panels for the binary initial orbital separations. ### Mergers' event rate We can estimate the merger rates of NSs/BHs with the cores of the giant secondaries using the results of our population synthesis models. We present the percentage of these rates relative to CCSNe events in Fig. 7 for the values of \(\alpha_{\rm CE}\) we find in section 3.1. The orange and turquoise dots are the percentages of NS-core and BH-core mergers, respectively, from CCSNe in our population model. We note that for both NS and BH mergers with the giant's cores the event rate decreases with \(\alpha_{\rm CE}\), a trend we can also see by comparing the left panels of Figs. 3-6. In the energy formalism (see equation 1), higher values of \(\alpha_{\rm CE}\) imply that more energy is transmitted from the orbit (or from an additional energy source) to the envelope and it is ejected earlier in the evolution. The underlying physical concept is that if a larger amount of energy is deposited inside the envelope it expands to larger radii before the outer layers are ejected, becoming less dense and reducing the dynamical friction between the spiraling-in primary and the gas of the secondary's envelope. This results in larger orbital separations at the end of CEE, implying that more NSs/BHs remain outside the core rather than merge with it. BH-core mergers reduce to nearly nothing for \(\alpha_{\rm CE}\gtrsim 4\) while the event rate of NS-core mergers drops about an order of magnitude between the lowest and highest values of \(\alpha_{\rm CE}\) we simulate. Figure 4: Similar to Fig. 3 for a common envelope efficiency parameter \(\alpha_{\rm CE}=1\). In table 2 we present the percentage of NS-core mergers in which the NS merges with a helium (second column) or carbon-oxygen (third column) core of the giant star, out of all NS-core mergers, and how it varies with the efficiency parameter \(\alpha_{\rm CE}\) (first column). For larger values of \(\alpha_{\rm CE}\) the number of mergers where the core is composed of helium is overall larger, i.e., the mergers occur at earlier times during the giant-NS/BH CEE. As larger amounts of energy go to unbind the envelope it is ejected at an earlier stage, and the orbital separation at the end of the CEE phase is larger (equation 1). This implies that mergers between the giant's core and the NS for larger values of \(\alpha_{\rm CE}\) occur for more massive secondaries with higher envelope binding energies (as we indeed see in the fourth column of table 2). Such giants are larger to begin with and thus can swallow the NS as they expand 
at earlier stages, before core helium burning. We find that binary systems whose outcome is BH-core mergers require \(M_{\rm 1,ZAMS}\gtrsim 37.7M_{\odot}\) (section 3.3). Therefore, all BH-core mergers occur when the core of the giant secondary is composed mainly of helium regardless of the \(\alpha_{\rm CE}\) value. We examine the effects of mass accretion by the NS/BH while inside the envelope of the giant star and find no statistically significant differences in any of the quantities we have discussed (see appendix). \begin{table} \begin{tabular}{||c c c c||} \hline \(\alpha_{\rm CE}\) & He core & CO core & \(\langle M_{\rm 2,ZAMS}\rangle\,[M_{\odot}]\) \\ \hline 0.5 & 85.5 \% & 14.5 \% & 10.4 \\ \hline 0.6 & 85.9 \% & 14.1 \% & 10.5 \\ \hline 0.75 & 85.1 \% & 14.8 \% & 11.2 \\ \hline 1 & 86.3 \% & 13.7 \% & 12.0 \\ \hline 1.25 & 90.2 \% & 9.8 \% & 12.7 \\ \hline 1.5 & 97.0 \% & 3.0\% & 13.9 \\ \hline 1.75 & 98.6 \% & 1.4 \% & 16.4 \\ \hline 2 & 99.6 \% & 0.4 \% & 18.3 \\ \hline 2.5 & 99.9\% & 0.1\% & 21.4 \\ \hline 3 & 100\% & 0\% & 22.9 \\ \hline 4 & 100\% & 0\% & 25.0 \\ \hline 5 & 100 \% & 0 \% & 25.6 \\ \hline \end{tabular} \end{table} Table 2: Percentage of NS-core mergers where the core of the giant secondary star is composed of helium (second column) or carbon-oxygen (third column) from all NS-core mergers for different values of the efficiency parameter \(\alpha_{\rm CE}\) (first column). The fourth column lists the average mass of the secondary at the ZAMS for each value of \(\alpha_{\rm CE}\). Figure 5: Similar to Fig. 3 for a common envelope efficiency parameter \(\alpha_{\rm CE}=1.5\). Figure 6: Similar to Fig. 3 for a common envelope efficiency parameter \(\alpha_{\rm CE}=2\). Figure 7: Percentage of NS-core mergers (orange dots) and BH-core mergers (turquoise dots) of all CCSNe for the values of common envelope efficiencies that coincide with observations of binary compact object mergers (Fig. 1). ## 4 Summary and Discussion In this work we performed extensive population synthesis of massive binaries in search of mergers of NSs and BHs with cores of giant stars. We used COMPAS to generate populations of massive binary systems (section 2) and followed the evolution of binaries that result in these mergers (section 3.2). We used the energy formalism to determine whether a merger occurred during the CEE of the giant secondary star with the NS/BH. We simulated cases of traditional CEE in which the orbital energy released due to the decay in the orbit is the main cause for envelope ejection (\(\alpha_{\rm CE}\leq 1\)), and other cases where we assume an additional energy source (\(\alpha_{\rm CE}>1\)). We constrained the possible values of \(\alpha_{\rm CE}\) by comparing observations of binary compact object mergers to our simulated rates (section 3.1). We found one main evolution route of NS-core and BH-core merger events (_channel I_; left panel of Fig. 2) and an additional secondary evolution pathway for NS-core mergers (_channel II_; right panel of Fig. 2). The right panels of Figs. 3-6 exhibit a bimodal distribution in the initial orbital separation of NS-core merger progenitors whose modes are consistent with the two different evolution routes. For relatively close binaries with initial orbital separation \(a_{\rm ZAMS}\lesssim 1.3AU\) only cases where dynamically stable mass transfer occurs when the primary star is in its post MS phase can lead to NS (and also BH)-core mergers (_channel I_). 
However, at larger initial orbital separations (\(3AU\lesssim a_{\rm ZAMS}\lesssim 7AU\)) a CEE is required to reduce the separation of the core-MS binary and bring them close enough to allow for a later CEE between the NS and the giant in which the NS might merge with its core (_channel II_). We compute the mergers' event rates and find their percentage relative to CCSNe for different values of the CEE efficiency parameter we simulated (section 3.4). We find that for \(\alpha_{\rm CE}=1\), which is the commonly used value for CEEs that involve massive giants, there is about 1 NS-core merger per 100 CCSN events and about 1 BH-core merger per 1000 CCSN events. The number of mergers decreases with the CEE efficiency parameter \(\alpha_{\rm CE}\) as expected from the energy formalism and its underlying physics. For \(\alpha_{\rm CE}\gtrsim 4\) the event rate of BH-core mergers reduces to nearly nothing within the accuracy of our simulations. We estimate the amount of accreted mass by the NS and by the BH while inside the envelope of the giant star in the CEJSN scenario (see appendix) and find it does not affect the mergers' event rate. The energy formalism for CEE has several shortcomings. The large uncertainty in the energy and mass transfer during the CEE phase might greatly affect the rates of transients whose progenitor binaries go through CEE (e.g., Olejak et al., 2021). Moreover, many hydrodynamical simulations find that the orbital energy cannot be the sole energy source that contributes to envelope ejection in massive binaries (see references in section 2). Using values of \(\alpha_{\rm CE}>1\) to represent additional energy sources, as we did in the present study, disregards the fact that these energy sources do not depend on the orbital separation in the same way as the orbital energy. However, keeping in mind this formalism is a phenomenological treatment, \(\alpha_{\rm CE}>1\) implies that the spiraling-in star can end up at a larger radius, and allows one to obtain reasonable results in population synthesis of massive stars. A possible refinement of our analysis would take into account that \(\alpha_{\rm CE}\) is in general system-dependent. For instance, if jets are the additional energy source that unbinds the envelope \(\alpha_{\rm CE}\) is naturally larger for a NS accretor than a MS star, and even larger for a BH. Another example is the study of De Marco et al. (2011), which finds that \(\alpha_{\rm CE}\) is inversely correlated with the binary mass ratio. We note that other prescriptions of CEE applicable to population synthesis, such as Hirai and Mandel (2022) and Di Stefano et al. (2022), were recently suggested, but are not yet implemented in the available codes. It would be interesting to compare the results presented in this manuscript with results obtained using different CEE formalisms. The results of this study can be used to test whether NS/BH-core mergers and their resultant CEJSN transient events can account for several high energy astrophysical phenomena (as proposed and studied in e.g., Papish et al., 2015; Soker and Gilkis, 2018; Soker et al., 2019; Grichener and Soker, 2019; Soker, 2021; Grichener et al., 2022; Soker, 2022) and what would be their overall contribution to said events. Mergers with different core structures, for instance, can lead to transient events with different properties. R-process nucleosynthesis in CEJSN requires that the merger of the giant's core with the NS occurs when the core is CO rich. 
Grichener and Soker (2019) found that one CEJSN r-process event per \(\simeq 1000\) CCSNe suffices to explain the solar system r-process abundances by the CEJSN r-process scenario, which is consistent with our results for \(\alpha_{\rm CE}\simeq 1.25\) (Fig. 7 and table 2). For this scenario to account for a substantial fraction of the r-process abundance (above 10%) \(1.25\lesssim\alpha_{\rm CE}\lesssim 1.75\) is required. Grichener & Soker (2021) found that jets launched by a BH inside the envelope of a giant star might emit neutrinos with energies of \(10^{15}\) eV as detected by IceCube (Aartsen et al., 2013). The event rate required to explain the high energy neutrino flux assuming the properties of the model described in Grichener & Soker (2021), and based on Aartsen et al. (2021), is \(\approx 3\%\) of CCSNe, and can be lower for jets with higher energies. We find that the percentage of systems that result in CEE of a BH with the giant secondary star (stage six in _channel I_; left panel of Fig. 2) is about \(\simeq 0.6\%\) of CCSNe for all values of \(\alpha_{\rm CE}\) we simulate. This implies that the CEJSN scenario for high energy neutrinos might have a significant contribution to the high energy neutrino flux. We note that even though the motivation for this work was the CEJSN scenario and its possible outcomes, the population model presented in this manuscript is not exclusive to the jet-powering mechanism, and the merger rates shown in Fig. 7 can be used for any model that aims to explain NS/BH-core mergers in both frameworks of traditional CEE (\(\alpha_{\rm CE}\leq 1\)), or models that assume another energy source besides the orbital energy (\(\alpha_{\rm CE}>1\)). ## Acknowledgments I thank Noam Soker, Vladimir Kalnitsky, Amit Kashi, Avishai Gilkis, Jan J. Eldridge, Hila Glanz and Dmitry Shishkin for helpful discussions and important suggestions that helped in improving this manuscript. This research was supported by a grant from the Israel Science Foundation (769/20). I acknowledge support from the Irwin and Joan Jacobs Fellowship. Simulations in this paper made use of the COMPAS rapid binary population synthesis code (version 02.31.06), which is freely available at [http://github.com/TeamCOMPAS/COMPAS](http://github.com/TeamCOMPAS/COMPAS). ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2308.13553
Synthesizing 3D computed tomography from MRI or CBCT using 2.5D deep neural networks
Deep learning techniques, particularly convolutional neural networks (CNNs), have gained traction for synthetic computed tomography (sCT) generation from Magnetic resonance imaging (MRI), Cone-beam computed tomography (CBCT) and PET. In this report, we introduce a method to synthesize CT from MRI or CBCT. Our method is based on multi-slice (2.5D) CNNs. 2.5D CNNs offer distinct advantages over 3D CNNs when dealing with volumetric data. In the experiments, we evaluate the performance of our method for two tasks, MRI-to-sCT and CBCT-to-sCT generation. Target organs for both tasks are brain and pelvis.
Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa
2023-08-23T21:36:41Z
http://arxiv.org/abs/2308.13553v1
# Synthesizing 3D computed tomography from MRI or CBCT using 2.5D deep neural networks ###### Abstract Deep learning techniques, particularly convolutional neural networks (CNNs), have gained traction for synthetic computed tomography (sCT) generation from Magnetic resonance imaging (MRI), Cone-beam computed tomography (CBCT) and PET. In this report, we introduce a method to synthesize CT from MRI or CBCT. Our method is based on multi-slice (2.5D) CNNs. 2.5D CNNs offer distinct advantages over 3D CNNs when dealing with volumetric data. In the experiments, we evaluate the performance of our method for two tasks, MRI-to-sCT and CBCT-to-sCT generation. Target organs for both tasks are brain and pelvis. Keywords: Synthetic computed tomography, 2.5D convolutional neural networks. ## 1 Introduction Radiation therapy (RT) is a critical cancer treatment that often requires computed tomography (CT) for accurate dose calculations. Magnetic resonance imaging (MRI) provides superior soft tissue contrast, but lacks the electron density data of CT for dose calculations. Combining the two modalities presents challenges, including mis-registration errors. MRI-only RT has emerged to address these challenges, reduce ionizing radiation exposure, and improve patient comfort. However, the generation of synthetic CT images from MRI (sCT) remains challenging due to the lack of direct correlation between nuclear magnetic properties and electron density. Deep learning (DL) techniques, particularly convolutional neural networks (CNNs), have gained traction for sCT generation from MRI, Cone-beam CT (CBCT) and PET [1]. In this report, we introduce a method to synthesize CT from MRI or CBCT. Our method is based on multi-slice (2.5D) CNNs. 2.5D CNNs offer distinct advantages over 3D CNNs when dealing with volumetric data. These benefits stem from a thoughtful compromise between computational efficiency and capturing relevant spatial context. In the experiments, we evaluate the performance of our method for two tasks, MRI-to-sCT and CBCT-to-sCT generation. Target organs for both tasks are brain and pelvis. ## 2 Proposed Method Our base method is the same for both tasks and both organs. We use encoder-decoder type deep neural networks for converting MRI or CBCT images to synthetic CT (sCT) images. Figure 1 shows an overview of our method. Although the input images are 3D volumes, we use a 2D deep neural network model with multi-slice inputs (2.5D CNNs). 2.5D CNNs offer distinct advantages over 3D CNNs when dealing with volumetric data. These benefits stem from a thoughtful compromise between computational efficiency and capturing relevant spatial context. Reasons why 2.5D CNNs are favored in many cases include reduced computational complexity, memory efficiency, leveraging anisotropic resolution, multi-planar analysis, contextual information, and overcoming class imbalance. In our model, \(N\) consecutive slices in an input volume are processed to produce one slice of the sCT volume. The input slices are along the transverse plane. The consecutive slices are processed as an \(N\)-channel 2D image in our model. In the training phase, stacks of \(N\) slices are randomly selected \(M\) times from each volume in the training dataset in each epoch. In the inference phase, each volume is processed in a slice-by-slice manner and each slice of the sCT volume is produced. We use the L1 error between predicted sCT slices and ground truth CT slices as the loss function. 
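To make this recipe concrete, the sketch below assembles a 2.5D model and a single training step in PyTorch. This is a minimal sketch, not the authors' code: we assume the `segmentation_models_pytorch` package for the U-Net with an EfficientNet encoder, and we assume that the target of an \(N\)-slice input stack is the CT slice aligned with its central slice; `random_stacks` and the placeholder volumes are illustrative.

```python
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp

N_SLICES = 3  # consecutive transverse slices fed to the network as channels

# U-Net with an EfficientNet-B7 encoder: N_SLICES input channels, 1 output channel (sCT).
model = smp.Unet(encoder_name="efficientnet-b7", encoder_weights=None,
                 in_channels=N_SLICES, classes=1)
criterion = nn.L1Loss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

def random_stacks(source_vol, ct_vol, m):
    """Sample m random N_SLICES-stacks from a (D, H, W) source volume and the
    CT slice aligned with the central input slice (an assumption of this sketch)."""
    d = source_vol.shape[0]
    idx = torch.randint(0, d - N_SLICES + 1, (m,))
    x = torch.stack([source_vol[i:i + N_SLICES] for i in idx])          # (m, N, H, W)
    y = torch.stack([ct_vol[i + N_SLICES // 2] for i in idx])[:, None]  # (m, 1, H, W)
    return x, y

# One illustrative training step on a placeholder (source, CT) volume pair.
source = torch.rand(64, 128, 128)   # e.g. a histogram-normalized MRI, (D, H, W)
ct = torch.rand(64, 128, 128)
x, y = random_stacks(source, ct, m=4)
optimizer.zero_grad()
loss = criterion(model(x), y)       # L1 error between predicted sCT and CT slices
loss.backward()
optimizer.step()
scheduler.step()                    # cosine annealing (stepped once per epoch in training)
```

At inference time the same model would be applied to every consecutive \(N\)-slice window of a volume to assemble the full sCT, as described above.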
## 3 Experiments ### Dataset Data was acquired for radiotherapy treatments in the radiotherapy departments of UMC Utrecht, UMC Groningen, and Radboud Nijmegen [2]. The numbers of data are summarized in Table 1. Each case includes a source image (MRI for the MRI-to-sCT task and CBCT for the CBCT-to-sCT task), a ground truth CT and a mask. We divide each dataset into training and validation data. The numbers of training and validation data are 162 and 18 in each dataset, respectively. Figure 1: Overview of our method. ### Experimental conditions We used U-Net [3] as the basic segmentation network and replaced its encoder with EfficientNet [4]. We conducted hyper-parameter tuning. The hyper-parameters include the encoder size, the number of slices, and the initial learning rate. As the result of hyper-parameter tuning, we selected EfficientNet-B7 as the encoder and 3 as the number of slices. The initial learning rates were selected as 1\(\times\)10\({}^{-3}\), 5\(\times\)10\({}^{-4}\), 1\(\times\)10\({}^{-4}\), and 5\(\times\)10\({}^{-5}\) for task-1 brain, task-1 pelvis, task-2 brain, and task-2 pelvis, respectively. The optimizer was AdamW [5] and the learning rate was decreased at every epoch with cosine annealing. The number of epochs was 100, and we used the model with the lowest loss value for the validation data as the final model. As pre-processing, histogram normalization was performed for MRI volumes. No data augmentations were performed. ### Experimental results Table 2 shows the summary of the experimental results. We show two metrics: PSNR and Mean Absolute Error (MAE). These are computed between the sCT and the ground truth CT. As for the tasks, no large differences can be seen between MRI-to-sCT and CBCT-to-sCT. Figures 2, 3, 4 and 5 show examples of experimental results. In each figure, (a) shows an input slice (MRI or CBCT), (b) shows the corresponding slice of sCT, and (c) shows the corresponding slice of ground truth (CT). \begin{table} \begin{tabular}{c c c} \hline Task & Organ & Number of data \\ \hline \multirow{2}{*}{MRI-to-sCT} & Brain & 180 \\ & Pelvis & 180 \\ \multirow{2}{*}{CBCT-to-sCT} & Brain & 180 \\ & Pelvis & 180 \\ \hline \end{tabular} \end{table} Table 1: Datasets. \begin{table} \begin{tabular}{c c c c} \hline Task & Organ & PSNR (dB) \(\uparrow\) & Mean Absolute Error (HU) \(\downarrow\) \\ \hline \multirow{2}{*}{MRI-to-sCT} & Brain & 27.06 & 77.93 \\ & Pelvis & 28.51 & 64.26 \\ \multirow{2}{*}{CBCT-to-sCT} & Brain & 27.38 & 81.44 \\ & Pelvis & 28.12 & 68.07 \\ \hline \end{tabular} \end{table} Table 2: Experimental results for the validation dataset. Figure 2: Examples of experimental results in MRI-to-sCT / Brain. (a) MRI (input). (b) sCT (output). (c) CT (ground truth). Figure 3: Examples of experimental results in MRI-to-sCT / Pelvis. (a) MRI (input). (b) sCT (output). (c) CT (ground truth). Figure 4: Examples of experimental results in CBCT-to-sCT / Brain. (a) CBCT (input). (b) sCT (output). (c) CT (ground truth). Figure 5: Examples of experimental results in CBCT-to-sCT / Pelvis. (a) CBCT (input). (b) sCT (output). (c) CT (ground truth). We also evaluated our method on the SynthRAD2023 challenge site [6]. In the preliminary test task, the algorithm was run on six cases on the grand challenge platform, and the system reports MAE, PSNR and SSIM metrics for each case. Table 3 shows the summary of the preliminary test task. ## 4 Conclusions In this report, we introduced a method to synthesize CT from MRI or CBCT. Our method is based on multi-slice (2.5D) CNNs. 
In the experiments, we evaluated the performance of our method for two tasks, MRI-to-sCT and CBCT-to-sCT generation. Target organs for both tasks are brain and pelvis. From the experimental results, no large differences in performance between MRI-to-sCT and CBCT-to-sCT were observed. As for the organs, the results for the pelvis were slightly better than those for the brain.
2306.05845
Heat transport in a Coulomb ion crystal with a topological defect
The thermodynamics of low-dimensional systems departs significantly from phenomenologically deduced macroscopic laws. Particular examples, not yet fully understood, are provided by the breakdown of Fourier's law and the ballistic transport of heat. Low-dimensional trapped ion systems provide an experimentally accessible and well-controlled platform for the study of these problems. In our work, we study the transport of thermal energy in low-dimensional trapped ion crystals, focusing in particular on the influence of the Aubry-like transition that occurs when a topological defect is present in the crystal. We show that the transition significantly hinders efficient heat transport, being responsible for the rise of a marked temperature gradient in the non-equilibrium steady state. Further analysis reveals the importance of the motional eigenfrequencies of the crystal.
L. Timm, H. Weimer, L. Santos, T. E. Mehlstäubler
2023-06-09T12:18:14Z
http://arxiv.org/abs/2306.05845v1
# Heat transport in a Coulomb ion crystal with a topological defect ###### Abstract The thermodynamics of low-dimensional systems departs significantly from phenomenologically deduced macroscopic laws. Particular examples, not yet fully understood, are provided by the breakdown of Fourier's law and the ballistic transport of heat. Low-dimensional trapped ion systems provide an experimentally accessible and well-controlled platform for the study of these problems. In our work, we study the transport of thermal energy in low-dimensional trapped ion crystals, focusing in particular on the influence of the Aubry-like transition that occurs when a topological defect is present in the crystal. We show that the transition significantly hinders efficient heat transport, being responsible for the rise of a marked temperature gradient in the non-equilibrium steady state. Further analysis reveals the importance of the motional eigenfrequencies of the crystal. ## I Introduction The study of how heat is transported through a system in different phases has been an active field of research since the days of Newton and Fourier [1; 2; 3; 4; 5]. Surprisingly, well-established phenomenological findings seem to be invalid in low-dimensional harmonic systems [6]. Phenomena contradictory to the laws governing heat transport on macroscopic scales have been observed in different lattice models [6; 7], sparking interest in understanding the role of different microscopic properties. The interplay of linear and nonlinear dynamics, as in the well-known Fermi-Pasta-Ulam model [8], or the importance of integrability and disorder in the system, are some examples to be named here [9; 10; 11; 12; 13; 14]. While transport in these theoretical models, also in the quantum realm [15; 16], has attracted considerable attention, the experimental investigation of low-dimensional systems proved to be difficult due to the lack of a well-suited platform with sufficient control and readout techniques. In this context, trapped ions offer a particularly interesting platform, with excellent access to the particles, as well as a rich variety of laser manipulation and readout techniques [17]. In addition, the possibility to vary the confinement in the different trap dimensions allows for the tuning of the geometry and the dimensionality of the crystals, while nonlinear effects are introduced by the Coulomb interaction. Along this direction, there have been already several theoretical works studying the transport of energy in ion crystals in different configurations and limits [18; 19; 20; 21; 22; 23], while first experiments demonstrated the controlled insertion of motional excitations and the readout of their dynamics through the crystal [24; 25]. In addition to regular lattice configurations, such as linear chains or triangular lattices, experiments have realized stable lattice defects in ion crystals [26; 27; 28]. Their discovery has triggered a series of investigations, including the experimental confirmation of the Kibble-Zurek scaling for their creation probability, and their ability to emulate paradigmatic models of nanofriction [26; 29; 30; 31; 32; 33]. Especially the detection of a sliding-to-pinned transition, celebrated in the tribology context as the Aubry transition [34], has opened questions regarding the defects' effect on the dynamics of local excitations [35; 11; 36]. 
Simultaneously, a complementary approach employing linear ion strings in the periodic potential of a standing wave laser field has demonstrated similar physics, showing the existence of a frictionless phase [37; 38; 39; 40; 41]. Previous work showed that energy transport in ion crystals in the presence of a topological defect is non-trivial [23]. In this paper, we expand this study in the context of thermal conductivity and investigate heat flux through the crystal and the defect. In particular, we are interested in how the robust energy localization observed in the pinned phase translates into the temperature profiles and heat flux in the non-equilibrium steady state. Towards that end, we couple the two ends of the system to a source and a drain of thermal energy represented by Langevin heat baths [42; 43]. Figure 1: Schematic depiction of the system considered. The outer four ions of a zigzag crystal with a kink defect are coupled to Langevin heat baths with different temperatures. The ion crystal develops a temperature profile in the steady state. The grey points depict the zigzag crystal, the coloured points are for a crystal with a localized (odd) defect in the central region. Our results emphasize the importance of defects for heat transport properties in crystalline structures, and suggest making use of the advantages of trapped ion systems to measure them. The structure of the paper is as follows. Section II presents the system under consideration. The corresponding dynamical equations are discussed in Sec. III. The harmonic approximation valid at low energies is introduced in Sec. IV. In Sec. V we analyze how the presence of a topological defect affects the temperature distribution in the crystal, whereas the total heat flux is discussed in Sec. VI. Finally, we summarize our conclusions in Sec. VII. ## II Ion Coulomb crystals We consider in the following \(N\) ions of mass \(m\) confined in a linear Paul trap that provides, in ponderomotive approximation, a harmonic potential with secular frequencies \(\omega_{z},\omega_{x}=\alpha\omega_{z}\) and \(\omega_{y}=\beta\omega_{x}\). We neglect the micromotion from the fast oscillating electric field [44]. The system is characterized by the Hamiltonian: \[H=\sum_{i}^{N}\frac{\vec{p}_{i}^{\,2}}{2}+\mathcal{V}(\{\vec{r}_{i}\}) \tag{1}\] where \(\vec{p}_{i}=(p_{i}^{z},p_{i}^{x},p_{i}^{y})\) and \(\vec{r}_{i}=(z_{i},x_{i},y_{i})\) are, respectively, the momentum and position of the \(i\)-th ion. The potential energy is of the form \(\mathcal{V}=\sum_{i}v_{i}\), with \[2v_{i}(\{\vec{r}_{j}\})=z_{i}^{2}+\alpha^{2}\left(x_{i}^{2}+\beta^{2}y_{i}^{2 }\right)+\sum_{j\neq i}\frac{1}{|\vec{r}_{i}-\vec{r}_{j}|}, \tag{2}\] where the last term is provided by the Coulomb repulsion between the ions. Throughout the paper we fix \(\omega_{z}=2\pi\times 25\)kHz, and use \(L=(e^{2}/4\pi\epsilon_{0}m\omega_{z}^{2})^{1/3}\) as the length unit, \(E=m\omega_{z}^{2}L^{2}\) as the energy unit, and \(W=1/\omega_{z}\) as the time unit, with \(e\) the elementary charge and \(\epsilon_{0}\) the vacuum permittivity. The system crystallizes when the thermal energy of the ions is sufficiently low. The shape of the resulting crystal is determined by the competition between the Coulomb repulsion that tends to maximize the distance between ions, and the trap confinement, which pushes the particles closer together. Different structural phases have been observed depending on \(N\) and the aspect ratios \(\alpha\) and \(\beta\). 
For the regime \(\alpha,\beta>1\) considered in this work, the crystal structure lies solely in the \(zx\) plane. Moreover, we tune \(\alpha\) into a regime where the minimal-energy configuration is provided by the ions forming a triangular ladder along the \(z\)-axis, see Fig. 1. This crystal is commonly referred to as zigzag. Due to the mirror symmetry (\(x_{i}\leftrightarrow-x_{i}\)) of the potential \(\mathcal{V}\), there exist two such states, which can be transformed into each other by flipping the positions along \(x\). Interestingly, this opens the possibility to introduce topological defects, or kinks, in the crystal, which can be interpreted as a domain wall between the two degenerate zigzag configurations, see Fig. 1. Kinks in ion crystals have been subject to intensive study [29; 30; 45]. It has been experimentally shown that a kink enables the emulation of nanofriction models, including a sliding-to-pinned transition, also known as the Aubry transition, which occurs due to the local incommensurability of the ion distances in the upper (\(x>0\)) and lower (\(x<0\)) sub-chains of the triangular ladder [32; 33]. The transition occurs when, by increasing \(\alpha\), the crystal is squeezed closer to the \(z\)-axis, modifying the influence of the sub-chains on each other. Most importantly, when transitioning from the sliding to the pinned phase the \(\mathds{Z}_{2}\) symmetry of the crystal along \(z\) is broken, leading to robust localization features in the dynamics [23]. The defect slides into one of two possible equilibrium positions away from the trap center or, if the thermal energy permits it, jumps perpetually between the two configurations by overcoming the energy barrier that connects them. Although the dynamics is non-linear, the blockade of the energy transport can be traced back to the presence of asymmetric motional modes of the crystal that dominate the dynamics for small enough energies. In the following, we study how the transition influences the thermal conductivity of the system when the crystal is transporting heat from a warmer to a colder bath. ## III Dynamical equations In order to investigate the thermal conductivity properties of a two-dimensional Coulomb crystal, we assume that the particles at the edges of the system are coupled to Langevin heat baths with different temperatures, as schematically indicated in Fig. 1. Therefore, the Hamilton equations determined from the Hamiltonian (1) must be modified to include the corresponding dissipation and fluctuation terms. The resulting Langevin equations acquire the form: \[\frac{d^{2}}{dt^{2}}\vec{r}_{i}=-\vec{\nabla}_{i}\mathcal{V}-\mathbf{\Gamma}_{i} \cdot\vec{p}_{i}+\vec{\xi}_{i}(t) \tag{3}\] where \(\mathbf{\Gamma}_{i}=\text{diag}(\gamma_{i}^{z},\gamma_{i}^{x},\gamma_{i}^{y})\) is a diagonal matrix containing the dissipation rates in the different spatial dimensions. 
The stochastic force \(\vec{\xi}_{i}(t)\), provided by momentum kicks exerted by the heat baths, fulfills the fluctuation-dissipation theorem: \[\langle\vec{\xi}_{i}(t)\rangle=\vec{0}\quad\langle\vec{\xi}_{i}(t)\otimes\vec{ \xi}_{j}(t^{\prime})\rangle=2\mathbf{\Gamma}_{i}\cdot\mathbf{T}_{i}\delta_{ij}\delta(t -t^{\prime}) \tag{4}\] where \(\langle\rangle\) denotes the ensemble average, \((\vec{a}\otimes\vec{b})_{ij}=a_{i}b_{j}\) is the outer product of two vectors and \(\mathbf{T}_{i}=\text{diag}(T_{i}^{z},T_{i}^{x},T_{i}^{y})\) is a matrix containing the temperatures of the heat baths in units of \(F=E/k_{B}\), with \(k_{B}\) the Boltzmann constant. In a trapped-ion experiment, the emulation of different heat baths may be accomplished by Doppler cooling lasers detuned from a cooling transition [46]. While in this case the reachable temperatures of the heat baths are Doppler-limited, advanced cooling techniques are able to reach sub-Doppler regimes. We assume that the projection of the cooling lasers is the same for all spatial dimensions, hence we can write \(\mathbf{\Gamma_{i}}=\gamma_{i}\mathds{1}\) and \(\mathbf{T_{i}}=T_{i}\mathds{1}\). We are interested in the behavior of the dynamical temperature of the ions, which we define as \[\tau_{i}=\frac{\langle\vec{p}_{i}^{\;2}\rangle}{3}, \tag{5}\] as well as in the total amount of energy the crystal can transport. To quantify the latter, we define the net energy the system gains from the coupling to the heat baths \[\frac{dH}{dt}=\sum_{i}j_{i} \tag{6}\] where \(j_{i}\) is the energy transported from the heat bath to ion \(i\). Calculating the time-derivative of \(H\) and inserting the Langevin equation (3), we can write \[j_{i}=\vec{p}_{i}(t)\cdot\vec{\xi}_{i}-\vec{p}_{i}\cdot\mathbf{\Gamma_{i}}\cdot \vec{p}_{i}. \tag{7}\] During the thermalization process, the crystal, depending on the initial conditions, gains or loses energy. When the equilibrium steady state is reached, the same amount of energy is dissipated into the colder heat bath as is flowing into the system from the hotter bath, so that \(\sum_{i}\left\langle j_{i}\right\rangle=0\). The amount of energy that is transported, i.e. flowing into the system on one end and dissipated at the other end of the system, is the system's heat flux, defined by \[J(t)=\frac{1}{2}\sum_{i}\left|\left\langle j_{i}(t)\right\rangle\right| \tag{8}\] which we employ as a measure for the thermal conductivity of the crystal. In the steady state, \(\lim_{t\rightarrow\infty}J(t)\) gives the amount of energy transported through the system. In order to calculate \(J\), we employ Novikov's theorem [47], which yields \[\left\langle j_{i}\right\rangle =\operatorname{tr}(\mathbf{\Gamma_{i}}\cdot\mathbf{T_{i}})-\langle\vec{p }_{i}\cdot\mathbf{\Gamma_{i}}\cdot\vec{p}_{i}\rangle \tag{9}\] \[=3\gamma_{i}(T_{i}-\tau_{i}) \tag{10}\] where the last equality is only valid for our choice \(\mathbf{\Gamma_{i}},\mathbf{T_{i}}\propto\mathds{1}\). While the first term in Eq. (9) is externally determined, the second one characterizes the response of the system to the heat current and needs to be calculated. Towards this end, we perform numerical calculations solving the stochastic dynamical equations (3) for discretized time steps [48]. For a given set of parameters, we calculate 500 independent trajectories of the ions. In order to determine the ensemble averages in Eq. (5), we average over the trajectories and build a time average of \(50\,\mathrm{ms}\) when the system has reached the steady state. 
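For concreteness, the following sketch integrates the Langevin equations (3) in the dimensionless units defined above with a plain Euler-Maruyama discretization. It is illustrative only and not the integration scheme of Ref. [48]; the step size, run length, bath temperatures and the crude initial configuration are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_potential(r, alpha, beta):
    """Gradient of the dimensionless potential of Eq. (2); r has shape (N, 3) = (z, x, y)."""
    g = r * np.array([1.0, alpha**2, (alpha * beta)**2])   # trap part
    d = r[:, None, :] - r[None, :, :]
    dist = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(dist, np.inf)
    g -= (d / dist[..., None]**3).sum(axis=1)              # Coulomb repulsion
    return g

def step(r, p, dt, gamma, T, alpha, beta):
    """One Euler-Maruyama step of Eq. (3); gamma, T are per-ion arrays (zero in the bulk)."""
    kick = np.sqrt(2.0 * gamma * T * dt)[:, None] * rng.standard_normal(r.shape)
    p = p + (-grad_potential(r, alpha, beta) - gamma[:, None] * p) * dt + kick
    return r + p * dt, p

N, alpha, beta = 30, 6.0, 12.0
r = np.zeros((N, 3))
r[:, 0] = np.linspace(-6.0, 6.0, N)        # crude initial guess: a line along z
r[:, 1] = 1e-3 * rng.standard_normal(N)    # small transverse seed
p = np.zeros((N, 3))

gamma = np.zeros(N)
gamma[:4] = 0.13                           # ~20 kHz in units of 1/W (assumed conversion)
gamma[-4:] = 0.13
T = np.zeros(N)
T[:4], T[-4:] = 2.0e-6, 1.0e-6             # illustrative bath temperatures in units of F

for _ in range(100_000):                   # short illustrative run; real averaging
    r, p = step(r, p, 1e-3, gamma, T, alpha, beta)   # needs longer times and many trajectories

tau = (p**2).sum(axis=1) / 3.0             # single-shot estimate of Eq. (5)
j = 3.0 * gamma * (T - tau)                # per-ion heat flux, Eq. (10)
```

In practice one would average `tau` and `j` over hundreds of trajectories and over time in the steady state, as described above.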
In addition to this numerical approach, the linearization of the dynamical equations provides important insights, as detailed below.

## IV Linear analysis

Assuming that the energy of the ions only allows for small fluctuations of their positions, we may expand the potential (2) up to second order in the deviations from their average positions. On the one hand, this approximation permits analytically solvable dynamical equations, since the system is described by coupled harmonic oscillators. On the other hand, when compared to full numerical computations, it reveals the relevance of the non-linearity induced by the Coulomb interaction for the heat transport in the crystal. For vanishing temperature, the ions settle down at their equilibrium configuration \(\vec{u}_{0}\), where we have condensed all degrees of freedom into a single state vector \(\vec{u}=(\vec{r}_{1},\ldots,\vec{r}_{N},\vec{p}_{1},\ldots,\vec{p}_{N})\). We expand the dynamical equations (3) up to first order in the deviations from the equilibrium \(\vec{q}=\vec{u}-\vec{u}_{0}\), which yields \[\frac{d}{dt}\vec{q}=-\begin{pmatrix}0&-\mathds{1}\\ \mathbf{K}&\mathbf{\Gamma}\end{pmatrix}\cdot\vec{q}+\begin{pmatrix}0\\ \vec{\xi}(t)\end{pmatrix} \tag{11}\] where we have written in a compact way the dissipation-rate matrices \(\mathbf{\Gamma}=\text{diag}(\mathbf{\Gamma}_{i})\) and the stochastic forces \(\vec{\xi}(t)=(\vec{\xi}_{1}(t),\ldots,\vec{\xi}_{N}(t))\). The coherent dynamics is provided by the dynamical matrix \(\mathbf{K}=\vec{\nabla}\otimes\vec{\nabla}\,\mathcal{V}(\{\vec{r}_{i}\})\rvert_{\vec{u}_{0}}\), i.e. the Hessian of the potential evaluated at the equilibrium configuration. We diagonalize the dynamical matrix, \(\mathbf{U}^{T}\cdot\mathbf{K}\cdot\mathbf{U}=\mathbf{D}\), where the diagonal matrix \(\mathbf{D}\) and the unitary matrix \(\mathbf{U}\) contain the eigenfrequencies of the crystal and the spatial structure of the corresponding eigenmodes, respectively. Denoting by \(\vec{\theta}\) the state vector of the eigenmodes containing their amplitudes and momenta, we obtain an equivalent, more convenient formulation of the Langevin equations: \[\frac{d}{dt}\vec{\theta}=\begin{pmatrix}\mathbf{U}^{T}&0\\ 0&\mathbf{U}^{T}\end{pmatrix}\cdot\frac{d\vec{q}}{dt}=-\underbrace{\begin{pmatrix} 0&-\mathds{1}\\ \mathbf{D}&\mathbf{\tilde{\Gamma}}\end{pmatrix}}_{\mathbf{\Omega}}\cdot\vec{\theta}+ \begin{pmatrix}\vec{0}\\ \vec{\Xi}\end{pmatrix} \tag{12}\] where \(\mathbf{\tilde{\Gamma}}=\mathbf{U}^{T}\cdot\mathbf{\Gamma}\cdot\mathbf{U}\) is the transformed dissipation matrix, and \(\vec{\Xi}=\mathbf{U}^{T}\cdot\vec{\xi}\) is the transformed stochastic force vector. The latter fulfills a fluctuation-dissipation theorem analogous to that of Eq. (4), but with the transformed temperatures \(\mathbf{\tilde{T}}=\mathbf{U}^{T}\cdot\mathbf{T}\cdot\mathbf{U}\). Note that the modified dissipation matrix and the transformed temperature matrix are not necessarily diagonal anymore, which can be interpreted as a dynamical coupling of the motional modes. The Langevin equations (12) are formally solved by \[\vec{\theta}(t)=e^{-\mathbf{\Omega}t}\cdot\vec{\theta}(0)+\int_{0}^{t}e^{\mathbf{ \Omega}(s-t)}\cdot\begin{pmatrix}\vec{0}\\ \vec{\Xi}(s)\end{pmatrix}ds \tag{13}\] where the first term describes the damped oscillations of the initial mode populations, whereas the second term describes the stochastic motion.
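As an illustration, a minimal sketch (ours) that builds the dynamical matrix as a finite-difference Hessian of the potential over the flattened coordinates and diagonalizes it; the `potential` callable stands in for \(\mathcal{V}\):

```python
import numpy as np

def normal_modes(potential, u0, eps=1e-6):
    """Dynamical matrix K (Hessian of the potential at the equilibrium u0,
    flattened to length 3N) and its eigenmodes, cf. Eqs. (11)-(12).
    `potential` maps a flat coordinate vector to the scalar V."""
    n = u0.size
    K = np.zeros((n, n))
    for a in range(n):
        for b in range(a, n):
            # Central finite difference for d^2 V / du_a du_b
            upp = u0.copy(); upp[a] += eps; upp[b] += eps
            upm = u0.copy(); upm[a] += eps; upm[b] -= eps
            ump = u0.copy(); ump[a] -= eps; ump[b] += eps
            umm = u0.copy(); umm[a] -= eps; umm[b] -= eps
            K[a, b] = K[b, a] = (potential(upp) - potential(upm)
                                 - potential(ump) + potential(umm)) / (4 * eps**2)
    # U^T K U = D; for unit mass the eigenvalues are the squared mode frequencies
    D, U = np.linalg.eigh(K)
    return K, D, U
```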
We insert this solution into the second moment matrix \(\mathbf{C}(t)=\langle\vec{\theta}(t)\otimes\vec{\theta}(t)\rangle\), obtaining \[\mathbf{C}(t) =e^{-\mathbf{\Omega}t}\cdot\mathbf{C}(0)\cdot e^{-\mathbf{\Omega}^{T}t}\] \[+\int_{0}^{t}e^{\mathbf{\Omega}(s-t)}\begin{pmatrix}0&0\\ 0&2\mathbf{\tilde{\Gamma}}\cdot\mathbf{\tilde{T}}\end{pmatrix}\cdot e^{\mathbf{\Omega}^{T}(s-t )}ds, \tag{14}\] from which we can read off the dynamical temperatures of the different motional modes \(\tilde{\tau}_{i}=\mathbf{C}_{i+3N,i+3N}\), after carrying out the time integral. Finally, we can calculate the net heat flux for each motional mode, which is given by \[\langle\tilde{j}_{i}\rangle=\left(\mathbf{\tilde{\Gamma}}\cdot\mathbf{\tilde{T}}\right) _{i,i}-\sum_{l=1}^{3N}\mathbf{\tilde{\Gamma}}_{i,l}\mathbf{C}_{l+3N,i+3N}. \tag{15}\] The motional mode vectors are generally spatially extended and therefore the modes couple to the hotter and the colder bath simultaneously. We can think of the modes as harmonic oscillators coupled to two different thermal baths at the same time, and hence \(\langle\tilde{j}_{i}\rangle\) vanishes in the steady state. However, we can unambiguously split the dissipation matrix \(\mathbf{\tilde{\Gamma}}=\mathbf{\tilde{\Gamma}}^{h}+\mathbf{\tilde{\Gamma}}^{c}\) and the temperature matrix \(\mathbf{\tilde{T}}=\mathbf{\tilde{T}}^{h}+\mathbf{\tilde{T}}^{c}\) into the contributions coming from the different heat baths. This allows for the calculation of the heat flux from the hot bath into the motional modes, as well as of the heat flux dissipated into the colder bath, by inserting the respective parts of the matrices into Eq. (15). In the steady state these two terms add up to zero, so that the absolute value of one of them gives the energy transported by mode \(i\); the total flux (8) is then given by the sum over all modes.

## V Temperature distribution

In this section, we investigate the temperature distribution in two-dimensional ion crystals in the presence of a topological defect, focusing on the impact of the symmetry breaking at the Aubry transition [32]. Throughout this section, we consider that the four left-most and four right-most ions of the crystal are coupled to thermal baths, see Fig. 1, with a fixed dissipation rate \(\gamma/W=20\,\)kHz that is comparable to experimentally reached values [26]. We consider a temperature difference between the two heat baths \((T^{h}-T^{c})F=0.2\,\)mK, and set the average temperature \(\bar{T}=(T^{h}+T^{c})/2\) to different values in order to assess the effects of thermal fluctuations on the Aubry transition. Figure 2 shows the steady-state temperature distributions for different trap aspect ratios \(\alpha\), for a zigzag crystal with and without a kink. For a defect-free zigzag crystal with \(\alpha=6.0\) we observe a sharp temperature edge at both ends of the crystal, and a flat profile for the inner ions, similar to the results observed in a linear ion chain [18]. Changing \(\alpha\) does not substantially change the profile, although for \(\alpha=7.0\) the central ions show a slight temperature gradient. The excellent agreement between the numerical results and the harmonic approximation indicates the irrelevance of nonlinearities for these temperatures. These findings differ from the results of Ref. [20] for the same particle number, which showed the emergence of a temperature gradient in the zigzag phase. We attribute this discrepancy to the much larger temperatures of several mK and the stronger dissipation rates considered in that work.
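Returning briefly to the linear analysis: in the steady state, Eq. (14) reduces to the Lyapunov equation \(\mathbf{\Omega}\mathbf{C}+\mathbf{C}\mathbf{\Omega}^{T}=\mathbf{B}\), with \(\mathbf{B}\) the noise matrix, so that the mode temperatures and the per-mode fluxes of Eq. (15) can be computed without time integration. A minimal Python/SciPy sketch (ours; matrix layouts and the bath splitting follow the discussion above):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def steady_state_mode_fluxes(D, G_hot, G_cold, T_hot, T_cold):
    """Steady-state covariance of Eq. (14) and per-mode bath fluxes, Eq. (15).

    D              : (3N, 3N) diagonalized dynamical matrix
    G_hot, G_cold  : transformed dissipation matrices per bath (sum = Gamma~)
    T_hot, T_cold  : transformed temperature matrices per bath (sum = T~)
    """
    n = D.shape[0]
    Omega = np.block([[np.zeros((n, n)), -np.eye(n)],
                      [D, G_hot + G_cold]])
    # Noise matrix: the momentum block of Eq. (14), split by bath
    B = np.zeros((2 * n, 2 * n))
    B[n:, n:] = 2.0 * (G_hot @ T_hot + G_cold @ T_cold)
    B = 0.5 * (B + B.T)  # symmetrize for numerical safety
    # Steady state of Eq. (14): Omega C + C Omega^T = B
    C = solve_continuous_lyapunov(Omega, B)
    Cpp = C[n:, n:]  # momentum-momentum block, entries C_{l+3N, i+3N}
    # Eq. (15), restricted to each bath's contribution
    j_hot = np.diag(G_hot @ T_hot) - np.diagonal(G_hot @ Cpp)
    j_cold = np.diag(G_cold @ T_cold) - np.diagonal(G_cold @ Cpp)
    # In the steady state j_hot + j_cold ~ 0 mode by mode; |j_hot| sums to J
    return Cpp.diagonal(), j_hot, j_cold, np.abs(j_hot).sum()
```

Here `solve_continuous_lyapunov` solves \(AX+XA^{T}=Q\), which is exactly the condition obtained by setting the time derivative of Eq. (14) to zero.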
As pointed out in Ref. [20], thermal fluctuations lead to non-linearities being probed during the dynamics, which give rise to coupling and scattering of the phonon modes of the crystal. Ultimately, the breakdown of the harmonic description was pinpointed, by a Fourier analysis of the ion positions, as the reason for the growth of the temperature slope. Our results for the defect-free zigzag complement this discussion, since they show that the absence of a temperature gradient can be recovered for temperatures of the order of the Doppler temperature and far away from the linear-zigzag transition.

Figure 2: Temperature profiles of an ion crystal of \(N=30\) ions for the symmetric sliding phase \(\alpha=6.0\) (a) and the symmetry-broken pinned phase \(\alpha=7.0\) (b); the graphs are normalized by \(0.5(T^{h}-T^{c})\). Results in blue are for the defect-free zigzag, whereas results for a crystal with a topological defect are shown in red and green. The dashed lines depict the results of the harmonic approximation, the circles indicate numerical results for the same parameters. For the blue and green curves the bath temperatures have been set to \(0.5\,\)mK and \(0.7\,\)mK, whereas the graph in red is for \(0.05\,\)mK and \(0.25\,\)mK. The vertical lines indicate the position of the ions that are used for the calculation of \(dT\) (see text).

The presence of a kink in the sliding phase smooths the temperature profile, reducing the drops at the outer parts of the system, see Fig. 2(a). This results from the localization of the spatial shape of the motional modes induced by the defect, which breaks the local translational invariance in the zigzag region. The so-called Peierls-Nabarro (PN) potential provides deeper insight into the properties of the kink [45; 49]. When understood as a quasiparticle inside the crystal, the defect moves inside an effective potential landscape that crucially depends on the trap configuration. In the sliding regime the defect is repelled from the edges of the inhomogeneous crystal, resulting in an approximately harmonic PN potential with its minimum at the trap center. The deviations of the PN potential from its harmonic approximation are not probed at the considered temperature scale, and hence the presence of the defect does not result in significant nonlinear effects. As a consequence, the steady state is well described by the linear theory, as seen in Fig. 2(a). The temperature profiles show a markedly different behavior when \(\alpha\) is tuned into the pinning regime, as seen in Fig. 2(b). Linear theory predicts a sharp drop of the temperature across the defect, and that the temperature profile does not present the mirror symmetry \(\tau(z)-\bar{T}=\bar{T}-\tau(-z)\) of Fig. 2(a). These observations are explained by the emergence of asymmetric modes in the spectrum, which are localized on one side of the defect and hence are unable to contribute to the transport of heat across the system. Since these modes are strongly coupled to only one thermal bath, their presence leads to a step-shaped temperature profile. For low average temperatures, the linear-analysis prediction is supported by our numerical simulations. We observe a non-uniform temperature gradient across the crystal with the largest slope at the position of the defect.
Although the qualitative observations agree, the markedly stronger deviations from the harmonic approximation compared to the sliding regime indicate the relevance of the nonlinear dynamics of the kink in the pinned phase, even well below the Doppler temperature. When the energy scale of the baths is increased, the profile becomes close to a linear gradient, such that no sharp feature of the energy blockade can be observed. This observation stands in clear contrast to the predictions of linear theory, marking the onset of nonlinear dynamics. As shown above, the steady-state temperature distribution provides a clear signal of the Aubry transition. To gain a better understanding of the effect of the Aubry transition and its interplay with thermal fluctuations, we depict in Fig. 3 the temperature difference \(dT\) between the 11-th and the 20-th ion as an indicator for the profile structure in the central region, see the vertical lines in Fig. 2. It is shown as a function of the trap aspect ratio \(\alpha\) and the average temperature \(\bar{T}\) of the two heat baths; blue regions indicate a close-to-vanishing temperature gradient, whereas red to yellow marks strong temperature drops. As a benchmark, we show in the upper diagram \(dT\) as a function of \(\alpha\) for a defect-free zigzag crystal. As discussed above, the zigzag crystal exhibits only a small gradient in the center, not larger than 15% of the temperature difference of the baths. The lower plot of Fig. 3 depicts the results for a crystal initialized with a defect. In the sliding phase, \(\alpha<6.4\), \(dT\) is of the same order as for the defect-free zigzag crystal. This result is independent of the average temperature and agrees with the profiles shown in Fig. 2. Increasing \(\alpha\) into the pinned phase, the temperature slope rises significantly at the critical point for small temperatures, clearly pinpointing the Aubry transition as the cause of the modification in the steady-state distribution. Up to a value of \(\alpha\approx 7.5\) there exists a parameter window in which the temperature slope for the central 10 ions covers around 40% of the difference between the bath temperatures (note that the ions coupled to the thermal baths do not reach the bath temperatures due to the insufficient coupling strength \(\gamma\), see Fig. 2). When the average temperature of the system is increased for a fixed \(\alpha\), the system shows a transition into a phase with a weaker temperature slope for the defect ions, as observed in Fig. 2(b). The PN potential is again key to understanding this feature. At the Aubry transition the globally confining, smooth PN potential develops periodic barriers [45].

Figure 3: (top) Temperature difference between the 11-th and the 20-th ion normalized by the difference of the bath temperatures for a zigzag crystal. The dashed line depicts the linear theory result, and the circles indicate the numerical results for bath temperatures of 0.5 mK and 0.7 mK. (bottom) Same quantity for a crystal initialized with a kink as a function of the trap aspect ratio \(\alpha\) and the mean temperature \(\bar{T}\) of the baths, normalized by the Doppler temperature \(T_{D}=0.5\) mK for Yb\({}^{+}\). The white dots indicate the parameter choices from Fig. 2. For \(\alpha>7.5\) the kink can be lost for large \(\bar{T}\) (see text).

Crucially for the discussion, a local maximum rises at the crystal center, leading to the emergence of two degenerate local minima located off the center.
For small energies, the defect falls spontaneously into one of these local minima, thereby breaking the symmetry of the crystal, and remains close to the randomly chosen configuration. Hence, the dynamical properties are to a certain extent characterized by the harmonic approximation of the potential around that configuration, and phenomena like the sharp temperature drop emerge. However, for large enough temperatures the defect has a finite probability of overcoming, via thermal fluctuations, the energy barrier that emerged at the Aubry transition, and can hence switch between the two degenerate minimal-energy configurations. Such hopping smooths the temperature profile (see Fig. 2(b)) and therefore reduces \(dT\), as it induces heat transport between spatial regions that remained disconnected at low energies. In this thermally delocalized regime, the steady-state \(dT\) nevertheless remains larger than in the defect-free zigzag case, a residual signal of the symmetry breaking due to the finite dwelling time in each of the symmetry-broken states. The observed thermal crossover resembles that induced by thermal fluctuations at the linear-to-zigzag structural transition [50; 51; 52], or by quantum fluctuations at the Aubry transition for much colder systems [41; 53; 54]. The critical temperature of the crossover is a non-trivial function of the trap aspect ratio. While it increases with increasing \(\alpha\) in the pinned phase, it saturates around \(\alpha=7\) and later decreases for \(\alpha>7.5\). For larger values of \(\alpha\) the kink undergoes a second transition into a localized shape, denoted as the odd kink [28; 45]. Most importantly, the PN potential changes into an inverted harmonic oscillator, while the periodic modulations introduced at the Aubry transition remain and, for small energies, stabilize the defect in local minima. Since the crystal edges now attract the kink, a transition into the thermally delocalized phase opens a channel for the complete loss of the kink. Indeed, the probability of traveling to the crystal edges and vanishing there becomes non-zero when the kink is able to overcome the barriers between local potential minima in the PN potential and statistically hop between them. We observe such defect losses in our simulations for \(\alpha>7.5\) and do not post-select the trajectories in which the kink stays confined for the calculation period. The reduction of the temperature gradient for large \(\alpha\) and \(\bar{T}\) observed in Fig. 3 is caused by the loss of the kink, which characterizes the delocalized phase in this regime. The critical temperature for the thermal crossover to the delocalized regime drops to a local minimum at \(\alpha=7.8\), coinciding with the crossover to the odd kink. Subsequently, the reduced heat transport becomes more robust again around \(\alpha\approx 8.0\). The two regions with robust \(dT\) observed in Fig. 3 match well the observed parameter windows of a strong blockade of a coherent excitation [23]. For \(\alpha>9.0\) the crystal with a kink is no longer a stable equilibrium of the system, as the modulations in the PN potential, which are crucial for the stability of the defect, shrink with growing \(\alpha\) and eventually vanish. One could expect the crossover into the thermally delocalized phase to occur when the temperature of the system becomes comparable to the PN barrier.
Interestingly, although the energy barriers between different equilibrium states of the kink are typically of several mK [45], thermal delocalization occurs on the Doppler temperature scale (\(T_{D}\approx 0.5\,\)mK here), as seen in Fig. 3. Although we argue that the observed features can be understood from the form of the PN potential, this discrepancy indicates a more involved interplay, yet to be explored, between the thermal fluctuations of the kink and the residual motional modes of the crystal.

## VI Heat flux

We now analyze the total heat flux \(J\), given by Eq. (8), transported in the steady state, see Fig. 4. For a defect-free zigzag crystal, both linear theory and the numerical results show a global minimum between \(\alpha=6.5\) and \(\alpha=8.5\). For larger \(\alpha\), \(J\) shows an approximately linear growth with \(\alpha\), whereas for lower \(\alpha\) it presents a more irregular growth. Nonlinearities lead to a speedup of heat transport, as our numerical results show a uniform offset to larger values compared to linear theory. A similar dependence on \(\alpha\) has been reported in Ref. [20], employing numerical simulations, although the presence of a trap configuration with minimal heat flux in the considered parameter window has not been discussed. We address this point in the final part of this section. For \(\alpha>6.0\), the presence of the kink markedly reduces the amount of heat the crystal can transport, even in the sliding phase in which the crystal's symmetry persists, see the red graphs in Fig. 4. The results for \(J\) based on the harmonic approximation show, in the sliding regime, a heat flux slightly below the values for the defect-free case. At the Aubry transition, they display an abrupt decrease, followed by two minima of \(J\), one at \(\alpha\approx 7.0\) and the other at \(\alpha\approx 8.25\). These minima agree well with the regions of a robust temperature gradient observed in Fig. 3. For a low system temperature (red circles in Fig. 4), the numerical results remain close to the linear-theory prediction, but, as for the defect-free case, they show a uniform offset towards faster transport. For larger temperatures, the kink becomes thermally delocalized and hence allows for faster energy transport (green circles in Fig. 4). Although the signal strength of the Aubry transition is therefore reduced, the decrease of \(J\) remains, even for these temperatures, a clear signature of the transition. As a final point, we address a subtle issue concerning the presence of a minimal heat flux for the zigzag crystal. In Ref. [20], the reduction of \(J\) when \(\alpha\) is quenched through the linear-to-zigzag transition is explained by the growing inter-ion distances when the ions start exploring the radial (\(x\)) direction. This argument cannot, however, explain our results, which exhibit a minimum in the heat flux for the defect-free zigzag crystal, as the ion distances grow monotonically with decreasing \(\alpha\). Furthermore, the linear-theory results show finer features of the \(J\) curve, as shown in Fig. 4. In order to argue that both these observations can be explained from the structure of the motional modes, we compare in Fig. 5 the result for the heat flux observed in Fig. 4 with the outcome of the calculation in the limit of vanishing coupling \(\gamma\). In this discussion, we only compare the linear-theory results, since the numerical calculation for \(\gamma\to 0\) demands unreachable equilibration times.
We believe this approximation is, however, sufficient for the following discussion, since our results exhibit a good agreement with the numerics for the case of a defect-free zigzag crystal. Note that the results are normalized by \(J_{0}=0.5\gamma(T^{h}-T^{c})\) such that the trivial linear scaling of the flux with the coupling strength is accounted for. Changes in the heat conductivity are hence due to the altered system response. For the symmetric crystals, i.e. the zigzag and the kink in the sliding phase, the heat flux is independent of \(\alpha\) and shows additional negative peaks. The Aubry transition leads to a strong heat-flux reduction in the presence of the kink, but the sharp features persist in the pinned regime. While the effect of the sliding-to-pinning transition remains observable for the value of \(\gamma/W=20\,\)kHz employed above, the resonance features observed at low \(\gamma\) are partially washed out. The fine modulations coincide with \(\alpha\) values for which two motional modes contributing to the energy transport become degenerate, resulting in oscillations with a fixed phase relation. Since the mode vectors have alternating symmetry (symmetric-antisymmetric under the mirror transformation \(z\leftrightarrow-z\)) with increasing energy, for the symmetric crystal cases their correlated motion leads to a destructive interference in one half of the crystal. Ultimately, this results in a sharp decrease in heat conductivity at the resonance points. For the crystal with broken symmetry, the mode vectors do not possess a fixed symmetry anymore and hence a resonance can increase energy transport, as shown by the presence of positive peaks in \(J\), which occur only in this phase. As in a driven harmonic oscillator, increasing the damping \(\gamma\) broadens the resonance peaks, which subsequently start to overlap and finally wash out. Based on these findings, we understand the reduction of the heat flux in zigzag crystals and the presence of a minimal-conductivity configuration as a consequence of the density of mode crossings in the motional spectrum of the system at the considered temperature scale. This is supported by the fact that the calculated heat flux is only weakly dependent on \(\alpha\) when the crystal forms a chain [20]. In that phase the absence of mode-frequency crossings as a function of \(\alpha\) yields an invariant flux, and a small decrease with \(\alpha\) can be explained by the bunching of the radial mode frequencies. For off-resonant values of \(\alpha\), each of the four ions coupled to one of the heat reservoirs contributes \(J_{0}\) to the total heat flux \(J\) for each dimension with \(\gamma_{i}^{\mu}\neq 0\), which yields in our case \(J=12J_{0}\), a result that can be rigorously shown for harmonic-oscillator models in the limit \(\gamma\to 0\) [55; 7].

Figure 4: Total heat flux \(J\) through the crystal (normalized by \(0.5\gamma(T^{h}-T^{c})\)) as a function of the trap aspect ratio \(\alpha\). The blue curves indicate the results for the defect-free zigzag crystal, while red and green graphs depict the case with a kink. The dashed lines indicate the linear theory results, and the circles our numerical results. For the latter the bath temperatures have been set to \(0.5\,\)mK and \(0.7\,\)mK for the blue and green graphs, whereas in red we depict our results for \(0.05\,\)mK and \(0.25\,\)mK.
The gray dashed bars indicate the Aubry transition and the crossover to the odd kink.

Figure 5: Normalized total heat flux obtained from linear theory for different values of the coupling \(\gamma\). The dashed lines indicate the results without (blue) and with (red) kink for \(\gamma/W=20\,\)kHz, as in Fig. 4, whereas the solid lines depict the zero-coupling limit, calculated for \(\gamma/W=0.002\,\)Hz. The vertical dashed lines indicate again the points of the Aubry transition and the crossover to the odd kink.

## VII Conclusion

The presence of a topological soliton strongly affects heat transport in an ion crystal. A sloped temperature profile emerges when the defect is driven across the Aubry transition from the sliding to the pinned phase, whereas the defect-free system exhibits a negligible temperature gradient for temperatures close to the Doppler limit. Moreover, the nonlinear dynamics of the defect inside the crystal is exposed when the average temperature of the system increases, resulting in a thermally delocalized phase and in a reduction of the effects of the Aubry transition. Our calculation of the heat flux clearly shows that the presence of the defect significantly hinders the energy transport, especially in the pinned phase, and demonstrates the importance of the motional mode spectrum due to the presence of resonances. In addition, we encountered open questions that require further investigation. The interplay between the motion of the defect inside the effective Peierls-Nabarro potential and the residual degrees of freedom of the crystal proves to be non-trivial, as demonstrated by the mismatch between the energy scale of the crossover to the delocalized regime and the size of the Peierls-Nabarro barriers localizing the kink. Here, reformulating the dynamical equations in terms of collective excitations could reveal the influence of these two different types of degrees of freedom on each other [56; 57]. Another aspect that has not been treated in this work is the dependence of the temperature profiles and the heat flux on the system size. Especially the scaling of \(J(N)\) is of interest, since a deviation from a \(\propto 1/N\) scaling would indicate a diverging thermal conductivity in a properly defined thermodynamic limit, keeping the ion density constant. This analysis could also reveal the importance of finite-size effects for the temperature gradients, possibly recovering the temperature steps at the contacts to the heat baths in the pinned phase. Moreover, the response of the system to the temperature stress in a setup with a different axial confinement restoring (partly) the translational invariance could differ qualitatively from the results presented here. The inhomogeneous ion density in crystals with harmonic confinement ultimately leads to the localization of the defect in the central region, which hinders its ability to transport energy. In addition, extending the investigation of heat transport to other geometries, such as disc-shaped two-dimensional or cigar-shaped three-dimensional systems, could shed light on the influence of dimensionality, making use of the versatility of trapped-ion crystals [58; 59; 60]. Finally, we expect that the altered excitation dynamics in crystals with defects impacts the sympathetic cooling of these systems, as the observed reduced heat flux could lead to regions of weak dissipation rates. The latter would be of direct importance for experiments investigating the properties of topological defects. ###### Acknowledgements.
This project has been funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy - EXC-2123 QuantumFrontiers-390837967 and through SFB 1227 (DQmat) - Project-ID 274200144, project A07.
2310.18894
Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity
Current deep-learning models for object recognition are known to be heavily biased toward texture. In contrast, human visual systems are known to be biased toward shape and structure. What could be the design principles in human visual systems that led to this difference? How could we introduce more shape bias into the deep learning models? In this paper, we report that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into the network. We found that enforcing the sparse coding constraint using a non-differentiable Top-K operation can lead to the emergence of structural encoding in neurons in convolutional neural networks, resulting in a smooth decomposition of objects into parts and subparts and endowing the networks with shape bias. We demonstrated this emergence of shape bias and its functional benefits for different network structures with various datasets. For object recognition convolutional neural networks, the shape bias leads to greater robustness against style and pattern change distraction. For image synthesis generative adversarial networks, the emerged shape bias leads to more coherent and decomposable structures in the synthesized images. Ablation studies suggest that sparse codes tend to encode structures, whereas the more distributed codes tend to favor texture. Our code is hosted at the github repository: \url{https://github.com/Crazy-Jack/nips2023_shape_vs_texture}
Tianqin Li, Ziqi Wen, Yangfan Li, Tai Sing Lee
2023-10-29T04:07:52Z
http://arxiv.org/abs/2310.18894v1
# Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity

###### Abstract

Current deep-learning models for object recognition are known to be heavily biased toward texture. In contrast, human visual systems are known to be biased toward shape and structure. What could be the design principles in human visual systems that led to this difference? How could we introduce more shape bias into the deep learning models? In this paper, we report that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into the network. We found that enforcing the sparse coding constraint using a non-differentiable Top-K operation can lead to the emergence of structural encoding in neurons in convolutional neural networks, resulting in a smooth decomposition of objects into parts and subparts and endowing the networks with shape bias. We demonstrated this emergence of shape bias and its functional benefits for different network structures with various datasets. For object recognition convolutional neural networks, the shape bias leads to greater robustness against style and pattern change distraction. For image synthesis generative adversarial networks, the emerged shape bias leads to more coherent and decomposable structures in the synthesized images. Ablation studies suggest that sparse codes tend to encode structures, whereas the more distributed codes tend to favor texture. Our code is hosted at the github repository: [https://github.com/Crazy-Jack/nips2023_shape_vs_texture](https://github.com/Crazy-Jack/nips2023_shape_vs_texture)

## 1 Introduction

Sparse and efficient coding is a well-known design principle in the sensory systems of the brain [3; 29]. Recent neurophysiological findings based on calcium imaging found that neurons in the superficial layer of the macaque primary visual cortex (V1) exhibit an even higher degree of lifetime sparsity and population sparsity in their responses than previously expected. Only 4-6 out of roughly 1000 neurons would respond strongly to any given natural image [35]. Conversely, a neuron typically responded strongly to only 0.4% of randomly selected natural scene images. This high degree of response sparsity is commensurate with the observation that many V1 neurons are strongly tuned to more complex local patterns in a global context rather than just oriented bars and gratings [34]. On the other hand, over 90% of these neurons did exhibit statistically significant orientation tuning, though mostly with much weaker responses. This finding is reminiscent of an earlier study that found similarly sparse encoding of multi-modal concepts in the hippocampus [30]. This leads to the hypothesis that neurons can potentially serve both as a super-sparse specialist code with their **strong responses**, encoding specific prototypes and concepts, and as a more distributed code, serving as the classical sparse basis functions for encoding images with much **weaker responses**. The specialist code is related to the idea of a prototype code, the usefulness of which has been explored in deep learning for serving as memory priors [22] in image generation, for representing structural visual concepts [36; 37], or for constraining parsimonious networks [24] for object recognition. In the computer vision community, recent studies found that Convolutional Neural Networks (CNNs) trained for object recognition rely heavily on texture information [11].
This texture bias leads to misclassification when objects possess similar textures but different shapes [2]. In contrast, human visual systems exhibit a strong 'shape bias' in that we rely primarily on shape and structure over texture for object recognition and categorization [19]. For instance, a human observer would see a spherical object as a ball, regardless of its texture patterns or material make-up [32]. This poses an interesting question: what is the design feature in human vision systems that leads to the shape bias in perception? In this paper, we explore whether the constraint of the high degree of strong-response sparsity in biological neural networks can induce shape bias in neural networks. Sparsity, particularly in overcomplete representations, is known to encourage the formation of neurons encoding more specific patterns [28]. Here, we hypothesize that these learned specific patterns contain more shape and structure information; thus, sparsifying the neuronal activation could induce shape bias in the neuronal representation. To test this hypothesis, we impose a sparsity mechanism by keeping the Top-K absolute responses of the neuronal activation in each channel in one or multiple layers of the network, and zeroing out the less significant activations, where K is a sparsity parameter that we can adjust for systematic evaluation. We found that this sparsity mechanism can indeed introduce more shape bias into the network. In fact, simply introducing the Top-K operation during inference in pre-trained CNNs such as AlexNet [18] or VGG16 [31] can already push the frontier of the shape bias benchmark created by [10] (as shown in Figure 1). Additional training of these networks with the Top-K operation in place further enhances the shape bias in these object recognition networks. Furthermore, we found that the Top-K mechanism also improves the shape and structural bias in image synthesis networks. In the few-shot image synthesis task, we show that the Top-K operation can make objects in the synthesized images more distinct and coherent. To understand why the Top-K operation can induce these effects, we analyzed the information encoded in Top-K and non-Top-K responses using the texture synthesis paradigm and found that Top-K responses tend to encode structural parts, whereas non-Top-K responses contribute primarily to texture and color encoding, even in the higher layers of the networks.

Figure 1: Shape bias of our sparse CNNs versus standard CNNs and SOTA transformer-based networks in comparison to the shape bias of human subjects, as evaluated on the benchmark dataset [10] across 16 classes. The red dotted line shows the frontier of transformer-based networks with the best shape bias. The green dotted line shows that sparse CNNs push the frontier of the shape bias boundary toward humans.

Our finding suggests that sparse coding is important not just for making neural representation more efficient and saving metabolic energy, but also for contributing to the explicit encoding of shape and structure information in neurons for image analysis and synthesis, which might allow the system to analyze and understand 3D scenes in a more structurally oriented, part-based manner [4], making object recognition more robust.

## 2 Related Works

**Shape Bias vs. Texture Bias.** There has been considerable debate over the intrinsic biases of Convolutional Neural Networks (CNNs).
[11] conducted a pivotal study demonstrating that these models tend to rely heavily on texture information for object recognition, leading to misclassifications when objects have similar textures but distinct shapes. In addition, it has also been shown that texture information alone is sufficient to achieve object classification [6]. This texture bias contrasts markedly with human visual perception, which exhibits a strong preference for shape over texture - a phenomenon known as 'shape bias' [19]. Humans tend to categorize and recognize objects primarily based on their shape, a factor that remains consistent across various viewing conditions and despite changes in texture [32]. These studies collectively form the foundation upon which our work builds, as we aim to bridge the gap in shape bias between computer vision systems and human visual systems.

**Improving Shape Bias of Vision Models.** Following the identification of texture bias in CNNs by [11], numerous studies sought to improve models' shape bias for better generalization. Training methods have been developed to make models more shape-biased, improving out-of-distribution generalization. Some approaches, like [11], involved training with stylized images to disconnect texture information from the class label. Such an approach posed computational challenges and did not scale well. Others, like [14], used human-like data augmentation to mitigate the problem, while [21] proposed shape-guided augmentation, generating different styles on different sides of an image's boundary. However, these techniques all rely on data augmentation. Our focus is on architectural improvements for shape bias, similar to [1], which created a texture-biased model by reducing the CNN model's receptive field size. We propose using sparsity operations to enhance the shape bias of CNNs. Furthermore, [7] proposes scaling up the transformer model to 22 billion parameters and shows near-human shape bias evaluation results. We, on the other hand, do not compare with their network, since we focus on CNNs, which require less computation and do not require huge amounts of data to learn. We demonstrate in the supplementary that the same sparsity constraint could also be beneficial to the ViT family, hinting at the generalizability of our findings.

**Robustness in Deep Learning.** Robustness in the deep learning literature typically refers to robustness against the adversarial attacks suggested by [33], which showed that minuscule perturbations to images, imperceptible to the human eye, can drastically alter a deep learning model's predictions. Subsequent research [13; 20] corroborated these findings, showing that deep neural networks (DNNs) are vulnerable to both artificially induced adversarial attacks and naturally occurring, non-adversarial corruptions. However, the robustness we address in this paper is robustness against confusing textures that are misaligned with the correct object class, as illustrated by the cue-conflict datasets provided by [11]. Although sparsity has been shown to be effective against adversarial attacks [23], the explicit usage of Top-K for shape bias has not been explored.

## 3 Method

### Spatial Top-K Operation in CNN

**Sparse Top-K Layer.** We implement the sparse coding principle by applying a Top-K operation which keeps the most significant K responses in each channel across all spatial locations in a particular layer.
Specifically, for an internal representation tensor \(X\in R^{c\times h\times w}\), the Top-K layer produces \(\text{X}_{\texttt{Top\_K}}:=\texttt{Top\_K}(\text{X},\text{K})\), where \(\text{X}_{\texttt{Top\_K}}\) is defined as: \[\text{X}_{\texttt{Top\_K}}[\texttt{i},\texttt{j},\texttt{k}]:=\begin{cases} \text{X}[\texttt{i},\texttt{j},\texttt{k}],&\text{if }\texttt{abs}(\text{X}[\texttt{i},\texttt{j},\texttt{k}])\geq\texttt{Rank}( \texttt{abs}(\text{X}[\texttt{i},:,:]))[K]\\ 0,&\text{otherwise}\end{cases} \tag{1}\] Equation 1 specifies how each entry of a feature tensor \(\text{X}\in R^{c\times h\times w}\) is transformed inside the Top-K layer. The zero-out operation in Equation 1 implies that the gradient w.r.t. any non-Top-K value, as well as the gradients that chain through it in the previous layers, becomes zero. However, our analysis later suggests that the network can still learn and be optimized, leading to improved dynamic spatial Top-K selection.

**Sparse Top-K with Mean Replacement.** To determine the relative importance of the Top-K values versus the Top-K positions in the Top-K operation, we create an alternative scheme, Top_K_Mean_Rpl, in which all the Top-K responses in a channel are replaced by the mean of the Top-K responses in that channel, as defined below: \[\text{X}_{\text{Top\_K\_Mean\_Rpl}}[\mathtt{i},\mathtt{j},\mathtt{k}]:=\begin{cases} \frac{1}{K}\sum_{(\mathtt{j}^{\prime},\mathtt{k}^{\prime})\in\text{Top-K}(\mathtt{i})}\text{X}[\mathtt{i},\mathtt{j}^{\prime},\mathtt{k}^{\prime}],&\text{if }\mathtt{abs}(\text{X}[\mathtt{i},\mathtt{j},\mathtt{k}])\geq\mathtt{Rank}(\mathtt{abs}(\text{X}[\mathtt{i},:,:]))[\mathtt{K}]\\ 0,&\text{otherwise}\end{cases}\] where \(\text{Top-K}(\mathtt{i})\) denotes the set of Top-K spatial positions in channel \(\mathtt{i}\). This Top_K_Mean_Rpl operation reduces the communication rate between layers by roughly 1000 times. We study the impact of this operation on performance in an object recognition network (see Section 4.3 for the results) and determine which type of information (values versus positions) is essential for inducing the shape bias.

### Visualizing the Top-K code using Texture Synthesis

We used a texture synthesis approach [9] with ablation of Top-K responses to explore the information contained in the Top-K responses of a particular layer, using the following method. Suppose \(F(\cdot)\) denotes a pre-trained VGG16 network [31] with parameters \(N\), and \(TS:R^{h\times w\times 3}\times N\to R^{h\times w\times 3}\) denotes the texture synthesis program from [9], in which an input image \(I\) is iteratively optimized by gradient descent to best match the target image \(T\)'s internal activations when passing through \(F(\cdot)\). We detail the operations inside \(TS\) below. Denote the internal representation at each layer \(i\) of the VGG16 network when passing an input image \(I\) through \(F(\cdot)\) as \(X_{i}(I)\), and suppose there exist \(L\) layers in the VGG16 network. We update the image \(I\) as follows: \[I\gets I-lr\cdot\frac{\partial}{\partial I}\sum_{i}^{L}\left\|Gr(X_{i}(I))-Gr(X_{i}(T))\right\|^{2}\] where \(Gr(\cdot):R^{h\times w\times c}\to R^{c\times c}\) denotes the function that computes the Gram matrix of a feature tensor, i.e. \(Gr(X_{i}(I))=X_{i}(I)^{T}X_{i}(I)\). We adopt LBFGS [26] with initial learning rate 1.0 and 100 optimization steps in all our experiments with the texture synthesis program.
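Before turning to how the Top-K code is visualized, here is a minimal PyTorch sketch of the spatial Top-K layer of Eq. (1) (our rendering, not the authors' released code); `frac` plays the role of the sparsity knob, e.g. `frac=0.05` keeps the top 5% of spatial responses per channel:

```python
import torch
import torch.nn as nn

class SpatialTopK(nn.Module):
    """Keep the K spatially largest |activations| per channel, zero the rest (Eq. 1).

    Gradients w.r.t. the zeroed entries vanish, matching the discussion above.
    """
    def __init__(self, frac: float):
        super().__init__()
        self.frac = frac  # fraction of spatial locations kept per channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = max(1, int(self.frac * h * w))
        flat = x.reshape(b, c, h * w)
        # K-th largest absolute value per (batch, channel) = Rank(abs(X[i,:,:]))[K]
        thresh = flat.abs().topk(k, dim=-1).values[..., -1:]
        mask = (flat.abs() >= thresh).to(x.dtype)
        return (flat * mask).reshape(b, c, h, w)
```

The layer can be dropped after any convolutional block, e.g. `nn.Sequential(conv_block, SpatialTopK(0.2))`, either at inference time only or during training.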
Utilizing the above texture synthesis program \(TS(\cdot,\text{VGG16})\), we can obtain the synthesis results \(S_{\text{w/o Top-K}}\) by manipulating the internal representation of VGG16 such that we only use the non-Top-K responses to compute the Gram matrix when forming the Gram-matrix optimization objectives. This effectively computes a synthesis that only matches the internal non-Top-K neural responses. For a given target image \(T\), this leads to \(S_{\text{w/o Top-K}}\): \[S_{\text{w/o Top-K}}=TS(T,\mathtt{ZeroOutInternalTopK}(\text{VGG16}))\] which shows the information encoded by the non-Top-K responses. Next, we include the Top-K firing neurons when computing the Gram matrix to get \(S_{\text{w/ Top-K}}\): \[S_{\text{w/ Top-K}}=TS(T,\mathtt{IdentityFunction}(\text{VGG16}))\] Comparing these two results allows us to assess the information contained in the Top-K responses.

### Visualizing the Top-K neurons via Reconstruction

Similar to Section 3.2, we provide further visualization of the information Top-K neurons encode by iteratively optimizing an image to match the internal Top-K activations directly. Mathematically, we redefine the optimization objective of Section 3.2: \[I\gets I-lr\cdot\frac{\partial}{\partial I}\sum_{i}^{L}\left\|X_{i}(I)-\mathtt{Mask}_{i}\odot X_{i}(T)\right\|^{2}\] where \(\mathtt{Mask}_{i}\) is a controllable mask for each layer \(i\). There are three types of masks used in the experiments: \(\{\text{Top-K\_Mask},\text{ non\_Top-K\_Mask},\text{ Identity\_Mask}\}\). Top-K_Mask selects only the Top-K fired neurons while keeping the rest of the neurons zero, non_Top-K_Mask selects the complement of Top-K_Mask, and Identity_Mask preserves all neurons. By comparing these three settings, one can easily tell the functional difference between Top-K and non-Top-K fired neurons (see the results in Figure 3).

### Shape Bias Benchmark

To demonstrate our proposal that the Top-K responses encode the structural and shape information, we silence the non-Top-K responses during inference when using pre-trained CNNs. To test the networks' shape bias, we directly integrate our code into the benchmark provided by [10]. The benchmark contains a cue-conflict test which we use to evaluate the Top-K operation. The benchmark also includes multiple widely adopted models with pre-trained checkpoints and human psychophysical evaluations on the same cue-conflict testing images.

## 4 Results

### Top-K Neurons Encode Structural Information

To test the hypothesis that shape information is mostly encoded among the Top-K significant responses, whereas the non-Top-K responses encode primarily textures, we used the method described in Section 3.2 and compared the texture images synthesized with and without the Top-K responses in the computation of the Gram matrix. Figure 2 compares the TS output obtained by matching the early layers and the higher layers of the VGG16 network in the two conditions. One can observe that ablation of the Top-K responses eliminates much of the structural information, resulting in more texture-like images. We conclude from this experiment that (1) Top-K responses encode structural parts information; (2) non-Top-K responses primarily encode texture information.
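For reference, a condensed PyTorch sketch of the Gram-matrix texture synthesis with the optional Top-K ablation used in this comparison; the `features_fn` wrapper (assumed to return a list of `(C, H, W)` VGG16 activations for the chosen layers) and the helper names are our own assumptions, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def gram(feat):  # feat: (C, H, W) -> (C, C), cf. Gr(.)
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.t()

def zero_out_topk(feat, frac):  # ablate the Top-K responses of each channel
    c, h, w = feat.shape
    k = max(1, int(frac * h * w))
    flat = feat.reshape(c, h * w)
    thresh = flat.abs().topk(k, dim=-1).values[..., -1:]
    return (flat * (flat.abs() < thresh).to(flat.dtype)).reshape(c, h, w)

def texture_synthesis(features_fn, target, ablate_topk_frac=None):
    """Match Gram matrices of the target's activations; if `ablate_topk_frac`
    is set, the Top-K responses are zeroed first, yielding S_{w/o Top-K}."""
    def maybe_ablate(feats):
        if ablate_topk_frac is None:
            return feats
        return [zero_out_topk(f, ablate_topk_frac) for f in feats]

    with torch.no_grad():
        target_grams = [gram(f) for f in maybe_ablate(features_fn(target))]
    img = torch.rand_like(target, requires_grad=True)
    opt = torch.optim.LBFGS([img], lr=1.0, max_iter=100)  # 100 steps, as above

    def closure():
        opt.zero_grad()
        feats = maybe_ablate(features_fn(img))
        loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, target_grams))
        loss.backward()
        return loss

    opt.step(closure)
    return img.detach()
```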
To provide further insight into the different information Top-K and non-Top-K neurons encode, we show another qualitative demonstration in Figure 3, where we optimize images that would excite the Top-K neurons alone and the non-Top-K neurons alone, respectively (see the full description in Section 3.3). From Figure 3, it is clear that optimizing images to match the Top-K fired neurons yields high-level scene structures with details abstracted away, while optimizing to match the non-Top-K fired neurons produces low-level local textures of the target images.

Figure 3: Visualizing Top-K and non-Top-K neurons through optimizing input images to match their activations.

Figure 2: Texture Synthesis (TS) using [9]. i. shows the original image, ii. shows the TS results \(S_{\text{w/ Top-K}}\) with both Top-K and non-Top-K activations intact, iii. shows the TS results \(S_{\text{w/o Top-K}}\) with the Top-K activations deleted before performing TS.

Together, these results support our hypothesis that it is the strongly firing neurons in convolutional neural networks that provide structural information, while textures are encoded among the weakly activated neurons. Next, we demonstrate that this phenomenon results in improved shape bias in both analysis and synthesis tasks.

### Top-K Responses already have Shape Bias without Training

We test the Top-K activated CNNs with different degrees of sparsity on the shape bias benchmark proposed by [10]. This benchmark evaluates shape bias using a texture-shape cue-conflict dataset where the texture of an image is replaced with the texture from other classes of images. It defines the shape and texture bias in the following ways: \[\textbf{shape bias}=\frac{\textbf{\# of correct shape recognitions}}{\textbf{\# of correct recognitions}}\] \[\textbf{texture bias}=\frac{\textbf{\# of correct texture recognitions}}{\textbf{\# of correct recognitions}}\] It has been shown in previous work [10; 11] that CNNs perform poorly on shape-based decision tests, whereas human subjects can make successful shape-based classifications in nearly all the evaluated cases. This results in CNN models having relatively low shape bias scores while humans have a shape bias score close to 1. Interestingly, it has been observed that the Vision Transformer (ViT) model family has attained a significant improvement in shape bias [10]. Adding the Top-K operation to a simple pretrained network such as AlexNet or VGG16 can already induce a significant increase in shape bias, as shown in Fig. 4. With the sparsity knob K equal to 10% and 20%, the Top-K operation alone appears to achieve as much or more shape bias than the state-of-the-art Vision Transformer models on the cue-conflict dataset, lending further support to the hypothesis that Top-K sparsity can lead to shape bias. We plot the best of the Top-K sparsified AlexNet and VGG16 for each evaluation of the 16 object classes in Fig. 5. We can observe that the sparsity constraint improves shape-biased decision-making for most of the object classes, bringing the performance of the pre-trained model closer to human performance. With the proper settings of sparsity, certain classes (e.g. the bottle and the clock categories) can attain human-level performance in shape-bias scores. However, we should note that the confidence interval is quite large, indicating that the network performs differently across the different classes. A closer look at shape bias for each class is shown in Figure 5.
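The shape- and texture-bias scores defined above can be computed from cue-conflict predictions in a few lines; a sketch, under our reading of the benchmark that a recognition counts as correct when the prediction matches either the shape or the texture label of a cue-conflict image:

```python
def shape_texture_bias(preds, shape_labels, texture_labels):
    """Shape and texture bias per the definitions above.

    preds, shape_labels, texture_labels: sequences of class ids, one per
    cue-conflict image (each image carries a shape and a texture label).
    """
    shape_hits = sum(p == s for p, s in zip(preds, shape_labels))
    texture_hits = sum(p == t for p, t in zip(preds, texture_labels))
    correct = shape_hits + texture_hits  # recognitions matching either cue
    return shape_hits / correct, texture_hits / correct
```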
### Top-K training induces Shape Bias in Recognition Networks

To evaluate whether the shape bias can be enhanced by training with the Top-K operation, we trained ResNet-18 [12] on different subsets of the ImageNet dataset [8]. Each subset contains 10 randomly selected original categories from ImageNet, used for all training and evaluation. Every experiment is run three times to obtain an error bar. During the evaluation, we employ AdaIN style transfer, using programs adopted from [17], to transform the evaluation images into a texture-ablated form, as shown in Figure 6. The original texture of the image is replaced by the styles of unrelated images using style transfer.

Figure 4: Overall shape bias of sparse CNNs, CNNs, Transformers and humans.

This allows us to evaluate how much a trained model is biased toward the original texture instead of the underlying shape. In this experiment, we train classification networks on two non-overlapping subsets of ImageNet, namely IN-\(S_{1}\) and IN-\(S_{2}\). We select categories that are visually distinctive in their shapes. The details about the datasets can be found in the Supplementary Information. We trained ResNet-18 models on the selected IN-\(S_{1}\) and IN-\(S_{2}\) datasets with standard Stochastic Gradient Descent (SGD, batch size 32) and a cosine-annealing learning rate decay schedule with lr starting from 0.1. The same optimization is applied to ResNet-18 models with a 20% spatial Top-K layer added after the second bottleneck block of the ResNet-18. All models are then evaluated on the stylized versions of the IN-\(S_{1}\) and IN-\(S_{2}\) evaluation datasets after training for 50 epochs.

\begin{table} \begin{tabular}{c c c c c} \hline Top-1 Acc. (\%) & IN-\(S_{1}\) (\(\uparrow\)) & Stylized-IN-\(S_{1}\) (\(\uparrow\)) & IN-\(S_{2}\) (\(\uparrow\)) & Stylized-IN-\(S_{2}\) (\(\uparrow\)) \\ \hline ResNet-18 [12] & 87.8 \(\pm\) 0.5 & 49.3 \(\pm\) 1.5 & 81.3 \(\pm\) 1.7 & 52.4 \(\pm\) 2.2 \\ ResNet-18 w. Top-K during training & **89.4** \(\pm\) 0.6 & **55.4** \(\pm\) 0.8 & 83.4 \(\pm\) 0.9 & **59.7** \(\pm\) 0.6 \\ \hline ResNet-18 w. Top\_K\_Mean\_Rpl during training & 84.9 \(\pm\) 0.3 & 56.8 \(\pm\) 1.7 & 75.5 \(\pm\) 2.5 & 53.1 \(\pm\) 1.0 \\ \hline \end{tabular} \end{table} Table 1: Evaluation for models trained on the IN-\(S_{1}\) and IN-\(S_{2}\) datasets, each of which consists of 10 classes with all train/val data from the ImageNet-1k dataset [8].

Figure 5: The classification results on the shape bias benchmark proposed by [10]. This plot shows the shape bias of sparse CNNs, CNNs and humans on different classes in the texture-shape cue-conflict dataset. It also shows the shape bias for different sparsity degrees, e.g. 5% means that only the top 5% of activation values are passed to the next layer. Vertical lines indicate the average value.

Figure 6: Evaluating the shape bias of the network with stylized ImageNet subsets. Three pairs of images are presented, sampled from our evaluation datasets. Specifically, we transfer (a) \(\rightarrow\) (b), (c) \(\rightarrow\) (d) and (e) \(\rightarrow\) (f) by AdaIN [17] and keep the original class labels. During the evaluation, the transferred images are presented instead of the original test images to measure the network's texture bias sensitivity.

Table 1 shows that (i) the classification accuracy on the original evaluation dataset does not drop: mean top-1 accuracy of 87.8 (baseline) v.s. 89.4 (w. Top-K) on IN-\(S_{1}\) and 81.3 (baseline) v.s. 83.4 (w.
Top-K) on IN-\(S_{2}\), respectively, even when we push the sparsification to K = 20%; (ii) the shape bias improves significantly: mean top-1 accuracy of 55.4 (w. Top-K) v.s. 49.3 (baseline) on Stylized-IN-\(S_{1}\) and 59.7 (w. Top-K) v.s. 52.4 (baseline) on Stylized-IN-\(S_{2}\), respectively. This supports our conjecture that the sparse code introduces more shape bias during learning, in comparison to the dense representation. To further investigate why Top-K might induce shape bias, we evaluate whether the values of the Top-K responses matter by compressing the information in each channel to the mean of the Top-K responses of that channel at the Top-K positions. This reduces the information of each channel to only a binary mask indicating the Top-K responding locations and a single float number that relays the channel's weighted contribution to downstream layers, effectively compressing the communication rate by 3 orders of magnitude (see Section 3.1 for a detailed description of Top_K_Mean_Rpl; a code sketch follows below). Despite the enormous amount of data compression from replacing the Top-K values with the mean, the network can still maintain a shape bias comparable to the normal ResNet-18 baseline (as indicated by the improved or on-par performance on Stylized-IN-\(S_{1}\)/\(S_{2}\) between ResNet-18 and ResNet-18 w. Mean Top-K in Table 1). This suggests that the spatial map of the Top-K activations is more important than the precise values of the Top-K responses; in other words, a significant amount of the object shape features is actually encoded in the occupancy map of significant neural activities, i.e. the binary mask of the Top-K.

### Towards Shape Biased Few Shot Image Synthesis

Humans are great few-shot learners, i.e., we learn from just a few examples. This ability might be related to our cognitive ability to learn a generative model of the world that allows us to reason and imagine. Recurrent feedback in the hierarchical visual system has been hypothesized to implement such a generative model. We investigate whether the Top-K operation also induces shape bias in the synthesis network for the few-shot image synthesis task. We hypothesize that shape bias can benefit few-shot image synthesis by emphasizing structural and shape information. Figure 7(b) shows that a state-of-the-art few-shot synthesis program (FastGAN [25]) suffers severely from texture bias. We found that by introducing the Top-K operation in the fourth layer (the 32 x 32 layer) of FastGAN, significant improvements in the synthesis results can be obtained on datasets selected from ImageNet [8] (100 samples each from four diverse classes, synthesizing each class independently, see Supplementary for details), as shown in Table 2. Images from ImageNet possess rich structural complexity and population diversity. To be considered a good synthesis, generated samples have to achieve strong global shape coherence in order to attain good matching scores with the real images. A better quantitative evaluation result would therefore suggest the emergence of a stronger shape or structural bias in the learned representation. Samples of the four classes are shown in Figure 7(a). To assess the image synthesis quality, we randomly sample 3000 latent noise vectors and pass them through the trained generators. The generated images are then compared with the original training images in the inception-v3 encoder space via Frechet Inception Distance (FID [15]) and Kernel Inception Distance (KID [5]) scores, documented in Table 2.
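Returning to the Top_K_Mean_Rpl variant referenced above (and evaluated in Table 1), a minimal PyTorch sketch in the style of the earlier SpatialTopK module (our rendering; ties are resolved by the same \(\geq\) threshold as in Eq. (1)):

```python
import torch
import torch.nn as nn

class SpatialTopKMeanRpl(nn.Module):
    """Top_K_Mean_Rpl: replace each channel's Top-K responses by their mean and
    zero the rest, so a layer transmits only a binary occupancy mask plus one
    scalar per channel (cf. Section 3.1)."""
    def __init__(self, frac: float):
        super().__init__()
        self.frac = frac  # fraction of spatial locations kept per channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = max(1, int(self.frac * h * w))
        flat = x.reshape(b, c, h * w)
        thresh = flat.abs().topk(k, dim=-1).values[..., -1:]
        mask = (flat.abs() >= thresh).to(x.dtype)
        # Mean of the kept responses, broadcast back onto the Top-K positions
        mean = (flat * mask).sum(-1, keepdim=True) / mask.sum(-1, keepdim=True)
        return (mask * mean).reshape(b, c, h, w)
```

Swapping `SpatialTopK` for `SpatialTopKMeanRpl` in the training pipeline would reproduce the comparison reported in Table 1, under our assumptions about the insertion point.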
### Towards Shape Biased Few Shot Image Synthesis

Humans are great few-shot learners, i.e. we learn from only a few examples. This ability might be related to our cognitive capacity to learn a generative model of the world that allows us to reason and imagine. Recurrent feedback in the hierarchical visual system has been hypothesized to implement such a generative model. We investigate whether the Top-K operation also induces shape bias in the synthesis network for the few-shot image synthesis task. We hypothesize that shape bias can benefit few-shot image synthesis by emphasizing structural and shape information. Figure 7(b) shows that a state-of-the-art few-shot synthesis network (FastGAN [25]) suffers severely from texture bias. We found that by introducing the Top-K operation in the fourth layer (the 32 x 32 layer) of FastGAN, significant improvements in the synthesis results can be obtained on datasets selected from ImageNet [8] (100 samples each from four diverse classes, synthesizing each class independently; see Supplementary for details), as shown in Table 2. Images from ImageNet possess rich structural complexity and population diversity. To be considered a good synthesis, generated samples have to achieve strong global shape coherence in order to obtain good matching scores with the real images. A better quantitative evaluation result would therefore suggest the emergence of a stronger shape or structural bias in the learned representation. Samples of the four classes are shown in Figure 7(a). To assess the image synthesis quality, we randomly sample 3000 latent noise vectors and pass them through the trained generators. The generated images are then compared with the original training images in the Inception-v3 encoder space by the Frechet Inception Distance (FID [15]) and Kernel Inception Distance (KID [5]) scores, documented in Table 2.
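As a reference for this quantitative protocol, below is a compact sketch of the FID computation from pre-extracted Inception-v3 activations, using the standard closed-form Frechet distance between Gaussians; the feature-extraction and image-preprocessing steps (which also matter for KID) are omitted, and this is not necessarily the exact evaluation script used here.

```python
import numpy as np
from scipy import linalg

def fid_from_activations(act_real: np.ndarray, act_fake: np.ndarray) -> float:
    """FID between two activation sets of shape (N, D), e.g. D = 2048
    Inception-v3 pool features of the real images and the 3000 samples.

    FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2}).
    """
    mu_r, mu_g = act_real.mean(0), act_fake.mean(0)
    sigma_r = np.cov(act_real, rowvar=False)
    sigma_g = np.cov(act_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerics
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```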
Each setting is run 3 times to produce an error bar. First, the synthesis quality measurements after adding the Top-K operation to the FastGAN network show a consistent improvement in terms of FID and KID scores in Table 2. Figure 7(b) shows that the Top-K operation leads to the generation of objects (e.g. the jeep) that are more structurally coherent and distinct compared to the original FastGAN. Specifically, we observe a 21.1% improvement in FID scores and a 50.8% improvement in KID scores for the Jeep-100 class when K = 5% sparsity was imposed during training (i.e. only the top 5% of neurons are kept active). Similarly, when 5% sparsity is imposed on the Fish-100 and Train-100 datasets, the FID improves by 17.3% and 12% respectively and the KID performance is boosted by 48.5% and 33.4%. Lastly, we test the indoor-table class, which contains complex objects with many inter-connected parts and subparts. A K = 15% sparsity leads to gains of 9.3% and 22.3% in FID and KID respectively for synthesizing similar indoor tables. Overall, our experiments show that introducing the Top-K operation in a challenging few-shot synthesis task can significantly improve the network's generated results on a diverse set of complicated natural objects.

Figure 7: Few-shot image synthesis datasets and qualitative comparison results between our method and FastGAN [25].

### Parts Learning in Sparse Top-K

Finally, we study the training dynamics of the Top-K layer's internal representation. In Figure 8, we can make the following observations. (1) By applying a sparse constraint using Top-K, each channel is effectively performing a binary spatial segmentation, i.e. the spatial dimension of the image is separated into a Top-K region and a non-Top-K territory. (2) Although there is no explicit constraint forcing the Top-K neurons to group together, the Top-K responses tend to become connected, forming object parts and subparts, as training evolves. We believe the development of this continuous map when training with the Top-K operation might be due to two factors: (1) CNN activations are locally smooth, i.e. two adjacent pixels in layer \(L_{n}\) are linked by the amount of overlap between their corresponding input patches in \(L_{n-1}\); (2) Top-K increases the responsibility of each individual neuron for the loss function. When neurons \(i\) and \(j\) are both selected into the Top-K, their responses are likely similar to each other. However, if their corresponding spatial locations in the output have different semantic meanings, they will receive diverse gradients, which are then amplified by the increased individual responsibility. The diverging gradients lead to a growing difference between the two neurons' values, resulting in one of them leaving the Top-K group while only the semantically similar ones remain in the Top-K set. This might suggest a principle we call _neurons fire together, optimize together_ during CNN Top-K training, which could explain the observed emergence of semantic parts and subparts. This localist code connects to the recognition-by-components theory [4] and could further lead to cleaner representations and an easier path to shape bias. In a functional analysis of the brain, [27] also show that a local smoothness constraint could lead to a topological organization of the neurons, hinting that the factors hypothesized here could have a neuroscientific grounding.

\begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Jeep-100} & \multicolumn{2}{c}{Fish-100} & \multicolumn{2}{c}{Train-100} & \multicolumn{2}{c}{Table-100} \\ \cline{2-9} & FID \(\downarrow\) & KID* \(\downarrow\) & FID \(\downarrow\) & KID* \(\downarrow\) & FID \(\downarrow\) & KID* \(\downarrow\) & FID \(\downarrow\) & KID* \(\downarrow\) \\ \hline FastGAN [25] & 49.0 \(\pm\) 1.4 & 12.0 \(\pm\) 1.9 & 46.2 \(\pm\) 1.8 & 13.4 \(\pm\) 0.6 & 46.1 \(\pm\) 1.8 & 11.2 \(\pm\) 0.8 & 67.2 \(\pm\) 0.1 & 19.7 \(\pm\) 0.3 \\ FastGAN w. Top-K (ours) & **38.7** \(\pm\) 0.5 & **5.9** \(\pm\) 0.7 & **38.2** \(\pm\) 1.4 & **6.9** \(\pm\) 0.7 & **40.2** \(\pm\) 0.8 & **7.4** \(\pm\) 0.2 & **60.9** \(\pm\) 0.3 & 15.3 \(\pm\) 0.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Few-shot image synthesis results measured in FID [15] and KID [5]. Note that KID* denotes KID scaled by a factor of \(10^{3}\) to demonstrate the difference.

Figure 8: Even though the Top-K operation is not fully differentiable, the network is able to smoothly relocate the spatial activation mass towards connected, meaningful parts, which eventually leads to the component learning shown in Figure 9.

Figure 9: The synthesis network's internal Top-K layers reveal a semantic decomposition into parts and subparts.

**Top-K Sparsity Hyperparameter.** With the understanding from Section 4.5, we want to reiterate the importance of the sparsity hyperparameter. The amount of sparsity translates directly to the "size of the parts". Thus, depending on the image type, the composition of the scene, the network architecture, and the layers to which the Top-K sparsity operation is applied, the results can be drastically different. We refer to the supplementary material for a more detailed ablation study.

## 5 Conclusion

In this study, we discovered that an operation inspired by a well-known neuroscience design motif, sparse coding, can induce shape bias in neural representations. We demonstrated this in object recognition networks and in few-shot image synthesis networks. We found that simply adding the Top-K sparsity operation can induce shape bias in pre-trained convolutional neural networks, and that training CNNs and GANs with the simple Top-K operation can increase the shape bias further toward human performance, which makes object recognition more robust against texture variations and enables image synthesis to generate structurally more coherent and distinct objects. Using texture synthesis, we are able to demonstrate that the Top-K responses carry more structural information, while the non-Top-K responses carry more texture information. The observation that a sparse coding operation can induce shape bias in deep learning networks suggests that sparsity might also contribute to shape bias in the human visual system.

## 6 Ethics Statement

This study investigates whether the sparse coding motif from neuroscience can induce shape bias in deep learning networks. The positive results suggest that sparsity might also contribute to shape bias in the human visual system, thus providing insights into our understanding of the brain. While deep learning can advance science and technology, it comes with inherent risks to society. We acknowledge the importance of ethical considerations in all work related to deep learning. A better understanding of deep learning and the brain, however, is also crucial for combating the misuse of deep learning by bad actors in this technological arms race.
## 7 Acknowledgement This work was supported by an NSF grant CISE RI 1816568 awarded to Tai Sing Lee. This work is also partially supported by a graduate student fellowship from the CMU Computer Science Department.
2301.10551
Variation-Aware Semantic Image Synthesis
Semantic image synthesis (SIS) aims to produce photorealistic images aligning with a given conditional semantic layout and has witnessed significant improvement in recent years. Although diversity at the image level has been discussed heavily, class-level mode collapse widely exists in current algorithms. Therefore, we declare a new requirement for SIS to achieve more photorealistic images, variation-aware, which consists of inter- and intra-class variation. The inter-class variation is the diversity between different semantic classes, while the intra-class variation stresses the diversity inside one class. Through analysis, we find that current algorithms elusively embrace the inter-class variation but the intra-class variation is still not enough. Further, we introduce two simple methods to achieve variation-aware semantic image synthesis (VASIS) with higher intra-class variation, semantic noise and position code. We combine our method with several state-of-the-art algorithms and the experimental results show that our models generate more natural images and achieve slightly better FIDs and/or mIoUs than the counterparts. Our codes and models will be publicly available.
Mingle Xu, Jaehwan Lee, Sook Yoon, Hyongsuk Kim, Dong Sun Park
2023-01-25T12:35:17Z
http://arxiv.org/abs/2301.10551v1
# Variation-Aware Semantic Image Synthesis

###### Abstract

Semantic image synthesis (SIS) aims to produce photorealistic images aligning with a given conditional semantic layout and has witnessed significant improvement in recent years. Although diversity at the image level has been discussed heavily, class-level mode collapse widely exists in current algorithms. Therefore, we declare a new requirement for SIS to achieve more photorealistic images, variation-aware, which consists of inter- and intra-class variation. The inter-class variation is the diversity between different semantic classes, while the intra-class variation stresses the diversity inside one class. Through analysis, we find that current algorithms elusively embrace the inter-class variation but the intra-class variation is still not enough. Further, we introduce two simple methods to achieve variation-aware semantic image synthesis (VASIS) with higher intra-class variation, semantic noise and position code. We combine our method with several state-of-the-art algorithms and the experimental results show that our models generate more natural images and achieve slightly better FIDs and/or mIoUs than the counterparts. Our codes and models will be publicly available.

## 1 Introduction

Image synthesis, aiming to generate photorealistic images, has seen substantial improvement in recent years with generative adversarial networks (GANs) [10, 22]. Seminal works produce images from random noise [10, 24], while current works produce images from given conditions, such as labels [2, 15], words [44, 45] and images [14, 37, 48]. In this paper, we focus on a special condition, semantic image synthesis (SIS), which aims to synthesize images from conditional semantic layouts [23, 31, 37]. Mode collapse is a notorious challenge for all image generation applications with GANs and refers to producing similar outputs [5, 10, 12, 14, 17, 23, 25, 32, 49, 51]. For example, in image-to-image translation, one desired property is producing multiple plausible images from the same conditional image [5, 12, 17, 49], where the challenge is formally called multimodal image synthesis. Similarly, a single semantic layout can align with multiple photorealistic images [23, 32, 51]. Although multimodal SIS has been heavily discussed in recent years, most works consider it at the _image level_, where one image should differ from another, and the diversity at the _class level_ is somehow ignored and underdeveloped. The class-level diversity is inspired by the fact that semantic layouts give more detail than image-level conditional object labels: pixels belonging to the same semantic class need not be identical in the generated images. Therefore, mode collapse can be extended by analogy from the image level to the class level. As shown in Figure 1, class-level mode collapse can be observed in three current state-of-the-art algorithms. The generated images may differ from one another, yet similar patterns appear when similar semantic layouts are given. To analyze the mode collapse challenge in SIS, we consider the diversity not only at the image level but also at the class level and hence define _inter-class_ and _intra-class_ variation. The inter-class variation is the diversity between different semantic classes, while the intra-class variation stresses the diversity inside one class. Furthermore, we probe three current state-of-the-art algorithms, SPADE [23], CLADE [31], and OASIS [26, 29].
Through analysis, we find that the conditional normalization in the three algorithms elusively contributes to the inter-class variation, whereas the intra-class variation is somehow ignored and not enough. The details are discussed in Section 3.1. Besides, this insufficient intra-class variation results in the mode collapse at the class level, as shown in Figure 1. Based on our analysis, we declare a new requirement to ease the two types of mode collapse in SIS, _variation-aware_, consisting of inter- and intra-class variation. We term this goal variation-aware semantic image synthesis (VASIS), where the generated images should have not only inter-class variation but also intra-class variation. Holding the variation-aware requirement and to achieve VASIS, we introduce two simple methods to enlarge the intra-class variation, semantic noise and position code. Simultaneously, we argue that intra-class variation is essential to produce realistic images for SIS. On the one hand, the semantic noise follows a Gaussian distribution with learnable shift and scale. As these are learned individually for each semantic class, we call it semantic noise. Distinguished from noise shared across all semantic classes [2, 15, 29], semantic noise can improve the intra-class variation while maintaining the inter-class variation. On the other hand, the position code makes the image generator heterogeneous with respect to pixel position. When the same semantic class exists at different positions, the generated pixels may be diverse. In particular, besides the relative position code [31], two more types of position codes are discussed, and the learnable position code achieves the best results among them. We verify our ideas and the two proposed methods on three current state-of-the-art algorithms, and the experimental results show that our models generate more natural images and achieve slightly better FIDs and/or mIoUs than the counterparts. Our codes and models will be publicly available.

## 2 Related Work

**Semantic image synthesis,** aiming to generate photorealistic images aligned with a given semantic layout, is the main objective of this paper. The first challenge of this task is how to leverage the given semantic layout effectively and efficiently. To address this challenge, preliminary works adopt an encoder to extract features from the input semantic layout, followed by a decoder to produce images [37, 33, 14, 48]. However, this encoder-decoder strategy was found to "wash away" the input semantic layout [23], and SPADE was simultaneously proposed to utilize the input semantic layout more effectively. In SPADE [23], the encoder is discarded and the given semantic layout is directly integrated into a decoder via a conditional normalization layer. Following SPADE, a decoder with a conditional normalization layer is widely employed in many other papers for different goals, such as giving every instance a specific style [50], high-resolution semantic layout synthesis [18], semantic multi-modal image synthesis [30, 51], improving the synthesis quality [26], and efficient semantic image synthesis [31, 40]. In this paper, we focus on how to achieve variation-aware semantic image synthesis, which requires that each class differ from the other classes (inter-class variation) and that each class own its diversity (intra-class variation).

**Normalization layer** was first designed to accelerate the training process by reducing the internal covariate shift [13].
The batch normalization layer consists of two steps, normalization with the computed mean and variance, and de-normalization with a learnable shift and scale, both of which have been adapted in later work. On the one hand, the way to compute the mean and variance changes from a batch to other groupings, such as instance normalization [35], group normalization [39] and layer normalization [1]. On the other hand, the learnable shift and scale have become conditional, e.g. style-conditional [11], label-conditional [2, 15] and semantic-layout-conditional [23, 31]. In this paper, conditional normalization is borrowed to achieve a new objective, variation-aware semantic image synthesis, with extra semantic noise and a position code.

**Image variation** is heavily analyzed in image classification with convolutional neural networks (CNNs) [15, 43] and other work [41]. Image classification and other dense prediction tasks ask models to learn the inter-class variation by recognizing it while overlooking the intra-class variations, whereas image generation and semantic image synthesis are required to synthesize intra-class variation. However, the diversity at the class level is somehow ignored and underdeveloped, though image-level diversity is a hot topic [2, 15, 49, 17]. In this paper, we introduce two simple methods to enlarge class-level diversity, semantic noise and position code.

**Position code.** The convolutional neural network (CNN), playing an important role in the era of deep learning, owns an inductive bias because of its local connections and shared weights, with which the position of a pattern is independent of the feature extraction process. However, position dependence is sometimes the desired property. Considering this issue, an absolute position map can be concatenated with the input feature map before the convolution operation [19]. Moreover, moving from CNNs to transformers, position codes have become more popular and are adopted in much research work [8, 9, 28, 36, 38]. Different from CNNs, position codes in transformers are utilized to encode the word order in natural language processing [7]. The absolute position code was adopted first [36], while the learnable absolute position code [9] shows its convenience with changeable lengths and leads to better performance. Afterward, the relative position code was advocated as it captures the connections between two elements. Although a position code was introduced for SIS in [31], position codes have not been systematically analyzed. In this paper, we discuss three types of position codes to improve the intra-class variation when performing semantic image synthesis and to ease the class-level mode collapse.

## 3 Method

We observe and aim to ease the class-level mode collapse in semantic image synthesis (SIS). In this section, we first analyze how the current methods produce inter-class variation, the diversity between semantic classes, and intra-class variation, the diversity inside one specific semantic class. Then we propose two simple methods to enlarge the intra-class variation and thus ease the class-level mode collapse.

### Revisit SPADE and CLADE

Let \(s\in S^{N\times H\times W}\) be the input semantic image, where \(N\) is the number of semantic labels and \(H\) and \(W\) are the height and width. Given a specific location \((h,w)\), each entry \(s_{:,h,w}\) denotes the semantic label for that location in a one-hot manner, as in the short sketch below.
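For concreteness (with ADE20k's 150 classes assumed as an example), a label map becomes the one-hot layout \(s\) as follows:

```python
import torch
import torch.nn.functional as F

N, H, W = 150, 256, 256                      # e.g. ADE20k semantic classes
labels = torch.randint(0, N, (H, W))         # integer class ids per pixel

# One-hot layout s of shape (N, H, W): s[:, h, w] is the one-hot vector
# of the class at location (h, w).
s = F.one_hot(labels, num_classes=N).permute(2, 0, 1).float()
assert s.shape == (N, H, W) and bool((s.sum(0) == 1).all())
```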
The spatially-adaptive normalization (SPADE) can be formalized as

\[\hat{x}_{i,j,k}=\gamma_{i,j,k}\times\frac{x_{i,j,k}-\mu_{i,j,k}(x)}{\sigma_{i,j,k}(x)}+\beta_{i,j,k}, \tag{1}\]

with channel coordinate \(i\), height \(j\) and width \(k\). \(x\) is the input feature and \(\hat{x}\) denotes the output feature map. \(\mu(x)\) and \(\sigma(x)\) are the mean and standard deviation of the input feature. Different from unconditional normalization [1, 35, 39, 13], the scale \(\gamma\) and shift \(\beta\) in SPADE [23] are computed from the semantic layout \(s\). Specifically, \(\gamma=\mathcal{F}_{2}(\mathcal{F}_{1}(s))\) and \(\beta=\mathcal{F}_{3}(\mathcal{F}_{1}(s))\), where each \(\mathcal{F}\) is a convolutional neural network (CNN) layer with kernel size three. As a CNN is locally connected and weight-shared, the same semantic label results in the same scale and shift vector [40]. Mathematically, given two semantic labels \(s_{1}=s_{:,j_{1},k_{1}}\) and \(s_{2}=s_{:,j_{2},k_{2}}\) along with their vicinity semantic labels \(s_{\mathcal{V}_{1}}\) and \(s_{\mathcal{V}_{2}}\), if \(s_{1}=s_{2}=s_{\mathcal{V}_{1}}=s_{\mathcal{V}_{2}}\), then \(\gamma_{:,j_{1},k_{1}}=\gamma_{:,j_{2},k_{2}}\) and \(\beta_{:,j_{1},k_{1}}=\beta_{:,j_{2},k_{2}}\). More efficiently, CLADE [31] replaces the three CNN layers in SPADE with a sampling strategy, which leads to a much smaller number of parameters and computations (FLOPs). Hence, \(\gamma_{:,j_{1},k_{1}}=\gamma_{:,j_{2},k_{2}}\) and \(\beta_{:,j_{1},k_{1}}=\beta_{:,j_{2},k_{2}}\) are also satisfied for all \(s_{1}=s_{2}\), as CLADE does not fuse a pixel with its vicinity region.

Figure 1: Examples of class-level mode collapse on ADE20k. Similar patterns (red circles) exist when similar semantic layouts are given, while our method eases the class-level mode collapse.
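For concreteness, here is a minimal PyTorch sketch of this conditional normalization, following Eq. (1); the hidden width, the parameter-free batch statistics and the single shared layer are our illustrative choices, not the exact SPADE configuration.

```python
import torch
import torch.nn as nn

class SPADELikeNorm(nn.Module):
    """Normalize x with parameter-free statistics, then de-normalize with a
    per-pixel scale and shift predicted from the semantic layout by
    kernel-size-3 convolutions ("conv-k3" in the analysis below)."""

    def __init__(self, num_features: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        self.shared = nn.Sequential(                                 # F_1
            nn.Conv2d(num_classes, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, num_features, 3, padding=1)   # F_2
        self.beta = nn.Conv2d(hidden, num_features, 3, padding=1)    # F_3

    def forward(self, x: torch.Tensor, layout: torch.Tensor) -> torch.Tensor:
        # layout: one-hot semantic map already resized to x's spatial size.
        h = self.shared(layout)
        return self.gamma(h) * self.norm(x) + self.beta(h)
```

Note that the kernel-size-3 convolutions and their zero padding are precisely where the class-boundary effects analyzed next enter the computation.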
Based on the above analysis, the first question is how SPADE and CLADE can generate different pixel values if each semantic label has a specific shift and scale vector. A quick answer is the combination of class boundaries and conv-k3 (a convolution layer with kernel size three). Let us define a _class boundary_ in the semantic layout as \(s_{1}\neq s_{\mathcal{V}_{1}}\); such boundaries come both from zero padding and from the input semantic layout itself. Conv-k3 fuses semantic labels at the class boundary, which yields different shift and scale vectors for the same semantic label. To probe the impact of conv-k3, we replace it with conv-k1 (kernel size one). As shown in Table 1, both SPADE and CLADE then fail, with unreasonable FID and mIoU. Although conv-k3 contributes to intra-class variation, this can only occur at class boundaries. Similar class boundaries result in similar patterns, which is termed class-level mode collapse in this paper. However, this mode collapse is not easy to observe due to two factors. First, natural semantic layouts in current datasets are taken as input; they may contain multiple class boundaries, and the class boundaries are heterogeneous across images. As shown in Figure 2, the generated images on the Cityscapes dataset seem reasonable because of the diverse class boundaries in the input semantic layouts. But when similar class boundaries appear, as shown in Figure 1, the mode collapse dominates even though the class boundaries appear at diverse locations. Second, zero padding also creates class boundaries. As shown in the top row of Figure 2, generated images with only one or two semantic classes per image do not clearly show the mode collapse. But when the zero padding is replaced with reflect padding, the mode collapse is amplified, such as the horizontal patterns in the sky-and-road case caused by the horizontal class boundary. Surprisingly, reflect padding benefits FID but lessens mIoU for both SPADE and CLADE, as displayed in Table 1. To have a deeper understanding, we probe the standard deviation of feature maps in different blocks. As shown in Figure 2, reflect padding with only the sky class gives no variation in any block, while zero padding produces spurious variations. The deviations cannot be distinguished when multiple semantic classes are present with zero padding, which verifies that class-level mode collapse is elusive to observe. Besides, the deviation increases over the first several blocks as the class boundaries accumulate.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & & FID-t & mIoU & Acc \\ \hline \multirow{3}{*}{SPADE} & original & 51.98 & 62.21 & 93.48 \\ & reflect & 49.13 & 61.32 & 93.43 \\ & conv-k1 & 285.18 & 8.44 & 25.09 \\ \hline \multirow{3}{*}{CLADE} & original & 50.62 & 60.63 & 93.50 \\ & reflect & 49.98 & 60.07 & 93.33 \\ \cline{1-1} & conv-k1 & 255.55 & 9.86 & 29.82 \\ \hline \end{tabular} \end{table} Table 1: Understanding the intra-class variation in SPADE and CLADE. We probe the zero padding and the kernel size of the CNN by replacing the zero padding with reflect padding (reflect) and the kernel size three with one (conv-k1).

Figure 2: (a) Images generated after replacing the zero padding with reflect padding, with models trained on the Cityscapes dataset. (b) The standard deviation of features in different blocks.

To conclude, the conditional normalization layer in SPADE [23], CLADE [31] and other similar papers [26, 29, 50] produces inter-class variation by giving every semantic class a specific shift and scale vector. Simultaneously, the intra-class variation results from class boundaries and conv-k3, but suffers from mode collapse.

**Discussion.** Our analysis supports previous findings. For example, SPADE has a slight edge with conv-k3 over conv-k1 inside the conditional normalization layer because of the extra chances to meet class boundaries. CLADE is inferior to SPADE as the class boundary is not used in its conditional normalization layer, and CLADE-ICPE is better than plain CLADE. Noise is beneficial to OASIS [26, 29]. In each case, the better variant tends to have improved intra-class variation. Besides, intra- and inter-class variation can explain the inferior FID in [27], since the weights shared within a local area result in lower inter- and intra-class variation.

### Variation-aware semantic image synthesis

Through the above analysis, we argue that the intra-class variation in the current algorithms is not enough, which results in undesired patterns and artifacts. To address this issue, we propose the requirement that a model should own intra-class and inter-class variation simultaneously to synthesize more natural images, termed variation-aware semantic image synthesis (VASIS). Further, we build a simple VASIS model by adding semantic noise and a position code to the original conditional normalization layer. As the conditional normalization layer gives the same semantic label the same shift or scale vector, our motivation is to introduce randomness into these vectors. In other words, we aim to make the shift and scale for each semantic label vary within a distribution.
Given a feature map \(x\in\mathbb{R}^{B\times C^{\prime}\times H^{\prime}\times W^{\prime}}\) with batch size \(B\), channels \(C^{\prime}\), height \(H^{\prime}\) and width \(W^{\prime}\), the scale in our VASIS model is formalized as (the shift is computed in the same way and omitted):

\[\gamma=\gamma_{n}\bigoplus(\gamma_{s}\bigotimes\gamma_{p}), \tag{2}\]

where \(\gamma\in\mathbb{R}^{B\times C^{\prime}\times H^{\prime}\times W^{\prime}}\), and \(\bigoplus\) and \(\bigotimes\) denote channel-wise concatenation and element-wise multiplication, respectively. Our \(\gamma\) consists of three parts: the semantic noise \(\gamma_{n}\in\mathbb{R}^{B\times\frac{C^{\prime}}{2}\times H^{\prime}\times W^{\prime}}\), the semantic-layout feature \(\gamma_{s}\in\mathbb{R}^{B\times\frac{C^{\prime}}{2}\times H^{\prime}\times W^{\prime}}\) and the position code \(\gamma_{p}\in\mathbb{R}^{B\times\frac{C^{\prime}}{2}\times H^{\prime}\times W^{\prime}}\). The semantic-layout feature is computed as in the original method, i.e. with convolution layers in SPADE or the sampling strategy in CLADE.

**Semantic noise** is calculated as follows:

\[\gamma_{n}=\mathcal{N}\bigotimes\mathcal{S}(s,\mathcal{N}_{1})+\mathcal{S}(s,\mathcal{N}_{2}), \tag{3}\]

where \(\mathcal{N}\in\mathbb{R}^{B\times\frac{C^{\prime}}{2}\times H^{\prime}\times W^{\prime}}\) denotes a standard normal distribution, and \(\mathcal{N}_{1}\in\mathbb{R}^{N\times\frac{C^{\prime}}{2}}\) and \(\mathcal{N}_{2}\in\mathbb{R}^{N\times\frac{C^{\prime}}{2}}\) are learnable parameters. \(\mathcal{S}\) denotes the same guided sampling strategy as in CLADE [31] and CSN [40], which samples a vector from \(\mathcal{N}_{1}\) or \(\mathcal{N}_{2}\) according to the semantic label \(s\in\mathbb{R}^{B\times N\times H^{\prime}\times W^{\prime}}\), where \(N\) is the number of semantic labels. Our semantic noise differs from the random noise utilized in [26]: if one random noise distribution is used to compute the shift and scale for all classes, the lack of intra-class variation is eased but the inter-class variation is reduced, as all classes share one noise distribution. In contrast, the semantic noise mitigates the former without sacrificing the latter. We emphasize that reducing the number of channels of the semantic-layout feature from \(C^{\prime}\) to \(\frac{C^{\prime}}{2}\) contributes to a smaller number of parameters for SPADE and OASIS and seems to have no large impact on the performance.
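A small sketch of how the semantic noise of Eq. (3) can be realized, with the guided sampling \(\mathcal{S}\) implemented as an index lookup into per-class parameter tables; the initialization of \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) is our assumption:

```python
import torch
import torch.nn as nn

class SemanticNoise(nn.Module):
    """gamma_n = N * S(s, N1) + S(s, N2), Eq. (3): standard normal noise
    scaled and shifted by learnable per-class vectors."""

    def __init__(self, num_classes: int, channels: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_classes, channels))   # N1
        self.shift = nn.Parameter(torch.zeros(num_classes, channels))  # N2

    def forward(self, labels: torch.Tensor) -> torch.Tensor:
        # labels: (B, H, W) integer class ids at the target resolution.
        scale = self.scale[labels].permute(0, 3, 1, 2)  # (B, C, H, W)
        shift = self.shift[labels].permute(0, 3, 1, 2)
        noise = torch.randn_like(scale)                 # the standard normal N
        return noise * scale + shift
```

Because each class owns its scale and shift, every class receives a distinct noise distribution, which is exactly what preserves the inter-class variation while injecting intra-class randomness.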
**Position code** is formalized as follows:

\[\gamma_{p}=\mathcal{F}_{p}(p_{l}), \tag{4}\]

where \(\mathcal{F}_{p}\) is a position code function, here a convolution layer, and \(p_{l}\in\mathbb{R}^{1\times 2\times H^{\prime}\times W^{\prime}}\) denotes the learnable position code. Besides \(p_{l}\), other position codes are also examined, such as the absolute position code \(p_{a}\) [19] and the relative position code \(p_{r}\) [31]. To be specific, \(p_{a}\) encodes the absolute location of each pixel in the height and width directions, while the relative position code encodes the distance between one pixel and the center of its semantic area. Table 2 lists their differing characteristics. We consider the position codes in two respects. The first is computation: we find that the relative code requires extra computation, on average 0.32 seconds per image on the Cityscapes dataset. The second is whether the value is monotonic along a direction: the absolute and relative codes are monotonic, which may result in gradually changing patterns in the generated images, whereas the learnable code is not monotonic as it is learned via the optimization method.

**Discussion.** We introduce two simple methods to improve the intra-class variation with minor changes, but some other alternatives can be considered. One may wonder about simply feeding noise alongside the input semantic layout in SPADE and CLADE. We argue that such input noise may be ignored by the model, mainly because of the perceptual loss, as mentioned in pix2pix [14]; our preliminary experiments verified this assumption. Besides, more loss functions could be utilized, such as the VAE loss [16, 20, 23] and the mode seeking loss [21]. A systematic analysis is left as our future work. Instead, we aim to probe the class-level mode collapse in semantic image synthesis and to introduce randomness into the architecture to ease the issue.

## 4 Experiments

### Experimental settings

**Dataset.** We conduct experiments on three challenging datasets for SIS: Cityscapes [6], ADE20k [46] and COCO-Stuff [3]. Cityscapes has 2,975 images for training and 500 images for testing with 35 labels related to street scenes, for which instance-level annotations are given. ADE20k includes 20,210 training and 2,000 validation images with 150 semantic classes but without instance-level annotations. COCO-Stuff, with 182 semantic labels and instance-level annotations, is a more challenging dataset with 118,287 training images and 5,000 validation images. As in other work [23, 29, 30], the original training and testing or validation datasets are used directly.

**Implementation details.** We use the same loss functions as SPADE and CLADE for our versions VA-SPADE and VA-CLADE: the hinge GAN loss, the perceptual loss and the feature matching loss. Accordingly, our VA-OASIS only uses the GAN loss with a segmentation-based discriminator. Images of the same sizes are produced, specifically a resolution of \(256\times 512\) for Cityscapes and \(256\times 256\) for ADE20k and COCO-Stuff. Other parts of the training recipe are also the same, including the number of epochs and the learning rate. Differently, we do not have access to a server with eight GPUs and instead use four, so part of the experiments cannot be trained with the same batch size because of memory limits. For a fair comparison, we also retrain parts of the compared methods with the same batch size, reported in Table 3 and emphasized with \(*\).

**Evaluation metrics.** We use three metrics to compare different algorithms: FID, mIoU and Acc. FID computes the distance between the distributions of the generated images and the real images. Although it reflects the fidelity of the generated images, FID cannot evaluate how well the generated images match the conditional semantic layout. Acc and mIoU are segmentation-based metrics in which the generated images are segmented by a trained model and the match with the input layout is computed. Even though the three metrics are widely used, there is no unified evaluation program, which makes it hard to compare different algorithms; we aim to give a clear description and public code. To compute FID1, two reference distributions are adopted: the aligned real images in the validation dataset (FID-v) and the real training images (FID-t). Before computing FID, the real images are bicubically down-sampled to the same resolution as the generated images. To calculate Acc and mIoU, pretrained segmentation models are leveraged, as mentioned here2.
To be more specific, pre-trained models from [42, 47] and [4] are utilized to perform semantic segmentation on the synthesized images for Cityscapes, ADE20k and COCO-Stuff, respectively. Furthermore, during the training process, the best model is recorded based on FID every five epochs for Cityscapes and every ten for ADE20k and COCO-Stuff. The evaluation code will be public to encourage transparent comparison3.

Footnote 1: [https://github.com/mseitzer/pytorch-fid](https://github.com/mseitzer/pytorch-fid)

Footnote 2: [https://github.com/NVlabs/SPADE/issues/39](https://github.com/NVlabs/SPADE/issues/39)

Footnote 3: [https://github.com/xml94/VASIS](https://github.com/xml94/VASIS)

**Compared methods.** SPADE [23], CLADE [31] and OASIS [26, 29] are the baselines used to verify our observation and method. SPADE discards the traditional encoder and leverages conditional normalization to use the input semantic layout. CLADE further finds that sampling from the semantic layout leads to fewer parameters and a more efficient framework, at a slight cost in performance. OASIS employs the same conditional normalization as SPADE but with a segmentation-based discriminator. Figure 1 illustrates that all of them suffer from the class-level mode collapse. Besides, LGGAN [34], adopting the traditional encoder-decoder scheme, is also compared. We also compare with CC-FPSE [20], which utilizes a prediction module for the CNN weights instead of the shift and scale of the normalization layer used in SPADE and CLADE. More importantly, a VAE loss is employed in CC-FPSE, which may improve the intra-class variation.

### Main results

**Quantitative result.** The main comparison is given in Table 3. First, our methods tend to have better FID-t and FID-v but slightly worse mIoU and Acc when trained with the same batch size and number of GPUs on Cityscapes and ADE20k, similar to the previous finding that adding noise or improving intra-class variation leads to lower mIoU and Acc in CLADE [31] and OASIS [26, 29]. For example, our VA-OASIS achieves FID-t 41.39 and FID-v 48.63 on Cityscapes, lower than the 43.42 and 49.25 of OASIS. The exception is ADE20k, where our VA-CLADE gets worse FID than CLADE-ICPE but superior mIoU and Acc. Besides, our VA-SPADE and VA-OASIS employ far fewer parameters and computations than the original SPADE and OASIS but achieve similar or even better results, which suggests that the computations and the number of parameters in current algorithms can be reduced. Second, the batch size and the number of GPUs have non-trivial impacts on the performance. For instance, SPADE gets better performance across the board on Cityscapes with a smaller number of GPUs and the same batch size as the original paper. However, the trend is not consistent across all datasets and models. In particular, CLADE-ICPE receives much worse performance on Cityscapes, such as FID-t 54.53 compared to the originally reported 42.38. We conjecture that the batch normalization layers used in the algorithms contribute to this situation. Third, compared to the encoder-free architectures, including SPADE, CLADE, OASIS and our method, the other methods may get similar results but with much larger computation and numbers of parameters.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline & \(p_{a}\) & \(p_{l}\) & \(p_{r}\) \\ \hline no computation & yes & yes & no \\ \hline not monotonic & no & yes & no \\ \hline \end{tabular} \end{table} Table 2: Characteristics of the three position codes, considered in two respects: do we need to compute them, and is the value monotonic along a direction?
For example, LGGAN and CC-FPSE obtain FID-t on ADE20k similar to SPADE and our method, but with more than double and triple the FLOPs.

**Qualitative result.** Figure 1 shows four generated images from ADE20k that support our assumption and algorithm. The images generated by the current state-of-the-art methods show similar artifacts or patterns even with heterogeneous semantic layouts as input. Besides, we observe that the similar patterns occur at similar class boundaries. Our algorithm, VA-OASIS, largely eases the class-level mode collapse thanks to the semantic noise and position code that introduce diversity within the same semantic class, although similar patterns occasionally remain, as in the second and last rows of Figure 1. More results on Cityscapes and COCO-Stuff are given in the supplementary material due to the page limitation.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multirow{2}{*}{FID-t\(\downarrow\)} & \multirow{2}{*}{FID-v\(\downarrow\)} & \multirow{2}{*}{mIoU\(\uparrow\)} & \multirow{2}{*}{Acc\(\uparrow\)} & \multirow{2}{*}{b/n} & \multicolumn{2}{c|}{Generator} & \multicolumn{2}{c|}{Discriminator} \\ \cline{5-10} & & & & & & & para & FLOPs & para & FLOPs \\ \hline \multirow{10}{*}{Cityscapes} & LGGAN & 52.52 & 59.17 & 66.55 & 94.05 & 8/8 & 111.12 & 475.73 & 5.60 & 10.48 \\ & CC-FPSE & 43.80 & 50.43 & 65.33 & 93.91 & 32/16 & 128.06 & 739.30 & 5.18 & 6.65 \\ \cline{2-10} & SPADE & 51.98 & 58.71 & 62.18 & 93.47 & 16/8 & 93.05 & 281.64 & 5.60 & 10.48 \\ & SPADE* & 49.17 & 55.83 & 62.50 & 93.60 & 16/4 & 93.05 & 281.64 & 5.60 & 10.48 \\ & VA-SPADE & 46.28 & 53.30 & 61.08 & 93.40 & 16/4 & 83.34 & 189.61 & 5.60 & 10.48 \\ \cline{2-10} & CLADE & 50.62 & 56.69 & 60.69 & 93.50 & 16/8 & 67.90 & 75.58 & 5.60 & 10.48 \\ & CLADE* & 54.82 & 62.28 & 56.43 & 92.95 & 16/4 & 67.90 & 75.58 & 5.60 & 10.48 \\ & CLADE-ICPE & 42.38 & 50.50 & 60.71 & 93.35 & 16/8 & 67.90 & 75.58 & 5.60 & 10.48 \\ & CLADE-ICPE* & 54.53 & 61.13 & 58.88 & 93.17 & 16/4 & 67.90 & 75.58 & 5.60 & 10.48 \\ & VA-CLADE & 48.14 & 55.18 & 62.25 & 93.45 & 16/4 & 70.41 & 75.82 & 5.60 & 10.48 \\ \cline{2-10} & OASIS & 41.71 & 48.37 & 67.01 & 92.61 & 20/4 & 71.11 & 319.03 & 22.25 & 97.05 \\ & OASIS* & 43.42 & 49.25 & 68.42 & 92.81 & 16/4 & 71.11 & 319.03 & 22.25 & 97.05 \\ & VA-OASIS & 41.39 & 48.63 & 67.78 & 92.97 & 16/4 & 61.82 & 183.33 & 22.25 & 97.05 \\ \hline \multirow{10}{*}{ADE20k} & LGGAN & 27.60 & 35.23 & 38.95 & 79.63 & 24/8 & 115.09 & 312.57 & 5.84 & 7.83 \\ & CC-FPSE & 29.88 & 37.30 & 40.49 & 80.75 & 32/16 & 140.70 & 438.25 & 5.19 & 4.29 \\ \cline{2-10} & SPADE & 29.80 & 37.51 & 38.27 & 79.22 & 32/8 & 96.49 & 181.33 & 5.84 & 7.83 \\ & SPADE* & 29.31 & 37.28 & 37.19 & 78.98 & 32/4 & 96.49 & 181.33 & 5.84 & 7.83 \\ & VA-SPADE & 28.77 & 37.10 & 36.76 & 77.98 & 32/4 & 88.26 & 134.75 & 5.84 & 7.83 \\ \cline{2-10} & CLADE & 30.50 & 38.23 & 37.01 & 78.40 & 32/8 & 71.43 & 42.25 & 5.84 & 7.83 \\ & CLADE* & 29.93 & 37.80 & 36.26 & 78.40 & 32/4 & 71.43 & 42.25 & 5.84 & 7.83 \\ & CLADE-ICPE & 28.70 & 36.77 & 36.59 & 78.02 & 32/8 & 71.43 & 42.25 & 5.84 & 7.83 \\ & CLADE-ICPE* & 28.75 & 36.64 & 36.79 & 78.44 & 32/4 & 71.43 & 42.25 & 5.84 & 7.83 \\ & VA-CLADE & 29.16 & 36.70 & 36.90 & 78.68 & 32/4 & 74.16 & 42.37 & 5.84 & 7.83 \\ \cline{2-10} & OASIS & 27.49 & 35.39 & 45.62 & 82.93 & 32/4 & 74.31 & 194.56 & 22.26 & 49.02 \\ & OASIS* & 28.52 & 36.45 & 43.90 & 82.05 & 24/4 & 74.31 & 194.56 & 22.26 & 49.02 \\ & VA-OASIS & 27.43 & 35.37 & 43.36 & 81.45 & 24/4 & 66.05 & 125.93 & 22.26 & 49.02 \\ \hline
\multirow{10}{*}{COCO-Stuff} & CC-FPSE & 25.44 & 30.03 & 41.96 & 70.79 & 32/16 & 141.94 & 456.12 & 5.19 & 4.56 \\ \cline{2-10} & SPADE & 27.70 & 32.75 & 38.15 & 68.86 & 32/8 & 97.48 & 191.32 & 5.90 & 8.54 \\ \cline{1-1} & VA-SPADE & 27.49 & 32.52 & 37.96 & 68.92 & 32/4 & 89.97 & 144.7 & 5.90 & 8.54 \\ \cline{1-1} \cline{2-10} & CLADE & 29.17 & 34.47 & 38.16 & 69.18 & 32/8 & 72.51 & 42.43 & 5.90 & 8.54 \\ \cline{1-1} & CLADE-ICPE & 27.77 & 32.96 & 37.77 & 68.66 & 32/8 & 72.51 & 42.43 & 5.90 & 8.54 \\ \cline{1-1} & VA-CLADE & 29.26 & 34.59 & 37.03 & 67.95 & 32/4 & 75.59 & 42.55 & 5.90 & 8.54 \\ \cline{1-1} \cline{2-10} & OASIS & 24.69 & 29.64 & 45.28 & 74.32 & 32/8 & 75.20 & 204.23 & 22.26 & 49.16 \\ \cline{1-1} & VA-OASIS & 24.68 & 29.79 & 43.53 & 73.65 & 24/4 & 67.50 & 135.60 & 22.26 & 49.16 \\ \hline \end{tabular} \end{table} Table 3: Comparison with other methods across datasets. We retrain SPADE, CLADE, CLADE-ICPE and OASIS on our machines with fewer devices to allow a fair comparison, denoted with \(*\). b/n gives the batch size and the number of GPUs used in training.

**More generated results.** Figure 3 displays more generated images for the Cityscapes and COCO-Stuff datasets. In the figure, OASIS produces the same shadow shapes in the first two rows, while SPADE and CLADE-ICPE lead to repeating artifacts in the water (fourth and fifth rows) or the sky (last two rows). The images generated by OASIS also look unnatural, such as the grey water and dark sky, even where it does not output similar patterns. In contrast, our method leads to more natural images without repeating artifacts. In particular, the second row shows that our algorithm obtains diverse contrast and illumination for the grass and trees within one image and thus exhibits better intra-class variation.

### Ablation study

We design an ablation study to understand the contributions of the semantic noise and the position code. Table 4 displays the experimental results on the Cityscapes dataset. From the table, both of them benefit FID-v but harm mIoU and Acc for SPADE, while the position code alone has an adverse impact for CLADE. We believe this results from the absence of vicinity fusion on the semantic layout in the normalization layer of CLADE, whereas the vicinity is already considered in SPADE via the convolution operation with kernel size three. Further, the semantic noise shrinks the number of parameters and computations while the position code increases both. Combining them tends to improve FID, as well as mIoU and Acc for CLADE. For SPADE, the combination only improves FID and is inferior in mIoU and Acc, mainly because of the existing convolution operation. In short, context information is useful for semantic image synthesis, as is intra-class variation.

### Variants

Besides the semantic noise and the learnable position code, we analyze their variants. For the semantic noise, we consider reducing the number of channels to one, \(\gamma_{n}\in\mathbb{R}^{B\times 1\times H^{\prime}\times W^{\prime}}\), inspired by the relative position code in CLADE, though we expect a single channel to perform worse because of its limited representation capacity. Further, the concatenation combining the noise \(\gamma_{n}\) and the semantic-layout feature \(\gamma_{s}\) can be replaced by an element-wise sum, in which case the number of channels should be \(C^{\prime}\) instead of \(\frac{C^{\prime}}{2}\). Moreover, random noise can be used in place of the semantic noise. The three cases are denoted one-channel, plus, and rand.
For the position code, the learnable version \(p_{l}\) can be replaced with the absolute one \(p_{a}\) or the relative one \(p_{r}\), as mentioned before. Similarly, the number of channels can also be set to one. The experimental results are displayed in Table 5. From the table, we can see that one-channel produces slightly worse performance for both VA-SPADE and VA-CLADE, with small decreases in the number of parameters and computation. The element-wise sum of the semantic noise \(\gamma_{n}\) and the semantic-layout feature \(\gamma_{s}\) benefits FID-v but costs a huge number of parameters and FLOPs, as well as lower mIoU and Acc, which suggests that balancing computation and performance is still a challenge for our algorithms. Besides, replacing the semantic noise with random noise always yields higher FID-v and lower mIoU and Acc. Based on this, we argue that random noise contributes to intra-class variation but may lead to lower inter-class variation, while our semantic noise achieves a balance. Finally, the absolute and relative position codes result in a small loss in all three evaluations, with a smaller number of parameters.

## 5 Conclusion

In this paper, we highlighted the class-level mode collapse in semantic image synthesis, whereas only image-level mode collapse has been heavily discussed in the literature. We gave a detailed analysis showing the phenomenon and the reasons for the collapse in three current state-of-the-art algorithms, SPADE, CLADE, and OASIS. Besides, inter- and intra-class variation were defined to understand the class-level mode collapse. We found that the conditional normalization layer in the current algorithms contributes to inter-class variation, but the intra-class variation is not enough. Further, we introduced two simple mechanisms, semantic noise and a learnable position code, to increase the intra-class variation. The extensive experimental results suggest that our method benefits semantic image synthesis, achieving better performance than the current state-of-the-art under the same training conditions. In spite of this success, the visual quality of our method still falls short of natural images, and we only focused on the architecture of the generator. We leave it as future work to consider semantic image synthesis in a systematic way, including the frameworks of the generator and discriminator along with the loss functions.
2303.08567
Contributions of $K_0^*(1430)$ and $K_0^*(1950)$ in the charmed three-body $B$ meson decays
In this work, we investigate the resonant contributions of $K_0^*(1430)$ and $K_0^*(1950)$ in the three-body $B_{(s)}\to D_{(s)}K\pi$ decays within the perturbative QCD approach. The form factor $F_{K\pi}(s)$ is adopted to describe the nonperturbative dynamics of the S-wave $K\pi$ system. The branching ratios of all concerned decays are calculated and predicted to be in the order of $10^{-10}$ to $10^{-5}$. The ratio $R$ of branching fractions between $B^0\to \bar{D}^0 K_0^{*0}(1430) \to \bar{D}^0K^+\pi^-$ and $B_s^0 \to \bar{D}^0 \bar{K}_0^{*0}(1430)\to \bar{D}^0K^-\pi^+$ is predicted to be 0.0552, which implies a discrepancy with the LHCb measurements. We expect that the predictions in this work can be tested by future experiments and, especially, help resolve the $R$ ratio discrepancy.
Bo-Yan Cui, Ya-Hui Chen
2023-03-15T12:35:28Z
http://arxiv.org/abs/2303.08567v2
# Contributions of \(K_{0}^{*}(1430)\) and \(K_{0}^{*}(1950)\) in the charmed three-body \(B\) meson decays

###### Abstract

In this work, we investigate the resonant contributions of \(K_{0}^{*}(1430)\) and \(K_{0}^{*}(1950)\) in the three-body \(B_{(s)}\to D_{(s)}K\pi\) decays within the perturbative QCD approach. The form factor \(F_{K\pi}(s)\) is adopted to describe the nonperturbative dynamics of the S-wave \(K\pi\) system. The branching ratios of all concerned decays are calculated and predicted to be in the order of \(10^{-10}\) to \(10^{-5}\). The ratio \(R\) of branching fractions between \(B^{0}\to\bar{D}^{0}K_{0}^{*0}(1430)\to\bar{D}^{0}K^{+}\pi^{-}\) and \(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430)\to\bar{D}^{0}K^{-}\pi^{+}\) is predicted to be 0.0552, which implies a discrepancy with the LHCb measurements. We expect that the predictions in this work can be tested by future experiments and, especially, help resolve the \(R\) ratio discrepancy.

pacs: 13.20.He, 13.25.Hw, 13.30.Eg

## I Introduction

Decays of the type \(B\to Dhh^{\prime}\), where a \(B\) meson decays to a charmed meson and two light pseudoscalar mesons, have attracted much attention in recent years. On the one hand, studies of these three-body processes have shown the potential to constrain the parameters of the unitarity triangle. For instance, the decay \(B^{0}\to\bar{D}^{0}\pi^{+}\pi^{-}\) is sensitive for measuring the CKM angle \(\beta\) [1; 2; 3], while Dalitz plot analyses of the decays \(B^{0}\to\bar{D}^{0}K^{+}\pi^{-}\) and \(B_{s}^{0}\to\bar{D}^{0}K^{+}K^{-}\) can further improve the determination of the CKM angle \(\gamma\) [4; 5; 6; 7]. On the other hand, the \(B\to Dhh^{\prime}\) decays provide opportunities for probing the rich resonant structure in the final states, including the spectroscopy of charmed mesons and the components of the two-light-meson system. A series of results in this area have been obtained from measurements performed by the Belle [8; 9; 10], BaBar [11; 12; 13; 14] and LHCb [3; 5; 7; 15; 16; 17; 18; 19; 20] Collaborations. In theory, a direct analysis of three-body \(B\) decays is particularly difficult on account of the entangled resonant and nonresonant contributions, the complex interplay between the weak processes and the low-energy strong interactions [21], and other possible final-state interactions [22; 23]. Fortunately, most three-body hadronic \(B\) meson decay processes are considered to be dominated by the low-energy \(S\)-, \(P\)- and \(D\)-wave resonant states, which can be treated in the quasi-two-body framework. By neglecting the interactions between the meson pair originating from the resonant states and the bachelor particle in the final state, the factorization theorem remains valid as in the two-body case [24; 25], and substantial theoretical efforts on different quasi-two-body \(B\) meson decays have been made within different theoretical approaches [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47]. Likewise, the contributions from various intermediate resonant states in the three-body decays \(B\to Dhh^{\prime}\) have been investigated in Refs. [48; 49; 50; 51; 52; 53]. The understanding of the scalar mesons is a difficult and long-standing issue [54]. The scalar resonances usually have large decay widths, which make them overlap strongly with the background.
In specific regions, such as near the \(K\bar{K}\) and \(\eta\eta\) thresholds, cusps appear in the line shapes of nearby resonances due to the contraction of the phase space. Moreover, the inner nature of the scalars is still not completely clear. Some of them, especially the ones below 1 GeV, have also been interpreted as glueballs, meson-meson bound states or multi-quark states, besides the traditional quark-antiquark configurations [55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66]. The \(K_{0}^{*}(1430)\) is perhaps the least controversial of the light scalar mesons and is generally believed to be a \(q\bar{q}\) state [67]. It predominantly couples to the \(K\pi\) channel and has been studied experimentally in many charmless three-body \(B\) meson decays [68; 69; 70; 71; 72; 73; 74; 75]. Recently, measurements of the charmed three-body decays \(B^{0}\to\bar{D}^{0}K^{+}\pi^{-}\) and \(B_{s}^{0}\to\bar{D}^{0}K^{+}\pi^{-}\) involving the resonant state \(K_{0}^{*}(1430)\) were also presented by LHCb [5; 16]. In addition, the subprocess \(K_{0}^{*}(1950)\to K\pi\), which is often ignored in the literature, has also been considered in Ref. [16]. In the framework of the PQCD approach [76; 77; 78], an investigation of the \(S\)-wave \(K\pi\) contributions to the \(B_{(s)}^{0}\to\psi K\pi\) decays was carried out in Ref. [79]. In a more recent work [80], the contributions of the resonant states \(K_{0}^{*}(1430)\) and \(K_{0}^{*}(1950)\) in the three-body decays \(B\to K\pi h\) (\(h=K,\pi\)) were studied systematically within the same method. There, the \(K_{0}^{*}(1430)\) is treated as the lowest-lying \(q\bar{q}\) state in view of the controversy over \(K_{0}^{*}(700)\), and the scalar \(K\pi\) timelike form factor \(F_{K\pi}(s)\) was also discussed in detail. Motivated by the related results measured by LHCb [5; 16], we extend the previous work [80] to the charmed three-body \(B\) decays and analyse the contributions of the resonances \(K_{0}^{*}(1430)\) and \(K_{0}^{*}(1950)\) in the \(B\to DK\pi\) decays. The rest of this article is structured as follows. In Sec. II, we give a brief review of the framework of the PQCD approach. The numerical results and phenomenological discussions are presented in Sec. III, and a short summary is given in Sec. IV. Finally, the relevant factorization formulae for the decay amplitudes are collected in the Appendix.

## II Framework

In the light-cone coordinate system, the \(B\) meson momentum \(p_{B}\), the total momentum \(p\) of the \(K\pi\) pair and the \(D\) meson momentum \(p_{3}\) in the rest frame of the \(B\) meson can be written as

\[p_{B}=\frac{m_{B}}{\sqrt{2}}(1,1,\mathbf{0}_{\rm T}),\qquad p=\frac{m_{B}}{\sqrt{2}}(1-r^{2},\eta,\mathbf{0}_{\rm T}),\qquad p_{3}=\frac{m_{B}}{\sqrt{2}}(r^{2},1-\eta,\mathbf{0}_{\rm T}), \tag{1}\]

with \(m_{B}\) being the \(B\) meson mass and \(r=m_{D}/m_{B}\) the mass ratio. The variable \(\eta\) equals \(s/(m_{B}^{2}-m_{D}^{2})\), where \(s\) is the invariant mass squared of the \(K\pi\) pair, ranging from \((m_{K}+m_{\pi})^{2}\) to \((m_{B}-m_{D})^{2}\).
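As a quick consistency check, the expression for \(\eta\) follows in one line from the on-shell condition \(p^{2}=s\) in the light-cone convention \(a\cdot b=a^{+}b^{-}+a^{-}b^{+}-\mathbf{a}_{\rm T}\cdot\mathbf{b}_{\rm T}\):

\[p^{2}=2\,p^{+}p^{-}=2\cdot\frac{m_{B}}{\sqrt{2}}(1-r^{2})\cdot\frac{m_{B}}{\sqrt{2}}\,\eta=m_{B}^{2}(1-r^{2})\,\eta=s\quad\Longrightarrow\quad\eta=\frac{s}{m_{B}^{2}(1-r^{2})}=\frac{s}{m_{B}^{2}-m_{D}^{2}}.\]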
We also denote the momenta of the light quarks in the \(B\) meson, the \(K\pi\) pair and the \(D\) meson as \(k_{B}\), \(k\) and \(k_{3}\), defined as

\[k_{B}=\left(0,\frac{m_{B}}{\sqrt{2}}x_{B},\mathbf{k}_{\rm BT}\right),\quad k=\left(\frac{m_{B}}{\sqrt{2}}(1-r^{2})z,0,\mathbf{k}_{\rm T}\right),\quad k_{3}=\left(0,\frac{m_{B}}{\sqrt{2}}(1-\eta)x_{3},\mathbf{k}_{\rm 3T}\right), \tag{2}\]

where the momentum fractions \(x_{B}\), \(z\) and \(x_{3}\) run from zero to unity. In the PQCD approach, the decay amplitude for the quasi-two-body decay \(B_{(s)}\to D_{(s)}K_{0}^{*}(1430,1950)\to D_{(s)}K\pi\) can be expressed as the convolution [81]

\[\mathcal{A}=\phi_{B}\otimes H\otimes\phi_{D}\otimes\phi_{K\pi}, \tag{3}\]

where the symbol \(H\) represents the hard kernel with a single hard gluon exchange. \(\phi_{B}\) and \(\phi_{D}\) are the distribution amplitudes of the \(B\) and \(D\) mesons, respectively, and \(\phi_{K\pi}\) denotes the distribution amplitude of the \(K\pi\) pair with a given spin in the resonant region. In this work, we use the same distribution amplitudes for the \(B_{(s)}\) and \(D_{(s)}\) mesons as in Ref. [50], where their expressions and the relevant parameters can easily be found. Inspired by the generalized distribution amplitudes [82; 83; 84; 85], generalized LCDAs for two-meson systems were introduced [81; 86] for three-body \(B\)-meson decays in the framework of the PQCD approach and for the heavy-to-light transition form factors in light-cone sum rules, respectively. The nonlocal matrix elements between the vacuum and the \(K\pi\) state with various spin projectors can be written as

\[\langle K\pi|\bar{s}(x)\,\gamma_{\mu}\,q(0)|0\rangle = p_{\mu}\int_{0}^{1}\,dz\,e^{iz\,p\cdot x}\,\phi^{0}(z,s)\,, \tag{4}\]
\[\langle K\pi|\bar{s}(x)\,q(0)|0\rangle = \sqrt{s}\int_{0}^{1}\,dz\,e^{iz\,p\cdot x}\,\phi^{s}(z,s)\,, \tag{5}\]
\[\langle K\pi|\bar{s}(x)\,\sigma_{\mu\nu}\,q(0)|0\rangle = -\frac{\sqrt{s}}{6}\left(p_{\mu}x_{\nu}-p_{\nu}x_{\mu}\right)\int_{0}^{1}\,dz\,e^{iz\,p\cdot x}\,\phi^{t}(z,s)\,, \tag{6}\]

Figure 1: Typical diagrams for the quasi-two-body decays \(B_{(s)}\to D_{(s)}K_{0}^{*}(1430,1950)\to D_{(s)}K\pi\), including the emission diagram (a) with the \(B\to K_{0}^{*}(1430,1950)\) transition, the emission diagram (c) with the \(B\to D\) transition, and the annihilation diagrams (b) and (d). The symbol \(\otimes\) stands for the weak vertex and \(\times\) denotes possible attachments of hard gluons.

The \(K\pi\) \(S\)-wave distribution amplitude is chosen as [80]

\[\Phi_{K\pi}(z,s)=\frac{1}{\sqrt{2N_{c}}}\bigg{[}\not{p}\,\phi^{0}(z,s)+\sqrt{s}\,\phi^{s}(z,s)+\sqrt{s}\,(\not{n}\not{v}-1)\,\phi^{t}(z,s)\bigg{]}, \tag{7}\]

where \(n=(1,0,{\bf 0}_{T})\) and \(v=(0,1,{\bf 0}_{T})\) are the dimensionless lightlike unit vectors. The twist-2 and twist-3 light-cone distribution amplitudes have the forms

\[\phi^{0}(z,s) = \frac{F_{K\pi}(s)}{2\sqrt{2N_{c}}}6z(1-z)\bigg{[}a_{0}(\mu)+\sum_{m=1}^{\infty}a_{m}(\mu)C_{m}^{3/2}(2z-1)\bigg{]},\]
\[\phi^{s}(z,s) = \frac{F_{K\pi}(s)}{2\sqrt{2N_{c}}},\]
\[\phi^{t}(z,s) = \frac{F_{K\pi}(s)}{2\sqrt{2N_{c}}}(1-2z). \tag{8}\]

Here, \(C_{m}^{3/2}\) are the Gegenbauer polynomials, \(a_{m}(\mu)\) are the Gegenbauer moments and \(F_{K\pi}(s)\) is the scalar form factor of the \(K\pi\) pair. We adopt the same formulae and parameters for the \(K\pi\) \(S\)-wave distribution amplitude as in Ref. [80].
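For readers who want to evaluate the twist-2 distribution amplitude of Eq. (8) numerically, a short sketch using SciPy's Gegenbauer polynomials is given below; the \(a_{1}\) and \(a_{3}\) values are the central values quoted in the uncertainty discussion of Sec. III, while \(a_{0}=1\) is purely an illustrative placeholder (the actual parameters are taken from Ref. [80]).

```python
import numpy as np
from scipy.special import eval_gegenbauer

def phi0_shape(z, moments):
    """z-dependence of the twist-2 S-wave K-pi DA in Eq. (8), up to the
    overall factor F_Kpi(s) / (2 sqrt(2 Nc)).

    moments: dict {m: a_m(mu)}; the m = 0 entry is the constant term,
    consistent with C_0^{3/2}(x) = 1.
    """
    series = sum(a_m * eval_gegenbauer(m, 1.5, 2.0 * z - 1.0)
                 for m, a_m in moments.items())
    return 6.0 * z * (1.0 - z) * series

z = np.linspace(0.0, 1.0, 101)
phi0 = phi0_shape(z, {0: 1.0, 1: -0.57, 3: -0.42})  # a_0 is a placeholder
```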
According to the typical Feynman diagrams shown in Fig. 1 and the quark currents of each decay, the decay amplitudes for the considered quasi-two-body decays \(B\to DK_{0}^{*}(1430,1950)\to DK\pi\) are given as

\[{\cal A}\big{(}B^{+}\to D^{0}[K_{0}^{*+}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{ub}^{*}V_{cs}\bigg{\{}a_{2}F_{TK}+C_{2}M_{TK}+a_{1}F_{AD}+C_{1}M_{AD}\bigg{\}}\;, \tag{9}\]
\[{\cal A}\big{(}B^{+}\to\bar{D}^{0}[K_{0}^{*+}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{cb}^{*}V_{us}\bigg{\{}a_{2}F_{TK}+C_{2}M_{TK}^{\prime}+a_{1}F_{TD}+C_{1}M_{TD}\bigg{\}}\;, \tag{10}\]
\[{\cal A}\big{(}B^{+}\to D^{+}[K_{0}^{*0}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{ub}^{*}V_{cs}\bigg{\{}a_{1}F_{AD}+C_{1}M_{AD}\bigg{\}}\;, \tag{11}\]
\[{\cal A}\big{(}B^{+}\to D_{s}^{+}[\bar{K}_{0}^{*0}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{ub}^{*}V_{cd}\bigg{\{}a_{1}F_{AD}+C_{1}M_{AD}\bigg{\}}\;, \tag{12}\]
\[{\cal A}\big{(}B^{0}\to D^{0}[K_{0}^{*0}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{ub}^{*}V_{cs}\bigg{\{}a_{2}F_{TK}+C_{2}M_{TK}\bigg{\}}\;, \tag{13}\]
\[{\cal A}\big{(}B^{0}\to\bar{D}^{0}[K_{0}^{*0}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{cb}^{*}V_{us}\bigg{\{}a_{2}F_{TK}+C_{2}M_{TK}^{\prime}\bigg{\}}\;, \tag{14}\]
\[{\cal A}\big{(}B^{0}\to D^{-}[K_{0}^{*+}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{cb}^{*}V_{us}\bigg{\{}a_{1}F_{TD}+C_{1}M_{TD}\bigg{\}}\;, \tag{15}\]
\[{\cal A}\big{(}B^{0}\to D_{s}^{-}[K_{0}^{*+}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{cb}^{*}V_{ud}\bigg{\{}a_{2}F_{AK}+C_{2}M_{AK}\bigg{\}}\;, \tag{16}\]
\[{\cal A}\big{(}B^{0}\to D_{s}^{+}[K_{0}^{*-}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{ub}^{*}V_{cd}\bigg{\{}a_{2}F_{AD}+C_{2}M_{AD}\bigg{\}}\;, \tag{17}\]
\[{\cal A}\big{(}B_{s}\to D^{0}[\bar{K}_{0}^{*0}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{ub}^{*}V_{cd}\bigg{\{}a_{2}F_{TK}+C_{2}M_{TK}\bigg{\}}\;, \tag{18}\]
\[{\cal A}\big{(}B_{s}\to\bar{D}^{0}[\bar{K}_{0}^{*0}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{cb}^{*}V_{ud}\bigg{\{}a_{2}F_{TK}+C_{2}M_{TK}^{\prime}\bigg{\}}\;, \tag{19}\]
\[{\cal A}\big{(}B_{s}\to D^{+}[K_{0}^{*-}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{ub}^{*}V_{cd}\bigg{\{}a_{1}F_{TK}+C_{1}M_{TK}\bigg{\}}\;, \tag{20}\]
\[{\cal A}\big{(}B_{s}\to D_{s}^{-}[K_{0}^{*+}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{cb}^{*}V_{us}\bigg{\{}a_{1}F_{TD}+C_{1}M_{TD}+a_{2}F_{AK}+C_{2}M_{AK}\bigg{\}}\;, \tag{21}\]
\[{\cal A}\big{(}B_{s}\to D_{s}^{+}[K_{0}^{*-}\to]K\pi\big{)} = \frac{G_{F}}{\sqrt{2}}V_{ub}^{*}V_{cs}\bigg{\{}a_{1}F_{TK}+C_{1}M_{TK}+a_{2}F_{AD}+C_{2}M_{AD}\bigg{\}}\;, \tag{22}\]

where \(G_{F}\) is the Fermi constant, \(V_{ij}\) are the CKM matrix elements, and the combinations of the Wilson coefficients \(a_{1,2}\) are defined as \(a_{1}=C_{1}/3+C_{2}\) and \(a_{2}=C_{2}/3+C_{1}\). The expressions for the individual amplitudes \(F_{TK}\), \(F_{TD}\), \(F_{AK}\), \(F_{AD}\), \(M_{TK}^{(\prime)}\), \(M_{TD}\), \(M_{AK}\) and \(M_{AD}\) from the different subdiagrams in Fig. 1 are collected in the Appendix. Finally, we give the definition of the differential branching ratio for the considered quasi-two-body decays,

\[\frac{d\mathcal{B}}{ds}=\tau_{B}\frac{|\vec{p}_{1}||\vec{p}_{3}|}{64\pi^{3}m_{B}^{3}}\,|\mathcal{A}|^{2}. \tag{23}\]

In the center-of-mass frame of the \(K\pi\) system, the magnitudes of the momenta \(|\vec{p}_{1}|\) and \(|\vec{p}_{3}|\) can be expressed as

\[|\vec{p}_{1}| = \frac{1}{2}\sqrt{[(m_{K}^{2}-m_{\pi}^{2})^{2}-2(m_{K}^{2}+m_{\pi}^{2})s+s^{2}]/s},\qquad|\vec{p}_{3}| = \frac{1}{2}\sqrt{[(m_{B}^{2}-m_{D}^{2})^{2}-2(m_{B}^{2}+m_{D}^{2})s+s^{2}]/s}. \tag{24}\]
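Numerically, Eq. (24) is an instance of the Kallen function, \(\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2ab-2bc-2ca\); a small sketch of the phase-space ingredients of Eqs. (23) and (24) follows (our own helper, with the caveat that \(\tau_{B}\) must first be converted from ps to natural units):

```python
import numpy as np

def kallen(a, b, c):
    """Kallen function: lambda(a, b, c) = a^2 + b^2 + c^2 - 2ab - 2bc - 2ca."""
    return a**2 + b**2 + c**2 - 2.0 * (a * b + b * c + c * a)

def momenta(s, m_B=5.280, m_D=1.865, m_K=0.494, m_pi=0.140):
    """|p1| (kaon) and |p3| (D meson) in the K-pi rest frame, Eq. (24).

    Both reduce to sqrt(lambda(s, m1^2, m2^2)) / (2 sqrt(s)); the default
    masses (GeV) are the B0/D0 values quoted in Sec. III.
    """
    p1 = np.sqrt(kallen(s, m_K**2, m_pi**2)) / (2.0 * np.sqrt(s))
    p3 = np.sqrt(kallen(s, m_B**2, m_D**2)) / (2.0 * np.sqrt(s))
    return p1, p3

def dBds_prefactor(s, tau_B_inv_GeV, m_B=5.280, **masses):
    """Phase-space prefactor of Eq. (23), tau_B |p1||p3| / (64 pi^3 m_B^3);
    tau_B_inv_GeV is the B lifetime already converted to GeV^-1."""
    p1, p3 = momenta(s, m_B=m_B, **masses)
    return tau_B_inv_GeV * p1 * p3 / (64.0 * np.pi**3 * m_B**3)
```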
## III Results

In the numerical calculations, the masses of the involved mesons (GeV), the lifetimes of the \(B\) mesons (ps), the resonance decay widths (GeV) and the Wolfenstein parameters are taken from the _Review of Particle Physics_ [54]:

\[m_{B^{\pm}} = 5.279,\quad m_{B^{0}}=5.280,\quad m_{B^{0}_{s}}=5.367,\quad m_{D^{0}/\bar{D}^{0}}=1.865,\quad m_{D^{\pm}}=1.870,\quad m_{D^{\pm}_{s}}=1.968,\]
\[m_{K^{\pm}} = 0.494,\quad m_{K^{0}/\bar{K}^{0}}=0.498,\quad m_{\pi^{0}}=0.135,\quad m_{\pi^{\pm}}=0.140,\quad m_{K^{\ast}_{0}(1430)}=1.425,\quad m_{K^{\ast}_{0}(1950)}=1.945,\]
\[\tau_{B^{0}} = 1.519,\quad\tau_{B^{\pm}}=1.638,\quad\tau_{B^{0}_{s}}=1.515,\quad\Gamma_{K^{\ast}_{0}(1430)}=0.270\pm 0.080,\quad\Gamma_{K^{\ast}_{0}(1950)}=0.201\pm 0.090,\]
\[A = 0.790^{+0.017}_{-0.012},\quad\lambda=0.22650\pm 0.00048,\quad\bar{\rho}=0.141^{+0.016}_{-0.017},\quad\bar{\eta}=0.357\pm 0.011. \tag{25}\]

The decay constants of the \(B_{(s)}\) and \(D_{(s)}\) mesons are set to the values \(f_{B_{(s)}}=0.190\) (\(0.230\)) GeV and \(f_{D_{(s)}}=0.212\) (\(0.250\)) GeV [87]. By integrating the differential branching ratio in Eq. (23), we obtain the branching ratios for the considered quasi-two-body processes with the intermediate resonances \(K^{\ast}_{0}(1430)\) and \(K^{\ast}_{0}(1950)\) in Tables 1 and 2, respectively. The first error is induced by the shape parameters \(\omega_{B_{(s)}}=0.40\pm 0.04\) (\(0.50\pm 0.05\)) GeV in the distribution amplitude for the \(B_{(s)}\) meson. The second and third errors come from the Gegenbauer moments \(a_{3}=-0.42\pm 0.22\) and \(a_{1}=-0.57\pm 0.13\) in the \(K\pi\) \(S\)-wave distribution amplitude, respectively. The decay widths \(\Gamma_{K^{\ast}_{0}(1430)}=0.270\pm 0.080\) GeV and \(\Gamma_{K^{\ast}_{0}(1950)}=0.201\pm 0.090\) GeV give the fourth error. The last one is due to the parameter \(C_{D_{(s)}}=0.5\pm 0.1\) (\(0.4\pm 0.1\)) in the distribution amplitude for the \(D_{(s)}\) meson. The uncertainties from other parameters are comparatively small and have been neglected.
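When a single total uncertainty is wanted, the individual errors quoted in the tables can be combined in quadrature. The short sketch below is our illustration for the \(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430)\to\bar{D}^{0}K^{-}\pi^{+}\) entry of Table 1; the paper itself quotes the five errors separately.

```python
import math

central = 3.76                                # central value, in units of 10^-4
errors = [0.16, 0.43, 0.08, 0.23, 0.03]       # omega_B, B3, B1, Gamma_K0*, C_D
total = math.sqrt(sum(e**2 for e in errors))  # quadrature combination
print(f"B = ({central:.2f} +/- {total:.2f}) x 10^-4")  # (3.76 +/- 0.52) x 10^-4
```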
From the numerical results listed in Tables 1 and 2, we have the following comments:

\begin{table}
\begin{tabular}{l c l c}
\hline\hline
Mode & Unit & \(\mathcal{B}\) & Data \\
\hline
\(B^{+}\to D^{0}K^{\ast+}_{0}(1430)\to D^{0}K^{0}\pi^{+}\) & \((10^{-6})\) & \(4.15\pm 0.30(\omega_{B})\pm 0.16(B_{3})\pm 0.15(B_{1})\pm 0.25(\Gamma_{K^{\ast}_{0}})\pm 0.02(C_{D})\) & - \\
\(B^{+}\to\bar{D}^{0}K^{\ast+}_{0}(1430)\to\bar{D}^{0}K^{0}\pi^{+}\) & \((10^{-5})\) & \(2.50\pm 0.17(\omega_{B})\pm 0.28(B_{3})\pm 0.04(B_{1})\pm 0.20(\Gamma_{K^{\ast}_{0}})\pm 0.03(C_{D})\) & - \\
\(B^{+}\to D^{+}K^{\ast 0}_{0}(1430)\to D^{+}K^{+}\pi^{-}\) & \((10^{-8})\) & \(2.14\pm 0.87(\omega_{B})\pm 0.55(B_{3})\pm 0.30(B_{1})\pm 0.07(\Gamma_{K^{\ast}_{0}})\pm 0.05(C_{D})\) & - \\
\(B^{+}\to D^{+}_{s}\bar{K}^{\ast 0}_{0}(1430)\to D^{+}_{s}K^{-}\pi^{+}\) & \((10^{-9})\) & \(2.75\pm 0.70(\omega_{B})\pm 1.18(B_{3})\pm 0.68(B_{1})\pm 0.06(\Gamma_{K^{\ast}_{0}})\pm 0.22(C_{D})\) & - \\
\(B^{0}\to D^{0}K^{\ast 0}_{0}(1430)\to D^{0}K^{+}\pi^{-}\) & \((10^{-6})\) & \(3.90\pm 0.28(\omega_{B})\pm 0.01(B_{3})\pm 0.13(B_{1})\pm 0.23(\Gamma_{K^{\ast}_{0}})\pm 0.01(C_{D})\) & - \\
\(B^{0}\to\bar{D}^{0}K^{\ast 0}_{0}(1430)\to\bar{D}^{0}K^{+}\pi^{-}\) & \((10^{-5})\) & \(2.23\pm 0.18(\omega_{B})\pm 0.24(B_{3})\pm 0.06(B_{1})\pm 0.15(\Gamma_{K^{\ast}_{0}})\pm 0.05(C_{D})\) & 0.71 [5] \\
\(B^{0}\to D^{-}K^{\ast+}_{0}(1430)\to D^{-}K^{0}\pi^{+}\) & \((10^{-7})\) & \(1.08\pm 0.17(\omega_{B})\pm 0.34(B_{3})\pm 0.12(B_{1})\pm 0.06(\Gamma_{K^{\ast}_{0}})\pm 0.02(C_{D})\) & - \\
\(B^{0}\to D^{-}_{s}K^{\ast+}_{0}(1430)\to D^{-}_{s}K^{0}\pi^{+}\) & \((10^{-6})\) & \(2.17\pm 1.08(\omega_{B})\pm 1.20(B_{3})\pm 0.75(B_{1})\pm 0.10(\Gamma_{K^{\ast}_{0}})\pm 0.10(C_{D})\) & - \\
\(B^{0}\to D^{+}_{s}K^{\ast-}_{0}(1430)\to D^{+}_{s}\bar{K}^{0}\pi^{-}\) & \((10^{-9})\) & \(4.24\pm 1.99(\omega_{B})\pm 1.97(B_{3})\pm 0.61(B_{1})\pm 0.14(\Gamma_{K^{\ast}_{0}})\pm 0.09(C_{D})\) & - \\
\(B^{0}_{s}\to D^{0}\bar{K}^{\ast 0}_{0}(1430)\to D^{0}K^{-}\pi^{+}\) & \((10^{-7})\) & \(2.07\pm 0.17(\omega_{B})\pm 0.20(B_{3})\pm 0.11(B_{1})\pm 0.13(\Gamma_{K^{\ast}_{0}})\pm 0.01(C_{D})\) & - \\
\(B^{0}_{s}\to\bar{D}^{0}\bar{K}^{\ast 0}_{0}(1430)\to\bar{D}^{0}K^{-}\pi^{+}\) & \((10^{-4})\) & \(3.76\pm 0.16(\omega_{B})\pm 0.43(B_{3})\pm 0.08(B_{1})\pm 0.23(\Gamma_{K^{\ast}_{0}})\pm 0.03(C_{D})\) & 3.00 [16] \\
\(B^{0}_{s}\to D^{+}_{s}K^{\ast-}_{0}(1430)\to D^{+}_{s}\bar{K}^{0}\pi^{-}\) & & & \\
\hline\hline
\end{tabular}
\end{table}
Table 1: The branching ratios of the quasi-two-body decays \(B_{(s)}\to D_{(s)}K_{0}^{*}(1430)\to D_{(s)}K\pi\).

1. In the \(B\to DR\to DK\pi\) decays, we can extract the two-body branching fractions \({\cal B}(B\to DR)\) by using the relation under the quasi-two-body approximation
\[{\cal B}(B\to DR\to DK\pi)={\cal B}(B\to DR)\cdot{\cal B}(R\to K\pi)\;. \tag{26}\]
For the branching fractions of the two-body decays with \(K_{0}^{*}(1430)\) and \(K_{0}^{*}(1950)\), we shall apply
\[{\cal B}(K_{0}^{*0}\to K^{+}\pi^{-})={\cal B}(\bar{K}_{0}^{*0}\to K^{-}\pi^{+})={\cal B}(K_{0}^{*+}\to K^{0}\pi^{+})={\cal B}(K_{0}^{*-}\to\bar{K}^{0}\pi^{-})=\frac{2}{3}{\cal B}(K_{0}^{*}\to K\pi) \tag{27}\]
and the values
\[{\cal B}(K_{0}^{*}(1430)\to K\pi)=(93\pm 10)\%,\qquad{\cal B}(K_{0}^{*}(1950)\to K\pi)=(52\pm 14)\%. \tag{28}\]
Combined with the results listed in Tables 1 and 2, one can obtain the related two-body branching fractions, for example, \({\cal B}(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430))=(6.06\pm 0.65)\times 10^{-4}\) and \({\cal B}(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1950))=(3.31\pm 0.89)\times 10^{-5}\), where the errors are propagated from Eq. (28).
2. The PQCD prediction for the branching fraction \({\cal B}(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430)\to\bar{D}^{0}K^{-}\pi^{+})\) agrees with the LHCb data \((3.00\pm 0.24\pm 0.11\pm 0.50\pm 0.44)\times 10^{-4}\) [16] within errors, while the PQCD prediction for \({\cal B}(B^{0}\to\bar{D}^{0}K_{0}^{*0}(1430)\to\bar{D}^{0}K^{+}\pi^{-})\) is much larger than the value \((0.71\pm 0.27\pm 0.33\pm 0.47\pm 0.08)\times 10^{-5}\) measured by LHCb [5], which carries significant uncertainties. By comparison, one can find that the decay modes \(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430)\to\bar{D}^{0}K^{-}\pi^{+}\) and \(B^{0}\to\bar{D}^{0}K_{0}^{*0}(1430)\to\bar{D}^{0}K^{+}\pi^{-}\) share the same decay topology when the differences of the hadronic parameters between \(B^{0}\) and \(B_{s}^{0}\) are neglected. We then evaluate the ratio
\[R=\frac{{\cal B}(B^{0}\to\bar{D}^{0}K_{0}^{*0}(1430)\to\bar{D}^{0}K^{+}\pi^{-})}{{\cal B}(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430)\to\bar{D}^{0}K^{-}\pi^{+})}\approx\left|\frac{V_{us}}{V_{ud}}\right|^{2}\cdot\frac{\tau_{B^{0}}}{\tau_{B^{0}_{s}}}=0.0534\;, \tag{29}\]
which is close to the PQCD prediction 0.0593 obtained from the results listed in Table 1, but different from the value 0.0237 obtained from the central values of the branching ratios measured by LHCb [5; 16]. One can find that in Ref. [16] the \(K_{0}^{*}(1430)\) component receives a 20% fit fraction of the total \({\cal B}(B_{s}^{0}\to\bar{D}^{0}K^{-}\pi^{+})\), while in Ref. [5] the \(K_{0}^{*}(1430)\) component receives only 5.1% of the total \({\cal B}(B^{0}\to\bar{D}^{0}K^{+}\pi^{-})\). The \(K_{0}^{*}(1430)\) component thus plays a very different role in the two processes; on the theoretical side, however, the decay amplitudes of \(B^{0}\to\bar{D}^{0}K_{0}^{*0}(1430)\to\bar{D}^{0}K^{+}\pi^{-}\) and \(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430)\to\bar{D}^{0}K^{-}\pi^{+}\) are exactly the same if we neglect SU(3) symmetry-breaking effects, so the ratio \(R\) is independent of the theoretical framework. More precise measurements and more refined partial-wave analyses are needed to resolve the discrepancy.
\begin{table}
\begin{tabular}{l c l c}
\hline\hline
Mode & Unit & \({\cal B}\) & Data \\
\hline
\(B^{+}\to D^{0}K_{0}^{*+}(1950)\to D^{0}K^{0}\pi^{+}\) & \((10^{-7})\) & \(1.39\pm 0.60(\omega_{B})\pm 0.13(B_{3})\pm 0.12(B_{1})\pm 0.04(\Gamma_{K_{0}^{*}})\pm 0.04(C_{D})\) & - \\
\(B^{+}\to\bar{D}^{0}K_{0}^{*+}(1950)\to\bar{D}^{0}K^{0}\pi^{+}\) & \((10^{-7})\) & \(5.52\pm 2.70(\omega_{B})\pm 0.57(B_{3})\pm 0.08(B_{1})\pm 0.34(\Gamma_{K_{0}^{*}})\pm 0.12(C_{D})\) & - \\
\(B^{+}\to D^{+}K_{0}^{*0}(1950)\to D^{+}K^{+}\pi^{-}\) & \((10^{-9})\) & \(1.31\pm 0.37(\omega_{B})\pm 0.17(B_{3})\pm 0.10(B_{1})\pm 0.10(\Gamma_{K_{0}^{*}})\pm 0.12(C_{D})\) & - \\
\(B^{+}\to D^{+}_{s}\bar{K}_{0}^{*0}(1950)\to D^{+}_{s}K^{-}\pi^{+}\) & \((10^{-10})\) & \(1.94\pm 0.44(\omega_{B})\pm 0.82(B_{3})\pm 0.44(B_{1})\pm 0.02(\Gamma_{K_{0}^{*}})\pm 0.21(C_{D})\) & - \\
\(B^{0}\to D^{0}K_{0}^{*0}(1950)\to D^{0}K^{+}\pi^{-}\) & \((10^{-7})\) & \(1.30\pm 0.57(\omega_{B})\pm 0.16(B_{3})\pm 0.12(B_{1})\pm 0.07(\Gamma_{K_{0}^{*}})\pm 0.01(C_{D})\) & - \\
\(B^{0}\to\bar{D}^{0}K_{0}^{*0}(1950)\to\bar{D}^{0}K^{+}\pi^{-}\) & \((10^{-7})\) & \(5.23\pm 2.14(\omega_{B})\pm 0.23(B_{3})\pm 0.06(B_{1})\pm 0.24(\Gamma_{K_{0}^{*}})\pm 0.14(C_{D})\) & - \\
\(B^{0}\to D^{-}K_{0}^{*+}(1950)\to D^{-}K^{0}\pi^{+}\) & \((10^{-9})\) & \(4.32\pm 0.38(\omega_{B})\pm 1.21(B_{3})\pm 0.42(B_{1})\pm 0.24(\Gamma_{K_{0}^{*}})\pm 0.43(C_{D})\) & - \\
\(B^{0}\to D^{-}_{s}K_{0}^{*+}(1950)\to D^{-}_{s}K^{0}\pi^{+}\) & \((10^{-7})\) & \(1.04\pm 0.57(\omega_{B})\pm 0.70(B_{3})\pm 0.29(B_{1})\pm 0.03(\Gamma_{K_{0}^{*}})\pm 0.05(C_{D})\) & - \\
\(B^{0}\to D^{+}_{s}K_{0}^{*-}(1950)\to D^{+}_{s}\bar{K}^{0}\pi^{-}\) & \((10^{-10})\) & \(2.59\pm 1.18(\omega_{B})\pm 1.05(B_{3})\pm 0.49(B_{1})\pm 0.10(\Gamma_{K_{0}^{*}})\pm 0.07(C_{D})\) & - \\
\(B^{0}_{s}\to D^{0}\bar{K}_{0}^{*0}(1950)\to D^{0}K^{-}\pi^{+}\) & \((10^{-9})\) & \(9.77\pm 2.68(\omega_{B})\pm 0.69(B_{3})\pm 0.42(B_{1})\pm 0.38(\Gamma_{K_{0}^{*}})\pm 0.08(C_{D})\) & - \\
\(B^{0}_{s}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1950)\to\bar{D}^{0}K^{-}\pi^{+}\) & & & \\
\hline\hline
\end{tabular}
\end{table}
Table 2: The branching ratios of the quasi-two-body decays \(B_{(s)}\to D_{(s)}K_{0}^{*}(1950)\to D_{(s)}K\pi\).

3. For the CKM-suppressed decay modes such as \(B_{s}^{0}\to D^{0}\bar{K}_{0}^{*0}(1430)\to D^{0}K^{-}\pi^{+}\), the branching ratios are much smaller than those of the corresponding \(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430)\to\bar{D}^{0}K^{-}\pi^{+}\) decays as predicted by PQCD in this work. The major reason comes from the strong CKM suppression factor
\[R_{CKM}=\left|\frac{V_{ub}^{*}V_{cd}}{V_{cb}^{*}V_{ud}}\right|^{2}\approx\lambda^{4}(\bar{\rho}^{2}+\bar{\eta}^{2})\approx 3\times 10^{-4}\,, \tag{30}\]
as discussed in Ref. [89]. The non-vanishing charm-quark mass in the fermion propagator generates the main difference between the ratio \({\cal B}(B_{s}^{0}\to D^{0}\bar{K}_{0}^{*0}(1430)\to D^{0}K^{-}\pi^{+})/{\cal B}(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430)\to\bar{D}^{0}K^{-}\pi^{+})\) and \(R_{CKM}\).
Similarly, for the \(B^{+}\to D^{0}K_{0}^{*+}(1430)\to D^{0}K^{0}\pi^{+}\) and \(B^{+}\to\bar{D}^{0}K_{0}^{*+}(1430)\to\bar{D}^{0}K^{0}\pi^{+}\) decays, there still exists a CKM suppression, but it is much more moderate than in the previous case:
\[R_{CKM}^{s}=\left|\frac{V_{ub}^{*}V_{cs}}{V_{cb}^{*}V_{us}}\right|^{2}\approx(\bar{\rho}^{2}+\bar{\eta}^{2})\approx 0.147. \tag{31}\]
From Table 1 we have
\[R_{CKM}^{s1}=\frac{\mathcal{B}(B^{+}\to D^{0}K_{0}^{*+}(1430)\to D^{0}K^{0}\pi^{+})}{\mathcal{B}(B^{+}\to\bar{D}^{0}K_{0}^{*+}(1430)\to\bar{D}^{0}K^{0}\pi^{+})}\approx 0.166\,, \tag{32}\]
\[R_{CKM}^{s2}=\frac{\mathcal{B}(B^{0}\to D^{0}K_{0}^{*0}(1430)\to D^{0}K^{+}\pi^{-})}{\mathcal{B}(B^{0}\to\bar{D}^{0}K_{0}^{*0}(1430)\to\bar{D}^{0}K^{+}\pi^{-})}\approx 0.175\,. \tag{33}\]
The main differences between \(R_{CKM}^{s}\) and \(R_{CKM}^{s1,s2}\) come from the non-vanishing charm-quark mass contributions in the non-factorizable \(B\to K_{0}^{*}(1430)\) emission diagrams. We also suggest further study of the decay mode \(B_{s}^{0}\to D_{s}^{+}K_{0}^{*-}(1430)\to D_{s}^{+}\bar{K}^{0}\pi^{-}\), because it has a large branching ratio and can be measured in future experiments.
4. The \(K_{0}^{*}(1430)\) is often parameterized by the LASS lineshape [90] in partial-wave analyses, which incorporates both the resonant component and a slowly varying nonresonant contribution, and it was applied in the LHCb measurements [5; 16]. However, a rigorous theoretical calculation of the nonresonant contribution within the PQCD framework is still absent [80], so the comparison between the theoretical calculations and the experimental measurements focuses only on the \(S\)-wave \(K_{0}^{*}(1430)\) contribution. More attempts should be made in future studies to parameterize the nonresonant contribution in order to obtain more reliable results.
5. The \(CP\)-averaged branching fraction of a charmless quasi-two-body decay involving the intermediate state \(K_{0}^{*}(1950)\) is predicted to be about one order of magnitude smaller than that of the corresponding process containing \(K_{0}^{*}(1430)\) in [80]. In the charmed quasi-two-body decays, the ratios of the branching fractions in Table 2 to those in Table 1 are only a few percent, smaller than in the charmless cases, mainly due to the absence of the \((S-P)(S+P)\) amplitudes, which receive a resonance pole-mass enhancement as discussed in [80]. The more compact phase space also reduces the branching fractions of the decay modes involving \(K_{0}^{*}(1950)\). In the partial-wave analysis of [16], the \(K_{0}^{*}(1950)\) mode is measured to be about 1.5% of the \(K_{0}^{*}(1430)\) mode, which is about one third of our prediction of 4.6%; more precise measurements and more reliable theoretical predictions are needed in future studies.
6. In Fig. 2, we show the \(K\pi\) invariant-mass-dependent differential branching fractions for the quasi-two-body decays \(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1430)\to\bar{D}^{0}K^{-}\pi^{+}\) (solid line) and \(B_{s}^{0}\to\bar{D}^{0}\bar{K}_{0}^{*0}(1950)\to\bar{D}^{0}K^{-}\pi^{+}\) (dashed line). One can easily find that the main portion of the branching fraction comes from the region around the pole mass of the corresponding resonant state; the contribution from the \(m_{K\pi}\) region above 3 GeV is evaluated to be about 0.4% of that from the whole kinematic region (i.e., \([m_{K}+m_{\pi},m_{B}-m_{D}]\)) in this work and can be safely neglected.
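The ratios discussed in the comments above can be reproduced numerically from the inputs in Eq. (25) and Tables 1-2. The following sketch is our own cross-check, not part of the original analysis; \(V_{ud}=\sqrt{1-\lambda^{2}}\) is the usual Wolfenstein-parametrization approximation.

```python
import math

lam, rho_bar, eta_bar = 0.22650, 0.141, 0.357   # Wolfenstein parameters, Eq. (25)
tau_B0, tau_Bs = 1.519, 1.515                   # lifetimes in ps, Eq. (25)

# Eq. (29): SU(3)-limit estimate of the ratio R
V_us, V_ud = lam, math.sqrt(1.0 - lam**2)
R = (V_us / V_ud) ** 2 * (tau_B0 / tau_Bs)
print(f"R        ~ {R:.4f}")   # ~0.054 (quoted as 0.0534); PQCD: 0.0593, LHCb: 0.0237

# Eq. (30): strong CKM suppression factor
print(f"R_CKM    ~ {lam**4 * (rho_bar**2 + eta_bar**2):.1e}")  # quoted as ~3e-4

# Eq. (31): moderate suppression, to be compared with Eqs. (32)-(33)
print(f"R_CKM^s  ~ {rho_bar**2 + eta_bar**2:.3f}")             # ~0.147

# Eqs. (26)-(28): two-body branching fraction extracted from the quasi-two-body one
B_quasi = 3.76e-4                               # Table 1, Bs -> D0bar K0*(1430)bar
B_two_body = B_quasi / ((2.0 / 3.0) * 0.93)     # isospin factor (27) x value (28)
print(f"B(two-body) ~ {B_two_body:.2e}")        # ~6.06e-4, as quoted in comment 1
```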
## IV Conclusion

Motivated by the phenomenological importance of the charmed three-body hadronic \(B\)-meson decays, in the present work we have studied the quasi-two-body decays \(B_{(s)}\to D_{(s)}K_{0}^{*}(1430,1950)\to D_{(s)}K\pi\) in the PQCD factorization approach with the scalar form factor \(F_{K\pi}(s)\) as a nonperturbative input. The branching ratios of all the concerned decays are calculated and are of the order of \(10^{-10}\) to \(10^{-4}\); the corresponding two-body branching fractions can be obtained by using the quasi-two-body approximation relation in Eq. (26). Under SU(3) flavor symmetry, and neglecting the differences of the hadronic parameters between \(B^{0}\) and \(B^{0}_{s}\), we found the theoretical-framework-independent ratio \(R=\frac{{\cal B}(B^{0}\to\bar{D}^{0}K^{*0}_{0}(1430)\to\bar{D}^{0}K^{+}\pi^{-})}{{\cal B}(B^{0}_{s}\to\bar{D}^{0}\bar{K}^{*0}_{0}(1430)\to\bar{D}^{0}K^{-}\pi^{+})}\approx\left|\frac{V_{us}}{V_{ud}}\right|^{2}\cdot\frac{\tau_{B^{0}}}{\tau_{B^{0}_{s}}}\approx 0.0534\). This result is consistent with our PQCD prediction, but inconsistent with the LHCb measurements. For the decays \(B^{0}_{s}\to D^{0}\bar{K}^{*0}_{0}(1430)\to D^{0}K^{-}\pi^{+}\) and \(B^{0}_{s}\to\bar{D}^{0}\bar{K}^{*0}_{0}(1430)\to\bar{D}^{0}K^{-}\pi^{+}\), the great difference in their branching fractions can be understood from the strong CKM suppression factor \(R_{CKM}\approx\lambda^{4}(\bar{\rho}^{2}+\bar{\eta}^{2})\approx 3\times 10^{-4}\), while the moderate differences between \(B^{+}\to D^{0}K^{*+}_{0}(1430)\to D^{0}K^{0}\pi^{+}\) and \(B^{+}\to\bar{D}^{0}K^{*+}_{0}(1430)\to\bar{D}^{0}K^{0}\pi^{+}\), as well as between \(B^{0}\to D^{0}K^{*0}_{0}(1430)\to D^{0}K^{+}\pi^{-}\) and \(B^{0}\to\bar{D}^{0}K^{*0}_{0}(1430)\to\bar{D}^{0}K^{+}\pi^{-}\), are mainly due to \(R^{s}_{CKM}\approx(\bar{\rho}^{2}+\bar{\eta}^{2})\approx 0.147\). More reliable theoretical predictions for the nonresonant contribution and the \(S\)-wave \(K^{*}_{0}(1950)\) contribution are needed in future studies. We hope the predictions in this work can be tested by future experiments, especially so as to resolve the \(R\)-ratio discrepancy.

###### Acknowledgements.
We are grateful to Ai-jun Ma for helpful comments. This work was supported by the National Natural Science Foundation of China under Grant No. 11947040.

## Appendix A Decay Amplitudes

The factorization formulae for the individual amplitudes from the different subdiagrams in Fig.
1 are \[F_{TK} = 8\pi C_{F}m_{B}^{4}f_{D}\int dx_{B}dz\int b_{B}db_{B}bdb\phi_{B }(x_{B},b_{B})\big{\{}\big{[}\sqrt{\eta(1-r^{2})}[\phi^{s}(r^{2}(-2z\eta+2z+1)\] \[+ (\eta-1)(2z-1))-\phi^{t}\big{(}1+\eta+r^{2}(2(\eta-1)z+1)+2z(1-\eta) \big{)}\big{]}-\phi^{0}((\eta-1)r^{4}z\] \[+ r^{2}(-2\eta(z+1)+2z+1)+(\eta-1)(z+1))\big{]}E_{1ab}(t_{1a})h_{1 a}(x_{B},z,b_{B},b)S_{t}(z)\] \[+ \big{[}r^{4}\phi^{0}(\eta-x_{B})+(\eta-1)(\eta\phi^{0}-2\phi^{s} \sqrt{\eta(1-r^{2})})+r^{2}[2\phi^{s}\sqrt{\eta(1-r^{2})}(2\eta-1-x_{B})\] \[+ (x_{B}-\eta^{2})\phi^{0}]\big{]}E_{1ab}(t_{1b})h_{1b}(x_{B},z,b_{B}, b)S_{t}(|x_{B}-\eta|)\big{\}},\] \[M_{TK} = 32/\sqrt{6}\pi C_{F}m_{B}^{4}\int dx_{B}dzdx_{3}\int b_{B}db_{B} b_{3}db_{3}\phi_{B}(x_{B},b_{B})\phi_{D}\big{\{}\big{[}-\phi^{0}(1+r^{2}-\eta)\] \[\times \big{(}\eta+r^{2}(\eta(2x_{3}+z-2)-x_{3}-x_{B}+1)-\eta(x_{3}+z)+x _{3}+x_{B}-1\big{)}\] \[+ \sqrt{\eta(1-r^{2})}[r^{2}\left(\phi^{s}(2x_{3}+x_{B}+z-2-(2x_{3} +z-2)\eta)+\phi^{t}(x_{B}+z\eta-z)\right)\] \[+ (\eta-1)z(\phi^{s}-\phi^{t})]\big{]}E_{1cd}(t_{1c})h_{1c}(x_{B},z,x _{3},b_{B},b_{3})+\big{[}z(2\eta-1)r^{4}\phi^{0}\] \[- r^{3}r_{c}\phi^{0}+rr_{c}(\phi^{0}(1+\eta)-4\phi^{s}\sqrt{\eta(1 -r^{2})})+(\eta-1)(z\sqrt{\eta(1-r^{2})}(\phi^{s}+\phi^{t})\] \[+ \phi^{0}(\eta-r^{4})+x_{3}((\eta-1)^{2}(1-r^{2}))+\eta(1-r^{4})]\] \[+ \phi^{s}[2r(r^{2}-1-\eta-x_{3}(\eta-1))]\sqrt{\eta(1-r^{2})}\big{]}E _{1ef}(t_{1f})h_{1f}(x_{3},z,b_{3},b)S_{t}(x_{3})\big{\}},\] \[M_{AK} = 32/\sqrt{6}\pi C_{F}m_{B}^{4}\int dx_{B}dzdx_{3}\int b_{B}db_{B} b_{3}db_{3}\phi_{B}(x_{B},b_{B})\phi_{D}\big{\{}\big{[}r^{4}\phi^{0}(x_{3}+x_{B}-1 \] \[- \eta(x_{3}+x_{B}-2)+r^{2}\phi^{0}(\eta^{2}(x_{3}+z-2)-\eta(x_{3}+x_{B}+z) +1)+r\sqrt{\eta(1-r^{2})}\] \[+ \big{[}\phi^{s}(\eta-\eta x_{3}+x_{B}-z+3)-\phi^{t}(\eta-\eta x_ {3}+xB+z)\big{]}+r^{3}\sqrt{\eta(1-r^{2})}\] \[\times (\phi^{s}+\phi^{t})-(\eta-1)\phi^{0}(\eta(x_{3}+z-1)-x_{3}-x_{B} )\big{]}E_{1gh}(t_{1g})h_{1g}(x_{B},z,x_{3},b_{B},b_{3})\] \[+ \big{[}\phi^{0}(r^{2}-\eta-1)\big{(}\eta+r^{2}(\eta(2x_{3}+z-2) -2x_{3}+x_{B}-z+1)-\eta z+z-1\big{)}\] \[+ r\sqrt{\eta(1-r^{2})}\big{(}\phi^{s}(r^{2}(1-z)+\eta(x_{3}-1)-x _{3}+x_{B}+z-1)\big{)}\] \[+ \phi^{t}(r^{2}(z-1)+\eta(x_{3}-1)-x_{3}+x_{B}-z+1)\big{)}]E_{1gh} (t_{1h})h_{1h}(x_{B},z,x_{3},b_{B},b_{3})\big{\}},\] \[F_{TD} = 8\pi C_{F}m_{B}^{4}F_{K\pi}(s)/\mu_{s}\int dx_{B}dx_{3}\int b_{B} db_{B}b_{3}db_{3}\phi_{B}(x_{B},b_{B})\phi_{D}\big{\{}(1+r)\big{[}\eta^{2}(r-1)x_{3}\] (A.5) \[+ \eta(2r-1)^{2}x_{3}-2r+r(x_{3}-1)-x_{B}-x_{B}-z+1)\big{]}E_{2ab}(t _{2a})h_{2a}(x_{B},x_{3},b_{B},b_{3})S_{t}(x_{3})\] \[+ \big{[}(\eta-1)r^{4}+2r^{3}(1-2\eta+r_{c})-r^{2}(\eta^{2}-2\eta r _{c}+r_{c}-1)+(\eta-1)(\eta x_{3}-r_{c})\] \[- 2r(\eta(r_{c}-x_{B}-1)+r_{c}+1)\big{]}E_{2ab}(t_{2b})h_{2b}(x_{B}, x_{3},b_{B},b_{3})S_{t}(x_{B})\big{\}},\] \[M_{TD} = 32/\sqrt{6}\pi C_{F}m_{B}^{4}\int dx_{B}dzdx_{3}\int b_{B}db_{B} bdb\phi_{B}(x_{B},b_{B})\phi_{D}\big{\{}\big{[}\eta^{2}(r^{2}(2-z-x_{3})\] (A.6) \[+ x_{B}+z-1)(r^{2}(x_{3}+z-1)-rx_{3}-x_{B}-z+1)\] \[+ \eta r(r(r(\eta-1)(x_{3}+z-2)-x_{B}-z+2)+x_{3}+x_{B}+z-2)\big{]}\] \[\times E_{2cd}(t_{2c})h_{2a}(x_{B},z,x_{3},b_{B},b)+\big{[}(r-1)(\eta+(2 \eta-1)r-1)(z(r^{2}-1)+x_{B})\] \[- x_{3}(1-\eta)(1-\eta+r(r(2\eta+r-1)-1))\big{]}E_{2cd}(t_{2d})h_{2d }(x_{B},z,x_{3},b_{B},b)\big{\}},\] \[F_{AD} = 8\pi C_{F}m_{B}^{4}f_{B}\int dzdx_{3}\int bdb_{3}db_{3}\phi_{D} \big{\{}\big{[}2r\phi^{s}\sqrt{\eta(1-r^{2})}(x_{3}(\eta-1)-2)\] (A.7) \[+ \phi^{0}(\eta+-r^{2}(2\eta+(\eta-1)^{2}x_{3}-1)+(\eta-2)\eta x_{3 
}+x_{3}-1)\big{]}E_{2ef}(t_{2e})h_{2e}(z,x_{3},b_{3},b)S_{t}(x_{3})\] \[+ \big{[}r^{4}\phi^{0}(\eta-\eta z+z-1)+r^{2}\phi^{0}(\eta-1)(2z-1-\eta)+2r\sqrt{\eta(1-r^{2})}(\phi^{s}(1+z-\eta)\] \[+ \phi^{t}(\eta+z-1))-2r^{3}(z-1)\sqrt{\eta(1-r^{2})}(\phi^{s}+\phi^{t})-(\eta-1)z\phi^{0}-r^{2}r_{c}\sqrt{\eta(1-r^{2})}(\phi^{s}+\phi^{t})\] \[- 2r(1+\eta)r_{c}\phi^{0}+(1-\eta)r_{c}\sqrt{\eta(1-r^{2})}(\phi^{t}-\phi^{s})\big{]}E_{2ef}(t_{2f})h_{2f}(z,x_{3},b_{3},b)S_{t}(z)\big{\}},\] \[M_{AD} = 32/\sqrt{6}\pi C_{F}m_{B}^{4}\int dx_{B}dzdx_{3}\int b_{B}db_{B}\,b\,db\,\phi_{B}(x_{B},b_{B})\phi_{D}\big{\{}\big{[}r^{4}\phi^{0}(2(\eta-1)x_{3}\] (A.8) \[+ \eta(z-2)-z+1)-r^{2}\phi^{0}(\eta^{2}(x_{3}+z-2)-x_{3}+\eta(x_{B}+z)-x_{B}-2z+1)\] \[- r\sqrt{\eta(1-r^{2})}(\phi^{s}(\eta(x_{3}-1)-x_{3}+x_{B}+z+3)+\phi^{t}(\eta(1-x_{3})+x_{3}+x_{B}+z-1))\] \[+ r^{3}\sqrt{\eta(1-r^{2})}(z-1)(\phi^{s}+\phi^{t})+(\eta-1)\phi^{0}(\eta(x_{B}+z-1)+x_{B}+z)\big{]}\] \[\times E_{2gh}(t_{2g})h_{2g}(x_{B},z,x_{3},b_{B},b)+\big{[}\phi^{0}(\eta-r^{2}-1)(\eta+x_{3}-1-\eta(x_{3}-x_{B}+z)\] \[+ r^{2}(\eta(2x_{3}+z-2)-x_{3}+1))+r\sqrt{\eta(1-r^{2})}\big{(}\phi^{t}(r^{2}(z-1)+\eta(x_{3}-1)-x_{3}\] \[+ x_{B}-z+1)-\phi^{s}(\eta+r^{2}(z-1)-\eta x_{3}+x_{3}+x_{B}-z-1))\big{]}\] \[\times E_{2gh}(t_{2h})h_{2h}(x_{B},z,x_{3},b_{B},b)\big{\}},\] where the hard functions are written as \[h_{i}(x_{1},x_{2},(x_{3},)b_{1},b_{2}) = h_{1}(\beta,b_{2})\times h_{2}(\alpha,b_{1},b_{2}),\] \[h_{1}(\beta,b_{2}) = \left\{\begin{array}{ll}K_{0}(\sqrt{\beta}b_{2}),&\beta\geq 0,\\ \frac{i\pi}{2}H_{0}^{(1)}(\sqrt{-\beta}b_{2}),&\beta<0,\end{array}\right.\] \[h_{2}(\alpha,b_{1},b_{2}) = \left\{\begin{array}{ll}\theta(b_{2}-b_{1})K_{0}(\sqrt{\alpha}b_{2})I_{0}(\sqrt{\alpha}b_{1}),&\alpha\geq 0,\\ \theta(b_{2}-b_{1})\frac{i\pi}{2}H_{0}^{(1)}(\sqrt{-\alpha}b_{2})J_{0}(\sqrt{-\alpha}b_{1}),&\alpha<0,\end{array}\right.\] (A.10) where \(E_{1mn}\) and \(E_{2mn}\) (\(m=a,c,e,g\) and \(n=b,d,f,h\)) are the evolution factors, which are given by \[E_{1ab}(t) = \alpha_{s}(t)\exp[-S_{B}(t)-S_{K}(t)],\] \[E_{1cd}(t) = \alpha_{s}(t)\exp[-S_{B}(t)-S_{K}(t)-S_{D}(t)]_{b=b_{B}},\] \[E_{1ef}(t) = \alpha_{s}(t)\exp[-S_{D}(t)-S_{K}(t)],\] \[E_{1gh}(t) = \alpha_{s}(t)\exp[-S_{B}(t)-S_{K}(t)-S_{D}(t)]_{b=b_{3}},\] \[E_{2ab}(t) = \alpha_{s}(t)\exp[-S_{B}(t)-S_{D}(t)],\] \[E_{2cd}(t) = \alpha_{s}(t)\exp[-S_{B}(t)-S_{K}(t)-S_{D}(t)]_{b_{3}=b_{B}},\] \[E_{2ef}(t) = E_{1ef}(t),\] \[E_{2gh}(t) = E_{1gh}(t),\] (A.11) in which the Sudakov exponents \(S_{(B,K,D)}(t)\) are defined as \[S_{B}(t) = s\big{(}\frac{x_{B}m_{B}}{\sqrt{2}},b_{B}\big{)}+\frac{5}{3}\int_{1/b_{B}}^{t}\frac{d\bar{\mu}}{\bar{\mu}}\gamma_{q}(\alpha_{s}(\bar{\mu})),\] \[S_{K}(t) = s\big{(}\frac{z(1-r^{2})m_{B}}{\sqrt{2}},b\big{)}+s\big{(}\frac{(1-z)(1-r^{2})m_{B}}{\sqrt{2}},b\big{)}+2\int_{1/b}^{t}\frac{d\bar{\mu}}{\bar{\mu}}\gamma_{q}(\alpha_{s}(\bar{\mu})),\] \[S_{D}(t) = s\big{(}\frac{x_{3}m_{B}}{\sqrt{2}},b_{3}\big{)}+2\int_{1/b_{3}}^{t}\frac{d\bar{\mu}}{\bar{\mu}}\gamma_{q}(\alpha_{s}(\bar{\mu})),\] (A.12) where the quark anomalous dimension is \(\gamma_{q}=-\alpha_{s}/\pi\). The explicit form of \(s(Q,b)\) at one loop can be found in [88].
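For numerical implementations, the hard functions in Eq. (A.10) map directly onto standard Bessel functions. Below is a minimal sketch of ours using SciPy (not the authors' code), with \(\theta\) the Heaviside step function.

```python
import numpy as np
from scipy.special import k0, i0, j0, hankel1

def h1(beta, b2):
    """h_1(beta, b2) of Eq. (A.10); the form depends on the sign of the virtuality beta."""
    if beta >= 0.0:
        return k0(np.sqrt(beta) * b2)
    return 0.5j * np.pi * hankel1(0, np.sqrt(-beta) * b2)

def h2(alpha, b1, b2):
    """h_2(alpha, b1, b2) of Eq. (A.10), including the step function theta(b2 - b1)."""
    if b2 <= b1:
        return 0.0
    if alpha >= 0.0:
        return k0(np.sqrt(alpha) * b2) * i0(np.sqrt(alpha) * b1)
    x = np.sqrt(-alpha)
    return 0.5j * np.pi * hankel1(0, x * b2) * j0(x * b1)

# A full hard function is then the product h_i = h1(beta, b2) * h2(alpha, b1, b2),
# with alpha and beta taken from Eq. (A.14) for the corresponding diagram.
```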
\(t_{1x}\) and \(t_{2x}(x=a,b\cdots h)\) are hard scales which are chosen to be the maximum of the virtuality of the internal momentum transition in the hard amplitudes as \[t_{1a} = Max\big{\{}\sqrt{|\alpha_{1a}|},\sqrt{|\beta_{1a}|},1/b_{B},1/b \big{\}},\] \[t_{1b} = Max\big{\{}\sqrt{|\alpha_{1b}|},\sqrt{|\beta_{1b}|},1/b_{B},1/b \big{\}},\] \[t_{1c} = Max\big{\{}\sqrt{|\alpha_{1c}|},\sqrt{|\beta_{1c}|},1/b_{B},1/b _{3}\big{\}},\] \[t_{1c}^{\prime} = Max\big{\{}\sqrt{|\alpha_{1c}|},\sqrt{|\beta_{1c}^{\prime}|},1/b _{B},1/b_{3}\big{\}},\] \[t_{1d} = Max\big{\{}\sqrt{|\alpha_{1d}|},\sqrt{|\beta_{1d}|},1/b_{B},1/b _{3}\big{\}},\] \[t_{1d}^{\prime} = Max\big{\{}\sqrt{|\alpha_{1d}|},\sqrt{|\beta_{1d}^{\prime}|},1/b _{B},1/b_{3}\big{\}},\] \[t_{2a} = Max\big{\{}\sqrt{|\alpha_{2a}|},\sqrt{|\beta_{2a}|},1/b_{B},1/b _{3}\big{\}},\] \[t_{2b} = Max\big{\{}\sqrt{|\alpha_{2b}|},\sqrt{|\beta_{2b}|},1/b_{B},1/b _{3}\big{\}},\] \[t_{2c} = Max\big{\{}\sqrt{|\alpha_{2c}|},\sqrt{|\beta_{2c}|},1/b_{B},1/b \big{\}},\] \[t_{2d} = Max\big{\{}\sqrt{|\alpha_{2d}|},\sqrt{|\beta_{2d}|},1/b_{B},1/b \big{\}},\] \[t_{2e} = Max\big{\{}\sqrt{|\alpha_{2e}|},\sqrt{|\beta_{2e}|},1/b_{3},1/b \big{\}},\] \[t_{2f} = Max\big{\{}\sqrt{|\alpha_{2f}|},\sqrt{|\beta_{2f}|},1/b_{3},1/b \big{\}},\] \[t_{2g} = Max\big{\{}\sqrt{|\alpha_{2g}|},\sqrt{|\beta_{2g}|},1/b_{B},1/b _{3}\big{\}},\] \[t_{2h} = Max\big{\{}\sqrt{|\alpha_{2h}|},\sqrt{|\beta_{2h}|},1/b_{B},1/b _{3}\big{\}},\] (A.13) where we have \[\alpha_{1a} = z(1-r^{2})m_{B}^{2},\] \[\beta_{1a} = x_{B}z(1-r^{2})m_{B}^{2}=\beta_{1b}=\alpha_{1c}=\alpha_{1d}= \alpha_{1c}^{\prime}=\alpha_{1d}^{\prime},\] \[\alpha_{1b} = (1-r^{2})(x_{B}-\eta)m_{B}^{2},\] \[\beta_{1c} = -[z(1-r^{2})+r^{2}][(1-\eta)(1-x_{3})-x_{B}]m_{B}^{2},\] \[\beta_{1d} = \{r_{c}^{2}-[z(1-r^{2})][(1-\eta)x_{3}-x_{B}]\}m_{B}^{2},\] \[\beta_{1c}^{\prime} = \{r_{c}^{2}-[z(1-r^{2})+r^{2}][(1-\eta)(1-x_{3})-x_{B}]\}m_{B}^{2},\] \[\beta_{1d}^{\prime} = -[z(1-r^{2})][(1-\eta)x_{3}-x_{B}]m_{B}^{2},\] \[\alpha_{1e} = -[1-z(1-r^{2})-r_{c}^{2}]m_{B}^{2},\] \[\beta_{1e} = -[(1-r^{2})(1-z)][\eta+(1-\eta)x_{3}]m_{B}^{2}=\beta_{1f}=\alpha _{1g}=\alpha_{1h},\] \[\alpha_{1f} = -(1-r^{2})(\eta+(1-\eta)x_{3})m_{B}^{2},\] \[\beta_{1g} = \{1-[z(1-r^{2})+r^{2}][(1-\eta)(1-x_{3})-x_{B}]\}m_{B}^{2},\] \[\beta_{1h} = -[(1-z)(1-r^{2})][(1-\eta)x_{3}+\eta-x_{B}]m_{B}^{2},\] \[\alpha_{2a} = x_{3}(1-\eta)m_{B}^{2},\] \[\beta_{2a} = x_{3}x_{B}(1-\eta)m_{B}^{2}=\beta_{2b}=\alpha_{2c}=\alpha_{2d},\] \[\alpha_{2b} = x_{B}(1-\eta)m_{B}^{2},\] \[\beta_{2c} = -[(1-r^{2})(1-z)-x_{B}][\eta+(1-\eta)x_{3}]m_{B}^{2},\] \[\beta_{2d} = -(1-\eta)x_{3}[(1-r^{2})z-x_{B}]m_{B}^{2},\] \[\alpha_{2e} = -[1-x_{3}(1-\eta)]m_{B}^{2},\] \[\beta_{2e} = -[r^{2}+z(1-r^{2})](1-\eta)(1-x_{3})m_{B}^{2}=\beta_{2f}=\alpha _{2g}=\alpha_{2h},\] \[\alpha_{2f} = \{r_{c}^{2}-[r^{2}+z(1-r^{2})](1-\eta)\}m_{B}^{2},\] \[\beta_{2g} = \{1-[(1-r^{2})(1-z)-x_{B}][\eta+(1-\eta)x_{3}]\}m_{B}^{2},\] \[\beta_{2h} = -[r^{2}+z(1-r^{2})-x_{B}](1-\eta)(1-x_{3})m_{B}^{2}.\] (A.14)
2304.03495
Devil's on the Edges: Selective Quad Attention for Scene Graph Generation
Scene graph generation aims to construct a semantic graph structure from an image such that its nodes and edges respectively represent objects and their relationships. One of the major challenges for the task lies in the presence of distracting objects and relationships in images; contextual reasoning is strongly distracted by irrelevant objects or backgrounds and, more importantly, a vast number of irrelevant candidate relations. To tackle the issue, we propose the Selective Quad Attention Network (SQUAT) that learns to select relevant object pairs and disambiguate them via diverse contextual interactions. SQUAT consists of two main components: edge selection and quad attention. The edge selection module selects relevant object pairs, i.e., edges in the scene graph, which helps contextual reasoning, and the quad attention module then updates the edge features using both edge-to-node and edge-to-edge cross-attentions to capture contextual information between objects and object pairs. Experiments demonstrate the strong performance and robustness of SQUAT, achieving the state of the art on the Visual Genome and Open Images v6 benchmarks.
Deunsol Jung, Sanghyun Kim, Won Hwa Kim, Minsu Cho
2023-04-07T06:33:46Z
http://arxiv.org/abs/2304.03495v1
# Devil's on the Edges: Selective Quad Attention for Scene Graph Generation

###### Abstract

Scene graph generation aims to construct a semantic graph structure from an image such that its nodes and edges respectively represent objects and their relationships. One of the major challenges for the task lies in the presence of distracting objects and relationships in images; contextual reasoning is strongly distracted by irrelevant objects or backgrounds and, more importantly, a vast number of irrelevant candidate relations. To tackle the issue, we propose the Selective Quad Attention Network (SQUAT) that learns to select relevant object pairs and disambiguate them via diverse contextual interactions. SQUAT consists of two main components: edge selection and quad attention. The edge selection module selects relevant object pairs, i.e., edges in the scene graph, which helps contextual reasoning, and the quad attention module then updates the edge features using both edge-to-node and edge-to-edge cross-attentions to capture contextual information between objects and object pairs. Experiments demonstrate the strong performance and robustness of SQUAT, achieving the state of the art on the Visual Genome and Open Images v6 benchmarks.

## 1 Introduction

The task of scene graph generation (SGG) is to construct a visually-grounded graph from an image such that its nodes and edges respectively represent objects and their relationships in the image [30, 48, 51]. The scene graph provides a semantic structure of images beyond individual objects and thus is useful for a wide range of vision problems such as visual question answering [41, 42], image captioning [59], image retrieval [14], and conditional image generation [13], where a holistic understanding of the relationships among objects is required for high-level reasoning. With recent advances in deep neural networks for visual recognition, SGG has been actively investigated in the computer vision community. A vast majority of existing methods tackle SGG by first detecting candidate objects and then performing contextual reasoning between the objects via message passing [22, 24, 48] or sequential modeling [31, 41, 55]. Despite these efforts, the task of SGG remains extremely challenging, and even the state-of-the-art methods do not produce reliable results for practical usage. While there exists a multitude of challenges for SGG, the intrinsic difficulty may lie in the presence of distracting objects and relationships in images; there is a vast number of potential but irrelevant relations, _i.e._, edges, which increases quadratically with the number of candidate objects, _i.e._, nodes, in the scene graph. The contextual reasoning for SGG in the wild is thus largely distracted by irrelevant objects and their relationship pairs. Let us take the simple example in Fig. 1, where the ground-truth scene graph of the given image contains 4 objects and 4 relations.

Figure 1: (a) The ground-truth scene graph contains only 4 ground-truth objects and 4 relations between the objects. (b) Only 13% of the edges in a fully-connected graph have actual relationships according to the ground truth. (c) The overview of the quad attention. The node features are updated by node-to-node (N2N) and node-to-edge (N2E) attentions, and the edge features are updated by edge-to-node (E2N) and edge-to-edge (E2E) attentions.

If our
object detector obtains 6 candidate boxes, 2 of which are from the background (red), then the contextual reasoning, _e.g._, message passing or self-attention, needs to consider 30 potential relations, 87% of which are not directly related according to the ground truth, and most of them may thus act as distracting outliers. In practice, the situation is far worse; in the Visual Genome dataset, the standard benchmark for SGG, an image contains 38 objects and 22 relationships on average [48], which means that only around 1% of object pairs have direct and meaningful relations even when object detection is perfect. As will be discussed in our experiments, we find that existing contextual reasoning schemes obtain only a marginal gain at best and often degrade the performance. The crux of the matter for SGG may lie in developing a robust model for contextual reasoning against irrelevant objects and relations. To tackle the issue, we propose the _Selective Quad Attention Network (SQUAT)_ that learns to select relevant object pairs and disambiguate them via diverse contextual interactions with objects and object pairs. The proposed method consists of two main components: edge selection and quad attention. The edge selection module removes irrelevant object pairs, which may distract contextual reasoning, by predicting the relevance score for each pair. The quad attention module then updates the edge features using edge-to-node and edge-to-edge cross-attentions as well as the node features using node-to-node and node-to-edge cross-attentions; it thus captures contextual information between all objects and object pairs, as shown in Figure 1 (c). Compared to previous methods [22, 24], which perform either node-to-node or node-to-edge interactions, our quad attention provides more effective contextual reasoning by capturing diverse interactions in the scene graph. For example, in the case of Fig. 1 (a), ['man', 'feeding', 'horse'] relates to ['man', 'holding', 'bucket'] and ['horse', 'eat from', 'bucket'], and vice versa; node-to-node or node-to-edge interactions are limited in capturing such relations between the edges. Our contributions can be summarized as follows:

* We introduce the edge selection module for SGG that learns to select relevant edges for contextual reasoning.
* We propose the quad attention module for SGG that performs effective contextual reasoning by updating node and edge features via diverse interactions.
* The proposed SGG model, SQUAT, outperforms the state-of-the-art methods on both the Visual Genome and Open Images v6 benchmarks. In particular, SQUAT achieves remarkable improvement on the SGDet setting, which is the most realistic and challenging.

## 2 Related work

**Scene graph generation.** The vast majority of SGG methods [22, 24, 55] predict scene graphs in two stages: object detection and contextual reasoning. While the first stage is typically done by a pre-trained detection module [2, 36], contextual reasoning is performed by different types of message passing [35, 3, 22, 23, 24, 26, 15, 48, 46, 47, 49, 54], which uses a graph neural network with node-to-edge and edge-to-node attentions, or sequential modeling [10, 31, 41, 55], which updates the node features with node-to-node attention and constructs edge features with edge-to-node attention.
Unlike the previous methods, we propose quad attention, which comprises node-to-node, node-to-edge, edge-to-node, and edge-to-edge interactions, to capture all types of context exchange between candidate objects and their pairs for relational reasoning. In contextual reasoning, most of the methods consider all the candidate object pairs, _i.e._, a fully-connected graph whose nodes are candidate objects. While Graph R-CNN [51] proposes a relation proposal network that prunes the edges of a fully-connected graph, it focuses on reducing the cost of message passing and does not analyze the effect of edge selection on the performance of scene graph generation. In contrast, we introduce an effective edge selection method and provide an in-depth analysis of it. On the other hand, since dataset imbalance/bias has recently emerged as a critical bottleneck for learning SGG\({}^{1}\), several methods [4, 45, 19, 52, 39] propose to adopt techniques from long-tailed recognition, _e.g._, data re-sampling [8, 22] and loss reweighting [11, 18, 50].

Footnote 1: For example, in the Visual Genome dataset, the most frequent entity class is 35 times larger than the least frequent one, and the most frequent predicate class is 8,000 times larger than the least frequent one.

**Transformers for vision tasks and graph structures.** Transformers [43] have been adapted to various computer vision tasks, _e.g._, object classification [9, 29], object detection [2, 37, 60, 29] and segmentation [58, 29], and have also been extended to graph structures [16, 38, 32, 25]. Despite their success, vision transformer networks typically suffer from high complexity and memory consumption. Several variants of transformer networks [37, 44, 60, 5, 17] have been proposed to tackle the issue, showing that a proper sparsification technique, _e.g._, Sparse DETR [37], can not only reduce the cost of computation and memory but also improve the task performance. Our transformer network is designed to perform contextual reasoning for scene graph generation by capturing the inherent relationships between objects and relevant object pairs, and unlike existing sparsification methods, which focus on token pruning [37] or local attention [17, 60], our edge selection module prunes not only the query edges to update but also the key-value edges used for updating.

## 3 Problem Definition

Given an image \(I\), the goal of SGG is to generate a visually grounded graph \(G=(\mathcal{O},\mathcal{R})\) that represents objects \(\mathcal{O}\) and their semantic relationships \(\mathcal{R}\) for object classes \(\mathcal{C}\) and predicate classes \(\mathcal{P}\). An object \(o_{i}\in\mathcal{O}\) is described by a pair of a bounding box \(b_{i}\in[0,1]^{4}\) and its class label \(c_{i}\in\mathcal{C}\): \(o_{i}=(b_{i},c_{i})\). A relationship \(r_{k}\in\mathcal{R}\) is represented by a triplet of a subject \(o_{i}\in\mathcal{O}\), an object \(o_{j}\in\mathcal{O}\), and a predicate label \(p_{ij}\in\mathcal{P}\): \(r_{k}=(o_{i},o_{j},p_{ij})\), which represents the relationship \(p_{ij}\) between the subject \(o_{i}\) and the object \(o_{j}\).
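To make the notation above concrete, a minimal Python sketch of the target structure is given below. It is our illustration only: the box coordinates are placeholders, and the scene graph lists the three relations of Fig. 1 that are explicitly named in the text.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SceneObject:
    box: Tuple[float, float, float, float]  # b_i in [0,1]^4, normalized (x1, y1, x2, y2)
    label: str                              # c_i, an object class in C

@dataclass
class Relationship:
    subject: int     # index of the subject o_i
    obj: int         # index of the object o_j
    predicate: str   # p_ij, a predicate class in P

# A (partial) scene graph for Fig. 1 (a); coordinates are invented for illustration
objects = [
    SceneObject((0.05, 0.20, 0.40, 0.95), "man"),
    SceneObject((0.45, 0.10, 0.95, 0.95), "horse"),
    SceneObject((0.35, 0.50, 0.55, 0.70), "bucket"),
]
relations = [
    Relationship(0, 1, "feeding"),   # ['man', 'feeding', 'horse']
    Relationship(0, 2, "holding"),   # ['man', 'holding', 'bucket']
    Relationship(1, 2, "eat from"),  # ['horse', 'eat from', 'bucket']
]
```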
## 4 Selective Quad Attention Networks

To generate semantically meaningful scene graphs as described in Section 3, we propose the Selective Quad Attention Network (SQUAT), which consists of three main components as shown in Fig. 2: the node detection module (Sec. 4.1), the edge selection module (Sec. 4.2), and the quad attention module (Sec. 4.3). First, the node detection module establishes nodes for a scene graph by detecting object candidate boxes and extracting their features. All possible pairs of the nodes are constructed as potential edges. Second, among all the potential edges, the edge selection module selects valid edges with high relatedness scores. Third, the quad attention module updates the features of nodes and valid edges via four types of attention: node-to-node (N2N), node-to-edge (N2E), edge-to-node (E2N), and edge-to-edge (E2E). For the quad attention module, we use three edge selection modules: a query edge selection module for the entire quad attention (\(\mathrm{ESM}^{\mathrm{Q}}\)) and key-value edge selection modules for the N2E attention (\(\mathrm{ESM}^{\mathrm{N2E}}\)) and the E2E attention (\(\mathrm{ESM}^{\mathrm{E2E}}\)). The nodes and edges may require different sets of edges to update their features, and some pruned edges may help to update nodes or selected edges. For example, an edge between a person and a background, _e.g._, an ocean, is invalid but can help to predict the relationships between the person and other objects. Only the valid edges extracted by \(\mathrm{ESM}^{\mathrm{N2E}}\) and \(\mathrm{ESM}^{\mathrm{E2E}}\) are used to update the features of the nodes and of the valid edges from \(\mathrm{ESM}^{\mathrm{Q}}\). Finally, the output features are passed into a classifier to predict relationship classes. The remainder of this section presents the details of each component and the training procedure (Sec. 4.4). In this section, the calligraphic font, _i.e._, \(\mathcal{N}\) and \(\mathcal{E}\), denotes a set of features, while the italic font, _i.e._, \(N\) and \(E\), denotes the matrix of stacked features of the set.

### Node detection for object candidates

Given an image \(I\), we use a pre-trained object detector, _i.e._, Faster R-CNN [36] in our experiments, to extract object bounding boxes and their class labels. Let \(b_{i}\in[0,1]^{4}\) be the \(i\)-th object box coordinates and \(v_{i}\in\mathbb{R}^{d_{v}}\) its visual feature, where \(d_{v}\) is the dimension of the visual feature. We construct a node feature \(f_{i}\) by transforming \(b_{i}\) and \(v_{i}\) via

\[f_{i}=W_{o}[W_{v}v_{i};W_{g}b_{i}], \tag{1}\]

where \(W_{o}\), \(W_{v}\), and \(W_{g}\) are linear transformation matrices and \([\cdot;\cdot]\) is a concatenation operation. The edge feature \(f_{ij}\) is formed by concatenating two node features \(f_{i}\) and \(f_{j}\) and performing a linear transformation as

\[f_{ij}=W_{p}[f_{i};f_{j}], \tag{2}\]

where \(W_{p}\) is the linear transformation matrix. As in Fig. 2, the set of entire node features \(\mathcal{N}=\{f_{i}|1\leq i\leq n\}\) and the set of all possible edge features \(\mathcal{E}=\{f_{ij}|1\leq i,j\leq n,i\neq j\}\) are passed into the edge selection and quad attention modules, whose details are described below. We denote the stacks of the features in \(\mathcal{N}\) and \(\mathcal{E}\) as \(N\) and \(E\) for the sake of simplicity.

Figure 2: The overall architecture of the Selective Quad Attention Network (SQUAT). SQUAT consists of three components: the node detection module, the edge selection module, and the quad attention module. First, the node detection module extracts nodes \(\mathcal{N}\) by detecting object candidate boxes and extracting their features. Also, all possible pairs of the nodes are constructed as initial edges \(\mathcal{E}\).
Second, the edge selection module selects valid edges \((\Omega^{\mathrm{Q}},\Omega^{\mathrm{E2E}},\Omega^{\mathrm{N2E}})\) with high relatedness scores. Third, the quad attention module updates the node and edge features via four types of attention. Finally, the output features are passed into a classifier to predict the scene graph. See Sec. 4 for the details.

### Edge selection for relevant object pairs

While node features \(N\) and edge features \(E\) can be updated via attentive message passing, a large number of irrelevant edges in \(E\) interferes with the attention process. We thus propose to prune invalid edges (_i.e._, non-existing/false relationships) before proceeding to the quad attention module, which will be described in the next subsection. In order to remove such distracting edges, we introduce an edge selection module (ESM) that takes an edge feature \(f_{ij}\) between nodes \(i\) and \(j\) and predicts its relatedness score \(s_{ij}\) using a simple multi-layer perceptron. We choose the pairs with the top-\(\rho\%\) highest relatedness scores as valid edges to use in the following quad attention module. As mentioned earlier, we use three edge selection modules: \(\mathrm{ESM}^{\mathrm{Q}}\), \(\mathrm{ESM}^{\mathrm{N2E}}\), and \(\mathrm{ESM}^{\mathrm{E2E}}\). Each edge selection module \(\mathrm{ESM}^{a}\) takes the initial edge features \(\mathcal{E}\) as inputs and outputs the valid edge index set \(\Omega^{a}\), resulting in \(\Omega^{\mathrm{Q}}\), \(\Omega^{\mathrm{E2E}}\), and \(\Omega^{\mathrm{N2E}}\).
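A minimal PyTorch-style sketch of one edge selection module is given below; it is our illustration, with layer sizes and names as assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class EdgeSelection(nn.Module):
    """Scores all candidate edges and keeps the top-rho% as valid edges (Sec. 4.2)."""
    def __init__(self, d_edge: int, keep_ratio: float = 0.7):
        super().__init__()
        self.keep_ratio = keep_ratio
        # a simple MLP that predicts the relatedness score s_ij from an edge feature f_ij
        self.scorer = nn.Sequential(
            nn.Linear(d_edge, d_edge // 2), nn.ReLU(),
            nn.Linear(d_edge // 2, 1),
        )

    def forward(self, edge_feats: torch.Tensor):
        # edge_feats: (num_edges, d_edge), one row per ordered pair (i, j), i != j
        scores = self.scorer(edge_feats).squeeze(-1).sigmoid()  # s_ij in (0, 1)
        k = max(1, int(self.keep_ratio * edge_feats.size(0)))
        valid_idx = scores.topk(k).indices                      # the index set Omega
        return valid_idx, scores  # the scores also feed the BCE loss of Eq. (12)

# three independent instances, as in the paper: ESM^Q, ESM^N2E, ESM^E2E
esm_q, esm_n2e, esm_e2e = (EdgeSelection(512) for _ in range(3))
```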
For key-value edge features of N2E and E2E attentions, we extract the key-value set from the updated entire edge set \(\mathcal{H}_{t}=\mathcal{H}_{t}^{\mathrm{Q}}\cup\mathcal{E}^{\setminus\mathrm{ Q}}\), where \(\mathcal{H}_{t}^{\mathrm{Q}}\) is the set of updated valid edges for query and \(\mathcal{E}^{\setminus\mathrm{Q}}=\mathcal{E}-\mathcal{E}^{\mathrm{Q}}\). Then, \(H_{t}^{\mathrm{Q}}\) are refined by E2N and E2E attentions and \(G_{t}\) are refined by N2N and N2E attentions: \[\begin{split} G_{t}^{\prime}=\mathrm{LN}(G_{t}&+ \underbrace{\mathrm{MHA}(G_{t},G_{t},G_{t})}_{\text{node-to-node attention}}\\ &+\underbrace{\mathrm{MHA}(G_{t},H_{t}^{\mathrm{N2E}},H_{t}^{ \mathrm{N2E}}))}_{\text{node-to-edge attention}},\end{split} \tag{7}\] \[\begin{split} H_{t}^{\mathrm{Q}\prime}=\mathrm{LN}(H_{t}^{\mathrm{ Q}}&+\underbrace{\mathrm{MHA}(H_{t}^{\mathrm{Q}},G_{t},G_{t})}_{ \text{edge-to-node attention}}\\ &+\underbrace{\mathrm{MHA}(H_{t}^{\mathrm{Q}},H_{t}^{\mathrm{ E2E}},H_{t}^{\mathrm{E2E}}))}_{\text{edge-to-edge attention}},\end{split} \tag{8}\] where \(H_{t}^{\mathrm{N2E}}\) and \(H_{t}^{\mathrm{E2E}}\) are selected by the indices \(\Omega^{\mathrm{N2E}}\) and \(\Omega^{\mathrm{E2E}}\) from the stack of \(\mathcal{H}_{t}\), _i.e._, \(H_{t}\). Each attention explicitly represents a particular type of relationship between edges and nodes and helps to construct contextual information for the scene graph generation. Lastly, \(G_{t}^{\prime}\) and \(H_{t}^{\prime}\) are further updated by multi-layer perceptron (MLP) followed by the residual connection and a layer normalization: \[N_{t} =\mathrm{LN}(G_{t}^{\prime}+\mathrm{MLP}(G_{t}^{\prime})) \tag{9}\] \[E_{t} =\mathrm{LN}(H_{t}^{\prime}+\mathrm{MLP}(H_{t}^{\prime})), \tag{10}\] where \(H_{t}^{\prime}\) is the stack of \(\mathcal{H}_{t}^{\prime}=\mathcal{H}^{\mathrm{Q}\prime}\cup\mathcal{E}^{ \setminus\mathrm{Q}}\), and the quad attention layer outputs \(N_{t}\) and \(E_{t}\). The inputs \(N_{0}\) and \(E_{0}\) of the first quad attention layer are the entire node features \(N\) and all possible edge features \(E\), which are defined in Sec. 4.1. Every quad attention layer Figure 3: Detailed architecture of the quad attention. The node features are updated by node-to-node and node-to-edge attentions, and the valid edge features, selected by \(\mathrm{ESM^{Q}}\), are updated by edge-to-node and edge-to-edge attentions. The key-value of node-to-edge and edge-to-edge attentions are selected by \(\mathrm{ESM^{N2E}}\) and \(\mathrm{ESM^{E2E}}\). See Sec. 4.3 for the details. Best viewed in color. uses the same valid edge sets to update the node features and valid edge features by four types of attention. Given the output edge features \(E_{T}\) of the last \(T\)-th quad attention layer, each edge feature \(e_{ij}\in\mathcal{E}_{T}\) is passed into a feedforward MLP to produce a probability distribution \(y_{ij}\) over the predicate classes \(\mathcal{P}\). ### Training objective To train SQUAT, we use a combination of two loss functions: a cross-entropy loss for the predicate classification and a binary cross-entropy loss for the edge selection module. The first predicate classification loss is defined as: \[\mathcal{L}_{\mathrm{PCE}}=\frac{1}{|\mathcal{E}|}\sum_{i,j=1,i\neq j}^{| \mathcal{N}|}\mathcal{L}_{\mathrm{CE}}(y_{ij},\hat{y}_{ij}), \tag{11}\] where \(\mathcal{L}_{\mathrm{CE}}\) is the cross-entropy loss and \(\hat{y}_{ij}\) is a one-hot vector of ground-truth relationship labels \(\hat{p}_{ij}\) between object \(i\) and object \(j\). 
To train the edge selection module, we use auxiliary binary cross-entropy defined as: \[\mathcal{L}_{\mathrm{ESM}}^{a}=\frac{1}{|\mathcal{E}|}\sum_{i,j=1,i\neq j}^{| \mathcal{N}|}\mathcal{L}_{\mathrm{BCE}}(s_{ij}^{a},\hat{s}_{ij}), \tag{12}\] where \(\mathcal{L}_{\mathrm{BCE}}\) is the binary cross-entropy loss, \(\hat{s}_{ij}\) is the binary indicator of whether object \(i\) and object \(j\) have a relationship or not, and \(a\in\mathcal{A}=\{\mathrm{Q},\mathrm{E2E},\mathrm{N2E}\}\). The entire loss is defined as: \[\mathcal{L}=\mathcal{L}_{\mathrm{PCE}}+\lambda\frac{1}{|\mathcal{A}|}\sum_{a \in\mathcal{A}}\mathcal{L}_{\mathrm{ESM}}^{a}, \tag{13}\] where \(\lambda>0\) is a hyper-parameter. In training, \(\mathcal{L}_{\mathrm{CE}}\) does not affect the parameters of ESM directly due to the hard selection of ESM, and the gradient passes on to train the edge feature extraction; ESM is mainly trained by \(\mathcal{L}_{\mathrm{ESM}}\). ## 5 Experiments In this section, we perform a diverse set of experiments to evaluate the proposed model. We use two datasets: 1) Visual Genome (VG) [21] and 2) OpenImages v6 [20] datasets to train and evaluate model performances. We intend to show that our model can be generalized over heterogeneous cases by demonstrating competitive results on the two independent datasets. ### Datasets and evaluation metrics #### 5.1.1 Visual Genome [21] The Visual Genome dataset is composed of 108k images with an average of 38 objects and 22 relationships per image. However, most of the predicate classes have less than 10 samples. Therefore, we adopt the widely-used VG split [24, 55] to select the most frequent 150 object classes and 50 predicate classes. Following the [55], we first split the dataset into a training set (\(70\%\)) and a test set (\(30\%\)). Then, we sample 5k validation images from the training set to tune the hyperparameters. We evaluate SQUAT on three subtasks: Predicate Classification (PredCls), Scene Graph Classification (SGCls), and Scene Graph Detection (SGDet). The PredCls predicts the relationships given the ground-truth bounding boxes and object labels, the SGCls aims to predict the object labels and the relationships given the ground-truth bounding boxes, and the SGDet targets predicting the object bounding boxes, object labels, and relationships without any ground-truth. As the evaluation metrics, we adopt the mean recall@K (mR@\(K\)), as previously used in scene graph generation literature [3, 40]. mR@\(K\) is the average of recall@\(K\) for each relation. Following [48], we apply the graph-constraint, in which each object pair can have only one relationship, for evaluation. #### 5.1.2 OpenImages v6 [20] The OpenImages v6 dataset has 126,368 images for the training, 1,813 images for the validation, and 5,322 images for the test. Each image in the dataset has 4.1 objects and 2.8 relationships on average. The dataset has 301 object classes and 31 predicate classes. Compared with the Visual Genome dataset, the quality of annotation is far more robust and complete. For OpenImages v6, following [20, 57], we calculate Recall@50 (R@50), weighted mean AP of relationships (wmAP\({}_{\text{rel}}\)), and weighted mean AP of phrases (wmAP\({}_{\text{phr}}\)) as evaluation metrics. AP\({}_{\text{rel}}\) evaluates the two object bounding boxes, the subject box and the object box, and three labels, the triplets of the subject, the object, and the predicate. 
AP\({}_{\text{phr}}\) evaluates a union bounding box of the subject and the object, together with the same three labels as AP\({}_{\text{rel}}\). To reduce the dataset bias in evaluation, we calculate wmAP\({}_{\text{rel}}\) and wmAP\({}_{\text{phr}}\) as weighted averages of the per-relationship AP\({}_{\text{rel}}\) and AP\({}_{\text{phr}}\), respectively. The weight of each relationship is calculated by its relative ratio in the validation set. The final score \(\mathrm{score}_{\mathrm{wtd}}\) is obtained as \(0.2\times\text{R@}50+0.4\times\text{wmAP}_{\text{rel}}+0.4\times\text{wmAP}_{\text{phr}}\).

### Implementation details

As in previous work [22, 41], we adopt ResNeXt-101-FPN as the backbone network and Faster R-CNN as the object detector. The parameters of the pre-trained object detector are frozen during training. We use bi-level sampling [22] to handle the long-tailed distribution of the datasets. The hyperparameters of the bi-level sampling are set the same as in [22]. We set the hyper-parameter \(\lambda=0.1\) for the loss function. The keeping ratio \(\rho\) is set to 70% for the SGDet setting on both the Visual Genome dataset and the OpenImages v6 dataset during training. In the early stages of training, the edge selection module is not reliable, which causes instability. To tackle the issue, we pre-train the edge selection module for a few thousand iterations using \(\mathcal{L}_{\mathrm{ESM}}\) while freezing all other parameters, and then train the entire SQUAT except for the object detector. Complete implementation details are specified in the supplementary material.

### Comparison with state-of-the-art models

As shown in Table 1, on the Visual Genome dataset, SQUAT outperforms the state-of-the-art models on every setting: PredCls, SGCls, and SGDet. In particular, SQUAT outperforms the state-of-the-art models by a large margin of 3.9 in mR@100 on the SGDet setting, which is the most realistic and important setting in practice, as there is no perfect object detector. There are more invalid pairs in the SGDet setting than in the other settings, since the detected object bounding boxes from the pre-trained object detector include many background boxes. This means that previous work on contextual modeling was most likely distracted by the invalid pairs; thus, SQUAT shows a significant performance improvement on the SGDet setting. BGNN [22] also leverages a scoring function to scale the messages of the invalid edges; however, SQUAT obtains better results with our edge selection module, which discards invalid object pairs. This becomes doable with the quad attention mechanism with edge selection, which helps to reduce noise and outliers from invalid pairs more effectively. Also, SQUAT shows performance improvements of 2.3 and 0.5 in mR@100 on the SGCls and PredCls settings, respectively; the more complex and realistic the task, the more noticeable the performance improvement of SQUAT becomes. This shows that SQUAT, composed of edge selection and quad attention, is appropriate for contextual reasoning to generate scene graphs even in a complex scene. Also, as shown in Table 2, SQUAT achieves results competitive with, or even better than, the state-of-the-art models on the OpenImages v6 dataset in terms of \(\mathrm{score}_{\mathrm{wtd}}\). Since there are fewer objects and relationships in the images of the OpenImages v6 dataset than in the Visual Genome dataset, the edge selection module seems less effective for the OpenImages v6 dataset.
As there is a trade-off between recall and mean recall when bi-level sampling is utilized [3, 31], the result of SQUAT in Table 2 reflects a compromise in R@50. Still, the R@50 of SQUAT is competitive with that of RU-Net [28] and outperforms other recent baselines, and we achieve the best performance in \(\mathrm{wmAP}_{\mathrm{phr}}\) by a large margin. This shows that SQUAT is effective in improving the scene graph generation performance even in simple scenes. Qualitative visualizations of SQUAT and more experiments on SQUAT with 1) additional measurements, _e.g._, recall and non-graph-constraint evaluation, on Visual Genome, 2) performance with plug-and-play long-tailed recognition techniques, and 3) additional qualitative results are given in the supplementary material.

\begin{table}
\begin{tabular}{l|c c c|c c c|c c c}
\hline\hline
\multirow{2}{*}{Methods} & \multicolumn{3}{c|}{PredCls} & \multicolumn{3}{c|}{SGCls} & \multicolumn{3}{c}{SGDet} \\
 & mR@20 & mR@50 & mR@100 & mR@20 & mR@50 & mR@100 & mR@20 & mR@50 & mR@100 \\
\hline
IMP+\({}^{\ddagger}\) [48] & 8.9 & 11.0 & 11.8 & 5.2 & 6.2 & 6.5 & 2.8 & 4.2 & 5.3 \\
Motifs\({}^{\ddagger}\) [55] & 11.5 & 14.6 & 15.8 & 6.5 & 8.0 & 8.5 & 4.1 & 5.5 & 6.8 \\
RelDN [57] & - & 15.8 & 17.2 & - & 9.3 & 9.6 & - & 6.0 & 7.3 \\
VCTree\({}^{\dagger}\) [41] & 12.4 & 15.4 & 16.6 & 6.3 & 7.5 & 8.0 & 4.9 & 6.6 & 7.7 \\
MSDN [24] & - & 15.9 & 17.5 & - & 9.3 & 9.7 & - & 6.1 & 7.2 \\
GPS-Net [26] & - & 15.2 & 16.6 & - & 8.5 & 9.1 & - & 6.7 & 8.6 \\
RU-Net [28] & - & - & 24.2 & - & - & 14.6 & - & - & 10.8 \\
HL-Net [27] & - & - & 22.8 & - & - & 13.5 & - & - & 9.2 \\
VCTree-TDE [40] & 18.4 & 25.4 & 28.7 & 8.9 & 12.2 & 14.0 & 6.9 & 9.3 & 11.1 \\
Seq2Seq [31] & 21.3 & 26.1 & 30.5 & 11.9 & 14.7 & 16.2 & 7.5 & 9.6 & 12.1 \\
GPS-Net\({}^{\dagger}\) & 21.5 & 27.1 & 29.1 & 6.4 & 10.1 & 12.3 & 6.6 & 9.4 & 11.9 \\
JMSGG [49] & - & 24.9 & 28.0 & - & 13.1 & 14.7 & - & 9.8 & 11.8 \\
BGNN\({}^{\dagger}\) [22] & - & 30.4 & 32.9 & - & 14.3 & 16.5 & - & 10.7 & 12.6 \\
SQUAT\({}^{\dagger}\) (Ours) & **25.6** & **30.9** & **33.4** & **14.4** & **17.5** & **18.8** & **10.6** & **14.1** & **16.5** \\
\hline\hline
\end{tabular}
\end{table}
Table 1: The scene graph generation performance on three subtasks on the Visual Genome (VG) dataset with graph constraints. \(\dagger\) denotes that the bi-level sampling [22] is applied for the model. \(\ddagger\) denotes that the results are reported from [40].

\begin{table}
\begin{tabular}{l|c|c c|c}
\hline\hline
\multirow{2}{*}{Methods} & \multirow{2}{*}{R@50} & \multicolumn{2}{c|}{wmAP} & \multirow{2}{*}{\(\mathrm{score}_{\mathrm{wtd}}\)} \\
 & & rel & phr & \\
\hline
RelDN [57] & 73.1 & 32.2 & 33.4 & 40.9 \\
VCTree [41] & 76.1 & 34.2 & 33.1 & 42.1 \\
Motifs [55] & 71.6 & 29.9 & 31.6 & 38.9 \\
VCTree+TDE [40] & 69.3 & 30.7 & 32.8 & 39.3 \\
GPS-Net [26] & 74.8 & 32.9 & 34.0 & 41.7 \\
GPS-Net\({}^{\dagger}\) [26] & 74.7 & 32.8 & 33.9 & 41.6 \\
BGNN\({}^{\dagger}\) [22] & 75.0 & 33.5 & 34.2 & 42.1 \\
HL-Net [27] & 76.5 & 35.1 & 34.7 & 43.2 \\
RU-Net [28] & **76.9** & **35.4** & 34.9 & **43.5** \\
SQUAT\({}^{\dagger}\) & 75.8 & 34.9 & **35.9** & **43.5** \\
\hline\hline
\end{tabular}
\end{table}
Table 2: The scene graph generation performance on the OpenImages v6 dataset with graph constraints. \(\dagger\) denotes that the bi-level sampling [22] is applied for the model.
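For reference, the weighted OpenImages metric of Sec. 5.1.2 can be computed as in the short sketch below (our illustration; the numbers plugged in are SQUAT's entries from Table 2).

```python
def score_wtd(r50: float, wmap_rel: float, wmap_phr: float) -> float:
    # final score of Sec. 5.1.2: 0.2 x R@50 + 0.4 x wmAP_rel + 0.4 x wmAP_phr
    return 0.2 * r50 + 0.4 * wmap_rel + 0.4 * wmap_phr

def weighted_map(ap_per_rel: dict, weight_per_rel: dict) -> float:
    # wmAP: per-relationship APs averaged with the relationships' relative
    # frequencies in the validation set as weights
    return sum(ap_per_rel[r] * weight_per_rel[r] for r in ap_per_rel)

print(score_wtd(75.8, 34.9, 35.9))  # SQUAT's row in Table 2 -> 43.48, reported as 43.5
```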
### Ablation study #### 5.4.1 Model variants on edge selection To verify the effectiveness of the edge selection module, we evaluate models from which each component of edge selection is removed, on the Visual Genome dataset. As shown in Table 3, the quad attention module without the edge selection module performs much worse at mR@100 (-8.9%) than the full model with the edge selection module; thus selecting the valid edges is important for scene graph generation. On the other hand, the quad attention module without the edge selection module still achieves 15.00 mR@100, which is 20.4% higher than BGNN and shows the effectiveness of the quad attention module on its own. We also observe that query selection is more critical than key-value selection; that is, selecting what to update matters most for scene graph generation. To evaluate the effectiveness of the three distinct edge selection modules, we evaluate model variants in which some of the edge selection modules are shared. In Table 4, the modules \(\mathrm{ESM}^{a}\), \(a\in\{\mathrm{Q},\mathrm{E2E},\mathrm{N2E}\}\), that are marked 'shared' in a column share the same parameters. We observe that three fully distinct edge selection modules boost the scene graph generation performance, indicating that the edges used to update features differ from the edges whose features need to be updated. #### 5.4.2 Model variants on quad attention To verify the effectiveness of the quad attention module, we evaluate models from which each attention is removed, on the Visual Genome dataset. As shown in Table 5, the full model with quad attention outperforms the other model variants. We also observe that SQUAT without updating edges, _i.e_., without the edge-to-node and edge-to-edge attentions, performs worse than SQUAT without updating nodes, _i.e_., without the node-to-node and node-to-edge attentions. This shows that updating edge features with context information is important for contextual reasoning. In particular, SQUAT without edge-to-edge attention performs worse than SQUAT without edge-to-node attention, since edge-to-edge attention, which is neglected in previous work, can capture high-level information. ### The effect of the edge selection module We applied the edge selection module to scene graph generation models based on message passing. Since it is not known which pairs of objects have a relationship, message passing methods use the fully-connected graph at inference time. However, we empirically observe that message passing on the fully-connected graph is unhelpful or even harmful for scene graph generation. We use three models, IMP [48], BGNN [22], and SQUAT, for this ablation study on the Visual Genome dataset. Table 6 shows that message passing through the fully-connected graph is harmful for scene graph generation. Even though BGNN uses a gating function to scale down the messages from invalid edges, it does not work well. \begin{table} \begin{tabular}{c c c|c c c} \hline \hline \multicolumn{3}{c|}{Variants} & \multicolumn{3}{c}{SGDet} \\ Q & E2E & N2E & mR@20 & mR@50 & mR@100 \\ \hline shared & shared & shared & 9.61 & 12.70 & 14.85 \\ distinct & shared & shared & 9.63 & 12.54 & 14.64 \\ \hline distinct & distinct & distinct & 10.57 & 14.12 & 16.47 \\ \hline \hline \end{tabular} \end{table} Table 4: The ablation study on model variants of edge selection. 'shared' denotes that the corresponding edge selection modules for query selection and key-value selection share their parameters.
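For illustration, below is a minimal PyTorch sketch of the four attention flows ablated above, using standard multi-head attention; here X-to-Y means that X features act as queries attending to Y features, consistent with the ablation discussion. The layer dimensions and the use of torch.nn.MultiheadAttention are simplifying assumptions, not our exact implementation.

```python
import torch
import torch.nn as nn

class QuadAttentionLayer(nn.Module):
    """Sketch of the four attention flows: node-to-node, node-to-edge,
    edge-to-node and edge-to-edge. Residual connections and dimensions
    are illustrative assumptions."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.n2n = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.n2e = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.e2n = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.e2e = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, nodes, edges):
        # nodes: (B, N, dim) object features; edges: (B, E, dim) pair features,
        # where E would be restricted to the selected (valid) pairs.
        n = nodes + self.n2n(nodes, nodes, nodes)[0]  # nodes attend to nodes
        n = n + self.n2e(n, edges, edges)[0]          # nodes attend to edges
        e = edges + self.e2n(edges, n, n)[0]          # edges attend to nodes
        e = e + self.e2e(e, e, e)[0]                  # edges attend to edges
        return n, e
```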
\begin{table} \begin{tabular}{c c c|c c c} \hline \hline \multicolumn{3}{c|}{Variants} & \multicolumn{3}{c}{SGDet} \\ Q & E2E & N2E & mR@20 & mR@50 & mR@100 \\ \hline \multicolumn{3}{c|}{BGNN [22]} & 7.49 & 10.31 & 12.46 \\ \hline & & & 9.12 & 12.45 & 15.00 \\ ✓ & & & 9.92 & 13.22 & 15.66 \\ & ✓ & ✓ & 9.84 & 13.04 & 15.60 \\ \hline ✓ & ✓ & ✓ & 10.57 & 14.12 & 16.47 \\ \hline \hline \end{tabular} \end{table} Table 3: The ablation study on model variants of edge selection. We remove the edge selection modules for query selection and for key-value selection. \begin{table} \begin{tabular}{c c c c|c c c} \hline \hline \multicolumn{4}{c|}{Variants} & \multicolumn{3}{c}{SGDet} \\ N2N & N2E & E2N & E2E & mR@20 & mR@50 & mR@100 \\ \hline ✓ & ✓ & & & 7.02 & 9.74 & 11.57 \\ ✓ & ✓ & & ✓ & 9.76 & 12.98 & 15.30 \\ ✓ & ✓ & ✓ & & 9.70 & 12.27 & 15.03 \\ & & ✓ & ✓ & 9.90 & 13.05 & 15.28 \\ & ✓ & ✓ & ✓ & 9.77 & 12.93 & 15.42 \\ ✓ & & ✓ & ✓ & 9.99 & 13.02 & 15.54 \\ \hline ✓ & ✓ & ✓ & ✓ & 10.57 & 14.12 & 16.47 \\ \hline \hline \end{tabular} \end{table} Table 5: The ablation study on model variants of quad attention. N2N, N2E, E2N, E2E denote the node-to-node, node-to-edge, edge-to-node, and edge-to-edge attentions, respectively. To further investigate the effect of edge selection, we applied message passing through ground-truth scene graphs to each model. In Table 6, 'No' means that message passing is not used, 'Full' and 'GT' indicate that message passing is used with the complete graph and with the ground-truth scene graph, respectively, and 'ES' means that the proposed edge selection module is used with message passing. As shown in Table 6, every model with message passing through the ground truth outperforms state-of-the-art models by a substantial margin, showing that removing the invalid edges is crucial for scene graph generation. The edge selection module clearly improves not only the performance of SQUAT but also that of BGNN, the previous state-of-the-art model. This indicates that the edge selection module effectively removes the invalid edges and can be used as a plug-and-play module for message-passing-based scene graph methods. ### Qualitative results Qualitative results for the edge selection module are shown in Fig. 4. As shown in Fig. 4 (a), the object detection module extracts 6 bounding boxes, so the fully-connected graph has 30 edges in total, of which only 6 valid edges are in the ground truth. After edge selection with keeping ratio \(\rho=35\%\), only 10 edges remain, among which all 6 valid edges are preserved. This significantly reduces the noise from invalid edges. The other example in Fig. 4 (b) shows the same tendency. ## 6 Conclusion We presented a novel model that predicts the scene graph of an image. The method is designed to selectively identify valid edges with the proposed edge selection module and to update the model from the valid edges only. The edge selection module effectively filters out invalid edges to sparsify a noisy scene graph, thus removing the uncertainty brought by invalid edges. The quad attention module, which is composed of four components -- node-to-node, node-to-edge, edge-to-node, and edge-to-edge attentions -- captures the high-level information needed to accurately predict relationships among different objects. We have shown the effectiveness of SQUAT, and each of its components was validated under various settings in the experiments.
Acknowledgements. This work was supported by the IITP grants (2021-0-00537: Visual common sense through self-supervised learning for restoration of invisible parts in images (50%), 2022-0-00959: Few-shot learning of causal inference in vision and language (40%), and 2019-0-01906: AI graduate school program at POSTECH (10%)) funded by the Korea government (MSIT). \begin{table} \begin{tabular}{c|c|c c c} \hline \hline \multirow{2}{*}{model} & \multirow{2}{*}{Graph} & \multicolumn{3}{c}{SGDet} \\ & & mR@20 & mR@50 & mR@100 \\ \hline \multirow{2}{*}{IMP [48]} & No & 4.09 & 5.56 & 6.53 \\ & Full & 2.87 & 4.24 & 5.42 \\ \hline \multirow{4}{*}{BGNN [22]} & No & 8.99 & 11.84 & 13.56 \\ & Full & 7.49 & 10.31 & 12.46 \\ & ES & 9.00 & 11.86 & 14.20 \\ & GT & 14.15 & 16.41 & 17.09 \\ \hline \multirow{4}{*}{SQUAT} & No & 8.68 & 11.52 & 13.99 \\ & Full & 9.12 & 12.45 & 15.00 \\ & ES & 10.57 & 14.12 & 16.47 \\ & GT & 17.95 & 19.21 & 19.51 \\ \hline \hline \end{tabular} \end{table} Table 6: The ablation study on message passing for scene graph generation. There are four settings depending on which graph is used in the message passing: No, Full, ES, and GT. Every result is reproduced with the authors’ code. Figure 4: Qualitative results for the edge selection module \(\mathrm{ESM^{Q}}\) for query selection. The selected edges after edge selection are drawn in the right graph. The green arrows denote the valid pairs, and the gray arrows denote the invalid pairs. The keeping ratio for the two settings is the same, \(\rho=35\%\). All of the valid edges remain, and most of the invalid edges are removed. ## Supplementary Material In this supplementary material, we provide additional results and details of our method, Selective Quad Attention Networks (SQUAT). ### S1. Implementation details #### S1.1 Code base and GPUs. We implemented SQUAT using PyTorch [34] and parts of the official codebase of BGNN [22]. SQUAT was trained for \(\sim\)8 hours on 4 RTX 3090 GPUs with batch size 12. Footnote 2: [https://github.com/SHTUPLUS/PySGG](https://github.com/SHTUPLUS/PySGG) #### S1.2 Edge selection module. Following [37], we use a simple MLP with 4 linear layers and Layer Normalization [1] with GeLU [12] activation. To capture the global statistics of the edge features \(\mathcal{E}=\{f_{ij}\}_{i,j}\), we average half of the output dimensions of the first layer as a global feature \(g\): \[[h^{l}_{ij};h^{g}_{ij}] =l^{1}(f_{ij}) \tag{14}\] \[g =\frac{1}{|\mathcal{E}|}\sum_{i}\sum_{j}h^{g}_{ij}, \tag{15}\] where \(l^{1}\) is the first layer of the edge selection module and \([\cdot;\cdot]\) is the concatenation operation. The dimensions of the local part \(h^{l}_{ij}\) and the global part \(h^{g}_{ij}\) are the same. We concatenate the global feature \(g\) with each of the remaining local parts \(h^{l}_{ij}\) and pass the result into the remaining 3-layer MLP to calculate the relatedness scores \(s_{ij}\): \[s_{ij}=l^{2}([h^{l}_{ij};g]), \tag{16}\] where \(l^{2}\) is the remaining 3-layer MLP. In order to remove the invalid edges, we choose the top-\(\rho\%\) highest relatedness score pairs \(\mathcal{E}^{\rho}\) as the valid edges.
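A minimal PyTorch sketch of Eqs. (14)-(16) and the subsequent top-\(\rho\%\) selection is given below; the hidden width is an illustrative assumption, not the value used in our experiments.

```python
import torch
import torch.nn as nn

class EdgeSelectionModule(nn.Module):
    """Sketch of Eqs. (14)-(16): a 4-layer MLP whose first layer's output is
    split into a local half and a global half averaged over all edges."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.l1 = nn.Sequential(nn.Linear(dim, 2 * hidden),
                                nn.LayerNorm(2 * hidden), nn.GELU())
        self.l2 = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.LayerNorm(hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.GELU(),
            nn.Linear(hidden, 1))

    def forward(self, f, rho: float = 0.7):
        # f: (E, dim) features of all candidate edges (object pairs).
        h = self.l1(f)
        h_local, h_global = h.chunk(2, dim=-1)       # Eq. (14): split halves
        g = h_global.mean(dim=0, keepdim=True)       # Eq. (15): global pooling
        s = self.l2(torch.cat([h_local, g.expand_as(h_local)], dim=-1))  # (16)
        k = max(1, int(rho * f.size(0)))             # keep top-rho% edges
        return torch.topk(s.squeeze(-1), k).indices
```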
#### S1.3 Training details. To train SQUAT, we use the Stochastic Gradient Descent (SGD) optimizer with learning rate \(10^{-3}\). In the early stages of training, the edge selection module is too naive to select the valid edges needed to construct feasible scene graphs, which causes instability. To make the training stable, we pre-trained the edge selection module for \(2000\) iterations with a learning rate of \(10^{-4}\) while freezing all other parameters, and then trained the entire SQUAT except for the object detection module. We use the keeping ratio \(\rho=0.7\) at training time and \(\rho=0.35\) at inference time for the SGDet setting on both the Visual Genome and the OpenImages v6 datasets. We use \(\rho=0.9\) for the SGCls and PredCls settings on Visual Genome. Since background proposals do not exist in the SGCls and PredCls settings, there are fewer invalid edges than in the SGDet setting; thus we use a larger keeping ratio. We use three quad attention layers for the SGDet setting and two quad attention layers for the SGCls and PredCls settings. ### S2. Additional evaluations on Visual Genome #### S2.1 Trade-off between recall and mean recall Since the Visual Genome dataset has an extremely long-tailed distribution, there is a trade-off between recall and mean recall [31, 40]. Footnote 3: The most frequent entity class is 35 times larger than the least frequent one and the most frequent predicate class is 8,000 times larger than the least frequent one. To evaluate the various trade-offs of scene graph generation methods, Zhang _et al._[56] recently proposed the F@\(K\) measure, the harmonic mean of recall and mean recall, i.e., F@\(K\) = \(2\cdot\text{R@}K\cdot\text{mR@}K/(\text{R@}K+\text{mR@}K)\). Table S2 shows R@50/100, mR@50/100, and F@50/100 on the Visual Genome dataset. SQUAT outperforms all of the state-of-the-art methods at F@50/100. This shows that although the recall of SQUAT degrades, its trade-off between recall and mean recall is the best among the state-of-the-art methods. #### S2.2 Mean recall with no graph constraints Following [33, 55], we also evaluate SQUAT without the graph constraint, _i.e._, each edge can have multiple relationships. For each edge, while mR@\(K\) evaluates only the predicate with the highest score, ng-mR@\(K\) evaluates all 50 predicates. As shown in Table S3, on the Visual Genome dataset, SQUAT outperforms the state-of-the-art models. In particular, SQUAT outperforms the state-of-the-art models by a large margin in ng-mR@\(K\) on the SGDet setting, as it does in the evaluation of mR@\(K\). #### S2.3 Recall for head, body, and tail classes Following [22], we split the relationship classes into three sets according to the number of relationship instances: head (more than 10k), body (0.5k\(\sim\)10k), and tail (less than 0.5k). Table S4 shows the mR@100 for each group. SQUAT outperforms the state-of-the-art methods for the body and tail classes by a large margin. In particular, for the tail classes, SQUAT achieves twice the mR@100 of BGNN. This shows that the scene graphs from SQUAT contain more informative predicates, _i.e._, tail classes such as 'walking in', instead of general predicates, _i.e._, head classes such as 'on'. #### S2.4 Recall on simple, moderate, and complex scenes As shown in Tables 1 and 2 in the main paper, SQUAT shows exceptionally high performance on the most complicated task, _i.e._, SGDet, and on the most complex dataset, _i.e._, Visual Genome. To further analyze performance with respect to scene complexity, we divide the images in the Visual Genome into three disjoint sets according to the number of objects in the scene: simple (\(\leq 9\)), moderate (\(10\sim 16\)), and complex (\(\geq 17\)).
As shown in Table S5, SQUAT shows a higher performance gain on the more complex images; SQUAT is more effective for realistic and complex scenes. ### S3. SQUAT with off-the-shelf methods To reduce the biases of the scene graph generation datasets, many off-the-shelf methods [4, 7, 8, 11, 18, 19, 39, 45, 50, 52, 56] have been proposed. For a fair comparison, we do not compare the off-the-shelf methods with SQUAT in the main paper. We applied Internal and External Data Transfer (IETrans) and reweighting (Rwt) [56], the state-of-the-art off-the-shelf learning methods for scene graph generation, to SQUAT. For efficiency, we only report the model with the best performance for each off-the-shelf method. As shown in Table S6, without careful hyperparameter search, the SQUAT+IETrans+Rwt model outperforms the VCTree+IETrans+Rwt model as well as the other off-the-shelf methods with Motifs [55], Transformer [40], and VCTree [41]. This shows that off-the-shelf learning methods can be adopted for SQUAT to improve its performance. Figure S2. Qualitative results for SQUAT. (a) The detection results from pre-trained Faster R-CNN [36]. (b) The ground-truth scene graph. (c) The results from full SQUAT. (d) The results from SQUAT without edge update, _i.e_., the edge-to-edge and the edge-to-node attentions. (e) The results from SQUAT without node update, _i.e_., the node-to-edge and the node-to-node attentions. Full SQUAT yields more informative scene graphs than the ablated models. The green arrows denote the true positives and the red arrows denote the false positives. Figure S3. Qualitative results for SQUAT. (a) The detection results from pre-trained Faster R-CNN [36]. (b) The ground-truth scene graph. (c) The results from full SQUAT. (d) The results from SQUAT without edge update, _i.e_., the edge-to-edge and the edge-to-node attentions. (e) The results from SQUAT without node update, _i.e_., the node-to-edge and the node-to-node attentions. Full SQUAT yields more informative scene graphs than the ablated models. The green arrows denote the true positives and the red arrows denote the false positives. Figure S4. Qualitative results for the edge selection module on the OpenImages v6 dataset. The graph shows the results of the \(\mathrm{ESM}^{\mathrm{Q}}\), and the green arrows denote the valid edges. The boxes with red class labels denote incorrect predictions or the background.
2301.06333
Functional concurrent regression with compositional covariates and its application to the time-varying effect of causes of death on human longevity
Multivariate functional data that are cross-sectionally compositional data are attracting increasing interest in the statistical modeling literature, a major example being trajectories over time of compositions derived from cause-specific mortality rates. In this work, we develop a novel functional concurrent regression model in which independent variables are functional compositions. This allows us to investigate the relationship over time between life expectancy at birth and compositions derived from cause-specific mortality rates of four distinct age classes, namely 0--4, 5--39, 40--64 and 65+ in 25 countries. A penalized approach is developed to estimate the regression coefficients and select the relevant variables. Then an efficient computational strategy based on an augmented Lagrangian algorithm is derived to solve the resulting optimization problem. The good performances of the model in predicting the response function and estimating the unknown functional coefficients are shown in a simulation study. The results on real data confirm the important role of neoplasms and cardiovascular diseases in determining life expectancy emerged in other studies and reveal several other contributions not yet observed.
Emanuele Giovanni Depaoli, Marco Stefanucci, Stefano Mazzuco
2023-01-16T09:58:13Z
http://arxiv.org/abs/2301.06333v3
Functional concurrent regression with compositional covariates and its application to the time-varying effect of causes of death on human longevity ###### Abstract Multivariate functional data that are cross-sectionally compositional data are attracting increasing interest in the statistical modeling literature, a major example being trajectories over time of compositions derived from cause-specific mortality rates. In this work, we develop a novel functional concurrent regression model in which the independent variables are functional compositions. This allows us to investigate the relationship over time between life expectancy at birth and compositions derived from the cause-specific mortality rates of four distinct age classes, namely 0-4, 5-39, 40-64 and 65+. A penalized approach is developed to estimate the regression coefficients and select the relevant variables. An efficient computational strategy based on an augmented Lagrangian algorithm is then derived to solve the resulting optimization problem. The good performance of the model in predicting the response function and estimating the unknown functional coefficients is shown in a simulation study. The results on real data confirm the important role of neoplasms and cardiovascular diseases in determining life expectancy that emerged in other studies, and reveal several other contributions not yet observed. ## 1 Introduction There is still considerable heterogeneity between countries (even if we focus only on high-income countries) in terms of longevity, and the time pattern with which the recent mortality levels have been reached is even more heterogeneous. Several studies have investigated these time patterns (see, for instance, Canudas-Romo, 2010), and some have recently tried to analyze the role of causes of death in determining them. For example, Bergeron-Boucher, Aburto and van Raalte (2020) try to determine which causes of death are associated with the extension of longevity in several countries. Woolf and Schoomaker (2019) attribute the recent stagnation of life expectancy in the US to increased midlife mortality caused by drug overdoses, alcohol abuse, suicides and some organ diseases. Mehta, Abrams and Myrskyla (2020) contest these findings, arguing that cardiovascular diseases are mainly responsible for this stagnation. Studies of this kind are paramount in public health, but handling causes of death is challenging since, in a competing-risks perspective, cause-specific mortality can decline either because of significant improvements in the treatment and/or prevention of that disease or simply because other causes have grown in the meantime. This has recently been considered by Stefanucci and Mazzuco (2022), who propose a combination of Functional Data Analysis (FDA) and Compositional Data Analysis (CDA) to analyze the cause-of-death time pattern, restricting attention to mortality at ages 40-64. Although the study by Stefanucci and Mazzuco (2022) provides useful insights on the evolution of cause-specific mortality, it remains descriptive in nature, while it might be of interest to measure if, and to what extent, different cause-of-death compositions drove the evolution of overall mortality in recent years. We suggest that such an analysis can be performed by regressing the evolution of overall mortality (measured in terms of life expectancy at birth) on the cause-of-death composition of mortality as defined by Stefanucci and Mazzuco (2022).
Sun et al. (2020) have recently proposed a log-contrast regression model with functional compositional covariates. Here, we extend their work to cope with the functional nature of our response variable. The extension consists of a concurrent specification of the function-on-function linear regression model, with appropriate constraints due to the compositional nature of the covariates. Four age groups of causes of death are considered, i.e., 0-4, 5-39, 40-64, and 65+, thus giving rise to four compositions, each with many components, as shown in Table 1. Since it is reasonable that only a few of them are relevant for predicting the outcome, the model specification assumes sparsity of the regression coefficients. In this way, variable selection is performed and interpretable results are obtained. An efficient computational strategy based on an augmented Lagrangian algorithm is also described to estimate the proposed model, and the performance of the method is illustrated through a simulation study. The article proceeds as follows: in Section 2 we describe the analyzed data and formalize all the relevant quantities; in Section 3 we introduce a novel concurrent functional regression model with compositional covariates and discuss its estimation. The results of a simulation study are presented in Section 4 and the results on real data are extensively commented on in Section 5. Finally, Section 6 concludes the article. ## 2 Data and problem setup For each cause \(i\), age \(x\) and calendar year \(t\), we consider cause-specific mortality rates \[{}^{i}m_{x}^{t}=m_{x}^{t}\frac{{}^{i}D_{x}^{t}}{D_{x}^{t}},\] where \({}^{i}D_{x}^{t}\) is the number of deaths for cause \(i\) at age \(x\) and time \(t\), \(D_{x}^{t}\) is the number of deaths for all causes at age \(x\) and time \(t\), and \({}^{i}m_{x}^{t}\) and \(m_{x}^{t}\) are the corresponding rates. For a given age \(x\), compositions of mortality rates can be regarded as compositions of the \({}^{i}m_{x}^{t}\), using \(m_{x}^{t}\) as the normalization constant. Alternatively, data with a unit-sum constraint may be obtained from the \({}^{i}D_{x}^{t}\), using \(\sum_{x,i}{}^{i}D_{x}^{t}\) as the normalization constant. The latter approach was adopted by Oeppen (2008) and Kjaergaard et al. (2019) to model and forecast age-at-death distributions. In this way, however, the parts of the composition relate to different ages and the results can be difficult to interpret. Although this is not a problem for forecasting purposes, it is a major drawback from our perspective. The exact opposite of the previous approach is to study the \({}^{i}m_{x}^{t}\) directly, that is, a different composition for each age. This would result in many predictors, making estimation problematic, especially for limited sample sizes. Moreover, as before, interpreting the results could be challenging. For these reasons, we focus on four age classes, 0-4, 5-39, 40-64 and 65+, giving rise to four different compositions. From a demographic point of view, they account for infant, premature, early-adult and senescent cause-of-death patterns, respectively. The underlying idea is that not only does the cause-of-death composition change among age groups, but its effect on life expectancy also varies with age.
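To fix ideas, the following minimal Python sketch builds the cause-specific rates \({}^{i}m_{x}^{t}\) and the corresponding compositions for a single country and year; the death counts and rates are hypothetical toy numbers, not data from the WHO or HCD databases.

```python
import numpy as np

# Toy all-cause mortality rates m_x^t for four age classes (illustrative only)
m_x = np.array([0.001, 0.002, 0.01, 0.05])
# deaths[i, x]: deaths for cause i in age class x (hypothetical counts)
deaths = np.array([[5., 10., 40., 100.],
                   [2., 30., 60., 200.],
                   [3., 20., 80., 300.]])
D_x = deaths.sum(axis=0)              # all-cause deaths D_x^t

m_ix = m_x * deaths / D_x             # cause-specific rates  i m_x^t
comp = m_ix / m_ix.sum(axis=0)        # one composition per age class
assert np.allclose(comp.sum(axis=0), 1.0)   # each column lies on the simplex
```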
Data come from the WHO mortality database (_WHO mortality database_) and from the Human Cause-of-Death Database (HCD) (_Human Cause-of-Death Database_), which contain time series of age-specific and cause-specific deaths for several countries. A primary issue is that the International Classification of Diseases (ICD) has changed significantly over the years, potentially leading to biased results. Following Canudas-Romo, Adair and Mazzuco (2020) and Stefanucci and Mazzuco (2022), we use broad categories of causes, which are minimally affected by the classification revisions. The categories considered are shown in Table 1; the number of causes is higher than in Stefanucci and Mazzuco (2022), who limit their analysis to the age group 40-64. Here, we also consider causes that are specific to infant ages (e.g., congenital anomalies) and senescent ones (e.g., mental disorders, including dementia and Alzheimer's disease). As can be seen in Table 1, only some of the 14 causes are included in the composition of a specific age group. For example, age 0-4 includes only 7 causes; the others are ignored as their role for that age group is negligible. On average, our classification accounts for \(88\%\) of the total number of deaths for the age class 0-4, \(92\%\) for the age class 5-39, and \(98\%\) for the age classes 40-64 and 65+. Regarding the countries used in this work, after some preliminary analyses, we decided to limit the study to the \(n=25\) nations reported in Table 2, with population size exceeding one million and good data quality. In order to consider the same time window for each nation, we restrict the study to the years 1965-2012. Some years are still missing for a few countries, that is, 2005 for Australia, 1996-1997 for Poland, and 2000 for the UK. This is not an issue, since our methodology also works for a non-equispaced time grid. Furthermore, a small number of zero counts is present in the age class 0-4 for external causes, neoplasms, infectious, respiratory and nervous diseases, as well as for mental and digestive diseases in the age class 5-39 and mental diseases in the other two age groups. Since the data have to be log-transformed, we replace the zeros by the maximum rounding error of 0.5, which is a common practice in CDA (Aitchison, 2003). Concerning life expectancy at birth, we use life tables from the Human Mortality Database (HMD) (_Human Mortality Database_), which contains detailed, consistent, and high-quality data on human mortality (Barbieri et al., 2015). We formulate the statistical problem in a general way, considering an arbitrary number \(q\) of age classes and the possible inclusion of time-varying control variables. Let \(\mathbf{y}(t)=\left[y_{1}(t),\ldots,y_{n}(t)\right]^{\top}\in\mathbb{R}^{n}\) be the response vector whose \(i\)-th component is the life expectancy at birth at time \(t\in\mathcal{T}\) for the \(i\)-th country, with \(i=1,\ldots,n\). Let \(\mathbf{x}_{ij}(t)=\left[x_{ij1}(t),\ldots,x_{ijp_{j}}(t)\right]^{\top}\in\mathbb{S}^{p_{j}-1}\) be the composition of \(p_{j}\) cause-specific mortality rates for the \(i\)-th nation and \(j\)-th age class at time \(t\), with \(j=1,\ldots,q\), and \(\mathbb{S}^{p-1}=\left\{\left[x_{1},\ldots,x_{p}\right]^{\top}\in\mathbb{R}^{p},x_{j}>0,\sum_{j=1}^{p}x_{j}=1\right\}\) denoting the positive simplex lying in \(\mathbb{R}^{p}\). Also, let \(\mathbf{x}_{i}(t)=\left[\mathbf{x}_{i1}(t)^{\top},\ldots,\mathbf{x}_{iq}(t)^{\top}\right]^{\top}\in\mathbb{R}^{p}\) be the vector containing all the \(q\) compositions, with \(p=\sum_{j=1}^{q}p_{j}\), and let \(\mathbf{X}(t)=\left[\mathbf{x}_{1}(t),\ldots,\mathbf{x}_{n}(t)\right]^{\top}\in\mathbb{R}^{n\times p}\) be the matrix of functional predictors at time \(t\).
Finally, \(\mathbf{Z}_{c}(t)\in\mathbb{R}^{n\times(p_{c}+1)}\) is the matrix of control variables at time \(t\), whose first column is a vector of ones \(\mathbf{1}_{n}\), used to estimate the functional intercept. The observed life expectancies and compositions of mortality rates at each calendar year can be considered as discrete observations from \(\mathbf{y}(t)\) and \(\mathbf{X}(t)\), respectively. The main objective is to analyze the time-varying effect of causes of death on human longevity, studying whether variations in the cause-of-death composition are predictive of life expectancy at birth. Since life expectancies in a given year are calculated from the age-specific mortality rates of the same year, we assume a concurrent relationship between the response variable and the covariates. In the next section, after a brief review of the linear log-contrast model in the scalar case, we introduce its extension to deal with functional covariates and a functional response variable. ## 3 Methods ### Linear log-contrast model Since the pioneering work of Aitchison and Bacon-Shone (1984), log-contrast models have been very popular for regression problems with compositional covariates. Suppose that we observe a response vector \(\mathbf{y}=\left[y_{1},\ldots,y_{n}\right]^{\top}\in\mathbb{R}^{n}\) and a design matrix \(\mathbf{X}=\left[\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\right]^{\top}\in\mathbb{R}^{n\times p}\) with \(\mathbf{x}_{i}=\left[x_{i1},\ldots,x_{ip}\right]^{\top}\in\mathbb{S}^{p-1}\), for \(i=1,\ldots,n\). Because of the unit-sum constraint, the rows of the matrix \(\mathbf{X}\) cannot vary freely and the classical regression model suffers from identification problems. A naive solution is to omit one of the parts of the composition, but this method is not invariant to the choice of the removed component and the resulting coefficients are difficult to interpret. The idea of Aitchison and Bacon-Shone (1984) is to apply a log-ratio transformation to the compositional data so that the transformed data admit the familiar Euclidean geometry in \(\mathbb{R}^{p-1}\). For a given reference component \(r\in\{1,\ldots,p\}\), let \(\mathbf{Z}_{r}=\left[\mathbf{z}_{1},\ldots,\mathbf{z}_{n}\right]^{\top}\in\mathbb{R}^{n\times(p-1)}\) be the associated design matrix, where the \(j\)-th element of \(\mathbf{z}_{i}\) is given by \(z_{ij}=\log\left(x_{ij}/x_{ir}\right)\), for \(j=1,\ldots,r-1,r+1,\ldots,p\). The resulting linear log-contrast model is \[\mathbf{y}=\mathbf{1}_{n}\beta_{0}+\mathbf{Z}_{r}\mathbf{\beta}_{r}+\mathbf{e}, \tag{3.1}\] where \(\beta_{0}\) is the intercept, \(\mathbf{\beta}_{r}\in\mathbb{R}^{p-1}\) is the vector of regression coefficients, and \(\mathbf{e}\in\mathbb{R}^{n}\) is the error vector, independent of \(\mathbf{Z}_{r}\) and distributed as \(\mathcal{N}(0,\sigma^{2})\). The log-contrast model can be written in the symmetric form \[\mathbf{y}=\mathbf{1}_{n}\beta_{0}+\mathbf{Z}\mathbf{\beta}+\mathbf{e},\quad\text{s.t.}\ \mathbf{1}_{p}^{\top}\mathbf{\beta}=0, \tag{3.2}\] where \(\mathbf{Z}\in\mathbb{R}^{n\times p}\) is the matrix resulting from log-transforming each element of the matrix \(\mathbf{X}\), \(\beta_{0}\) and \(\mathbf{e}\) are as in model (3.1), and the regression coefficient \(\mathbf{\beta}_{r}\) is the subvector obtained from \(\mathbf{\beta}\) by removing the \(r\)-th component.
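The following minimal Python sketch illustrates the two formulations on simulated compositional data and verifies that a coefficient vector \(\mathbf{\beta}_{r}\) for (3.1) corresponds to a zero-sum coefficient vector \(\mathbf{\beta}\) for (3.2); the data and coefficients are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(4), size=10)   # n=10 compositions on the simplex S^3

r = 3                                     # reference component (0-indexed)
cols = [j for j in range(4) if j != r]
Z_r = np.log(X[:, cols] / X[:, [r]])      # design matrix of model (3.1)
Z = np.log(X)                             # symmetric design of model (3.2)

# A beta_r for (3.1) maps to a zero-sum beta for (3.2): the reference
# coefficient equals minus the sum of the others.
beta_r = np.array([1.0, -0.5, 0.2])
beta = np.insert(beta_r, r, -beta_r.sum())
assert np.isclose(beta.sum(), 0.0)
assert np.allclose(Z_r @ beta_r, Z @ beta)   # identical linear predictors
```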
The log-contrast model obeys a landmark principle of CDA, called subcompositional coherence (Aitchison, 2003): if the \(j\)-th coefficient of \(\mathbf{\beta}\) is zero, then the results are unchanged if the model is applied to the subcomposition without the \(j\)-th component. In the classical regression framework, least squares estimation can be performed indifferently under model (3.1) or the constrained form (3.2). However, in a high-dimensional setup where variable selection is required, the use of a Lasso penalty (Tibshirani, 1996) breaks the equivalence between the symmetric and non-symmetric forms. For example, consider the inclusion of an \(L_{1}\) penalty term for model (3.1), yielding the optimization problem \[\operatorname*{arg\,min}_{\mathbf{\beta}_{r},\beta_{0}}\frac{1}{2}||\mathbf{y}-\mathbf{1}_{n}\beta_{0}-\mathbf{Z}_{r}\mathbf{\beta}_{r}||_{2}^{2}+\lambda||\mathbf{\beta}_{r}||_{1}. \tag{3.3}\] The solution of problem (3.3) is not invariant to the choice of the reference category \(r\) and, in general, differs from that of the Lasso criterion for the symmetric model (3.2), which yields the optimization problem \[\operatorname*{arg\,min}_{\mathbf{\beta},\beta_{0}}\frac{1}{2}||\mathbf{y}-\mathbf{1}_{n}\beta_{0}-\mathbf{Z}\mathbf{\beta}||_{2}^{2}+\lambda||\mathbf{\beta}||_{1},\quad\text{s.t. }\mathbf{1}_{p}^{\top}\mathbf{\beta}=0. \tag{3.4}\] The latter is proposed and studied in the context of gut microbiome and metagenomic data by Lin et al. (2014), who also provide theoretical guarantees for the resulting estimator. Moreover, the zero-sum constraint makes the model subcompositionally coherent. ### Sparse functional concurrent log-contrast regression Although in practice the functional compositional predictors and the response variable are observed at each calendar year, here we assume that \(\mathbf{X}(t)\) and \(\mathbf{y}(t)\) are observed for every \(t\in\mathcal{T}\). Following the notation of Sections 2 and 3.1, let \(\mathbf{Z}(t)\in\mathbb{R}^{n\times p}\) be the matrix obtained by log-transforming each element of the matrix \(\mathbf{X}(t)\) at time \(t\), and recall that \(\mathbf{y}(t)\in\mathbb{R}^{n}\) is the functional response and \(\mathbf{Z}_{c}(t)\in\mathbb{R}^{n\times(p_{c}+1)}\) is the functional matrix of control variables, including the vector of ones \(\mathbf{1}_{n}\) as its first column. The matrix \(\mathbf{Z}(t)\) contains \(q\) compositions and thus we need to impose \(q\) zero-sum constraints to achieve subcompositional coherence. Following Lin et al. (2014) and Sun et al. (2020), we propose the functional concurrent log-contrast regression model \[\mathbf{y}(t)=\mathbf{Z}_{c}(t)\mathbf{\beta}_{c}(t)+\mathbf{Z}(t)\mathbf{\beta}(t)+\mathbf{e}(t),\quad\text{s.t. }\mathbf{L}\mathbf{\beta}(t)=\mathbf{0}_{q}\quad\forall t\in\mathcal{T}, \tag{3.5}\] where \(\mathbf{\beta}(t)=\left[\mathbf{\beta}_{1}(t)^{\top},\ldots,\mathbf{\beta}_{q}(t)^{\top}\right]^{\top}\in\mathbb{R}^{p}\) is the functional regression coefficient, with \(\mathbf{\beta}_{j}(t)=[\beta_{j1}(t),\ldots,\beta_{jp_{j}}(t)]^{\top}\in\mathbb{R}^{p_{j}}\) for \(j=1,\ldots,q\), \(\mathbf{\beta}_{c}(t)\in\mathbb{R}^{p_{c}+1}\) is the functional regression coefficient of the control variables, and \(\mathbf{e}(t)\in\mathbb{R}^{n}\) is the vector of functional errors distributed as \(\mathcal{N}(0,\sigma^{2})\).
The set of linear constraints is represented by the matrix \[\mathbf{L}=\begin{bmatrix}\mathbf{1}_{p_{1}}&\mathbf{0}_{p_{1}}&\cdots&\mathbf{0}_{p_{1}}\\ \mathbf{0}_{p_{2}}&\mathbf{1}_{p_{2}}&\cdots&\mathbf{0}_{p_{2}}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{0}_{p_{q}}&\mathbf{0}_{p_{q}}&\cdots&\mathbf{1}_{p_{q}}\end{bmatrix}^{\top}\in\mathbb{R}^{q\times p}.\] For our study, it is reasonable to assume that the effects of causes of death on life expectancy are smooth over the years. To achieve smoothness, each coefficient curve is represented by a linear expansion of \(k\) known basis functions, such that \[\mathbf{\beta}(t)=\mathbf{B}\mathbf{\Phi}(t),\quad\mathbf{\beta}_{c}(t)=\mathbf{B}_{c}\mathbf{\Phi}(t),\] where \(\mathbf{B}=\left[\mathbf{b}_{1},\ldots,\mathbf{b}_{p}\right]^{\top}\in\mathbb{R}^{p\times k}\) and \(\mathbf{B}_{c}=\left[\mathbf{b}_{0},\mathbf{b}_{c_{1}},\ldots,\mathbf{b}_{c_{p_{c}}}\right]^{\top}\in\mathbb{R}^{(p_{c}+1)\times k}\) are the coefficient matrices, and \(\mathbf{\Phi}(t)=\left[\phi_{1}(t),\ldots,\phi_{k}(t)\right]^{\top}\in\mathbb{R}^{k}\) is the vector of basis functions. For simplicity, and since it is usually sufficient in practice, we assume the same number \(k\) of basis functions for each predictor and control variable. Moreover, we assume that the elements of \(\mathbf{\Phi}(t)\) are B-splines of order \(d\) (De Boor, 1978). A B-spline of order \(d\) is a piecewise polynomial function of degree \(d-1\) and is defined by a set of knots, the points where the polynomial pieces join. This choice is not restrictive, and other basis functions can be adopted; see Ramsay and Silverman (2005) for a detailed discussion. The same consideration applies to the number of bases \(k\), which could be allowed to differ across coefficient curves. Another reasonable assumption is that some compositional components have no effect on life expectancy. To enable variable selection, we induce sparsity by means of an \(L_{1}\) penalization method. For model (3.5), the functional sparsity of the coefficient curves in \(\mathbf{\beta}(t)\) translates into the row sparsity of the coefficient matrix \(\mathbf{B}\). Many penalization methods have been proposed in the Statistics and Machine Learning literature to induce sparsity, among which the Lasso (Tibshirani, 1996) is probably the most famous. The group Lasso (Yuan and Lin, 2006) is an extension based on the concept of groups of coefficients and fits our purpose, since it allows whole coefficient vectors \(\mathbf{b}_{j}\), for \(j=1,\ldots,p\), to be selected rather than their individual components. To formulate the optimization problem, the zero-sum constraints and the coefficient curves have to be expressed in terms of the elements of the matrices \(\mathbf{B}\) and \(\mathbf{B}_{c}\). For this purpose, it is convenient to work with \(\mathbf{b}=\text{vec}(\mathbf{B}^{\top})\in\mathbb{R}^{pk}\) and \(\mathbf{b}_{c}=\text{vec}(\mathbf{B}_{c}^{\top})\in\mathbb{R}^{(p_{c}+1)k}\). It can be easily seen that imposing \(\mathbf{1}_{p_{j}}^{\top}\mathbf{\beta}_{j}(t)=0\), for \(j=1,\ldots,q\) and every \(t\in\mathcal{T}\), is equivalent to imposing zero-sum constraints within the columns of the matrix \(\mathbf{B}\) for each composition, that is, \((\mathbf{L}\otimes\mathbf{I}_{k})\mathbf{b}=\tilde{\mathbf{L}}\mathbf{b}=\mathbf{0}_{qk}\) with \(\tilde{\mathbf{L}}\in\mathbb{R}^{qk\times pk}\); a concrete construction is sketched below.
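The following minimal Python sketch builds \(\mathbf{L}\) and \(\tilde{\mathbf{L}}=\mathbf{L}\otimes\mathbf{I}_{k}\) for toy dimensions and checks that any coefficient matrix \(\mathbf{B}\) whose columns sum to zero within each composition satisfies the constraint; the dimensions are illustrative only.

```python
import numpy as np
from scipy.linalg import block_diag

# Toy setting: q=2 compositions with p_1=3 and p_2=2 parts, k=4 basis functions
p_sizes, k = [3, 2], 4
rng = np.random.default_rng(0)

L = block_diag(*[np.ones((1, pj)) for pj in p_sizes])   # (q, p) constraint matrix
L_tilde = np.kron(L, np.eye(k))                          # (q*k, p*k)

# Build a B whose columns have zero sum within each composition's rows
blocks = np.split(rng.standard_normal((sum(p_sizes), k)),
                  np.cumsum(p_sizes)[:-1])
B = np.vstack([M - M.mean(axis=0) for M in blocks])

b = B.reshape(-1)                # b = vec(B^T): rows of B stacked in order
assert np.allclose(L_tilde @ b, 0.0)   # the qk zero-sum constraints hold
```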
Moreover, we have that \[\mathbf{\beta}(t)=\left(\mathbf{I}_{p}\otimes\mathbf{\Phi}(t)^{\top}\right)\mathbf{b}=\tilde{\mathbf{\Phi}}(t)\mathbf{b},\] with \(\tilde{\mathbf{\Phi}}(t)\in\mathbb{R}^{p\times pk}\) and, similarly, \(\mathbf{\beta}_{c}(t)=\tilde{\mathbf{\Phi}}_{c}(t)\mathbf{b}_{c}\), with \(\tilde{\mathbf{\Phi}}_{c}(t)\in\mathbb{R}^{(p_{c}+1)\times(p_{c}+1)k}\). In accordance with the above considerations, we propose to estimate the parameters by solving the optimization problem \[\operatorname*{arg\,min}_{\mathbf{b},\mathbf{b}_{c}}\frac{1}{2}\int\mathbf{r}(t)^{\top}\mathbf{r}(t)dt+\lambda\sum_{j=1}^{p}||\mathbf{b}_{j}||_{2},\quad\text{s.t.}\ \tilde{\mathbf{L}}\mathbf{b}=\mathbf{0}_{qk}, \tag{3.6}\] where \(\mathbf{r}(t)=\mathbf{y}(t)-\mathbf{Z}_{c}(t)\tilde{\mathbf{\Phi}}_{c}(t)\mathbf{b}_{c}-\mathbf{Z}(t)\tilde{\mathbf{\Phi}}(t)\mathbf{b}\in\mathbb{R}^{n}\) and \(\lambda\) is a tuning parameter controlling the strength of the group-Lasso penalty. The proposed estimator has the same desirable properties as its counterparts in the classical regression framework (Lin et al., 2014) and in the functional case with scalar response (Sun et al., 2020). The zero-sum constraints for each composition guarantee that the estimator remains unchanged under the transformation \(\mathbf{X}(t)\longmapsto\mathbf{S}\mathbf{X}(t)\), where \(\mathbf{S}=\text{diag}(s_{1},\ldots,s_{n})\), with \(s_{i}>0\) for \(i=1,\ldots,n\). Furthermore, the constraints ensure that the proposed methodology is subcompositionally coherent: if we knew that some coefficient curves of \(\mathbf{\beta}(t)\) were zero and estimated the model using the compositions formed by excluding the parts associated with those curves, the resulting estimator would be unchanged. Finally, a direct consequence of the symmetric formulation of problem (3.6) is that the solution is invariant under any permutation of the components of each composition. ### Computation We propose to solve the convex optimization problem (3.6) using an augmented Lagrangian algorithm (Bertsekas, 1982). For a detailed review of the method and its extensions, with applications in Statistics and Machine Learning, see Boyd et al. (2011). Problem (3.6) can be rewritten as \[\operatorname*{arg\,min}_{\mathbf{b},\mathbf{b}_{c}}\frac{1}{2}\mathbf{b}^{\top}\mathbf{K}\mathbf{b}-\mathbf{b}^{\top}\mathbf{J}+\frac{1}{2}\mathbf{b}_{c}^{\top}\mathbf{M}\mathbf{b}_{c}-\mathbf{b}_{c}^{\top}\mathbf{P}+\mathbf{b}_{c}^{\top}\mathbf{Q}\mathbf{b}+\lambda\sum_{j=1}^{p}||\mathbf{b}_{j}||_{2},\quad\text{s.t.}\ \tilde{\mathbf{L}}\mathbf{b}=\mathbf{0}_{qk}, \tag{3.7}\] where the matrices containing functional inner products are denoted by \(\mathbf{K}=\int\tilde{\mathbf{\Phi}}(t)^{\top}\mathbf{Z}(t)^{\top}\mathbf{Z}(t)\tilde{\mathbf{\Phi}}(t)dt\in\mathbb{R}^{pk\times pk}\), \(\mathbf{J}=\int\tilde{\mathbf{\Phi}}(t)^{\top}\mathbf{Z}(t)^{\top}\mathbf{y}(t)dt\in\mathbb{R}^{pk}\), \(\mathbf{M}=\int\tilde{\mathbf{\Phi}}_{c}(t)^{\top}\mathbf{Z}_{c}(t)^{\top}\mathbf{Z}_{c}(t)\tilde{\mathbf{\Phi}}_{c}(t)dt\in\mathbb{R}^{(p_{c}+1)k\times(p_{c}+1)k}\), \(\mathbf{P}=\int\tilde{\mathbf{\Phi}}_{c}(t)^{\top}\mathbf{Z}_{c}(t)^{\top}\mathbf{y}(t)dt\in\mathbb{R}^{(p_{c}+1)k}\) and \(\mathbf{Q}=\int\tilde{\mathbf{\Phi}}_{c}(t)^{\top}\mathbf{Z}_{c}(t)^{\top}\mathbf{Z}(t)\tilde{\mathbf{\Phi}}(t)dt\in\mathbb{R}^{(p_{c}+1)k\times pk}\).
Since \(\mathbf{b}_{c}\) is involved neither in the penalty term nor in the constraint, the optimization problem can be restated as \[\operatorname*{arg\,min}_{\mathbf{b}}\frac{1}{2}\mathbf{b}^{\top}\tilde{\mathbf{K}}\mathbf{b}-\mathbf{b}^{\top}\tilde{\mathbf{J}}+\lambda\sum_{j=1}^{p}||\mathbf{b}_{j}||_{2},\quad\text{s.t.}\ \tilde{\mathbf{L}}\mathbf{b}=\mathbf{0}_{qk}, \tag{3.8}\] where \(\tilde{\mathbf{K}}=\mathbf{K}-\mathbf{Q}^{\top}\mathbf{M}^{-1}\mathbf{Q}\in\mathbb{R}^{pk\times pk}\) and \(\tilde{\mathbf{J}}=\mathbf{J}-\mathbf{Q}^{\top}\mathbf{M}^{-1}\mathbf{P}\in\mathbb{R}^{pk}\). Then, once the solution \(\widehat{\mathbf{b}}\) is obtained, the estimate of the coefficient associated with the control variables is \(\widehat{\mathbf{b}}_{c}=\mathbf{M}^{-1}(\mathbf{P}-\mathbf{Q}\widehat{\mathbf{b}})\). The augmented Lagrangian associated with problem (3.8) is \[L_{\rho}(\mathbf{b},\mathbf{u})=-\mathbf{b}^{\top}\tilde{\mathbf{J}}+\frac{1}{2}\mathbf{b}^{\top}\tilde{\mathbf{K}}\mathbf{b}+\lambda\sum_{j=1}^{p}||\mathbf{b}_{j}||_{2}+\frac{\rho}{2}||\tilde{\mathbf{L}}\mathbf{b}||_{2}^{2}+\mathbf{u}^{\top}\tilde{\mathbf{L}}\mathbf{b},\] where \(\mathbf{u}\in\mathbb{R}^{qk}\) is the Lagrange multiplier and \(\rho\) is the penalty parameter. The augmented Lagrangian method finds the solution of the original problem by iterating between a minimization step and a dual ascent step. The procedure for a fixed \(\lambda\) is summarized in Algorithm 1. We allow the penalty parameter \(\rho\) to increase at each iteration if the constraint error does not decrease sufficiently over the previous iteration; the adjustment scheme follows the guidelines in Bertsekas (1982, p. 123). The first step of the algorithm updates \(\mathbf{b}^{k}\leftarrow\operatorname*{arg\,min}_{\mathbf{b}}L_{\rho^{k-1}}(\mathbf{b},\mathbf{u}^{k-1})\), which is equivalent to solving a standard group-Lasso problem. In our implementation, we employ the Alternating Direction Method of Multipliers (Boyd et al., 2011), but other routines can be used. When the model is fitted along a path of \(\lambda\) values, the solutions \(\widehat{\mathbf{u}}\) and \(\widehat{\mathbf{b}}\) associated with the previous penalty level are used as a warm start for the subsequent fit. **Algorithm 1** Augmented Lagrangian method to solve problem (3.8) ``` Input: b^0, rho^0, u^0, epsilon, k_max k <- 1 er^0 <- max |L~ b^0| while er^{k-1} > epsilon and k <= k_max do b^k <- argmin_b L_{rho^{k-1}}(b, u^{k-1}) er^k <- max |L~ b^k| if er^k > 0.25 er^{k-1} then rho^k <- 10 rho^{k-1} else rho^k <- rho^{k-1} u^k <- u^{k-1} + rho^k L~ b^k k <- k + 1 end while ``` As noted before, the functional compositional predictors and the response variable are observed at each calendar year, not continuously for all \(t\in\mathcal{T}\). Therefore, all the integrals involved in the optimization problem have to be computed from discrete-time observations. In our study, we employ the trapezoidal rule, which is equivalent to approximating the discrete-time data with continuous-time curves by linear interpolation.
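For concreteness, the following minimal Python sketch implements the outer loop of Algorithm 1 on the quadratic form (3.8). Unlike our actual implementation, which uses ADMM for the inner problem, the sketch approximates the inner group-Lasso solve with proximal gradient steps; step sizes and iteration counts are illustrative.

```python
import numpy as np

def group_prox(b, groups, t):
    """Group soft-thresholding: proximal operator of t * sum_j ||b_j||_2."""
    out = b.copy()
    for idx in groups:
        nrm = np.linalg.norm(b[idx])
        out[idx] = 0.0 if nrm <= t else (1 - t / nrm) * b[idx]
    return out

def augmented_lagrangian(K, J, L, groups, lam, rho=1.0, n_outer=50, tol=1e-8):
    """Sketch of Algorithm 1 for problem (3.8); K, J play the roles of
    K-tilde and J-tilde, L that of L-tilde, groups lists index sets b_j."""
    b, u = np.zeros(K.shape[0]), np.zeros(L.shape[0])
    err = np.inf
    for _ in range(n_outer):
        if err <= tol:
            break
        step = 1.0 / np.linalg.norm(K + rho * L.T @ L, 2)  # 1 / Lipschitz const
        for _ in range(200):  # inner group-Lasso solve (proximal gradient)
            grad = K @ b - J + rho * L.T @ (L @ b) + L.T @ u
            b = group_prox(b - step * grad, groups, step * lam)
        new_err = np.max(np.abs(L @ b))
        if new_err > 0.25 * err:        # Bertsekas-style penalty update
            rho *= 10
        err = new_err
        u = u + rho * (L @ b)           # dual ascent step
    return b, u
```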
## 4 Simulations We performed a simulation study to compare the performance of our proposal, based on a constrained group Lasso (CGL), with two competitors. The first is a baseline method, namely a standard group Lasso in which the reference level \(r\) is chosen at random (BGL). The second is a naive approach, which consists of estimating the log-contrast regression model with the Lasso penalty of Lin et al. (2014) at each time \(t\in\mathcal{T}\) and smoothing the resulting estimates. We generate the compositional data similarly to the previous works of Lin et al. (2014), Shi, Zhang and Li (2016) and Sun et al. (2020). The discrete-time grid is equispaced within the interval \(\mathcal{T}=[0,1]\) and consists of \(20\) time points \(t_{1},\ldots,t_{20}\). We consider scenarios with \(q=4\) compositions, each with \(p_{j}\) components, \(j=1,\ldots,q\). To introduce dependence between the covariates, we use a compound symmetry correlation matrix \(\mathbf{\Sigma}_{X}\in\mathbb{R}^{p_{j}\times p_{j}}\) with unit variances and correlations \(\rho_{X}\). To account for time dependence, we consider a matrix \(\mathbf{\Sigma}_{T}\in\mathbb{R}^{20\times 20}\) with first-order autoregressive structure, unit variance and autoregressive parameter \(\rho_{T}\). For each observation \(i=1,\ldots,n\), the \(j\)-th composition over time is obtained by simulating \(\mathbf{w}_{ij}=[\mathbf{w}_{ij}(t_{1})^{\top},\ldots,\mathbf{w}_{ij}(t_{20})^{\top}]^{\top}\sim\mathcal{N}(\mathbf{0}_{20p_{j}},\sigma_{X}^{2}(\mathbf{\Sigma}_{T}\otimes\mathbf{\Sigma}_{X}))\) and then normalizing as \[x_{ijl}(t_{v})=\frac{\exp\left\{w_{ijl}(t_{v})\right\}}{\sum_{m=1}^{p_{j}}\exp\left\{w_{ijm}(t_{v})\right\}},\] for \(i=1,\ldots,n\), \(l=1,\ldots,p_{j}\) and \(v=1,\ldots,20\). The number of bases for cubic splines is set to \(k=5\) and the number of components \(p_{j}\) is the same across compositions, equal to \(p/q\). Only 3 coefficients are non-null for each composition. The coefficient vectors are \(\mathbf{b}_{1}=[1,-1,0,0,0]^{\top}\), \(\mathbf{b}_{2}=[0,0,-0.5,1,0]^{\top}\), \(\mathbf{b}_{3}=[-1,1,0.5,-1,0]^{\top}\), \(\mathbf{b}_{p_{1}+1}=[0.5,0,0,-0.5,1]^{\top}\), \(\mathbf{b}_{p_{1}+2}=[0,1,-1,0,-1]^{\top}\), \(\mathbf{b}_{p_{1}+3}=[-0.5,-1,1,0.5,0]^{\top}\), \(\mathbf{b}_{p_{2}+1}=[0.5,-1,-1,1,0]^{\top}\), \(\mathbf{b}_{p_{2}+2}=[0,1,1,0,0]^{\top}\), \(\mathbf{b}_{p_{2}+3}=[-0.5,0,0,-1,0]^{\top}\), \(\mathbf{b}_{p_{3}+1}=[1,0,0.5,0,-1]^{\top}\), \(\mathbf{b}_{p_{3}+2}=[0,0,-0.5,0,0]^{\top}\), \(\mathbf{b}_{p_{3}+3}=[-1,0,0,0,1]^{\top}\). We also consider scenarios with \(p=40\) and \(q=1\), with the same coefficients and the same degree of sparsity as for \(p=40\) and \(q=4\). For simplicity, we include neither the intercept nor other control variables. The response variables are generated from model (3.5), with error terms distributed as \(\mathcal{N}(0,\sigma^{2})\), where \(\sigma^{2}\) is set to achieve specific signal-to-noise ratios (SNR). We simulated different settings \((n,p,q)=(50,40,1),(50,40,4),(50,100,4)\) and several combinations of parameters \(\sigma_{X}^{2}=9\), \(\rho_{T}=(0.2,0.6)\), \(\rho_{X}=(0.2,0.6)\), \(\text{SNR}=(2,4)\). The tuning parameters \(\lambda\) and \(k\) are selected by ten-fold cross-validation and the one-standard-error rule (Hastie et al., 2009, p. 244).
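The following minimal Python sketch reproduces the covariate-generating mechanism for a single composition; the parameter values correspond to one simulation configuration and the random seed is arbitrary.

```python
import numpy as np

# Covariates for one composition: p_j parts observed at 20 time points
rng = np.random.default_rng(1)
n, p_j, T = 50, 10, 20
rho_X, rho_T, sigma2_X = 0.2, 0.6, 9.0

# Compound symmetry (unit variances, correlation rho_X) and AR(1) structures
Sigma_X = rho_X * np.ones((p_j, p_j)) + (1 - rho_X) * np.eye(p_j)
Sigma_T = rho_T ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
cov = sigma2_X * np.kron(Sigma_T, Sigma_X)

w = rng.multivariate_normal(np.zeros(T * p_j), cov, size=n)
w = w.reshape(n, T, p_j)                          # (unit, time, part)
x = np.exp(w) / np.exp(w).sum(axis=2, keepdims=True)   # softmax normalization
assert np.allclose(x.sum(axis=2), 1.0)            # compositions on the simplex
```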
We use four different measures to compare our proposal with the competitors. The prediction error is the average prediction mean squared error \(\sum_{v=1}^{20}||\mathbf{y}(t_{v})-\mathbf{1}_{n}\widehat{\beta}_{0}(t_{v})-\mathbf{Z}(t_{v})\widehat{\mathbf{\beta}}(t_{v})||_{2}^{2}/(20n)\), computed on an independent test sample of size 1000. The estimation error is measured by \(\sum_{j=1}^{p}\left(\int_{\mathcal{T}}|\widehat{\beta}_{j}(t)-\beta_{j}(t)|^{2}dt\right)^{\frac{1}{2}}/p\). As variable selection measures, we use the false positive rate (FPR) and the false negative rate (FNR), where positives and negatives refer to non-null and null coefficients, respectively. The naive method does not include a procedure for selecting whole coefficient curves, but only a variable selection procedure at each time \(t\); therefore, we select its active predictors based on empirical evidence and, for a fair comparison, use the same criterion for all three methods. As in Sun et al. (2020), the estimated index set \(\widehat{\mathcal{S}}\) of non-null coefficients is defined as \[\widehat{\mathcal{S}}=\left\{j:\frac{\left(\int_{\mathcal{T}}\widehat{\beta}_{j}^{2}(t)dt\right)^{\frac{1}{2}}}{\sum_{j'=1}^{p}\left(\int_{\mathcal{T}}\widehat{\beta}_{j'}^{2}(t)dt\right)^{\frac{1}{2}}}\geq\frac{1}{p},\ j=1,\ldots,p\right\}.\] The means and standard errors of the performance measures for the scenario with SNR = 2 are reported in Tables 3 and 4. From Table 3, we see that the proposed CGL has variable selection performance similar to BGL when \(n>p\), although the latter tends to have higher false positive rates. This behavior is due to the automatic inclusion of the randomly chosen baseline in BGL and is, in fact, even more pronounced for \(q=4\). The advantages of the proposed CGL can be appreciated in the scenarios with \(p>n\), where it clearly outperforms the competitors. As seen in Table 4, the proposed CGL also performs slightly better in terms of prediction and estimation error and, as before, the difference with the competitors is emphasized for \(p>n\). Furthermore, increasing the correlation between the components leads to lower prediction errors, regardless of the method; this is because a small correlation determines few dominating components in each composition. As expected, the naive method has inferior performance on all measures in all settings, since it is an unsophisticated approximation of the functional nature of the data. Another expected behavior can be seen in Tables 5 and 6, which show that increasing the SNR leads to improved performance. ## 5 Results on real data To measure the relative importance of the causes in the \(j\)-th age class, we consider the relative squared \(L_{2}\) norm of the group-specific coefficients between years \(t\) and \(t+1\), \[\sum_{l=1}^{p_{j}}\int_{t}^{t+1}|\beta_{jl}(s)|^{2}ds\Bigg{/}\sum_{j=1}^{4}\sum_{l=1}^{p_{j}}\int_{t}^{t+1}|\beta_{jl}(s)|^{2}ds.\]
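A minimal Python sketch of this measure, assuming a hypothetical array of fitted coefficient curves evaluated on a fine time grid, is given below.

```python
import numpy as np

def relative_importance(beta_hat, group_of, grid, t):
    """beta_hat: (p, G) fitted coefficient curves on a time grid of length G;
    group_of: (p,) age-class label of each curve; returns one share per class."""
    mask = (grid >= t) & (grid <= t + 1)
    sq_vals = beta_hat[:, mask] ** 2
    dx = np.diff(grid[mask])
    # trapezoidal approximation of each curve's integrated squared norm
    sq = ((sq_vals[:, :-1] + sq_vals[:, 1:]) / 2 * dx).sum(axis=1)
    shares = np.array([sq[group_of == j].sum() for j in np.unique(group_of)])
    return shares / shares.sum()

# Toy usage: 5 hypothetical curves in 2 age classes on a yearly grid
grid = np.linspace(1965, 2012, 48)
beta_hat = np.random.randn(5, grid.size)
print(relative_importance(beta_hat, np.array([1, 1, 1, 2, 2]), grid, 1980))
```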
The results are reported in Figure 1 and show that, for both men and women, the most important age class is 40-64. The reason can be attributed to the inclusion of countries from Eastern Europe, whose compositional trajectories in the age group 40-64 are very different from those of the other high-longevity nations. The result is consistent with the demographic literature, in which traditional life expectancy decomposition methods are applied longitudinally to single countries. For example, Mesle (2004) shows that in many former Soviet countries, decreases in life expectancy in the period 1965-2000 for males can be attributed to the rise in mortality at working ages. This is also in line with the substantial sex difference in the contribution of the age group 5-39. Another expected finding is the decline in importance of the age group 0-4, regardless of sex, which is associated with a progressive reduction in infant mortality. We also notice an increasing importance of the age class 65+ for men. This can be explained by the faster progress of men in reducing heart-disease-related mortality in recent decades, a pattern observed by Feraldi and Zarulli (2022). To understand the results, it is worth recalling that the interpretation of the coefficients of the log-contrast model differs from that of the standard linear regression model. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \multicolumn{2}{c}{Configuration} & \multicolumn{5}{c}{Prediction error} & \multicolumn{5}{c}{Estimation error} \\ \hline \(\rho_{X}\) & \(\rho_{T}\) & \(n\) & \(p\) & \(q\) & CGL & BGL & Naive & CGL & BGL & Naive \\ \hline \(0.2\) & \(0.2\) & \(50\) & \(40\) & \(1\) & \(4.29\) (\(0.01\)) & \(4.31\) (\(0.01\)) & \(7.71\) (\(0.08\)) & \(2.98\) (\(0.03\)) & \(3.05\) (\(0.03\)) & \(5.97\) (\(0.04\)) \\ & & \(50\) & \(40\) & \(4\) & \(4.30\) (\(0.01\)) & \(4.36\) (\(0.02\)) & \(6.78\) (\(0.05\)) & \(2.85\) (\(0.03\)) & \(3.08\) (\(0.04\)) & \(5.71\) (\(0.05\)) \\ & & \(50\) & \(100\) & \(4\) & \(4.31\) (\(0.01\)) & \(4.46\) (\(0.02\)) & \(8.75\) (\(0.09\)) & \(1.53\) (\(0.02\)) & \(1.71\) (\(0.02\)) & \(3.28\) (\(0.02\)) \\ \(0.2\) & \(0.6\) & \(50\) & \(40\) & \(1\) & \(4.30\) (\(0.02\)) & \(4.33\) (\(0.02\)) & \(7.90\) (\(0.11\)) & \(3.00\) (\(0.03\)) & \(3.09\) (\(0.03\)) & \(6.02\) (\(0.05\)) \\ & & \(50\) & \(40\) & \(4\) & \(4.28\) (\(0.01\)) & \(4.34\) (\(0.02\)) & \(6.71\) (\(0.06\)) & \(2.86\) (\(0.03\)) & \(3.09\) (\(0.04\)) & \(5.69\) (\(0.05\)) \\ & & \(50\) & \(100\) & \(4\) & \(4.46\) (\(0.02\)) & \(4.61\) (\(0.02\)) & \(8.90\) (\(0.08\)) & \(1.52\) (\(0.02\)) & \(1.72\) (\(0.02\)) & \(3.29\) (\(0.02\)) \\ \(0.6\) & \(0.2\) & \(50\) & \(40\) & \(1\) & \(2.21\) (\(0.01\)) & \(2.22\) (\(0.01\)) & \(4.25\) (\(0.05\)) & \(2.93\) (\(0.03\)) & \(3.02\) (\(0.03\)) & \(6.00\) (\(0.04\)) \\ & & \(50\) & \(40\) & \(4\) & \(2.21\) (\(0.01\)) & \(2.19\) (\(0.01\)) & \(3.49\) (\(0.03\)) & \(2.85\) (\(0.03\)) & \(3.04\) (\(0.04\)) & \(5.72\) (\(0.04\)) \\ & & \(50\) & \(100\) & \(4\) & \(2.22\) (\(0.01\)) & \(2.28\) (\(0.01\)) & \(4.57\) (\(0.04\)) & \(1.51\) (\(0.02\)) & \(1.68\) (\(0.02\)) & \(3.28\) (\(0.02\)) \\ \(0.6\) & \(0.6\) & \(50\) & \(40\) & \(1\) & \(2.18\) (\(0.01\)) & \(2.18\) (\(0.01\)) & \(4.15\) (\(0.05\)) & \(3.02\) (\(0.04\)) & \(3.07\) (\(0.04\)) & \(6.03\) (\(0.04\)) \\ & & \(50\) & \(40\) & \(4\) & \(2.11\) (\(0.01\)) & \(2.14\) (\(0.01\)) & \(3.36\) (\(0.03\)) & \(2.92\) (\(0.03\)) & \(3.17\) (\(0.04\)) & \(5.81\) (\(0.04\)) \\ & & \(50\) & \(100\) & \(4\) & \(2.24\) (\(0.01\)) & \(2.32\) (\(0.01\)) & \(4.61\) (\(0.05\)) & \(1.53\) (\(0.02\)) & \(1.72\) (\(0.02\)) & \(3.30\) (\(0.02\)) \\ \hline \end{tabular} \end{table} Table 6: Means and standard errors (in parentheses) of prediction and estimation errors for the three methods with SNR = 4, based on 100 simulations.
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \multicolumn{2}{c}{Configuration} & \multicolumn{5}{c}{FPR(\(\%\))} & \multicolumn{5}{c}{FNR(\(\%\))} \\ \hline \(\rho_{X}\) & \(\rho_{T}\) & \(n\) & \(p\) & \(q\) & CGL & BGL & Naive & CGL & BGL & Naive \\ \hline \(0.2\) & \(0.2\) & \(50\) & \(40\) & \(1\) & \(0.00\) (\(0.00\)) & \(0.00\) (\(0.00\)) & \(0.21\) (\(0.09\)) & \(1.75\) (\(0.34\)) & \(1.50\) (\(0.32\)) & \(7.17\) (\(0.43\)) \\ & & \(50\) & \(40\) & \(4\) & \(0.00\) (\(0.00\)) & \(0.00\) (\(0.00\)) & \(1.08\) (\(0.08\)) & \(1.33\) (\(0.31\)) & \(1.83\) (\(0.35\)) & \(7.83\) (\(0.33\)) \\ & & \(50\) & \(100\) & \(4\) & \(1.14\) (\(0.11\)) & \(3.48\) (\(0.17\)) & \(4.57\) (\(0.23\)) & \(0.00\) (\(0.00\)) & \(0.00\) (\(0.00\)) & \(5.08\) (\(0.51\)) \\ \(0.2\) & \(0.6\) & \(50\) & \(40\) & \(1\) & \(0.00\) (\(0.00\)) & \(0.00\) (\(0.04\)) & \(0.14\) (\(0.06\)) & \(1.33\) (\(0.31\)) & \(1.75\) (\(0.34\)) & \(7.25\) (\(0.37\)) \\ \hline \end{tabular} \end{table} Table 3: Means and standard errors (in parentheses) of FPR and FNR for the three methods with SNR = 2, based on 100 simulations. The main reason lies in the zero-sum constraint, which reflects the fact that one component can increase its relative importance only if one or more of the others decrease (Coenders and Pawlowsky-Glahn, 2020). For model (3.5), it can be shown that the following interpretation holds at each time \(t\): multiplying by a factor \(c\) the ratio of one component \(x_{jl}(t)\) of the \(j\)-th composition over each of the other parts \(x_{jm}(t)\), \(m\neq l\), leads to a change of \(\log(c)\beta_{jl}(t)\) in the expected value of the response variable. For instance, with \(c=2\), doubling the ratio of one cause to every other cause in an age class changes expected life expectancy by \(\log(2)\beta_{jl}(t)\). Equivalently, we can also interpret the coefficients jointly as follows: the expected value of the response variable grows when the relative importance of components with positive coefficients increases and that of components with negative coefficients decreases. However, interpretation over time is not straightforward, and we make use of additional plots to elucidate it, following Sun et al. (2020). The idea is to compare the smoothed trajectories of log compositions for three clusters of countries with the estimated coefficient curves. For each predictor and each year, the nations are divided into three groups characterized by low, medium and high life expectancy, thus giving rise to time-varying partitions. For each group, the smoothed values together with their \(95\%\) confidence bands are calculated using local regression. In this way, we can also check whether our model describes relationships present in the raw data. Figure 2 shows the resulting plots for four relevant causes. The graphs show that our model provides realistic results: increases (decreases) in the difference of the prevalence of a cause of death between high- and low-longevity countries are reflected in increasing (decreasing) coefficient curves. For example, considering the age class 40-64 for females, in the '60s countries with a higher prevalence of death by neoplasms and a lower prevalence of circulatory diseases have higher life expectancy. In subsequent years, the difference in the prevalence of neoplasms between high- and low-longevity countries increases, and this is reflected in the increasing estimated curve, while the reverse holds for circulatory diseases. The estimated coefficient curves for males are reported in Figure 3. The positive increasing trend of neoplasms in the age classes 5-39 and 40-64 is a clear effect of substitute mortality, which has been defined as "that mortality which results from a decrease in another specific disease" (Van De Water, 1997).
That is, in many countries with high longevity, cancer mortality has become the main cause of death due to the reduction of other conditions, such as those related to the circulatory system. In fact, circulatory diseases can be seen to have a negative effect for all age groups, excluding 0-4. Another cause with a negative decreasing effect in age class 5-39 is digestive diseases. It can be linked to the high incidence of this class of diseases, particularly liver cirrhosis, observed in early adulthood for Eastern European nations (Blachier et al., 2013) and other Commonwealth countries, such as the UK (Lewer et al., 2020). Figure 1: Relative magnitude of the age group-specific coefficients for females and males. For the age class accounting for senescent mortality, the effect of circulatory diseases is negative and strongly increasing, concurrently with the positive increasing effect of nervous, respiratory and infectious diseases. These are conditions whose susceptibility is higher in the elderly. The estimated positive increasing effect reflects the process of population aging, that is, the increase in the proportion of the population aged 65 and over, which is particularly vulnerable to the aforementioned diseases. It is interesting to highlight the sign change of infectious diseases, which means that in the first period this condition was associated with low-longevity countries. The results for females are reported in Figure 4. Figure 2: _Smoothed curves of log composition of some causes of death for three clusters of countries, with the estimated coefficient curves below. For each predictor and year, the nations are divided into three groups characterized by low (in yellow), medium (in light blue) and high (in green) life expectancy._ Compared to males, the increasing positive effect of neoplasms and the increasing negative effect of circulatory diseases in age class 40-64 overshadow all others in terms of magnitude. In this age group, differently from males, skin and urogenital diseases are selected. On the contrary, endocrine and infectious diseases, as well as lung cancer, are not included. One possible explanation for the non-inclusion of lung cancer is its high mortality rate in both low- and high-life-expectancy countries for women (Jani et al., 2021). In the senescent age group, the effect of respiratory diseases is positive decreasing and, unlike males, there is an increase in the prevalence of urogenital diseases over time for high-longevity countries. This cause, which is also selected for age class 40-64, appears to be a sex-specific cause. To assess the stability of the selection procedure, we generated 500 bootstrap samples and used leave-one-out cross-validation to select the tuning parameters, as for the model estimated with the original data. The results reported in Figure 5 show that the variable selection is quite stable. In general, our proposal appears to be able to select the relevant predictors, at the cost of including some causes which may not have much effect on life expectancy. This is the case of external diseases in the age class 5-39 for both sexes, as well as neoplasms in the age class 5-39 for females and lung cancer and circulatory diseases in the age class 40-64 for males. On the other hand, infectious diseases for the age group 0-4 is selected in more than \(70\%\) of the bootstrap samples for both sexes, indicating that it may play an important role, although its coefficient is estimated to be zero.
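Before turning to the discussion, the coefficient interpretation recalled above can be checked numerically. The following is a minimal Python sketch for a single composition at a fixed time \(t\), with illustrative made-up coefficient and composition values (not those estimated from the data): multiplying one part by a factor \(c\) and re-closing the composition changes the log-contrast predictor by exactly \(\log(c)\beta_{l}\), because the closure constant cancels under the zero-sum constraint.

```python
import numpy as np

# Illustrative zero-sum coefficients and a 5-part composition (made-up values).
beta = np.array([0.8, -0.3, 0.1, -0.4, -0.2])   # sums to zero
assert abs(beta.sum()) < 1e-12
x = np.array([0.10, 0.25, 0.30, 0.20, 0.15])     # parts sum to one

def log_contrast(x, beta):
    """Linear predictor of a log-contrast model at a fixed time t."""
    return beta @ np.log(x)

c, l = 1.5, 0
x_new = x.copy()
x_new[l] *= c          # raise part l relative to every other part
x_new /= x_new.sum()   # re-close the composition

# The closure constant log(Z) cancels because sum(beta) = 0:
print(log_contrast(x_new, beta) - log_contrast(x, beta))  # = log(c) * beta[l]
print(np.log(c) * beta[l])
```

The two printed values coincide, illustrating why only relative (not absolute) changes of the parts are interpretable in this model.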
Figure 3: Estimated coefficient curves for the four age classes, males. ## 6 Discussion We introduced a functional concurrent regression model with compositional covariates in the spirit of the proposal by Sun et al. (2020). It allows us to explain the evolution of life expectancy at birth for several countries as a function of the compositions derived from cause-specific mortality rates of four distinct age groups. The method involves a B-spline expansion of the unknown functional coefficients coupled with a group-Lasso penalty, enabling variable selection at the function level and consequently high interpretability of the results. The methodology is implemented within the R package fcrc, available at [https://github.com/emanuelegdepaoli/fcrc](https://github.com/emanuelegdepaoli/fcrc), where the code for reproducing the analysis, the simulation studies and all images of the paper is also included. One major finding is that life expectancy is mainly driven by mortality at age 40-64 for women, while for men the 65+ and 5-39 age groups are also relevant. Not surprisingly, we found that circulatory diseases are increasingly relevant in determining the life expectancy of countries: the lower the relative importance of circulatory diseases, the higher the life expectancy. We also found an increasing relevance of digestive diseases for young men and women, and of lung cancer for young men only. Other results, such as the increasingly positive effect of neoplasms at age 40-64 and of diseases of the nervous system at age 65+ (that is, the higher the relative importance of these causes, the higher the life expectancy), can be explained in terms of a "substitution effect", which means that the increasing relevance of these causes is an indirect effect of the reduction of other causes. Figure 4: Estimated coefficient curves for the four age classes, females. We should keep in mind that the sample is made up of several countries with different patterns of overall and cause-specific mortality. In particular, Eastern European countries that underwent a serious mortality crisis after the fall of the Soviet Union have a peculiar pattern that might have driven some of these results. The proposed model allows us to simultaneously consider all causes of death and age groups in determining the evolution of overall mortality. This is increasingly important, since it has been observed that the composition of cause-specific mortality is becoming increasingly diversified (Bergeron-Boucher, Aburto and van Raalte, 2020), thus making analyses based on a single cause of death less reliable. We consider the summary measure of life expectancy at birth, but other measures such as the modal age at death (Canudas-Romo, 2008), which is not affected by infant mortality, or lifespan disparity (Vaupel and Canudas-Romo, 2003), which is a measure of compression of age-specific mortality, can be used as a response variable. The authors acknowledge financial support from the PRIN project "Unfolding the SEcrets of LongEvity: Current Trends and future prospects" (SELECT), project number 20177BRJXS. Figure 5: Proportion of the causes of death selected in 500 bootstrap samples, for females and males. In gray, the bars of the selected predictors from fitting the model to the original data, in black the bars of the estimated null coefficients. The authors also thank Emilio Zagheni, Ugofilippo Basellini
and other scholars from the Max Planck Institute for Demographic Research for useful discussion during the presentation of earlier versions of this work.
2302.10001
STB-VMM: Swin Transformer Based Video Motion Magnification
The goal of video motion magnification techniques is to magnify small motions in a video to reveal previously invisible or unseen movement. Its uses extend from bio-medical applications and deepfake detection to structural modal analysis and predictive maintenance. However, discerning small motion from noise is a complex task, especially when attempting to magnify very subtle, often sub-pixel movement. As a result, motion magnification techniques generally suffer from noisy and blurry outputs. This work presents a new state-of-the-art model based on the Swin Transformer, which offers better tolerance to noisy inputs as well as higher-quality outputs that exhibit less noise, blurriness, and artifacts than prior-art. Improvements in output image quality will enable more precise measurements for any application reliant on magnified video sequences, and may enable further development of video motion magnification techniques in new technical fields.
Ricard Lado-Roigé, Marco A. Pérez
2023-02-20T14:21:56Z
http://arxiv.org/abs/2302.10001v2
# STB-VMM: Swin Transformer Based Video Motion Magnification ###### Abstract The goal of video motion magnification techniques is to magnify small motions in a video to reveal previously invisible or unseen movement. Its uses extend from bio-medical applications and deepfake detection to structural modal analysis and predictive maintenance. However, discerning small motion from noise is a complex task, especially when attempting to magnify very subtle, often sub-pixel movement. As a result, motion magnification techniques generally suffer from noisy and blurry outputs. This work presents a new state-of-the-art model based on the Swin Transformer, which offers better tolerance to noisy inputs as well as higher-quality outputs that exhibit less noise, blurriness, and artifacts than prior-art. Improvements in output image quality will enable more precise measurements for any application reliant on magnified video sequences, and may enable further development of video motion magnification techniques in new technical fields. keywords: Computer vision, Deep Learning, Swin Transformer, Motion Magnification, Image Quality Assessment ## 1 Introduction Video Motion Magnification (VMM) is a computer vision task consisting of magnifying small motions in a video sequence, with uses in many fields, from bio-medical applications [1; 2; 3] and deepfake detection [4] to structural modal analysis [5] and condition monitoring. These techniques act like a microscope for motion, revealing previously invisible or unseen movements. Despite this simple premise, discerning small motions from noise is a complex task, especially when attempting to magnify very subtle, often sub-pixel movement. As a result, motion magnification techniques generally suffer from noisy and blurry outputs. Therefore, multiple authors have explored techniques to remediate these shortcomings and improve magnification quality and performance. Early motion magnification algorithms, such as [6], used a Lagrangian approach, reliant on motion tracking or optical flow, to isolate motion prior to magnification. However, this approach is very computationally expensive and difficult to execute artifact-free, especially in regions affected by occlusion boundaries and complex motion. On the other hand, more modern techniques [7; 8; 9; 10; 11] have relied on Eulerian approaches, which observe the changes in a fixed region of pixels instead of tracking features in time and space. These Eulerian approaches are less computationally expensive, perform better with small motions, and generally yield better magnification results. Nevertheless, they still display noticeable blurring and artifacting due to the complex challenge of designing filters that remove noise while not interfering with motion magnification. For this reason, Oh et al. [12] proposed a novel learning-based approach to VMM. Learning-based motion magnification departs from the use of hand-designed filters in favor of learning those filters with Convolutional Neural Networks (CNN). This method achieved higher-quality magnification, yielding fewer ringing artifacts and showing better noise characteristics than previously published methods. However, its reliance on additional temporal filtering to improve image quality sometimes produces errors in magnification.
While it is possible to obtain fairly clear results with no temporal filtering, the image quality generally improves when filtering is applied, as it removes unwanted motion and noise before learning-based magnification. The method presented in this work improves on the learnable filters and abandons temporal filtering to ensure correct magnification outputs, resulting in a novel architecture capable of producing state-of-the-art results in terms of magnified image quality. The main contributions of this work are: 1. A novel motion magnification architecture based on the Swin Transformer. 2. A discussion, comparison, and validation of learning-based VMM techniques, both in a quantitative and qualitative sense. 3. The proposed novel architecture outperforms relevant VMM techniques in both quantitative evaluation and observed output quality, offering higher-quality magnification, less blurry frame reconstruction, better noise tolerance, and fewer artifacts than prior-art. The following section summarizes previous influential works and their relation to the development of the presented model. Section three describes in detail the model's architecture and its training process. The fourth section presents results and comparisons of the model's performance, focusing on magnification and image quality with respect to prior work. Finally, the conclusions of this paper are summarized in section five. ## 2 Related work ### Learning-based video motion magnification Eulerian approaches to video motion magnification function by decomposing video sequences into motion representations that can later be manipulated mathematically and then reconstructed into magnified frames. On the other hand, Lagrangian approaches explicitly track a pixel or feature's movement throughout a video sequence. This distinction between Lagrangian and Eulerian approaches is not dissimilar to the use of the same terms in fluid dynamics, where Lagrangian methods [6] track a volume of fluid through the flow, while Eulerian approaches [8; 7; 9] study the evolution of the flow in a fixed volume in space. Eulerian-based methods generally have the upper hand when processing small motion but produce blurry results when encountering large motion. The technique presented in this paper belongs to the Eulerian approach and is inspired by Oh et al.'s learning-based video motion magnification [12]. Eulerian techniques generally consist of three stages: spatial decomposition, motion isolation and manipulation, and representation denoising. From this blueprint, different authors have proposed increasingly sophisticated techniques to improve magnification quality and performance, as reflected in Table 1. In technical terms, the motion magnification problem can be summarized as follows. Given a signal \(I(x,t)\) representing image intensity at position \(x\) and time \(t\), and \(\delta(t)\) representing translational motion in time such that \[I(x,t)=f(x+\delta(t));\ I(x,0)=f(x) \tag{1}\] the goal of motion magnification is to synthesize the signal \[\hat{I}(x,t)=f(x+(1+\alpha)\cdot\delta(t)) \tag{2}\] for some amplification factor \(\alpha\). In practice, only certain frequencies of motion \(\delta(t)\) are useful to motion magnification, so a selector \(T(\cdot)\) is applied to \(\delta(t)\), which is typically a temporal bandpass filter. Prior to learning-based VMM (LB-VMM), magnification techniques relied on multi-frame temporal filtering to isolate motions of interest from random noise [7; 8; 9; 13; 11].
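To make Eqs. (1)-(2) concrete, the following toy sketch (a minimal illustration only, not any published implementation) magnifies a known sub-pixel translation of a 1-D intensity profile; in real methods the motion representation must be estimated from pixel data rather than assumed known.

```python
import numpy as np

def profile(x):
    """A smooth 1-D intensity profile f(x): a Gaussian bump centered at 32."""
    return np.exp(-0.5 * ((x - 32.0) / 3.0) ** 2)

x = np.arange(64, dtype=float)
alpha = 20.0    # magnification factor
delta_t = 0.1   # sub-pixel translation at time t (known only in this toy)

frame_t = profile(x + delta_t)                  # I(x, t)  = f(x + delta(t))
magnified = profile(x + (1 + alpha) * delta_t)  # Eq. (2): f(x + (1+a)*delta(t))

# The 0.1-pixel shift is invisible on the integer grid; the 2.1-pixel one is not.
print(np.argmax(frame_t), np.argmax(magnified))
```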
By contrast, the learning-based approach [12] directly employs CNNs to both filter noise and extract features, achieving comparable or better quality than prior-art without using temporal filtering. \begin{table} \begin{tabular}{p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt}} \hline \hline **Method** & **Liu et al. [6]** & **Wu et al. [7]** & **Wadhwa et al. [8]** & **Wadhwa et al. [9]** & **Zhang et al. [11]** & **LB-VMM [12]** & **STB-VMM** \\ \hline **Spatial decomposition** & Tracking, optical flow & Laplacian pyramid & Steerable filters & Riesz pyramid & Steerable filters & Deep convolution layers & Swin Transformer \\ **Motion isolation** & - & Temporal bandpass filter & Temporal bandpass filter & Temporal bandpass filter & Temporal bandpass filter (2nd order derivative) & Subtraction & Subtraction \\ **Representation denoising** & Expectation-Maximization & - & Amplitude weighted Gaussian filtering & Amplitude weighted Gaussian filtering & Amplitude weighted Gaussian filtering & Trainable convolution filtering & Swin Transformer \\ \hline \hline \end{tabular} \end{table} Table 1: Motion magnification techniques summary table. Adapted from [12]. The LB-VMM model is composed of three stages: encoder, manipulator, and decoder. Said model is designed to accept two frames and return a single motion-magnified frame. The goal of the encoder is to extract relevant features from each of the two input frames and yield a visual and a motion representation. The motion representation of both input frames is then passed to the manipulator, which subtracts both representations and magnifies the result by an arbitrary parameter \(\alpha\) defined by the user. Finally, the results of the manipulator and the previously-obtained visual representation enter the decoder, where the motion and visual components are reconstructed into a motion-magnified frame. These three CNN-based components allow for flexible learnable filters that are better suited to the task of motion magnification and thus yield better quality magnification results. To train the model, and given the impossibility of obtaining real motion-magnified video pairs, Oh et al. generated and used a fully-synthetic dataset, built by moving segmented objects from the PASCAL VOC [14] dataset over background images taken from MS COCO [15]. Careful consideration was paid to the generation of the dataset to ensure accurate pixel and sub-pixel motion as well as learnability. The learning examples in the dataset are parametrized to make sure they are within a defined range. Specifically, the dataset's magnification is upper-limited to an \(\alpha\) magnification factor of 100, and input motion is sampled so that magnified motion does not exceed 30 pixels. ### Transformers as a Computer Vision tool CNNs have been a staple of the Computer Vision (CV) field in the last few years, with many of the top-performing models having made extensive use of them [16; 17; 18]. This period roughly started after Krizhevsky et al. [17] won the ImageNet Large Scale Visual Recognition Challenge [19; 20] (ILSVRC) on September 30\({}^{th}\) 2012, which spurred many publications employing CNNs and GPUs to accelerate deep learning. Through the use of filters, these networks generate feature maps that summarize an image's most relevant parts.
These filters capture relevant local information by the very nature of the convolution operation, which, combined with multi-scale architectures [21; 22], results in rich feature maps that can efficiently obtain a representation of an image's content, both in a local and global context. Recently, the CV field has been revolutionized yet again by the Vision Transformer (ViT) [23], which, employing the attention mechanism, has demonstrated state-of-the-art performance in many CV tasks. The attention mechanism was first popularized in the field of Natural Language Processing (NLP) by Vaswani et al. [24], where the transformer architecture has become the de-facto standard. The attention mechanism can be described as a mapping from a query and a set of key-value pairs into an output. The output, represented in vector format, is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function taking into account the query and the corresponding key [24]. The transformer was the first model which exclusively relied on self-attention to compute representations of its input and output without using sequence-aligned recursive neural networks or convolution operations. Unlike CNNs, transformers lack translation invariance and a locally-restricted receptive field; in their place, transformers offer permutation invariance. Said feature enabled NLP models to infer relations between words and ideas much further into a text than previous recurrent models could. However, CV applications require the processing of grid-structured data, which cannot trivially be processed by a transformer. The ViT [23] overcame this burden by mapping grid-structured data into sequential data by splitting the image into patches. Patches are then flattened into vectors and embedded into a lower dimension. These flattened patches are then summed with positional embeddings and fed as a sequence to a standard transformer encoder. Image patches essentially become sequence tokens, just like words in NLP; in fact, ViT uses the exact same encoder described in [24]. Later, Microsoft researchers improved on the ViT by publishing the Swin Transformer, a hierarchical vision transformer using shifted windows [25]. This work further refined the solution to adapt the original transformer from language to vision. The Swin Transformer addressed issues caused by large discrepancies in the scale of visual entities while limiting self-attention computation to non-overlapping local windows, yet still allowing for cross-window interaction. This limitation on the scope of self-attention significantly reduced the computational complexity, which otherwise scales quadratically with respect to image size, allowing for the processing of higher-resolution images that were previously unmanageable. Further developments in the CV field have implemented the Swin Transformer for various tasks, achieving state-of-the-art performance [26; 27; 28]. ### SwinIR image restoration Inspired by the recent prominence of the transformer and its success in many CV problems such as image classification [29; 30; 25; 31; 32; 33], object detection [34; 35; 36], segmentation [30; 37; 38], crowd counting [39; 40] and image restoration [41; 42; 43], Liang et al. [27] proposed a new state-of-the-art image restoration model based on the Swin Transformer [25].
The SwinIR model again consists of three modules: a shallow feature extractor, a transformer-based deep feature extractor, and a high-quality image reconstruction module. This structure offers excellent performance in various image restoration tasks such as image super-resolution, JPEG compression artifact reduction, and image denoising. These applications are very relevant to VMM, as current state-of-the-art methods can be negatively affected by noisy input images, yielding much noisier and blurrier results, especially at large magnification rates. This occurs because noise is not properly filtered beforehand: as the motion gets magnified, the noise gets magnified as well. ## 3 Methodology ### Residual Swin Transformer Block The Residual Swin Transformer Block (RSTB) [27] is used as one of the fundamental building blocks of the proposed architecture, appearing in parts of both the feature extractor and the reconstructor. The RSTB is a residual block combining multiple Swin Transformer Layers (STL) [25] and convolutional layers, compounding the benefits of the spatially invariant filters of the convolutional layers with the residual connections that allow for multilevel feature processing. The Swin Transformer Layer shown in figure 2 partitions an \(H\times W\times C\) image into \(\frac{HW}{M^{2}}\) non-overlapping local windows using an \(M\times M\) sliding window and then computes local attention within each window, effectively reshaping the input image into \(\frac{HW}{M^{2}}\times M^{2}\times C\). The main difference with respect to the original transformer layer [24] lies in the local attention and the shifted window mechanism. For a local window feature \(F\in\mathbb{R}^{M^{2}\times C}\), the query, key, and value matrices \(Q\), \(K\), and \(V\in\mathbb{R}^{M^{2}\times d}\) are computed as \[Q=FW_{Q};\quad K=FW_{K};\quad V=FW_{V} \tag{3}\] where \(W_{Q}\), \(W_{K}\), and \(W_{V}\) are the learnable parameters shared across different windows, and \(d\) is the dimension of \(Q\), \(K\), and \(V\). Therefore, the attention matrix is computed for each window as \[Attention(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d}}+P)V \tag{4}\] where \(P\) is the learnable relative positional encoding. Computing the attention mechanism multiple times yields the results of the Multi-head Self Attention (MSA), which are then passed on to a Multi-Layer Perceptron (MLP). Therefore, the whole STL process can be summarized as \[F=MSA(LayerNorm(F))+F \tag{5}\] followed by \[F=MLP(LayerNorm(F))+F \tag{6}\] where the MLP is formed by two fully-connected layers with a GELU activation layer in between. ### Network architecture The proposed model architecture, shown in figure 1, consists of three main functional blocks: the feature extractor, the manipulator, and the reconstructor. The feature extractor is further subdivided into the shallow and deep feature extractors, and their job is to extract a high-quality representation of an input frame. Next, the manipulator, using the features from two frames, magnifies the motion by multiplying the difference between the two feature spaces by a user-selected magnification factor \(\alpha\). Finally, the reconstructor converts the resulting manipulated feature space back into a magnified frame.
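Before walking through the data flow in detail, a minimal PyTorch sketch of the STL of Eqs. (3)-(6) may help make the computation concrete. It is a simplification, not the SwinIR code: the shifted-window mechanism and attention masking are omitted, and the relative positional bias \(P\) is parameterized directly rather than through a relative-position bias table.

```python
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Multi-head self-attention over M x M windows, Eqs. (3)-(4)."""
    def __init__(self, dim, window_size, num_heads):
        super().__init__()
        self.h, self.d = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)   # learnable W_Q, W_K, W_V
        self.proj = nn.Linear(dim, dim)
        # Simplified learnable positional bias P, one per head and token pair.
        self.pos_bias = nn.Parameter(
            torch.zeros(num_heads, window_size**2, window_size**2))

    def forward(self, f):                    # f: (num_windows, M*M, dim)
        b, n, c = f.shape
        q, k, v = self.qkv(f).reshape(b, n, 3, self.h, self.d).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1)) / self.d**0.5 + self.pos_bias
        f = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, n, c)
        return self.proj(f)

class SwinTransformerLayer(nn.Module):
    """One STL: F = MSA(LN(F)) + F, then F = MLP(LN(F)) + F, Eqs. (5)-(6)."""
    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.msa = WindowAttention(dim, window_size, num_heads)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, f):
        f = self.msa(self.norm1(f)) + f
        return self.mlp(self.norm2(f)) + f

# Usage: 16 windows of 8x8 tokens with 96 channels (illustrative sizes).
stl = SwinTransformerLayer(dim=96)
print(stl(torch.randn(16, 64, 96)).shape)   # torch.Size([16, 64, 96])
```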
Given two frames of a target sequence \([I_{A},I_{B}]\in\mathbb{R}^{H\times W\times C_{in}}\) (where \(H\) is the height of the image, \(W\) is the width of the image, and \(C_{in}\) represents the number of input channels), the convolutional shallow feature extractor (\(G_{SF}\)) maps high-level features into a higher-dimensional feature space, thus providing early local feature extraction (\(F_{AS},F_{BS}\)) and leading to a more stable optimization and better results [44]. \[[F_{AS},F_{BS}]=G_{SF}([I_{A},I_{B}]) \tag{7}\] Figure 1: Architecture overview of the proposed model. Then, the features extracted in the previous step are further processed in the deep feature extraction module (\(G_{DF}\)), which consists of \(N\) Residual Swin Transformer Blocks (RSTB). \[[F_{AD},F_{BD}]=G_{DF}([F_{AS},F_{BS}]) \tag{8}\] After feature extraction, both frames' feature spaces are sent to the manipulator [12] (\(G_{M}\)), which takes the difference of both frames' feature spaces and directly multiplies it by a magnification factor \(\alpha\). \[G_{M}(F_{AS}+F_{AD},F_{BS}+F_{BD})=(F_{AS}+F_{AD})+h(\alpha\cdot t((F_{BS}+F_{BD})-(F_{AS}+F_{AD}))) \tag{9}\] where \(t(\cdot)\) is a \(3\times 3\) convolution followed by a ReLU activation, and \(h(\cdot)\) is a \(3\times 3\) convolution followed by a \(3\times 3\) residual block. \[F_{M}=G_{M}(F_{AS}+F_{AD},F_{BS}+F_{BD}) \tag{10}\] The conjoined manipulated feature space of both frames is then processed by the Mixed Magnified Transformer Block (MMTB) (\(G_{MMTB}\)), formed by \(N\) RSTB blocks. This stage enables the attention mechanism to affect the combined magnified features of both frames, resulting in a more coherent result after reconstruction. \[F_{MMTB}=G_{MMTB}(F_{M}) \tag{11}\] Finally, reconstruction is dealt with by a convolutional block (\(G_{R}\)) that inverts the initial feature mapping, done in the shallow feature extractor, back onto a frame (\(I_{\hat{Y}}\)). ### Training The model was trained on the synthetic dataset generated by Oh et al. [12] described earlier, which, despite being fully synthetic, has been shown to generalize well, producing high-quality magnified videos on scenes totally unrelated to the dataset. These reasons led to the adoption of the dataset as the only source of training data. The L1-Loss cost function was chosen for end-to-end training and placed between the network's output \(I_{\hat{Y}}\) and the ground truth frame \(I_{Y}\). Additionally, in order to improve the feature extraction and make a more robust system, the perturbed \(c\) frames provided by the dataset were compared against their non-perturbed counterparts after feature extraction, using again the L1-Loss. The resulting regularization loss was then added to the end-to-end loss of the whole network with a \(\lambda\) weight coefficient set to 0.1. Finally, the optimizer of choice for training the model was ADAM [45] with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), batch size set to 5, and a learning rate of \(10^{-5}\) with no weight decay. ### Modes of operation The proposed approach, STB-VMM, can be applied to any input video sequence containing two frames or more, regardless of the time scale between the two frames. Sequences can be treated in one of two modes, static or dynamic, borrowed from [12]. No changes to the network are made for these modes. Instead, the modes refer to the order in which the input frames are fed to the model, as detailed next.
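Before detailing the two modes, the manipulator of Eqs. (9)-(10) can be sketched as a small PyTorch module. This is a minimal illustration under stated assumptions: the paper fixes only the \(3\times 3\) kernels, so the channel count and the internals of the residual block are assumed here.

```python
import torch
import torch.nn as nn

class Manipulator(nn.Module):
    """Magnifies the difference between two feature maps, per Eqs. (9)-(10).
    Channel count is an assumption; only the 3x3 kernels come from the paper."""
    def __init__(self, channels=64):
        super().__init__()
        # t(.): 3x3 convolution followed by a ReLU activation
        self.t = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        # h(.): 3x3 convolution followed by a 3x3 residual block
        self.h_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.h_res = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, feat_a, feat_b, alpha):
        diff = self.t(feat_b - feat_a)   # t(F_B - F_A)
        mag = self.h_conv(alpha * diff)
        mag = mag + self.h_res(mag)      # residual block of h(.)
        return feat_a + mag              # F_A + h(alpha * t(F_B - F_A))

m = Manipulator()
fa, fb = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(m(fa, fb, alpha=20.0).shape)  # torch.Size([1, 64, 32, 32])
```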
The static mode, which follows more closely the classical definition of motion magnification, uses the first frame of the sequence as reference. In terms of computation, the static mode can be expressed as \(model(I_{0},I_{t})\), where \(t\) is the frame number, increasing sequentially with time. Figure 2: Architectural details. On the other hand, the dynamic mode magnifies the difference between two consecutive frames [\(model(I_{t},I_{t+1})\)], therefore magnifying the velocity between frames. Note that in each of the modes the magnification factor \(\alpha\) has a different meaning. Oh et al. [12] proposed one additional operation mode with temporal filtering to mitigate the effects of undesired motion and noise. The filtering was applied in the manipulator to produce temporally-filtered motion-magnified frames similar to those of classical techniques. On the downside, the temporal mode appears to cause blindness to small motions, resulting in patchy magnification. This phenomenon occurs when the motion amplitude crosses the threshold at which it becomes large enough to be detected, causing some regions to be suddenly magnified mid-sequence. This performance degradation gets worse when the magnification factor is high and the motion is small. While it would be theoretically possible to incorporate a temporal mode into the proposed model, its magnification results do not suffer from excessive noise or blurring; therefore, temporal filtering is unnecessary, and the full spectrum of frequencies is magnified all at once, producing good results. ## 4 Results and discussion In the following section, the results yielded by the STB-VMM model are compared to the current state-of-the-art learning-based video motion magnification model [12]. Performance is measured quantitatively and qualitatively, showing that our model improves on the previous state-of-the-art in magnification quality and clarity. The video versions of all the comparisons are available in the supplementary materials. Quantitative comparison of image quality, or Image Quality Assessment (IQA), is a complex topic involving many variables and methods. Said methods are divided into three main categories: full-reference, reduced-reference, and no-reference. A referenced algorithm [46; 47; 48] requires a pristine sample to assess the quality of a degraded image, while no-reference methods [49; 50; 51; 52] produce an image score without the need for any reference. When evaluating VMM, it is impossible to obtain a pristine motion-magnified frame. Therefore, to evaluate the results presented in the following section, the MUSIQ [53; 52] algorithm was chosen to compare the models' performance. The following comparison analyzes the performance of Oh et al.'s Learning-Based Video Motion Magnification (LB-VMM) model and STB-VMM on ten different video benchmarks that showcase interesting motion magnification examples. In addition, a comparison against the baby [7] sequence is added to provide a fair point of comparison. The sequences were captured at 1080p 60fps on a mid-range smartphone to demonstrate the potential of STB-VMM with accessible video equipment. ### Quantitative comparison Table 2 shows the average, 1st, and 99th percentile MUSIQ scores for the tested benchmark sequences run with the Learning-based Video Motion Magnification model and the STB-VMM model. The values presented in the table are calculated for each individual frame of the full sequences and then summarized into per-sequence statistics.
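A minimal sketch of this per-frame aggregation follows; here musiq_score is a hypothetical stand-in for any MUSIQ implementation (it is not an API from the paper), and the synthetic scores are for illustration only.

```python
import numpy as np

def musiq_score(frame):
    """Hypothetical stand-in for a real MUSIQ model scoring one frame."""
    raise NotImplementedError  # a pretrained MUSIQ network would go here

def summarize_sequence(scores):
    """Average, 1st, and 99th percentile of per-frame MUSIQ scores."""
    s = np.asarray(scores, dtype=float)
    return {"avg": s.mean(), "p1": np.percentile(s, 1), "p99": np.percentile(s, 99)}

# Example with synthetic per-frame scores for a 300-frame sequence:
print(summarize_sequence(np.random.default_rng(0).normal(55, 3, size=300)))
```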
The original sequences are also added as a control, and their scores are expected to be higher than those of both magnification methods. The results in Table 2 demonstrate that STB-VMM produces better results than LB-VMM. On average, the scores obtained by STB-VMM are 9.63% higher, and boast much higher 1% lows, implying that the quality of magnification is noticeably more consistent throughout the sequence. This trend can be observed in Table 3 and Figure 3, where STB-VMM shows remarkable stability in its output quality. LB-VMM only manages a single higher score than STB-VMM, in the building benchmark, by a difference of 0.23% (Building\({}_{00}\)). However, in the authors' opinion, STB-VMM produces better quality magnification with more stable edges and less blurry patches. On the other hand, none of the magnified scores fall above the original's, as expected. Nevertheless, magnified and original scores follow the same trend, implying that low-quality source videos produce worse outputs. However, STB-VMM is much more capable of dealing with low-quality input images, even closing the quality gap with respect to the original when input quality declines. The sharp quality declines seen in both car sequences can be, in part, attributed to the poor low-light performance of the camera employed. ### Qualitative comparison To reinforce the previous section's scores and claims, this section presents a few qualitative comparisons that demonstrate the effectiveness of our proposed network against the current state-of-the-art in terms of resulting image quality. \begin{table} \begin{tabular}{l c c c c c c c c c} & \multicolumn{3}{c}{**Original**} & \multicolumn{3}{c}{**LB-VMM**} & \multicolumn{3}{c}{**STB-VMM**} \\ \hline & Avg. & \(\eta_{1}\) & \(\eta_{99}\) & Avg. & \(\eta_{1}\) & \(\eta_{99}\) & Avg. & \(\eta_{1}\) & \(\eta_{99}\) \\ \hline AC\({}_{00}\) & 72.11 & 69.65 & 72.75 & 55.73 & 49.61 & 58.69 & 62.45 & 61.05 & 63.29 \\ AC\({}_{01}\) & 69.15 & 68.30 & 70.05 & 48.35 & 34.07 & 51.22 & 59.27 & 57.72 & 60.96 \\ Baby & 74.39 & 69.71 & 74.87 & 55.51 & 53.26 & 59.95 & 57.12 & 54.41 & 62.90 \\ Building\({}_{00}\) & 66.84 & 66.01 & 75.45 & 52.46 & 49.51 & 62.75 & 52.30 & 50.07 & 56.43 \\ Car\({}_{00}\) & 52.55 & 50.65 & 54.41 & 31.40 & 18.27 & 35.50 & 43.37 & 23.28 & 48.06 \\ Car\({}_{01}\) & 55.81 & 54.77 & 57.01 & 33.51 & 30.67 & 64.99 & 50.28 & 48.08 & 52.07 \\ Crane\({}_{00}\) & 75.26 & 74.86 & 75.57 & 56.92 & 52.70 & 65.02 & 59.13 & 56.19 & 62.89 \\ Crane\({}_{01}\) & 75.09 & 74.63 & 75.44 & 51.05 & 45.25 & 57.37 & 54.93 & 51.11 & 64.70 \\ Truss\({}_{00}\) & 66.94 & 65.92 & 67.49 & 55.90 & 52.65 & 57.98 & 56.27 & 54.93 & 57.61 \\ Wheel\({}_{00}\) & 72.84 & 71.87 & 73.38 & 51.04 & 28.82 & 54.40 & 57.04 & 36.41 & 61.19 \\ Wheel\({}_{01}\) & 52.15 & 50.23 & 53.55 & 34.84 & 31.12 & 59.03 & 46.21 & 43.68 & 48.48 \\ \hline **Total avg.** & **66.13** & **51.25** & **75.45** & **48.05** & **32.32** & **60.09** & **54.42** & **45.68** & **63.29** \\ \hline **\% dev. to avg.** & _13.58\%_ & _22.50\%_ & _14.09\%_ & _20.34\%_ & _32.75\%_ & _25.04\%_ & _10.70\%_ & _16.07\%_ & _16.28\%_ \\ \end{tabular} \end{table} Table 2: Comparative MUSIQ scores of the original sequence, the sequence magnified using Learning-Based Video Motion Magnification (_\(\alpha\)3J_J_J_J_IM_2_bg_anoise_mix4_nl_n_J_ds3_ checkpoint), and the proposed method. (x20) Figure 4 shows the same frame chosen at random from the \(\text{Car}_{00}\) sequence using both models.
STB-VMM, shown on the right, yields a much superior result in terms of image clarity, which can be appreciated in both edges and texture. The car sequence recording was filmed in a rather low-light environment, thus yielding noisier/grainier video than could otherwise have been achieved. This highlights one of the main benefits of the proposed architecture, which is a much better tolerance to noisy input. Regardless of clarity, both models perform well on motion magnification with very few artifacts, if any. The next example, shown in figure 5, was filmed in better lighting conditions, yet the quality score of the un-magnified video is no better. This might have been caused, in part, by the framing of the sequence, which keeps only parts of the image in focus. Regardless of the base score set by the original, STB-VMM clearly outperforms LB-VMM, with better-defined letters and a much clearer background. In terms of motion magnification, both methods display good quality magnification. Figure 3: Graphic representation of the average MUSIQ scores per test sequence magnified x20. \begin{table} \begin{tabular}{l r r r} & **Avg. (\%)** & \(\eta_{1}\) **(\%)** & \(\eta_{99}\) **(\%)** \\ \hline \(\text{AC}_{00}\) & 9.32 & 16.42 & 6.32 \\ \(\text{AC}_{01}\) & 15.79 & 34.64 & 13.91 \\ Baby & 2.15 & 1.65 & 3.95 \\ Building\({}_{00}\) & -0.23 & 0.86 & -8.38 \\ \(\text{Car}_{00}\) & 22.79 & 9.89 & 23.08 \\ \(\text{Car}_{01}\) & 30.06 & 31.78 & -22.65 \\ \(\text{Crane}_{00}\) & 2.93 & 4.67 & -2.81 \\ \(\text{Crane}_{01}\) & 5.17 & 7.85 & 9.71 \\ \(\text{Truss}_{00}\) & 0.55 & 3.46 & -0.55 \\ \(\text{Wheel}_{00}\) & 8.24 & 10.56 & 9.26 \\ \(\text{Wheel}_{01}\) & 21.81 & 25.01 & -19.72 \\ \hline **Total** & **9.63** & **26.07** & **4.24** \\ \end{tabular} \end{table} Table 3: MUSIQ score difference between STB-VMM and LB-VMM. On the other hand, the building sequence (Building\({}_{00}\)) is the only benchmark where LB-VMM outperforms STB-VMM on average. Nevertheless, the better edge stability offered by STB-VMM enables the authors to obtain better frequency readings from the magnified video. Such an application is interesting in technical fields where vibration needs to be monitored, such as structural health monitoring [54; 55; 56; 5]. Figure 6 shows the cropped upper right corner of the building [57] and the slice used for frequency measuring. Below, in figure 6(d), the FFTs obtained from the movement of the sequences are plotted. While both sequences detect a peak at 14.25 Hz, STB-VMM produces a much cleaner signal. During the experiment, the building was intentionally excited with an electrodynamic shaker reproducing a 14.25 Hz sine wave. The authors acknowledge that image quality can be a somewhat subjective metric and recommend watching the comparison videos attached in the supplementary materials. Figure 4: Qualitative comparison of the car sequence. Highlighted in the bottom row of the figure, the car's coolant reservoir, engine cover, and ventilation slits demonstrate that STB-VMM results are noticeably sharper and less distorted. ### Limitations In spite of the favorable comparisons, LB-VMM still has a significant advantage in computing time over STB-VMM. With our hardware setup1, LB-VMM magnifies the baby [7] sequence, consisting of 300 960x576 frames, in approximately 76 seconds. Meanwhile, STB-VMM almost doubles the compute time, clocking in at 130 seconds for the exact same sequence.
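The vibration reading described above amounts to a straightforward FFT over a pixel slice of the magnified sequence; the following is a minimal sketch, assuming the 60 fps capture rate reported earlier and a synthetic signal in place of real slice intensities.

```python
import numpy as np

def dominant_frequency(slice_intensity, fps=60.0):
    """slice_intensity: (num_frames,) mean intensity of a fixed pixel slice
    tracked across the magnified sequence; returns the strongest frequency."""
    x = slice_intensity - slice_intensity.mean()   # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]      # skip the zero-frequency bin

# Synthetic check: a 14.25 Hz oscillation sampled at 60 fps for 5 seconds.
t = np.arange(300) / 60.0
signal = np.sin(2 * np.pi * 14.25 * t) + 0.1 * np.random.randn(300)
print(dominant_frequency(signal))   # ~14.2 Hz at this frequency resolution
```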
As for the compute-time limitation, software optimizations combined with upcoming improvements in hardware might help mitigate STB-VMM's shortcomings. Figure 5: Qualitative comparison of the wheel sequence. STB-VMM displays sharper letters and a better-defined background with respect to LB-VMM. ## 5 Conclusions This work presents a new state-of-the-art model for video motion magnification based on the Swin Transformer that has been shown to outperform previous state-of-the-art learning-based models. The new model displays better noise tolerance characteristics, a less blurry output image, and better edge stability, resulting in clearer and less noisy magnification with very few, if any, artifacts. On the downside, the new model requires more computing resources than previous models and cannot be run in real-time like phase-based methods [8]. Nevertheless, applications that require precise magnification for vibration monitoring [5] could greatly benefit from improvements in the technology. Further work will address the integration of this model in specific applications that require precise vibration monitoring and could benefit from a full-field solution, like a camera, instead of installing and wiring multiple contact sensors such as accelerometers. Figure 6: Vibration readings on the Building\({}_{00}\) sequence. While the noise floor remains the same on both readings, the FFT obtained using STB-VMM displays a much more prominent peak at 14.25 Hz. ## Acknowledgements The authors would like to gratefully acknowledge the support and funding of the Catalan Agency for Business Competitiveness (ACCIO) through the project INNOTEC ISAPREF 2021. Furthermore, the first author would like to acknowledge a Doctoral Scholarship from IQS. Finally, the authors would like to thank Dr. Eduardo Blanco from the University of Arizona and Dr. Ariadna Chueca de Bruijn for their help. ## Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could appear to influence the work reported in this paper.
2308.05331
A simple direct empirical observation of systematic bias of the redshift as a distance indicator
Recent puzzling observations such as the $H_o$ tension, large-scale anisotropies, and massive disk galaxies at high redshifts have been challenging the standard cosmological model. While one possible explanation is that the standard model is incomplete, other theories are based on the contention that the redshift model as a distance indicator might be biased. While these theories can explain the recent observations, they are challenged by the absence of a direct empirical reproducible observation that the redshift model can indeed be inconsistent. Here I describe a simple experiment that shows that the spectra of galaxies depend on their rotational velocity relative to the rotational velocity of the Milky Way. Moreover, it shows that the redshift of galaxies that rotate in the same direction relative to the Milky Way is significantly different from the redshift of galaxies that rotate in the opposite direction relative to the Milky Way (P$<0.006$). Three different datasets are used independently, each one was prepared in a different manner, all of them show similar redshift bias. A fourth dataset of galaxies from the Southern Galactic pole was also analyzed, and shows similar results. All four datasets are publicly available. While a maximum average $\Delta z$ of $\sim$0.012 observed with galaxies of relatively low redshift (z$<$0.25) might not seem dramatic, the bias is consistent, and can explain puzzling observations such as the $H_o$ tension.
Lior Shamir
2023-08-10T04:22:30Z
http://arxiv.org/abs/2308.05331v2
# A simple direct empirical observation of systematic bias of the redshift as a distance indicator ###### Abstract Recent puzzling observations such as the \(H_{o}\) tension, large-scale anisotropies, and massive disk galaxies at high redshifts have been challenging the standard cosmological model. While one possible explanation is that the standard model is incomplete, other theories are based on the contention that the redshift model as a distance indicator might be biased. While these theories can explain the recent observations, they are challenged by the absence of a direct empirical reproducible observation that the redshift model can indeed be inconsistent. Here I describe a simple experiment that shows that the spectra of galaxies depend on their rotational velocity relative to the rotational velocity of the Milky Way. Moreover, it shows that the redshift of galaxies that rotate in the same direction relative to the Milky Way is significantly different from the redshift of galaxies that rotate in the opposite direction relative to the Milky Way (P\(<0.006\)). Three different datasets are used independently, each one was prepared in a different manner, all of them show similar redshift bias. A fourth dataset of galaxies from the Southern Galactic pole was also analyzed, and shows similar results. All four datasets are publicly available. While a maximum average \(\Delta z\) of \(\sim\)0.012 observed with galaxies of relatively low redshift (z\(<\)0.25) might not seem dramatic, the bias is consistent, and can explain puzzling observations such as the \(H_{o}\) tension. ## 1 Introduction Recent observations have shown unexplained tensions and anomalies at cosmological scales. For instance, the \(H_{o}\) determined by the Cosmic Microwave Background (CMB) radiation is different from the \(H_{o}\) determined by using Ia supernovae and the redshift of their host galaxies (Wu and Huterer, 2017; Mortsell and Dhawan, 2018; Bolejko, 2018; Davis et al., 2019; Pandey et al., 2020; Camarena and Marra, 2020; Di Valentino et al., 2021; Riess et al., 2022). The relatively new JWST provides unprecedented imaging power, showing mature massive disk galaxies at high redshifts where such galaxies are not expected to form due to their young age. In fact, large disk galaxies at unexpectedly high redshifts were identified also before JWST saw first light (Neeleman et al., 2020). These unexpected observations challenge our understanding of the Universe. If the common distance indicators are complete, the standard cosmological theories are incomplete, or vice versa. Explaining these observations might therefore require modifying some of the foundations of cosmology. In addition to theories that shift from the standard cosmological model, other theories are based on the contention that the redshift as used to measure distances at cosmological scales might be an incomplete model (Seshavatharam and Lakshminarayana, 2023; Pletcher, 2023; Gupta, 2023; Lee, 2023). While the assumption that the redshift is not necessarily a complete indicator of the distance can explain these observations without modifying the standard cosmological models, there is no clear reproducible empirical evidence that the redshift might indeed be biased. The redshift of a luminous moving object is determined by the linear component of the Doppler shift effect.
But because galaxies have rotational velocity in addition to their linear velocity, their redshift can also be affected by the rotational velocity, as the rotational velocity of a luminous object does lead to a Doppler shift effect (Marrucci, 2013; Lavery et al., 2014; Liu et al., 2019). Since the rotational velocity of a galaxy is far smaller than its linear velocity relative to Earth, the rotational velocity component of the Doppler shift is often ignored when determining the distance of a galaxy based on its redshift. But while the Doppler shift effect driven by the rotational velocity of the galaxy is expected to be subtle, that has not yet been tested. It should also be remembered that the physics of galaxy rotation is one of the most provocative observations in nature, and it cannot be explained without assumptions such as dark matter (Zwicky, 1937; Oort, 1940; Rubin, 1983), modified Newtonian dynamics (Milgrom, 1983, 2007; De Blok and McGaugh, 1998; Sanders, 1998; Sanders and McGaugh, 2002; Swaters et al., 2010; Sanders, 2012; Iocco et al., 2015; Diaz-Saldana et al., 2018; Falcon, 2021), or other theories (Sanders, 1990; Capozziello and De Laurentis, 2012; Chadwick et al., 2013; Farnes, 2018; Rivera, 2020; Nagao, 2020; Blake, 2021; Gomel and Zimmerman, 2021; Larin, 2022). But despite over a century of research, there is still no single clear proven explanation of the physics of galaxy rotation (Sanders, 1990; Mannheim, 2006; Kroupa, 2012; Kroupa et al., 2012; Kroupa, 2015; Arun et al., 2017; Akerib et al., 2017; Bertone and Tait, 2018; Aprile et al., 2018; Skordis and Zlosnik, 2019; Sivaram et al., 2020; Hoffensiter and Criss, 2020; Byrd and Howard, 2021), and that phenomenon is still not fully understood. The purpose of this simple experiment is to test the impact of the rotational velocity component of galaxies on the Doppler shift effect, and consequently on the redshift as a distance indicator. ## 2 Data The experiment is based on one primary dataset, and two additional independent datasets to which the results are compared. The primary dataset includes SDSS DR8 galaxies with spectra sorted by their direction of rotation, as explained and used in (Shamir, 2020b). Instead of using galaxies in the entire SDSS footprint, this experiment is focused on galaxies that rotate in the same direction relative to the Milky Way, and galaxies that rotate in the opposite direction relative to the Milky Way. Therefore, only galaxies that are close to the Galactic pole are used, and the field is limited to the \(20\times 20\) degrees centered at the Northern Galactic pole. The analysis included objects with spectra in SDSS DR8 that have an r magnitude of less than 19 and a Petrosian radius of at least 5.5". The redshift of the galaxies in that initial set was limited to z\(<\)0.3, and the redshift error to smaller than \(10^{-4}\). That selection eliminated the possible effect of bad redshift values, which in some cases can be very high and skew the dataset. The initial set of galaxies that meet these criteria in that field was 52,328. The process by which the galaxies were sorted by their direction of rotation is explained in detail in (Shamir, 2020b), and is similar to the process of annotating galaxies imaged by other telescopes (Shamir, 2016, 2020a, 2022b,f,c; Mcadam et al., 2023; Shamir and McAdam, 2022).
In summary, the annotation is done by using the Ganalyzer algorithm (Shamir, 2011), where each galaxy image is transformed into its radial intensity plot such that the value of the pixel at Cartesian coordinates \((\theta,r)\) in the radial intensity plot is the median value of the 5\(\times\)5 pixels at coordinates \((O_{x}+\sin(\theta)\cdot r,O_{y}-\cos(\theta)\cdot r)\) in the original galaxy image, where \(r\) is the radial distance measured as a percentage of the galaxy radius, \(\theta\) is the polar angle in degrees relative to the galaxy center, and \((O_{x},O_{y})\) are the coordinates of the galaxy center. A peak detection algorithm is then applied to the rows in the radial intensity plot, and the direction of the peaks determines the direction of the curves of the galaxy arms. Figure 1 displays examples of the original galaxy images, their radial intensity plots, and the detected peaks. The direction of the curves of the arms is determined by the sign of the slope, given that at least 30 peaks are identified in the radial intensity plot. If fewer than 30 peaks are identified, the galaxy is not used, as its direction of rotation cannot be identified. The algorithm is described with experimental results in (Shamir, 2011), as well as in (Shamir, 2020a, 2022b,f,c; Mcadam et al., 2023). The primary advantage of the algorithm is that its simple "mechanical" nature makes it fully symmetric. Experiments in which the galaxy images are mirrored lead to identical inverse results compared to the original images (Shamir, 2016, 2020a, 2022b,f,c; Mcadam et al., 2023). Figure 1: Examples of original galaxy images (left), the radial intensity plot transformations (center), and the peaks detected in the radial intensity plot lines (right). After applying the algorithm to the galaxy images, the final dataset included 1,642 galaxies with an identifiable direction of rotation, such that 817 galaxies rotate clockwise, and 825 galaxies rotate counterclockwise. Applying the algorithm to the mirrored images led to an identical inverse dataset. Testing a random subset of 200 galaxies showed that all galaxies were annotated correctly. Figure 2 shows the redshift distribution of the galaxies. Figure 2: The redshift distribution of the galaxies in the dataset. The dataset is available at [http://people.cs.ksu.edu/~lshamir/data/zdif_data](http://people.cs.ksu.edu/~lshamir/data/zdif_data). In addition to this dataset, two other previous public datasets were used, as will be described in Section 4. ## 3 Results Table 1 shows the redshift differences in the 20\(\times\)20 degree field centered at the Northern Galactic pole, as well as the smaller 10\(\times\)10 degree field. The mean redshift of the galaxies in the dataset described in Section 2 that rotate in the opposite direction relative to the Milky Way (observed from Earth as rotating clockwise) is 0.09545\(\pm\)0.0017, while the mean redshift of the galaxies that rotate in the same direction relative to the Milky Way in the same field is 0.08895\(\pm\)0.0016. That shows a \(\Delta z\) of \(\sim\)0.0065 between galaxies that rotate in the same direction relative to the Milky Way and galaxies that rotate in the opposite direction relative to the Milky Way. By applying a simple Student t-test, the two-tailed probability that the difference between the two means occurred by mere chance is P\(\simeq\)0.0058.
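This P value can be reproduced directly from the reported group sizes, means, and standard errors; the following is a minimal sketch, under the assumption that the quoted \(\pm\) values are standard errors of the means.

```python
import numpy as np
from scipy import stats

# Reported values: 817 clockwise and 825 counterclockwise galaxies (20x20 field).
n1, mean1, sem1 = 817, 0.09545, 0.0017
n2, mean2, sem2 = 825, 0.08895, 0.0016

t = (mean1 - mean2) / np.sqrt(sem1**2 + sem2**2)  # two-sample t statistic
df = n1 + n2 - 2                                  # large, so t is nearly normal
p_two_tailed = 2 * stats.t.sf(abs(t), df)
# t is about 2.78 and P about 0.005, consistent with the reported
# P of 0.0058 given the rounding of the published means and errors.
print(t, p_two_tailed)
```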
If the observed difference in redshift is driven by the rotational velocity of the observed galaxy relative to the rotational velocity of the Milky Way, the difference should increase when the observed galaxies are closer to the Galactic pole. As Table 1 shows, \(\Delta z\) indeed increases in the 10\(\times\)10 field. Despite the lower number of galaxies, the difference is still statistically significant. Most objects with spectra in SDSS are concentrated in the part of the sky that is close to the Northern Galactic pole. If the redshift difference peaks at the Northern Galactic pole, it is expected that when using galaxies that are more distant from the Galactic pole, the redshift difference \(\Delta z\) would decrease. Figure 3 shows the change in \(\Delta z\) when the size of the field centered at the Galactic pole changes. As the figure shows, \(\Delta z\) decreases as the field gets larger. That can be explained by the fact that when the field gets larger, it includes more galaxies that are more distant from the Galactic pole. While it does not fully prove a link to the rotational velocity, that observation is in agreement with the contention that the redshift difference is linked to the rotational velocity of the galaxies relative to the rotational velocity of the Milky Way. The figure includes two graphs. The first shows all galaxies inside the field. For instance, when the field size is 20\(\times\)20 degrees, it also includes the galaxies inside the 10\(\times\)10 degree field centered at the Galactic pole. The other analysis excludes overlapping galaxies, so that a galaxy can only be used in one field. That is, the set of galaxies in the 20\(\times\)20 degree field centered at the Galactic pole excludes the galaxies in the 10\(\times\)10 degree field. That provides an analysis with independent sets of galaxies that do not overlap. Table 2 shows the differences between the flux in the different filters, taken from the specObjAll table in SDSS DR8. The spectrum flux shows a consistent difference of \(\sim\)10% across the different filters. Unlike the redshift, the differences in the flux of the specific filters are not statistically significant, and therefore a definite conclusion about the flux differences cannot be made. ## 4 Comparison to other datasets The annotation algorithm used to sort the galaxies by their direction of rotation as discussed in Section 2 is simple and symmetric, and there is no known bias that can prefer the redshift of a certain set of galaxies as annotated by the algorithm. Also, experimenting with the same images when the images were mirrored leads to inverse results, as also shown in detail in (Shamir, 2016, 2020a, 2022b,f,c; Mcadam et al., 2023). To further test for a possible impact of unknown or unexpected biases in the annotation process, two additional annotation methods were used, to test whether different annotation methods provide different results. ### Comparison to annotations by _Galaxy Zoo_ The first annotation method that was used is the crowdsourcing-based _Galaxy Zoo 1_ (Lintott et al., 2008). In _Galaxy Zoo_, anonymous volunteers used a web-based interface to sort galaxy images by their direction of rotation. After several years of work by over 100,000 volunteers, a relatively large set of over \(8\cdot 10^{5}\) galaxies was annotated.
One of the downsides of _Galaxy Zoo_ was that in the vast majority of cases the volunteers who annotated the galaxies made conflicting annotations, and the disagreement between the annotators makes it difficult to use the majority of the galaxies. Another substantial downside is that the annotations were subject to the bias of human perception, which is very difficult to model and fully understand, challenging the reliability of the annotations as a tool for primary science. Despite these known weaknesses, there is no known human perceptual bias that would associate galaxies with lower redshift with a certain direction of rotation. Therefore, although _Galaxy Zoo_ might not necessarily be considered a complete tool when used as the sole dataset, comparing to _Galaxy Zoo_ can provide an indication of whether a different annotation method leads to results different from those shown in Section 3. Because the annotations of the volunteers often disagree with each other, _Galaxy Zoo_ defined the "superclean" criterion for galaxies on whose annotation at least 95% of the human annotators agree. That is, if 95% of the annotations or more are for a galaxy that rotates clockwise, the annotation is considered "superclean". While these annotations are normally correct, only 1.7% of the galaxies annotated by _Galaxy Zoo 1_ meet that criterion. Out of the 667,944 galaxies in the specZoo table in SDSS DR8, just 324 galaxies meet that criterion and are also inside the 20\(\times\)20 degree field centered at the Northern Galactic pole. The mean z of the _Galaxy Zoo 1_ galaxies that rotate clockwise in that field is 0.073834\(\pm\)0.0041, and the mean z of the galaxies that rotate counterclockwise is 0.068292\(\pm\)0.00348. That shows a \(\Delta z\) of 0.00554, which is similar in both direction and magnitude to the \(\Delta z\) of 0.0065 observed with the dataset described in Section 2. The one-tailed P value for the difference to occur by mere chance is 0.15. That is not statistically significant, which can be attributed to the small size of the dataset, but the similar \(\Delta z\) in both direction and magnitude shows consistency between the annotation methods. From the 324 galaxies annotated by Galaxy Zoo, 263 were also included in the dataset described in Section 2. Figure 3: The \(\Delta z\) when the size of the field changes. The analysis was done such that the larger field contains also the galaxies of the smaller field inside it (blue), and also when the galaxies in the smaller field are excluded so that the two fields are orthogonal and do not have overlapping galaxies (green).
One of the advantages of _SpArcFiRe_ is that it is not based on data-driven machine learning or deep learning approaches that are difficult to analyze, and is therefore not subjected to the complex biases that are often very difficult to notice (Dhar and Shamir, 2022). The downside of _SpArcFiRe_ is that it has an annotation error of about 15% (McAdam et al., 2023). More importantly, since _SpArcFiRe_ is a relatively sophisticated algorithm, it is more difficult to ensure that it is completely symmetric, and in some rare cases a mirrored galaxy image is not annotated as rotating in the opposite direction compared to the original image. That characteristic of the algorithm is discussed in the appendix of (Hayes et al., 2017). That weakness of the algorithm can be addressed by repeating the analysis twice, such that in the first experiment the original images are used, and in the second experiment the mirrored images are used. Then, the results of the two experiments can be compared. While that practice might not be ideal, it can be used to compare against the results shown in Section 3. Footnote 1: [https://github.com/waynebbayes/SpArcFiRe](https://github.com/waynebbayes/SpArcFiRe) The dataset used here is the dataset of spiral galaxies annotated by _SpArcFiRe_ used in (McAdam et al., 2023), which is a reproduction of the experiment described in (Hayes et al., 2017). The dataset is available at [https://people.cs.ksu.edu/~lshamir/data/sparcfire](https://people.cs.ksu.edu/~lshamir/data/sparcfire). More details about the dataset are available in (McAdam et al., 2023). In summary, the dataset was prepared with the original images, and then again with the mirrored galaxy images. The dataset prepared with the original images contains 138,940 galaxies, and the dataset prepared with the mirrored images contains 139,852 galaxies. All of these galaxies have spectra, and therefore can be used to compare the redshift. As before, galaxies with redshift greater than 0.3 or redshift error greater than 10\({}^{-4}\) were ignored. Table 3 shows the mean redshift in the 10\(\times\)10 field centered at the Northern Galactic pole and in the 20\(\times\)20 field, for both the original images and the mirrored images. As the table shows, both the original images and the mirrored images show consistent results. These results are also consistent with the results shown in Section 3. The \(\Delta z\) is lower than the \(\Delta z\) observed with the dataset used in Section 3, and that could be due to the \(\sim\)15% error rate of the _SpArcFiRe_ algorithm, which is expected to weaken the signal, as also shown formally in Section 7.1 in (McAdam and Shamir, 2023). ### Comparison to galaxies from the Southern Galactic pole The data used in the experiments described above were all taken from the Northern hemisphere, and the galaxies they contain are around the Northern Galactic pole. To verify the observed redshift difference, it is also required to test whether it exists around the Southern Galactic pole as well. If the difference in redshift is also observed around the Southern Galactic pole, it can provide an indication that it is indeed related to the Galactic pole. Since the three experiments above all used data collected by SDSS, using a different telescope can show that the difference is not driven by some unknown or unexpected anomaly in a specific telescope system.
The set of galaxies used for the analysis is a set of galaxies imaged by DECam used in (Shamir, 2021) that had spectroscopic redshift through the Set of Identifications, Measurements and Bibliography for Astronomical Data (SIMBAD) database (Wenger et al., 2000). As explained in (Shamir, 2021), DECam galaxy images were acquired through the API of the DESI Legacy Survey server. The galaxy images were then annotated by the Ganalyzer algorithm as described in Section 2, and also in (Shamir, 2021). The entire dataset contains \(\sim 8.07\cdot 10^{6}\) galaxies, but because only galaxies with spectra in the 20\(\times\)20 field centered at the Galactic pole are used, the dataset used here is reduced to 3,383 galaxies. The dataset is available at [http://people.cs.ksu.edu/~lshamir/data/zdiff_data](http://people.cs.ksu.edu/~lshamir/data/zdiff_data).

\begin{table} \begin{tabular}{l c c c c c c} \hline Field & \# CW & \# CCW & \(Z_{cw}\) & \(Z_{ccw}\) & \(\Delta\) & t-test P \\ \hline 10\(\times\)10 & 204 & 202 & 0.0996\(\pm\)0.0036 & 0.08774\(\pm\)0.0036 & 0.0118496 & 0.02 \\ 20\(\times\)20 & 817 & 825 & 0.09545\(\pm\)0.0017 & 0.08895\(\pm\)0.0016 & 0.0065 & 0.0058 \\ \hline \end{tabular} \end{table} Table 1: The mean redshift difference of galaxies in the 20\(\times\)20 field centered at the Galactic pole and in the 10\(\times\)10 field centered at the Galactic pole. The P values are the two-tailed P values determined by the standard Student t-test.

\begin{table} \begin{tabular}{l c c c c} \hline Band & Same direction & Opposite direction & \(\Delta\) & t-test P \\ \hline spectroFlux\_g & 25.969\(\pm\)0.8669 & 28.5541\(\pm\)0.0918 & -2.585 & 0.063 \\ spectroFlux\_r & 53.243\(\pm\)1.765 & 58.6214\(\pm\)2.3422 & -5.378 & 0.066 \\ spectroFlux\_i & 77.4189\(\pm\)2.513 & 85.0868\(\pm\)3.407 & -7.667 & 0.067 \\ \hline \end{tabular} \end{table} Table 2: Flux in different filters for galaxies that rotate in the same direction relative to the Milky Way and galaxies that rotate in the opposite direction relative to the Milky Way. The t-test P values are the two-tailed P values.

Table 4 shows the mean redshift of the galaxies that rotate in the same direction relative to the Milky Way and in the opposite direction relative to the Milky Way. Due to the perspective of the observer, galaxies that are close to the Southern Galactic pole that rotate in the same direction relative to the Milky Way seem to rotate in the opposite direction compared to galaxies in the Northern Galactic pole that rotate in the same direction. As the table shows, the redshift differences are statistically significant in both fields, and increase when the galaxies are closer to the Galactic pole. These results are in good agreement with the results shown with galaxies located around the Northern Galactic pole. The table also shows that the mean redshift is higher compared to the mean redshift observed with SDSS. That difference can be expected due to the superior imaging power of DECam compared to SDSS, allowing DECam to image galaxies at deeper redshifts. ## 5 Conclusion Recent puzzling observations such as the \(H_{o}\) tension and large disk galaxies at high redshifts have been challenging cosmology. Explaining such observations requires assuming that either the standard cosmological models are incomplete, or that the redshift as a model of distance is incomplete. This study shows the first direct observational evidence of bias in the redshift as a distance indicator.
While the bias can also be attributed to the algorithm that selects spectroscopic targets, it is difficult to think of how that algorithm could be affected by the direction of rotation relative to the Milky Way. Also, if the target selection algorithm has such an unknown and complex bias, that bias is expected to be consistent throughout the sky, and is not expected to change based on the angular distance of the galaxy from the Galactic pole, or flip when analyzing galaxies from the opposite side of the Galactic pole. The fact that two different telescope systems show similar results further reduces the possibility that the results are driven by an unknown anomaly in the selection algorithm of the spectroscopic surveys. Another possible explanation for the observation is an unexpected anomaly in the geometry of the Universe and its large-scale structure. If the redshifts represent the accurate distances of the galaxies, and are not affected by their rotational velocity, the galaxies form a cosmological-scale structure, defined by the alignment of the directions of rotation of the galaxies, that peaks around the Galactic pole. That explanation, however, requires the modification of the standard cosmological model and the fundamental assumptions it is based on (Aluri et al., 2023). As discussed also in (Shamir, 2022b,a,d,c,e), the observation of such a large-scale structure that forms a cosmological-scale axis is aligned with alternative theories such as dipole cosmology (Allahyari et al., 2023; Krishnan et al., 2023), or theories that assume a rotating universe such as Black Hole Cosmology (Pathria, 1972; Stuckey, 1994; Easson and Brandenberger, 2001; Seshavatharam, 2010; Poplawski, 2010; Christillin, 2014; Dymnikova, 2019; Chakrabarty et al., 2020; Poplawski, 2021; Seshavatharam and Lakshminarayana, 2022; Gaztanaga, 2022a,b), which is also linked to the holographic universe (Susskind, 1995; Bak and Rey, 2000; Bousso, 2002; Myung, 2005; Hu and Ling, 2006; Rinaldi et al., 2022). In that case, the alignment of such a hypothetical axis with the Galactic pole is a coincidence. The experiments described here use galaxies with a clear shape, and are therefore limited to a relatively low redshift of \(z<0.25\). Deeper and larger datasets of clear galaxies with spectra, such as the data provided by the Dark Energy Spectroscopic Instrument (DESI), will allow a higher resolution profiling of the observed anomaly in higher redshift ranges. While the observations do not directly explain the existence of early massive disk galaxies, they demonstrate that the redshift model might be incomplete. In that case, the existence of such galaxies can be explained without the need to modify the standard cosmological models. The results shown here might also provide an indication that the \(H_{o}\) tension can be explained by the slight differences in the redshift. While \(H_{o}\) anisotropy has been reported in the past (Krishnan et al., 2022; Cowell et al., 2022; McConville and Colgain, 2023; Aluri et al., 2023), its nature is still unclear. Differences in the redshift that are based on the rotational velocity of the galaxies relative to the Milky Way can explain the \(H_{o}\) anisotropy, and potentially also the \(H_{o}\) tension. If the rotational velocity of Type Ia supernovae and their host galaxies relative to the Milky Way affects their estimated distance, then when the rotational velocity relative to the Milky Way is normalized, the \(H_{o}\) tension is expected to be resolved.
That is, when using just galaxies that rotate in the same direction relative to the Milky Way, the computed \(H_{o}\) should be similar to the \(H_{o}\) determined by the CMB. Table 5 shows the \(H_{o}\) computed when using the _SH0ES_ collection of Ia supernovae as described in (Khetan et al., 2021), with the open source code and data available at [https://github.com/nanditakhetan/SBF_SNeIa_HO](https://github.com/nanditakhetan/SBF_SNeIa_HO). The table also shows the same experiment when using just galaxies that rotate in the same direction relative to the Milky Way, and when using galaxies that rotate in the opposite direction relative to the Milky Way. The experiment is described in (McAdam and Shamir, 2023). As the table shows, when using the galaxies regardless of the direction of their rotational velocity the \(H_{o}\) is \(\sim\)73.76 km s\({}^{-1}\) Mpc\({}^{-1}\), which is similar to the value reported in (Khetan et al., 2021), and in tension with the \(H_{o}\) determined by the CMB. When limiting the SH0ES collection to galaxies that rotate in the same direction relative to the Milky Way, \(H_{o}\) drops to \(\sim\)69.05 km s\({}^{-1}\) Mpc\({}^{-1}\), reducing the tension with the CMB. Although a certain tension with the CMB still exists, the galaxies are not exactly at the Galactic pole, their inclination is not exactly 90\({}^{o}\), and their rotational velocity is not identical to the rotational velocity of the Milky Way, and therefore the \(H_{o}\) is not expected to be fully identical to the \(H_{o}\) computed with the CMB.

\begin{table} \begin{tabular}{l c c c c c c} \hline Field & \# CW & \# CCW & \(Z_{cw}\) & \(Z_{ccw}\) & \(\Delta\) & t-test P \\ \hline Original 10\(\times\)10 & 710 & 732 & 0.07197\(\pm\)0.0015 & 0.06234\(\pm\)0.0014 & 0.00963 & \(<\)0.0001 \\ Mirrored 10\(\times\)10 & 728 & 709 & 0.06375\(\pm\)0.0014 & 0.07191\(\pm\)0.0014 & -0.00816 & \(<\)0.0001 \\ Original 20\(\times\)20 & 2903 & 2976 & 0.07285\(\pm\)0.0007 & 0.07164\(\pm\)0.0007 & 0.001686 & 0.0443 \\ Mirrored 20\(\times\)20 & 3003 & 2914 & 0.07113\(\pm\)0.0007 & 0.07271\(\pm\)0.0007 & -0.00158 & 0.0505 \\ \hline \end{tabular} \end{table} Table 3: The mean redshift of galaxies annotated by the _SpArcFiRe_ algorithm. The t-test P values are the one-tailed P values.

When using galaxies that rotate in the opposite direction relative to the Milky Way, not only does the \(H_{o}\) not decrease, but it increases, making the tension with the CMB stronger. Since the lower number of galaxies increases the error of the computed \(H_{o}\), the results shown in Table 5 cannot provide a clear proof, but they are consistent with the contention that the possible slight differences caused by the rotational velocity of the observed galaxies might be linked to the \(H_{o}\) tension. Further analysis with larger sets than _SH0ES_ might be needed to better understand whether such a link exists. If the rotational velocity affects distance indicators such as the redshift, observations such as deep fields imaged by space-based telescopes might be more informative when the field is close to the Galactic pole, allowing to separate some of the observed galaxies by their rotational velocity relative to the Milky Way. The observed \(\Delta z\) between galaxies with opposite rotational velocities as shown here is between around 0.0065 and 0.012. If that difference is due to the rotational velocity, it corresponds to a velocity of between roughly 2,000 and 3,600 km\(\cdot\)s\({}^{-1}\).
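As a quick arithmetic check of that range, a sketch using the non-relativistic Doppler approximation \(v\simeq c\,\Delta z\) (an assumption of this check, adequate at these velocities):

\[v\simeq c\,\Delta z:\qquad 299{,}792.458\ \mathrm{km\,s^{-1}}\times 0.0065\approx 1{,}950\ \mathrm{km\,s^{-1}},\qquad 299{,}792.458\ \mathrm{km\,s^{-1}}\times 0.012\approx 3{,}600\ \mathrm{km\,s^{-1}}.\]

Dividing by the expected relative velocity of \(2\times 220=440\) km\(\cdot\)s\({}^{-1}\) gives factors of roughly 4.4 to 8.2, consistent with the "about 5 to 8 times" quoted next.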
That is about 5 to 8 times the rotational velocity of the Milky Way relative to the observed galaxies, which is \(2\cdot 220\approx 440\) km\(\cdot\)s\({}^{-1}\), assuming that the observed galaxies have the same rotational velocity as the Milky Way. That velocity difference is in good agreement with the velocity difference predicted in (Shamir, 2020) by using analysis of the photometric differences between galaxies rotating with or against the rotational velocity of the Milky Way. That analysis was based on the expected and observed differences in the total flux of galaxies that rotate in the same direction relative to the Milky Way and the flux of galaxies that rotate in the opposite direction. Based on the expected flux difference due to the Doppler shift driven by the rotational velocity as shown in (Loeb and Gaudi, 2003), it was predicted that light emitted from the observed galaxies agrees with a rotational velocity that is 5-10 times faster than the rotational velocity of the Milky Way (Shamir, 2020; McAdam and Shamir, 2023). These predictions are close to the results of comparing the redshift as done here. There is no immediate physical explanation for the difference between the redshifts of galaxies that rotate with or against the direction of rotation of the Milky Way. While a certain difference is expected, the magnitude of the difference is expected to be far smaller given the rotational velocity of the Milky Way. The observed redshift difference, if indeed linked to the rotational velocity of the Milky Way and the observed galaxies, corresponds to a much higher rotational velocity than the \(\sim\)220 km\(\cdot\)s\({}^{-1}\) of the Milky Way. On the other hand, the physics of galaxy rotation is one of the most puzzling phenomena in nature, and despite over a century of research it is still not fully understood (Opik, 1922; Babcock, 1939; Oort, 1940; Rubin and Ford Jr, 1970; Rubin et al., 1978, 1980, 1985; Sanders, 1990; Sofue and Rubin, 2001; Mannheim, 2006; Kroupa, 2012; Kroupa et al., 2012; Kroupa, 2015; Arun et al., 2017; Akerib et al., 2017; Bertone and Tait, 2018; Aprile et al., 2018; Skordis and Zlosnik, 2019; Sivaram et al., 2020; Hofmeister and Criss, 2020; Byrd and Howard, 2021). Due to the unexplained tensions in cosmology, the unknown physics of galaxy rotation should be considered as a factor that can be associated with these tensions and explain them.
2303.07654
The improved saturation model in nuclei
We consider the nuclear shadowing in deep-inelastic scattering corresponding to kinematic regions accessible by future experiments at electron-ion colliders. The gluon distribution at small $x$ is obtained using an improved dipole model that depends on the impact parameter for an atomic nucleus, and is compared with the nCTEQ15 parametrization. The nuclear shadowing at small $x$ is defined within the color dipole formalism with respect to the mass number $A$. Its behavior is predicted for light nuclei in a wide range of the impact parameter $b$ and the transverse dipole size $r$. The nuclear saturation at large $r$ (small $\mu^2$) is observable. The behavior of the nuclear ratio $\sigma^{A}_{\mathrm{dip}}/\sigma_{0}$ is similar to the Golec-Biernat-Wüsthoff (GBW) model in a wide range of $r$ for light and heavy nuclei at small $x$.
G. R. Boroun, B. Rezaei
2023-03-14T06:50:42Z
http://arxiv.org/abs/2303.07654v4
# The improved saturation model in nuclei ###### Abstract We develop the relationship between the gluon distribution obtained using a dipole model fitted to low \(x\) data on the proton structure function and nuclear shadowing in deep-inelastic scattering corresponding to kinematic regions accessible by the future experiments at electron-ion colliders. The improved dipole model with impact parameter dependence for nuclei shows the nuclear shadowing at small \(x\) and the nuclear saturation at large \(r\). Nuclear shadowing is treated within the color dipole formalism with respect to the mass number \(A\). The magnitude of nuclear shadowing in the impact parameter saturation model (IP-Sat) is predicted for light nuclei in a wide range of the impact parameter \(b\) and the transverse dipole size \(r\). We compare the model, originally proposed for the gluon density within the dipole framework long ago by R. S. Thorne, with nCTEQ15 and show that the nuclear ratio \(\sigma_{\rm dip}^{A}/\sigma_{0}\) has a behavior similar to the Golec-Biernat-Wüsthoff (GBW) model in a wide range of \(r\) and \(A\) at low \(x\). We find that at \(x=10^{-6}\) and for heavy nuclei, the ratio of dipole cross sections has a saddle-shaped behavior in the range 2\(\times\)10\({}^{-2}\)\(\lesssim\)\(r\)\(\lesssim\)2\(\times\)10\({}^{-1}\) fm, whose magnitude increases with an increase of the atomic number A. This behavior becomes softer when the \(q\overline{q}g\) components in the diffractive systems are considered. ## I. Introduction In the deep inelastic scattering (DIS) process, the microscopic structure of hadrons at high energies at the Future Circular Collider in hadron-electron mode (FCC-he) and the Large Hadron electron Collider (LHeC) [1] is described in terms of various quark and gluon distribution functions (PDFs) in the \(\gamma^{*}A\) interaction (where \(A\) is the number of nucleons in a nuclear target). At small values of the Bjorken variable \(x\), the shadowing effects are a very important feature for the study of nuclear structure and nuclear collisions at the Electron-Ion Collider (EIC) [2,3]. Nuclear shadowing is a consequence of multiple scattering: at high energies, a hadron becomes a dense system and the saturation effects inherent to the QCD dynamics may become visible. This effect arises because the gluon density inside the proton grows with energy. The gluonic structure of protons and nuclei can be studied in the high-density regime of QCD. Therefore, new dynamical non-linear QCD effects associated with the unitarity corrections are expected to slow down its further growth [4-9]. The saturation (non-linear QCD) approaches are characterized by a typical scale, denoted the saturation scale \(Q_{s}^{2}(x)\), which is energy dependent, and marks the transition between the linear (leading twist) perturbative QCD regime and the saturation domain. The saturation regime of hadronic and nuclear wave functions at small longitudinal momentum fraction \(x\) is characterized by this scale. The photoabsorption cross section data from HERA at small \(x\), in a wide range of \(x\) and \(Q^{2}\), lie on a single curve when plotted against the variable \(Q^{2}/Q_{s}^{2}\), with \(Q_{s}^{2}\)\(\sim\)\(x^{-\lambda}\) and \(\lambda\)\(\simeq\)0.3 [10]. The same scaling ansatz is observed for nuclear photoabsorption cross sections [11,12]. Nuclei have more gluons than protons, and therefore the non-linear effects are visible in the evolution of the nuclear gluon distribution.
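To make this scale concrete, the following is a minimal sketch of the GBW-type saturation scale \(Q_{s}^{2}(x)=(x_{0}/x)^{\lambda}\) GeV\(^{2}\) and its simple nuclear enhancement. The parameter values are the fit values quoted later in Eq.(25); the \(A^{1/3}\) scaling and the implied unit of GeV\(^{2}\) are the large-nucleus choice of Ref.[14] and an assumption of this sketch, respectively.

```python
# Sketch: GBW-type saturation scale Qs^2(x) = (x0/x)^lambda [GeV^2] and its
# A^(1/3) nuclear enhancement (large-nucleus choice, Ref.[14]). Parameter
# values are the HERA fit values quoted later in Eq.(25).
lam, x0 = 0.277, 0.41e-4

def Qs2(x, A=1):
    """Saturation scale in GeV^2; A > 1 applies the A^(1/3) enhancement."""
    return A ** (1.0 / 3.0) * (x0 / x) ** lam

for x in (1e-2, 1e-4, 1e-6):
    print(f"x={x:.0e}: Qs2(proton)={Qs2(x):.3f} GeV^2, "
          f"Qs2(Au, A=197)={Qs2(x, A=197):.3f} GeV^2")
```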
Non-linear effects are expected when \(\alpha_{s}T_{A}(b)xg(x)\)\(\sim\)\(Q^{2}\), where \(T_{A}(b)\) is the nuclear thickness and \(g(x)\) is the gluon density (in the following, \(G(x,Q^{2})=xg(x,Q^{2})\), where \(G(x,Q^{2})\) is the gluon distribution). The nuclear shadowing becomes important, in DIS on nuclei, at the Bjorken variable \(x\)\(\ll\)\(x_{A}=\frac{1}{m_{N}R_{A}}=0.15A^{-1/3}\), where \(R_{A}\) is the radius of the target nucleus and \(m_{N}\) is the nucleon mass [13]. The saturation scale is even enhanced in the case of nuclear collisions, with the nuclear saturation momentum scaling as \(Q_{s,A}^{2}\)\(\propto\)\(A^{j}Q_{s}^{2}\), where \(j\simeq\frac{1}{3}\) or \(\frac{4}{9}\) as reported in Refs.[4-6,11-12,14]. For large nuclei, the value of \(j\) corresponds to 1/3. The key feature is the connection of the dipole-target amplitude to the integrated gluon density. The parton saturation models shed light on the behavior of the gluon density at very low \(x\), and this knowledge is crucial, for instance, to describe the exclusive processes in ep and eA collisions. An equivalent explanation, in the frame in which the nucleus is moving fast, is that gluon recombination due to the overlap of the gluon clouds from different nucleons makes the gluon density in a nucleus with mass number A smaller than A times that in a free nucleon. Recently, the authors in Ref.[15] showed this behavior using the "brute force" method in momentum space. Indeed, the saturation effects play an important role in the processes \(e+A\)\(\rightarrow\)\(e+X\) and in the kinematical range of the future EIC data. The experiment at the EIC is DIS off a proton or a nucleus with a variable center-of-mass energy within the range \(20<\sqrt{s}<140\) GeV, which is lower than at HERA with \(\sqrt{s}=318\) GeV, but the luminosity is higher by a factor of 1000. The EIC will combine the experience from HERA in delivering polarized electron beams with the experience from RHIC to be the first machine that provides the collision of polarized electrons with polarized protons [18,19]. The kinematic regions in experiments at the proposed EIC [2,3] at the Brookhaven National Laboratory are shown in Fig. 1. This collider (i.e., the EIC) would have a strong impact, in particular on understanding the small and large \(x\) regions of nuclear shadowing and the EMC effect, in comparison with fixed-target kinematics, where DIS data are considerably restricted in their range in \(x\) and \(Q^{2}\), with only limited statistics for various nuclei [20]. In this paper we present a simple model for nuclear dipole cross sections in the region of small \(x\) (\(x\)\(\leq\)0.01) and of small and moderate \(Q^{2}\) in the improved dipole picture. One of the goals of this paper is to consider the bSat and bCGC models in the nuclear improved saturation model in the kinematical range that will be probed by the EIC and LHeC. We show that the geometrical scaling (GS) holds for the nuclear improved saturation model in a wide kinematic region of \(rQ_{s}\). The paper is organized as follows. In the next section, we present a brief overview of the formalism needed for the description of the exclusive processes in ep collisions and discuss the distinct models for the dipole-proton scattering amplitude employed in our analysis. In Section III, we exhibit the results for the asymptotic behavior of the gluon density. Moreover, we present our predictions for the nuclear gluon density and nuclear dipole cross section at the EIC and LHeC energies.
In Section IV a comparison of the results of the model with available data on \(G/A\) will be shown, and the dipole cross sections will be discussed. Finally, in the last Section conclusions will be outlined. ## II. The Dipole Cross-Section Models The color dipole formulation provides an intuitive picture of hard processes in high energy scattering for inclusive and exclusive processes in electron-proton (\(ep\)) and lepton-nucleus (\(lA\)) scattering. It is well known that the dipole picture is a factorization scheme for DIS, which is particularly convenient for the inclusion of unitarity corrections at small \(x\). In the mixed representation, the scattering between the virtual photon \(\gamma^{*}\) and the proton is seen as the scattering of a color dipole, for which the transverse dipole size \(r\) and the longitudinal momentum fraction \(z\) with respect to the photon momentum are defined. The amplitude for the complete process is simply the product of these subprocess amplitudes, as the DIS cross section is factorized into a light-cone wave function and a dipole cross section in the following form \[\sigma_{L,T}^{\gamma^{*}p}(x,Q^{2})=\int dzd^{2}{\bf r}|\Psi_{L,T}({\bf r},z,Q ^{2})|^{2}\sigma_{\rm dip}^{p}(\widetilde{x}_{f},{\bf r}). \tag{1}\] Here \(\Psi_{L,T}\) are the appropriate spin averaged light-cone wave functions of the photon, where the subscripts \(L\) and \(T\) refer to the longitudinal and transverse polarization states of the exchanged boson. \(\sigma_{\rm dip}(\widetilde{x}_{f},r)\) is the dipole cross-section and contains all information about the target and the strong interaction physics; it is related to the imaginary part of the \((q\overline{q})p\) forward scattering amplitude, and \(\widetilde{x}_{f}\)\(\equiv\)\(x(1+4m_{f}^{2}/Q^{2})\) is equivalent to the Bjorken variable and provides an interpolation for the \(Q^{2}\)\(\rightarrow\)\(0\) limit, where \(m_{f}\) is the mass of the quark of flavour \(f\). The variable \(z\), with \(0\leq z\leq 1\), characterizes the distribution of the momenta between quark and antiquark [15,21-22]. In Refs.[23,24], the dipole cross section was proposed to have the eikonal-like form \[\sigma_{\rm dip}^{p}(\widetilde{x}_{f},r)=\sigma_{0}(1-e^{-r^{2}Q_{s}^{2}/4}), \tag{2}\] where the resulting dipole cross section presents the colour transparency property, i.e. \(\sigma_{\rm dip}\sim r^{2}\) when \(r\)\(\rightarrow\)\(0\), which is a purely pQCD phenomenon, and the saturation property, i.e. \(\sigma_{\rm dip}\sim\sigma_{0}\) at large \(r\), which imposes the unitarity condition. The GBW model was updated in [15,21] to improve the large \(Q^{2}\) description of the proton structure function by a modification of the small \(r\) behavior of the dipole cross section to include evolution of the gluon distribution. Bartels-Golec-Biernat-Kowalski (BGBK) improved the dipole cross section by adding the collinear effects, as the implementation of QCD evolution in the dipole cross section depends on the gluon distribution in the following form \[\sigma_{\rm dip}^{p}=\sigma_{0}\{1-\exp(-\frac{\pi^{2}r^{2}\alpha_{s}(\mu^{2}) xg(\widetilde{x}_{f},\mu^{2})}{3\sigma_{0}})\}, \tag{3}\] where the hard scale is assumed to have the form \[\mu^{2}=C/r^{2}+\mu_{0}^{2}, \tag{4}\] and the parameters \(C\) and \(\mu_{0}^{2}\) are obtained from the fit to the DIS data. Figure 1: The \(Q^{2}\) and \(x\) coverage of the EIC with the electron beam energy \(E_{e}=20\) GeV and the ion beam energy per nucleon \(E_{N}=250\) GeV [2,3].
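A minimal numerical sketch of Eqs.(2)-(4) follows. The conversion factors, the fixed \(\alpha_{s}\), and especially the toy power-law gluon density are illustrative assumptions; Eq.(3) properly requires a DGLAP-evolved gluon fitted to DIS data.

```python
# Sketch: GBW (Eq.(2)) versus BGBK (Eq.(3)) dipole cross sections.
# The toy gluon xg, the fixed alpha_s, and sigma_0 value are assumptions for
# illustration only (sigma_0 and lambda, x0 are the fit values of Eq.(25)).
import numpy as np

sigma0 = 29.12 * 2.56819   # 29.12 mb -> GeV^-2  (1 mb = 2.56819 GeV^-2)
lam, x0 = 0.277, 0.41e-4
C, mu0_sq = 0.38, 1.73     # hard-scale parameters of Eq.(4), in GeV^2
GEV_INV_TO_FM = 0.1973     # hbar*c: 1 GeV^-1 = 0.1973 fm

def sigma_gbw(r_fm, x):
    Qs2 = (x0 / x) ** lam                      # GeV^2
    r = r_fm / GEV_INV_TO_FM                   # GeV^-1
    return sigma0 * (1.0 - np.exp(-r * r * Qs2 / 4.0))

def sigma_bgbk(r_fm, x, alpha_s=0.2):
    r = r_fm / GEV_INV_TO_FM
    mu2 = C / (r * r) + mu0_sq                 # Eq.(4)
    xg = 1.5 * (1.0 / x) ** 0.2 / np.log(mu2 / 0.04)   # toy gluon (assumption)
    return sigma0 * (1.0 - np.exp(-np.pi**2 * r * r * alpha_s * xg
                                  / (3.0 * sigma0)))

for r in (0.05, 0.5, 2.0):
    print(f"r={r} fm: GBW={sigma_gbw(r, 1e-4):.2f}, "
          f"BGBK={sigma_bgbk(r, 1e-4):.2f} GeV^-2")
```

As the sketch makes visible, both forms saturate to \(\sigma_{0}\) at large \(r\) and differ mainly at small \(r\), where the evolved gluon enters.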
The BGBK model is successful in describing the dipole cross section at large values of \(r\), where the two models (GBW and BGBK) overlap, but they differ in the small \(r\) region, where the running of the gluon distribution starts to play a significant role. Indeed, the improved model of \(\sigma_{\rm dip}\) significantly improves the agreement at large values of \(Q^{2}\) without affecting the physics of saturation responsible for the transition to small \(Q^{2}\). The dipole model was further improved by introducing the impact parameter (IP) of the proton into the dipole dynamics, as \[\sigma_{\rm dip}^{p}(x,r)=\int d^{2}b\frac{d\sigma_{\rm dip}^{p}}{d^{2}b} \tag{5}\] where \(b\) is a particular IP, \[\frac{d\sigma_{\rm dip}^{p}}{d^{2}b}=2(1-{\rm Re}\ S(b)), \tag{6}\] and \(S(b)\) is the S-matrix element of the elastic scattering. The cross section at a given impact parameter \(b\) is proportional to the dipole area, the strong coupling, the number of gluons in the cloud and the shape function, in the following form [22] \[\frac{d\sigma_{\rm dip}^{p}}{d^{2}b}=2\Big{[}1-\exp\Big{(}-\frac{\pi^{2}r^{2} \alpha_{s}(\mu^{2})xg(\widetilde{x}_{f},\mu^{2})T(b)}{2N_{c}}\Big{)}\Big{]}, \tag{7}\] where the exponential form of the function \(T(b)\) is determined from the fit to the data as \[T(b)=\frac{1}{2\pi B_{G}}\exp(-b^{2}/2B_{G}), \tag{8}\] where the parameter \(B_{G}\) was found to be 4.25 GeV\({}^{-2}\). For multi-Pomeron exchange, the eikonalised dipole scattering amplitude of Eq.(7) can be expanded as \[N(x,r,b)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n!}\Big{[}\frac{\pi^{2}}{2N_{c} }r^{2}\alpha_{s}(\mu^{2})xg(\widetilde{x}_{f},\mu^{2})T(b)\Big{]}^{n}, \tag{9}\] where \(d\sigma_{\rm dip}/d^{2}b=2N(x,r,b)\) and the \(n\)-th term in the expansion corresponds to \(n\)-Pomeron exchange [22]. Eq.(7) is known as the Glauber-Mueller dipole cross section [25] and can also be obtained within the McLerran-Venugopalan model [26]. The saturated version of the dipole model may in principle be derived from the Color Glass Condensate (CGC) effective theory for QCD according to Eq.(7), where at small \(r\) this expression (i.e., Eq.(7)) becomes \[\frac{d\sigma_{\rm dip}^{p}}{d^{2}b}=\frac{\pi^{2}r^{2}\alpha_{s}(\mu^{2})xg (\widetilde{x}_{f},\mu^{2})T(b)}{N_{c}}. \tag{10}\] Eq.(7) is referred to as the IP-Sat model, while Eq. (10) is referred to as the IP Non-Sat model. The Balitsky-Kovchegov (BK) equation [27-29] for a dipole scattering amplitude was proposed in terms of the hierarchy of equations for Wilson line operators in the limit of a large number of colors \(N_{c}\). The geometrical scaling (GS) [30] in the high-energy limit of perturbative QCD is obtained from the BK equation [27-29] and the Colour Glass Condensate formalism [31]. The BGBK and CGC models considered only the dipole cross section integrated over the impact parameter \(b\) [32]. The BGBK model was modified to include the impact parameter dependence, as denoted by the IP-Sat model, and the CGC model was also modified to include the impact parameter dependence, as denoted by the b-CGC model.
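For concreteness, a minimal sketch of the IP-Sat differential cross section of Eqs.(7)-(8) follows, with the Gaussian proton shape and \(B_{G}=4.25\) GeV\(^{-2}\); the product \(\alpha_{s}(\mu^{2})\,xg(\widetilde{x}_{f},\mu^{2})\) is replaced by an assumed placeholder constant.

```python
# Sketch: IP-Sat differential dipole cross section, Eqs.(7)-(8), with the
# Gaussian proton shape T(b) and B_G = 4.25 GeV^-2. The combination
# alphas_xg ~ alpha_s(mu^2) * xg(x, mu^2) is an illustrative placeholder.
import numpy as np

B_G = 4.25   # GeV^-2
N_c = 3

def T(b):
    """Gaussian proton shape function, Eq.(8); b in GeV^-1, result in GeV^2."""
    return np.exp(-b * b / (2.0 * B_G)) / (2.0 * np.pi * B_G)

def dsigma_d2b(r, b, alphas_xg=2.0):
    """Eq.(7); r and b in GeV^-1. Saturates to 2 at small b and large r."""
    return 2.0 * (1.0 - np.exp(-np.pi**2 * r * r * alphas_xg * T(b)
                               / (2.0 * N_c)))

for b in (0.0, 2.0, 4.0):
    print(f"b={b} GeV^-1: dsigma/d2b = {dsigma_d2b(r=2.0, b=b):.3f}")
```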
The dipole cross section can be calculated in the CGC approach from the relation \[\sigma_{\rm dip}^{p}(x,r)=\sigma_{0}{\cal N}(x,r), \tag{11}\] where \(\sigma_{0}=2\pi R_{p}^{2}\) and \[{\cal N}(x,r)=\begin{cases}N_{0}\left(\frac{rQ_{s}}{2}\right)^{2\left(\gamma_{s}+\frac{1}{k\lambda Y}\ln\frac{2}{rQ_{s}}\right)}&:rQ_{s}\leq 2\\ 1-e^{-a\ln^{2}(b\,rQ_{s})}&:rQ_{s}>2\end{cases} \tag{12}\] where \(a\) and \(b\) are constants fixed by matching \({\cal N}\) and its derivative at \(rQ_{s}=2\), \(Y=\ln(1/x)\), and \(k=\chi^{\prime\prime}(\gamma_{s})/\chi^{\prime}(\gamma_{s})\), where \(\chi\) is the LO BFKL characteristic function. The scattering amplitude \({\cal N}(x,r)\) can vary between zero and one, where \({\cal N}=1\) is the unitarity limit. To introduce the impact parameter dependence into the CGC model, the b-CGC model for the dipole cross section is defined in the following form [32] \[\frac{d\sigma_{\rm dip}^{p}}{d^{2}b} = 2{\cal N}(x,r,b) \tag{13}\] where the impact parameter dependence of the saturation scale \(Q_{s}\) was introduced by \[Q_{s}{\equiv}Q_{s}(x,b)=(\frac{x_{0}}{x})^{\lambda/2}\Big{[}\exp(-\frac{b^{2} }{2B_{CGC}})\Big{]}^{1/2\gamma_{s}}, \tag{14}\] where the parameter \(B_{CGC}\), instead of \(\sigma_{0}\) in the CGC dipole model, is a free parameter and is determined from other reactions, namely the \(t\) distribution of the exclusive diffractive processes at HERA. ## III Asymptotic Behavior of Gluon Density The main goal of the EIC and of the LHeC is to achieve a deeper knowledge of the hadronic structure at high energies, where the gluon density grows with the energy and a hadron becomes a dense system in which the saturation effects become visible. Considering the gluon density, in inclusive and exclusive processes, in a wide \(Q^{2}\) region at low \(x\) is desirable [4-6,33]. The gluon density, in the dominant double logarithmic approximation (DLA), is given by \[xg(x,\mu^{2}){\propto}\exp{\left[\sqrt{\frac{16N_{c}}{\beta_{0}}\ln\frac{x_{0}}{x}\ln\frac{t}{t_{0}}}\right]}, \tag{15}\] where \(\frac{t}{t_{0}}{\equiv}\ln(\frac{\mu^{2}}{\Lambda_{QCD}^{2}})/\ln(\frac{Q_{0}^ {2}}{\Lambda_{QCD}^{2}})\) and \(\beta_{0}=11-\frac{2}{3}n_{f}\) (\(n_{f}\) is the number of active flavours). In the improved saturation model, a matching between the dipole model gluon distribution and the collinear approach is obtained [34] by using a leading order gluon anomalous dimension \(\gamma_{gg}\) as \[xg(x,\mu^{2}){\propto}I_{0}{\left(2\sqrt{\frac{12}{\beta_{0}}\ln \frac{x_{0}}{x}\ln\frac{t}{t_{0}}}\right)}\exp{\bigg{[}-\delta\ln\frac{t}{t_{0} }\bigg{]}}, \tag{16}\] where \(\delta=(11+\frac{2n_{f}}{27})/\beta_{0}\). In Ref.[33], the author defined the integrated gluon distribution by using the leading twist relation between the unintegrated and integrated gluon distributions, \(g(x,Q^{2})=\int_{0}^{Q^{2}}\frac{dk^{2}}{k^{2}}f(x,k^{2})\), at fixed coupling, in the following form \[xg(x,Q^{2}) = \frac{3\sigma_{0}}{4\pi^{2}\alpha_{s}}{\bigg{[}-Q^{2}\exp(-Q^{2}/Q_{s}^{2})+Q_{s}^{2}(1-\exp(-Q^{2}/Q_{s}^{2}))\bigg{]}}. \tag{17}\] The expression for the nuclear dipole cross section \(\sigma_{\rm dip}^{A}\) is the same except for the change \(g(x,Q^{2}){\rightarrow}g^{A}(x,Q^{2})\), where \(g^{A}(x,Q^{2})\) is obtained from Eq.(17) with the replacement \(Q_{s}^{2}{\rightarrow}Q_{s}^{2A}\). In Ref.[14], the area of the target scales as \(A^{2/3}\), and \(Q_{s}^{2A}=A^{1/3}Q_{s}^{2}\).
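Equation (17) is simple enough to transcribe directly; the following sketch does so, with \(\sigma_{0}\) from Eq.(25) and a fixed \(\alpha_{s}=0.2\) assumed for illustration (the text uses a one-loop running coupling).

```python
# Sketch: the integrated gluon distribution of Eq.(17), obtained from the
# GBW-type unintegrated gluon at fixed coupling (Ref.[33]). The fixed
# alpha_s = 0.2 is an illustrative assumption; sigma_0, lambda, x0 are the
# fit values of Eq.(25).
import numpy as np

sigma0 = 29.12 * 2.56819   # mb -> GeV^-2
alpha_s = 0.2
lam, x0 = 0.277, 0.41e-4

def xg(x, Q2):
    Qs2 = (x0 / x) ** lam
    pref = 3.0 * sigma0 / (4.0 * np.pi**2 * alpha_s)
    return pref * (-Q2 * np.exp(-Q2 / Qs2)
                   + Qs2 * (1.0 - np.exp(-Q2 / Qs2)))

for Q2 in (2.0, 16.0, 100.0):
    print(f"xg(x=1e-4, Q2={Q2:6.1f} GeV^2) = {xg(1e-4, Q2):.3f}")
```

Note that at large \(Q^{2}\) the bracket tends to \(Q_{s}^{2}\), so \(xg\) flattens, which is the fixed-coupling behavior at low \(Q^{2}\) and low \(x\) referred to below.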
In Ref.[11], the extension to the nuclear case is done through \(\sigma^{\gamma^{*}A}=\left(\frac{\pi R_{A}^{2}}{\pi R_{p}^{2}}\right)\sigma^{ \gamma^{*}p}\) and \(Q_{s}^{2A}=\left(\frac{A\pi R_{p}^{2}}{\pi R_{A}^{2}}\right)^{1/\delta}Q_{s}^{2}\), where for a nuclear target with the mass number \(A\), the nuclear radius is given by the usual parameterization \(R_{A}=(1.12A^{1/3}-0.86A^{-1/3})\) fm, and \(\delta=0.79{\pm}0.02\), which translates into a growth of the nuclear saturation scale faster than \(A^{1/3}\) for large nuclei. The area of the proton, \(\pi R_{p}^{2}=1.55{\pm}0.02\) fm\({}^{2}\), is extracted from a fit to the HERA data. In Ref.[8], the author defined several relations for \(Q_{s}^{2A}\). In the first relation, he used a relation coming from the running of the coupling, in the following form \[Q_{s}^{2A}{\ln}{\left(\frac{Q_{s}^{2A}}{\Lambda_{\rm QCD}^{2}} \right)}{\propto}{\left(\frac{T_{A}(b)}{T_{A}(0)}\right)}A^{1/3}Q_{s}^{2}{\ln }{\left(\frac{Q_{s}^{2}}{\Lambda_{\rm QCD}^{2}}\right)}, \tag{18}\] where \(T_{A}(b)\) is the nuclear profile function normalized to unity, \(\int d^{2}b\)\(T_{A}(b)=1\), with \(b\) the impact parameter (IP) of the center of the dipole relative to the center of the nucleus. In the limit \(r{\rightarrow}0\), the author obtained \(Q_{s}^{2A}=\frac{1}{2}AT_{A}(b)\sigma_{0}Q_{s}^{2}\) by imposing the first scattering approximation on the dipole cross section, and according to the maximum of the unintegrated gluon distribution it is obtained in momentum space as \(Q_{s}^{2A}{\simeq}{\left[4{\ln}{\left(\frac{2AT_{A}(b)\sigma_{0}}{2AT_{A}(b) \sigma_{0}-1}\right)}\right]}^{-1}Q_{s}^{2}\). In Ref.[35], the authors defined the saturation scale via the nuclear thickness function \(T_{A}(b)=\frac{3R_{A}}{2\pi r_{0}^{3}}\sqrt{1-\frac{b^{2}}{R_{A}^{2}}}\), obtained from a hard-sphere model for the nuclear distribution in the rest frame, \(\rho_{A}(r)=\frac{3}{4\pi r_{0}^{3}}\theta(R_{A}-r)\), as \(Q_{s}^{2A}{\approx}Q_{s}^{2}A^{1/3}\sqrt{1-\frac{b^{2}}{R_{A}^{2}}}\), with \(r_{0}=1.12\) fm. In Ref.[7], the authors assumed that the positions of the nucleons \(\{{\bf b}_{i}\}\) are distributed according to the Woods-Saxon distribution \[T_{A}(b)=\int dz\frac{C}{1+\exp[(\sqrt{b^{2}+z^{2}}-R_{A})/d]}, \tag{19}\] where the dipole-nucleus cross-section has been written as \[\frac{d\sigma_{\rm dip}^{A}}{d^{2}b}=2\int\prod_{i=1}^{A}\{d^{2}b_{i}T_{A}(b_{i })\}{\left[1-\prod_{i=1}^{A}S_{p}({\bf r},{\bf b}-{\bf b}_{i};x)\right]}, \tag{20}\] which, in this model, is approximated by \[\frac{d\sigma_{\rm dip}^{A}}{d^{2}b} \approx 2{\left[1-(1-\frac{T_{A}({\bf b})}{2}\sigma_{\rm dip}^{p})^{A} \right]}{\simeq}2{\left[1-\exp(-AT_{A}({\bf b})\sigma_{\rm dip}^{p}/2) \right]}. \tag{21}\] The Woods-Saxon distribution is used for \(A>20\), and for light nuclei (\(A<20\)) a Gaussian profile is used [36], \[T_{A}(b)=\frac{3}{2\pi R_{A}^{2}}\exp(-3b^{2}/2R_{A}^{2}), \tag{22}\] where the nuclear radius is parametrized as \(R_{A}=0.82A^{1/3}+0.58\) fm (except for the deuteron). ## IV. Numerical Results The gluon distribution for a nuclear target reads \[xg^{A}(x,Q^{2}) = f(A)\frac{3\sigma_{0}}{4\pi^{2}\alpha_{s}(Q^{2})}{\bigg{[}-Q^{2} \exp(-Q^{2}/Q_{s}^{2A})+Q_{s}^{2A}(1-\exp(-Q^{2}/Q_{s}^{2A}))\bigg{]}}, \tag{23}\] where the functions \(f(A)\) and \(Q_{s}^{2A}\) are defined in Refs.[11, 14]. Note that the nuclear gluon distribution function scales by \(f(A)\).
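The normalization constant \(C\) in Eq.(19) is fixed by \(\int d^{2}b\,T_{A}(b)=1\); a minimal numerical sketch of that normalization follows. The diffuseness \(d=0.54\) fm is a typical Woods-Saxon value assumed here, since it is not quoted in the excerpt above.

```python
# Sketch: Woods-Saxon nuclear thickness T_A(b) of Eq.(19), with the constant C
# fixed numerically by the normalization  integral d^2b T_A(b) = 1.
# The diffuseness d = 0.54 fm is an assumed typical value.
import numpy as np
from scipy.integrate import quad

def thickness(A, d=0.54):
    R_A = 1.12 * A ** (1 / 3) - 0.86 * A ** (-1 / 3)     # fm
    def unnorm(b):
        # longitudinal integral of the Woods-Saxon density at impact parameter b
        return quad(lambda z: 1.0 / (1.0 + np.exp((np.hypot(b, z) - R_A) / d)),
                    -np.inf, np.inf)[0]
    norm = 2.0 * np.pi * quad(lambda b: b * unnorm(b), 0.0, np.inf)[0]
    return lambda b: unnorm(b) / norm                    # fm^-2

T_Au = thickness(197)
print(f"T_Au(b=0) = {T_Au(0.0):.5f} fm^-2")   # central thickness of gold
```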
Since the photon wave function depends on the mass of the quarks in the \(q\overline{q}\) dipole, we modify the Bjorken variable \(x\) in the gluon distribution and the dipole cross section in the following form \[x{\rightarrow}\widetilde{x}=\frac{Q^{2}+4m_{f}^{2}}{Q^{2}+W^{2}}, \tag{24}\] where \(W^{2}\) is the invariant energy squared of the \(\gamma^{*}p\) system. The other parameters, according to Ref.[16], are determined from a fit to HERA data as \[\sigma_{0}=29.12\ {\rm mb},\ \lambda=0.277,\ x_{0}=0.41{\times}10^{-4},\ m_{l}=0.14\ {\rm GeV},\ m_{c}=1.40\ {\rm GeV}. \tag{25}\] The ratio of the color dipole cross section in the nuclear improved saturation model, \(\sigma_{\rm dip}^{A}/\sigma_{0}\), is \[\frac{\sigma_{\rm dip}^{A}}{\sigma_{0}}=1-\exp(-\frac{\pi^{2}r^{2}\alpha_{s}( \mu^{2})xg^{A}(\widetilde{x}_{f},\mu^{2})}{3\sigma_{0}}), \tag{26}\] where \(\mu^{2}=C/r^{2}+\mu_{0}^{2}\) with \(C=0.38\) and \(\mu_{0}^{2}=1.73\ {\rm GeV}^{2}\) [16]. In the leading order running coupling we set \(\Lambda_{\rm QCD}=120\ {\rm MeV}\), which for the one-loop coupling gives \(\alpha_{s}(M_{Z}^{2})=0.118\). The results of our numerical studies of the saturation gluon distribution in \(eA\) processes, and the comparison with nCTEQ15 [37] for \({\rm Au}-197\) at \(Q^{2}=16\) and \(100\ {\rm GeV}^{2}\), are shown in Fig.2. In this figure (i.e., Fig.2), we present results for the nuclear gluon distribution function divided by \(A\) for the heavy nucleus \({\rm Au}-197\) as a function of the momentum fraction \(x\). The dot and dashed curves show our results at \(Q^{2}=16\ {\rm GeV}^{2}\) and \(Q^{2}=100\ {\rm GeV}^{2}\), respectively. They are compared to the nCTEQ15 parametrization at the corresponding values of \(Q^{2}\), given by the solid and dashed-dot curves, respectively. This figure indicates that the results obtained from the present analysis are in good agreement with the ones obtained from the nCTEQ15 parametrization. In the comparison, we observe an inconsistency between the results at some points; this behavior of the gluon density is due to the unintegrated gluon distribution using fixed coupling [33]. In Ref.[33], the author shows that the behavior of the gluon density at low \(Q^{2}\) flattens at low \(x\). The results for shadowing effects in the gluon distribution of nuclei, \(\frac{1}{A}\frac{G^{A}(x,Q^{2})}{G^{N}(x,Q^{2})}\), at \(Q^{2}=16\ {\rm GeV}^{2}\), for a wide range of nuclei including C-12, Ca-40, Ag-108 and Au-197, are shown in Fig.3. We observe that, as expected, the shadowing effects are important for small \(x<10^{-3}\), and the suppression becomes stronger as \(x\) decreases and as the atomic number A increases [38]. These results are comparable with the results of Ref.[39] for the gluon shadowing correction corresponding to the \(|q\overline{q}G>\) Fock component of the photon containing one gluon. These behaviors are observable in other phenomenological parametrizations, such as the GBW, KST [40], BGBK and IP-sat models. In Ref.[39], predictions for the gluon shadowing correction from the \(q\overline{q}G\) fluctuation of the photon are given in the following form, \(\frac{1}{A}\frac{G^{A}(x,Q^{2})}{G^{N}(x,Q^{2})}{\sim}1-\frac{1}{A}\frac{\Delta\sigma_{tot}(q\overline{q}G)}{\sigma_{tot}^{A}(x,Q^{2})}\), where \(\Delta\sigma_{tot}(q\overline{q}G)\) is the inelastic correction to the total cross section \(\sigma_{tot}^{\gamma^{*}N}(x,Q^{2})\).
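Combining Eq.(23) with Eq.(26) gives a directly computable nuclear ratio; a minimal sketch follows. The choices \(f(A)=A\) and \(Q_{s}^{2A}=A^{1/3}Q_{s}^{2}\), and the fixed \(\alpha_{s}\), are simplifying assumptions for illustration only (the actual \(f(A)\) and \(Q_{s}^{2A}\) are those of Refs.[11,14]).

```python
# Sketch: the nuclear ratio sigma_dip^A / sigma_0 of Eq.(26), built from the
# nuclear gluon of Eq.(23) with the hard scale mu^2 = C/r^2 + mu0^2
# (C = 0.38, mu0^2 = 1.73 GeV^2). f(A) = A, Qs^{2A} = A^(1/3) Qs^2, and a
# fixed alpha_s = 0.2 are illustrative assumptions.
import numpy as np

sigma0 = 29.12 * 2.56819      # mb -> GeV^-2
lam, x0 = 0.277, 0.41e-4
C, mu0_sq = 0.38, 1.73
alpha_s = 0.2
GEV_INV_TO_FM = 0.1973

def xgA(x, mu2, A):
    Qs2A = A ** (1 / 3) * (x0 / x) ** lam
    pref = A * 3.0 * sigma0 / (4.0 * np.pi**2 * alpha_s)
    return pref * (-mu2 * np.exp(-mu2 / Qs2A)
                   + Qs2A * (1.0 - np.exp(-mu2 / Qs2A)))

def ratio(r_fm, x, A):
    r = r_fm / GEV_INV_TO_FM
    mu2 = C / (r * r) + mu0_sq
    return 1.0 - np.exp(-np.pi**2 * r * r * alpha_s * xgA(x, mu2, A)
                        / (3.0 * sigma0))

for r in (0.05, 0.2, 1.0):
    print(f"r={r} fm: sigma_dip^A/sigma_0 (Au-197) = {ratio(r, 1e-4, 197):.3f}")
```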
In the improved saturation model, the connection between the nuclear dipole cross section, \(\sigma_{dip}^{A}\), and the integrated nuclear gluon density is crucial for describing the exclusive processes in eA collisions [4]. Figure 2: Results of the nuclear gluon distribution functions for the nucleus of \({\rm Au}-197\). The gluon \(G(x,Q^{2})\) distributions per nucleon (dot and dashed lines) are shown as a function of \(x\) for \(Q^{2}=16\ {\rm GeV}^{2}\) and \(Q^{2}=100\ {\rm GeV}^{2}\), respectively. For comparison, the solid and dashed-dot curves show the results of the nCTEQ15 parametrization at the corresponding values of \(Q^{2}\), respectively. The evolution of the analytical nuclear gluon distribution divided by A for A=12 and 197 as a function of the dipole transverse size, \(r\), is shown in Fig.4. In this figure (i.e., Fig.4), we observe a slow decrease of the nuclear gluon distribution in the large dipole domain, at \(x=10^{-2}\), for light and heavy nuclei. This behavior in the large dipole domain decreases strongly as the Bjorken variable decreases and the number of nucleons in the nuclear target increases. Figure 5 quantifies the size of the dipole cross sections as a function of the mass number A. It presents the ratio \(\sigma^{A}_{\rm dip}/\sigma_{0}\) as a function of \(r\) for a wide range of nuclei including C-12, Ca-40, Ag-108, Au-197 and the free proton. The ratio for the free proton (short dashed-dot) is compared with the GBW model (short dot-thin, Eq.(2)) in a wide range of \(r\) for \(x=10^{-3}\). It is clearly seen that saturation is visible for the free proton at \(r\)\(\sim\)1 fm, and this value decreases as \(A\) increases. The improved saturation model in nuclei gives a similar behavior of the ratio \(\sigma^{A}_{\rm dip}/\sigma_{0}\) in comparison with the GBW saturation model at low \(x\) in a wide range of the dipole transverse size \(r\). Calculations have been performed with the Bjorken variable \(x\) varying in the interval \(x=10^{-6}...10^{-2}\) for Au-197 in Fig.6. The improved saturation model for nuclei gives a good description of the ratio \(\sigma^{A}_{\rm dip}/\sigma_{0}\) in comparison with the GBW saturation model at low \(x\) in a wide range of the momentum transfer \(Q^{2}\). In Fig.6 we observe that, in the interval 2\(\times\)10\({}^{-2}\) fm\(\lesssim\)\(r\)\(\lesssim\)5\(\times\)10\({}^{-1}\) fm, a depletion occurs for \(x<10^{-3}\). This depletion is strongly dependent on the mass number A. In Fig.7 this behavior for the light and heavy nuclei is shown for \(x=10^{-6}\), which significantly enhances the importance of the nonlinear corrections for heavy nuclei compared to the proton case. This effect is visible in the range \(1.75\) GeV\({}^{2}<\mu^{2}<3.3\) GeV\({}^{2}\) at very low \(x\) (i.e., \(x=10^{-6}\)) for heavy nuclei. One can see from Fig. 7 that the nonlinear effects clearly become more important with increasing A, for small values of \(x\) and \(Q^{2}\). Indeed, the deviation from unity in this ratio is an indication of saturation physics. A depletion in this ratio is called "shadowing", whereas an enhancement is called "anti-shadowing". The anti-shadowing is related to the coherent multiple scattering, where it introduces the medium-size enhanced (in powers of \(A^{1/3}\)) nuclear effects [41-46]. The nuclear shadowing is controlled by the interplay of photon lifetime and coherence time fluctuations for the transition between no shadowing and saturated shadowing at very small \(x\) [47,48].
In Fig.8, we have plotted the ratio \(\sigma^{2A}_{\rm dip}/\sigma^{2}_{0}\) for the diffractive \(q\overline{q}\) production in the color singlet state as a function of \(r\) at \(x=10^{-6}\) for a wide range of nuclei including C-12, Ag-108, Pb-208 and the free proton. The diffractive \(\gamma^{*}A\)\(\rightarrow\)\(q\overline{q}A^{\prime}\) cross section is proportional to \(\sigma^{2A}_{\rm dip}(x,r)\), where at small values of the diffractive mass \(M^{2}\sim Q^{2}\) the elastic scattering of the \(q\overline{q}\) pair dominates. In this figure (i.e., Fig.8), we observe that the saddle point decreases as the mass number increases. This behavior of the ratio \(\sigma^{2A}_{\rm dip}/\sigma^{2}_{0}\) for heavy nuclei is deeper than that of the ratio \(\sigma^{A}_{\rm dip}/\sigma_{0}\). In Fig.9, we have added the \(q\overline{q}g\) contribution (due to gluon production in the final diffractive state) for the diffractive processes at larger values of the mass, \(M^{2}\)\(\gg\)\(Q^{2}\), with a weight factor \(C_{A}/C_{F}=2N_{C}^{2}/(N_{C}^{2}-1)\), with \(C_{A}=N_{c}=3\) and \(C_{F}=\frac{N_{C}^{2}-1}{N_{C}}=\frac{4}{3}\), where \(N_{C}\) is the number of colors. This component was computed in the two-gluon exchange approximation with a color octet dipole \(8\overline{8}\), where the coupling of the two \(t\)-channel gluons is rescaled by the weight factor. This weight factor increases the saddle point, because this behavior is tamed at low values of \(x\) for the \(q\overline{q}g\) contribution. A comparison between the \(q\overline{q}\) and \(q\overline{q}g\) components of the diffractive system in the ratio \(\sigma_{\rm dip}^{2A}/\sigma_{0}^{2}\) as a function of \(r\) at \(x=10^{-6}\) for Au-197 is shown in Fig.9. We observe that the saturation point decreases from \(r\lesssim 10^{-1}\) to \(r\lesssim 10^{-2}\) at very low \(x\) for heavy nuclei. Figure 4: Results of the nuclear gluon distribution functions for the nuclei C-12 and Au-197. The gluon \(G^{A}(x,\mu^{2})\) distributions per nucleon are shown as a function of the dipole transverse size, \(r\), for \(x=10^{-2}\) (solid), \(x=10^{-4}\) (dashed-dot) and \(x=10^{-6}\) (short dashed), respectively. Figure 5: The ratio \(\sigma^{A}_{\rm dip}/\sigma_{0}\) as a function of \(r\) at \(x=10^{-3}\) for a wide range of nuclei including C-12 (solid), Ca-40 (dashed-dot), Ag-108 (dot), Au-197 (short-dash) and the free proton (short dashed-dot). The ratio for the free proton (short dashed-dot) is compared with the GBW model (short dot-thin). Figure 6: The extracted ratio \(\sigma^{A}_{\rm dip}/\sigma_{0}\) as a function of \(r\) at \(x=10^{-6}...10^{-2}\) (curves from left to right, respectively) for Au-197. Figure 7: Results of the nonlinear effects due to the mass number A for the ratio \(\sigma^{A}_{\rm dip}/\sigma_{0}\) as a function of \(r\) at \(x=10^{-6}\) for a wide range of nuclei including C-12, Ag-108, Pb-208 and the free proton. Figure 8: Results of the nonlinear effects due to the mass number A in the simplest case of the \(q\overline{q}\) system for the ratio \(\sigma^{2A}_{\rm dip}/\sigma_{0}^{2}\) as a function of \(r\) at \(x=10^{-6}\) for a wide range of nuclei including C-12, Ag-108, Pb-208 and the free proton. In Figs.
10 and 11, we consider the differential cross section \(d\sigma_{\rm dip}^{A}/d^{2}b\) at a given impact parameter \(b\), using the definition of the total cross section of the \(q\overline{q}\) pair on the proton, \(\sigma_{q\overline{q}}^{p}\), with the integrated Woods-Saxon distribution \(T_{A}(b)\) scaled by the number of nucleons, for \(x=10^{-3}\) [35, 49]. In these figures (i.e., Figs.10 and 11), the nuclear dipole cross sections at impact parameter \(b\) are calculated for the nuclei C-12 and Ca-40 in a wide range of the parameters \(b\) and \(r\), respectively. We observe that the saturation is visible at \(r\)\(\simeq\)1 fm for C-12 in a wide range of \(b\), 0\(\leq\)\(b\)\(\leq\)8 GeV\({}^{-1}\), and moves towards lower \(r\) (i.e., \(r<1\) fm) when the mass number A increases (see Fig.11 for Ca-40). These 3D figures show a break in the behavior of \(d\sigma_{\rm dip}^{A}/d^{2}b\) as it increases from approximately 0.1 to 0.5 with an increase of \(A\) from 12 to 40, respectively. We see that the two functions for C-12 and Ca-40 differ in the small-\(r\) region, where the running of the gluon distribution starts to play a significant role, with an increase of the mass number \(A\). Indeed, the behavior of \(d\sigma_{\rm dip}^{A}/d^{2}b\) is directly dependent on the gluon density and the mass number \(A\). These behaviors clearly indicate that the IP saturation model can be used to study nuclear effects in the future experiments at electron-ion colliders. ## V Conclusions In this paper, we studied the improved saturation model for nuclei with respect to the gluon density obtained long ago by Thorne [33] within the color dipole approach. The nuclear cross-section is evaluated by quantifying the impact of the nuclear gluon density at small \(x\). We presented a study of the shadowing in deep-inelastic scattering off nuclei in the kinematic regions accessible by future electron-ion colliders. The dipole cross sections are considered in the description of the inclusive and diffractive DIS at small \(x\) in a wide range of the mass number \(A\). The ratio \(\sigma_{\rm dip}^{A}/\sigma_{0}\) due to the nuclear effects is similar to that of the GBW saturation model at low \(x\), although the saturation region decreases with an increase of the mass number \(A\). A saddle-shaped behavior is predicted at very low \(x\) for heavy nuclei in the range 2\(\times\)10\({}^{-2}\)\(\lesssim\)\(r\)\(\lesssim\)2\(\times\)10\({}^{-1}\) fm due to the nonlinear effects. In the diffractive DIS processes, where the \(q\overline{q}g\) component deviates from the GBW and CGC models, the behavior at very low \(x\) for heavy nuclei is tamed. This behavior increases the saturation region with the increase of the mass number \(A\). Nuclear corrections to the impact parameter dependent dipole cross section in a wide range of the impact parameter \(b\) and the dipole size \(r\) are considered. The saturation region in the IP-Sat model increases as \(r\) decreases and the mass number \(A\) increases, in a wide range of \(b\). Indeed, we have tested the IP-Sat model with impact parameter dependence for increasing mass number \(A\).
The influence of the impact parameter structure decreases as the mass number \(A\) increases, which gives a possibility to test various models for the nuclear dipole cross section at small \(x\) at future colliders such as the EIC and the LHeC. Figure 10: The nuclear dipole cross section at impact parameter \(b\) as a function of \(r\) and \(b\) at \(x=10^{-3}\) for C-12. Figure 11: The same as Fig.10 for Ca-40. ###### Acknowledgements. The author is grateful to Razi University for the financial support of this project.
2310.00003
Derivation of a 2D PCCU-AENO method for nonconservative problems. Theory, Method and theoretical arguments
In this paper, we introduce a methodology to design genuinely two-dimensional (2D) second-order path-conservative central-upwind (PCCU) schemes. The scheme handles dam-break flows with high sediment concentration over abrupt moving topography that is rapidly spatially variable, even in the presence of resonance. This study is made possible by a 2D sediment transport model (including arbitrarily sloping sediment beds) based on new generalized Shallow Water equations derived in this work together with their associated energy and entropy. We establish an existence theorem of global weak solutions. We show the convergence of a sequence of solutions of the proposed model. The second-order accuracy of the PCCU scheme is achieved using a new extension of the AENO (Averaging Essentially Non-Oscillatory) reconstruction developed in its 2D version in this work. We prove by rigorous demonstrations that the derived 2D scheme on structured meshes is well-balanced and positivity-preserving. Several tests are made to show the ability and superb performance of the proposed numerical modeling. The results obtained are compared with those existing in the literature and with experimental data. The current modeling improves some recent results in sediment transport and shows a good ability to simulate sediment transport in large-range environments.
Ngatcha Ndengna Arno Roland
2023-07-24T03:20:07Z
http://arxiv.org/abs/2310.00003v1
Derivation of a 2D PCCU-AENO method for nonconservative problems. Theory, Method and theoretical arguments NGATCHA NDENGNA ARNO ROLAND ###### Abstract In this paper, we introduce a methodology to design genuinely two-dimensional (2D) second-order path-conservative central-upwind (PCCU) schemes. The scheme handles dam-break flows with high sediment concentration over abrupt moving topography that is rapidly spatially variable, even in the presence of resonance. This study is made possible by a 2D sediment transport model (including arbitrarily sloping sediment beds) based on new generalized Shallow Water equations derived in this work together with their associated energy and entropy. We establish an existence theorem of global weak solutions. We show the convergence of a sequence of solutions of the proposed model. The second-order accuracy of the PCCU scheme is achieved using a new extension of the AENO (Averaging Essentially Non-Oscillatory) reconstruction developed in its 2D version in this work. We prove by rigorous demonstrations that the derived 2D scheme on structured meshes is well-balanced and positivity-preserving. Several tests are made to show the ability and superb performance of the proposed numerical modeling. The results obtained are compared with those existing in the literature and with experimental data. The current modeling improves some recent results in sediment transport and shows a good ability to simulate sediment transport in large-range environments. **Keywords:**_Sediment transport model, 2D PCCU method, 2D AENO hydrostatic reconstruction, Resonance condition, Dam break test, Coastal environment._ ## I Introduction This work proposes a new second-order finite volume method to solve a new averaged hyperbolic sediment transport model (STM) that includes arbitrarily sloping sediment beds for application in coastal or estuarine environments. _Sediment transport models_ Sediment transport models can be based on Saint-Venant equations, homogeneous Shallow Water equations, or nonhomogeneous Shallow Water equations. Classical sediment transport models based on homogeneous shallow water equations and the Exner model use empirical or heuristic bedload sediment flux formulas and are not able to capture internal topography waves. During the evolution of the topography it is possible to observe resonance phenomena. We can also observe situations where the flow is near resonance. The resonance phenomenon appears when the free internal wavelength that satisfies the unforced equations coincides with the wavelength of the topography forcing. The presence of resonance yields a hypersurface on which all the characteristic fields are linearly degenerate. This phenomenon is completely ignored in several recently developed homogeneous or nonhomogeneous Shallow Water based models, which state that _the sediment velocity is equal to the fluid velocity_ and that the topography moves with the fluid velocity [1], [2], [3], [4]. In subcritical and supercritical flow conditions these statements are not applicable. All these shortcomings make the Exner-based models very limited models for describing the morphodynamics with accuracy. A new bedload sediment transport model that captures bed waves and accounts for the phase lag effect is proposed. Note that when the bed moves, the classical Exner equation is not enough to properly describe the morphodynamic evolution of the channel (regular or irregular).
To control the local velocity of sediment and, more generally, the characteristic velocity of the advection of the bed sediment form, a non-heuristic formula is used. This term corresponds to the impulse of the entrained mass that must instantly assume the characteristic velocity of the moving bed interface. Here, the alternative formulation of the bed evolution equation proposed extends the classical Exner model and applies to a wide range of environmental contexts. #### Hyperbolicity and mathematical analysis of sediment transport models From a mathematical point of view, the two-dimensional averaged sediment transport models developed in the literature admit two major difficulties related to the hyperbolicity study and the mathematical analysis. It is difficult to show the existence of entropy solutions and the regularity and uniqueness of weak solutions when they exist. For some sediment transport models available in the literature, this important part is often neglected. A rigorous mathematical analysis of a sediment transport model was performed by Birnir and Rowlett [5]. In this work, a brief mathematical analysis of the model is presented. We expose some important results. Hyperbolicity can fail due to the morphodynamic equation used, which can require complex sediment transport flux formulas. To address the hyperbolicity of ST models, there are some alternatives used in the literature. Some authors use the Lagrange theorem [4] or the Gerschgorin theorem [6] to find the eigenvalues of the STM when the Exner equation (integrating Grass or other complicated formulas) is used. Finding the eigenvalues of the sediment transport system of equations can depend on the choice of the empirical sediment flux formula used [7] or the bedload model used [8]. It is also possible to use a flux splitting technique (which can fail in some situations), as in [9]. With flux splitting, the system becomes hyperbolic or weakly hyperbolic and the eigenstructure can be easily found without the use of the Lagrange or Gerschgorin theorems. For some ST models, when the eigenvalues cannot be explicitly calculated, a decoupled approach is used for solving the problem. Such a technique is used by [10]. Due to strong and quick interactions between the flow and the moving topography, the coupled approach is often used in the literature. This technique is more appropriate than the decoupled approach, which can reduce the number of total waves involved in the physics of the model. We show in [2] and [4] that the decoupled approach may fail, producing unphysical instabilities. The question of the hyperbolicity study remains open for several sediment transport models when the bed evolution equations are complex. The difficulty of obtaining a genuinely 2D hyperbolic model without any ad hoc assumptions remains for several scientists. A simple non-heuristic bedload equation is proposed here according to the kinematic equation of the bed interface to address this issue. ### Numerical schemes and limitations The proposed nonconservative model is addressed by a finite volume method (FVM) with special reconstruction procedures. FVM is an important building block of numerical methods for hyperbolic systems. Numerically, formal consistency with a particular definition of weak solutions does not imply that the limits of the numerical approximations are weak solutions. This major difficulty can appear in the presence of large shocks which do not satisfy the jump condition associated with the definition of weak solutions.
Some numerical methods have been developed to solve sediment transport problems (see for instance [11], [1], [10]). The flux-limiter scheme based on the Lax-Wendroff method, coupled with a non-homogeneous Riemann solver and a flux limiter function, developed in [12], needs explicit knowledge of the eigenstructure of the system. This makes the flux-limiter scheme computationally expensive, and less expensive and more accurate schemes are still desired. A well-balanced positive HLLC-based scheme has been developed by Castro et al. [13]. This scheme requires an increasing number of intermediate waves and can become computationally expensive and even complex for 2D sediment transport problems. The Central-Upwind (CU) scheme, like any upwind numerical method, requires knowledge of the eigenstructure of the problem and often suffers from instability and robustness problems when the bedload integrates complex empirical formulas. Roe-type methods, based on a special linearization of the nonlinear system of governing PDEs, account for all the intermediate waves and also require explicit knowledge of the eigenstructure of the system; this makes Roe-based methods computationally expensive. The HLL (Harten-Lax-van Leer) Riemann solver [14] is often used to solve ST problems [15]. The HLL solver is an incomplete Riemann solver and accounts only for the fastest and slowest speeds of propagation; one major drawback of the HLL solver is its increased numerical diffusion (or dissipation). Its variants, such as the HLLC of Toro, Spruce, and Speares [16] and HLLEM [17], are less dissipative but require additional spectral information. The use of the HLL Riemann solver to evaluate the flux is possible but can become difficult when the number of intermediate waves increases. The use of an HLLC solver can require the resolution of complex nonlinear problems via the Newton method and integrates some empirical considerations or choices of functions (see [18]). More general path-conservative incomplete Riemann schemes or their extensions can also be used for sediment transport. Among these schemes we cite the PVM (polynomial viscosity matrix) and RVM (rational viscosity matrix) solvers of Castro et al. [19], [20] and their variants (see for instance [18]). All these schemes require the choice of a function to control the numerical diffusion and some other empirical considerations whose design is not easy. Based on the path-conservative formalism [21], the PCCU scheme has been designed to improve some classical nonconservative schemes developed in one dimension. No rigorously established two-dimensional version of this scheme exists in the literature for sediment transport problems. Here a two-dimensional scheme is developed to address the drawbacks above. The first goal of this paper is to show that the 2D PCCU method accommodates very well genuinely two-dimensional nonconservative sediment transport equations. We will show that when conservation laws exist, the 2D PCCU scheme can reproduce some other well-known schemes such as the classical 2D path-conservative scheme, the 2D CU scheme, the 2D path-conservative HLL scheme, the 2D HLL solver, and so on. Among these schemes, the PCCU has so far seen little application to more general two-dimensional nonlinear nonconservative hyperbolic systems related to sediment transport.
This numerical method was originally developed for shallow water equations by Castro et al. [22] and was recently extended to Saint-Venant-Exner with a novel well-balanced discretization strategy by Ngatcha et al. [23]. The PCCU scheme has the advantage of combining the discretization of conservative and nonconservative terms and can easily achieve a high order of accuracy through high-order polynomial reconstructions. The presence of sediment transportation/deposition, sediment exchange, friction terms, and the bedload equation modifies the design of the scheme. Some numerical methods lose accuracy when sediment transport and morphodynamics are investigated. For sediment transport problems, the numerical scheme must ensure the C-property, capture the shocks and preserve the positivity of the water depth. A two-dimensional well-balanced discretization strategy is developed here to capture the steady-state solutions. We also develop a 2D hydrostatic reconstruction that preserves a positive water depth for all reconstructed values.

### Numerical strategies and flux approximation techniques

The numerical strategy used here to solve the proposed STM is the coupled numerical method. In this strategy, the fluid model, the sediment concentration, and the morphodynamic model are solved at the same time. The interest of this method is that all the unknowns of the system are updated at the same time steps during the simulation. The discrete flux can be evaluated using three techniques. The first consists of evaluating the fluxes only at the edge centers of each cell (see Fig. 1b); the interest of this strategy is that we get rid of empirical considerations in the evaluation of the flux. The second technique consists of calculating the fluxes at both the vertices and the edges of each cell (see Fig. 1c). The third strategy consists of calculating the fluxes only at the vertices of each cell (see Fig. 1a). In this paper, the fluxes are evaluated only at the edges of each cell (see Fig. 1b). Here, an STM including arbitrarily sloping sediment beds and the associated entropy and energy is proposed and solved by a derived 2D well-balanced positivity-preserving PCCU-AENO method on structured meshes. Moreover, an existence theorem for global weak solutions of the model is established and a convergence study is discussed.

### Objectives of paper

The main objectives of this paper are to: (i) derive a new sediment transport model in a coastal or estuarine environment; the resulting model can be viewed as the generalization of a class of averaged sediment transport models; (ii) derive first and second-order 2D PCCU schemes on structured meshes; (iii) develop a 2D AENO nonlinear reconstruction technique to achieve second order of accuracy.

Figure 1: 2D finite volume gridding. Flux evaluated at the vertices of each cell (a); flux interfaces at the edges of the cell (b); total flux contribution of cells and vertices of each cell (c).

#### Goals of paper

One goal of this paper is to introduce a methodology to design a 2D PCCU-AENO scheme on structured meshes for solving nonconservative equations. Another goal is to propose a physical and mathematical analysis (hyperbolicity, existence theorem and convergence) of the model.

#### Highlights of paper

The highlights of the paper are to: (i) integrate some physical and hydro-morphodynamic processes to describe sediment transport in the coastal environment; (ii) propose an existence theorem for global weak solutions of the model.
(iii) implement a methodology to design a 2D second-order structured PCCU scheme.

#### Scientific Contributions

The novelties of this paper are: the development of a new bedload model; the existence of global weak solutions and convergence results for the model; the development of 2D PCCU schemes with 2D AENO reconstruction; the mathematical and physical analysis of the model; and several validations against experimental data. The rest of the paper is organized as follows. Section (II) is dedicated to the mathematical model, which couples generalized Shallow Water equations and sediment transport equations. We study the hyperbolicity of the model in the 1D and 2D cases, give the Rankine-Hugoniot relations, and study the steady-state solutions of the system. We propose an existence theorem for global weak solutions and expose a convergence result. In section (III), after a brief preliminary on the path-conservative method, a methodology to design a 2D well-balanced PCCU scheme on structured meshes is developed. We develop for the first time a 2D AENO nonlinear reconstruction to obtain second-order accuracy. In section (IV) several tests are performed and the numerical results are compared and discussed.

## II Mathematical modelling and hyperbolicity study

### Governing equations.

First, we consider the two-phase equations where each phase \(k=s,f\) (sediment '\(s\)' or fluid '\(f\)') satisfies the Navier-Stokes (NS) equations as follows:

\[\frac{\partial\alpha_{k}\rho_{k}}{\partial t}+\mathrm{div}(\alpha_{k}\rho_{k}U_{k})=0, \tag{1}\]
\[\frac{\partial\alpha_{k}\rho_{k}\mathbf{u}_{k}}{\partial t}+\nabla\cdot(\alpha_{k}\rho_{k}\mathbf{u}_{k}\otimes\mathbf{u}_{k})+\frac{\partial(\alpha_{k}\rho_{k}\mathbf{u}_{k}w_{k})}{\partial z}+\nabla P_{k}=\mathcal{F}_{k,x,y},\]
\[\frac{\partial\alpha_{k}\rho_{k}w_{k}}{\partial t}+\nabla\cdot(\alpha_{k}\rho_{k}\mathbf{u}_{k}w_{k})+\frac{\partial(\alpha_{k}\rho_{k}w_{k}w_{k})}{\partial z}+\frac{\partial P_{k}}{\partial z}=\mathcal{F}_{k,z},\]
\[\mathrm{div}(U_{f})=0.\]

Here, \(U_{k}=(\mathbf{u}_{k},w_{k})\), \(\rho_{k}\), \(P_{k}\), \(\alpha_{k}\), \(\mathcal{F}_{k}\) are respectively the 3D velocity, the density, the pressure, the volume fraction and the source terms of the phase \(k=s,f\). Next, we consider the 3D classical NS equations for the evolution of the mixture quantities and the sediment volume fraction, obtained by summing the two phases of system (1). One has:

\[\frac{\partial\rho}{\partial t}+\frac{\partial(\rho u_{i})}{\partial x_{i}}=0,\]
\[\frac{\partial\rho u_{i}}{\partial t}+\frac{\partial(\rho u_{i}u_{j})}{\partial x_{j}}+\frac{\partial P}{\partial x_{i}}=\mathcal{F}_{i}, \tag{2}\]
\[\frac{\partial u_{i}}{\partial x_{i}}=0,\]

where \(u_{i}\), \(i=1,2,3\), are the 3D velocity components. The hydrostatic assumption gives an analytical formulation of the pressure in terms of the atmospheric pressure (considered constant here) and the vertical water column. The hydrostatic assumption consists of neglecting the vertical fluid acceleration in the flow, i.e. the particle derivative \(\frac{d(\rho w)}{dt}=0\). This leads to: \(P=\int_{Z_{b}}^{\eta}\rho g\,dz\), where \(g\) is the gravitational constant, \(\eta\) is the free surface and \(Z_{b}\) is the bed interface. In the pressure term, the mixture density \(\rho\) is given by:

\[\rho=\rho_{w}(1-c)+\rho_{s}c, \tag{3}\]

where \(c\) is the instantaneous sediment concentration, and \(\rho_{w}\), \(\rho_{s}\) are respectively the water density and the sediment density (assumed constant in time and space).
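To make the mixture closure (3) and the hydrostatic pressure integral concrete, here is a minimal numerical sketch. It is illustrative only: the concentration profile and all parameter values are made-up assumptions, not taken from the paper.

```python
import numpy as np

# Minimal illustration of the mixture density (3) and the hydrostatic
# pressure P = \int_{Z_b}^{eta} rho * g dz, evaluated by trapezoidal
# quadrature over the water column (illustrative values only).
g = 9.81                         # gravitational constant [m/s^2]
rho_w, rho_s = 1000.0, 2650.0    # water / sediment densities [kg/m^3]

def mixture_density(c):
    """Mixture density rho = rho_w*(1-c) + rho_s*c for concentration c."""
    return rho_w * (1.0 - c) + rho_s * c

def hydrostatic_pressure(eta, z_b, conc_profile, n=100):
    """Pressure at the bed under the hydrostatic assumption.

    conc_profile: callable z -> c(z), an assumed concentration profile.
    """
    z = np.linspace(z_b, eta, n)
    rho = mixture_density(conc_profile(z))
    return np.trapz(rho * g, z)

# Example: concentration decreasing linearly from 5% at the bed to 0 at
# the surface, over a 2 m water column.
eta, z_b = 2.0, 0.0
P = hydrostatic_pressure(eta, z_b, lambda z: 0.05 * (eta - z) / (eta - z_b))
print(f"bed pressure ~ {P:.1f} Pa")
```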
We consider three layers having different densities: a suspension layer, a clear-fluid layer and a bed-load layer. The suspension is not potential and can be approximately described by the first equation of system (2) together with Fick's law, as in [8] (see also [24]). These equations describe the evolution of the fluid mixture in a domain bounded by a dynamic water surface and the water bed. We write the mass balance equations in the Saint-Venant formalism [25]. The conservation of momentum is expressed using the well-known Newton's second law, which states for a control volume that the net rate of momentum entering the volume (momentum flux) plus the sum of all external forces acting on the volume equals the rate of accumulation of momentum. On the free surface, we consider a no-stress condition; on the sediment bed, a no-penetration condition. With this, we take into account the kinematic boundary conditions on the moving surfaces. A point on the free surface satisfies \(M(x,y,z,t)=-z+\eta(x,y,t)=0\), where \(\eta\) is a smooth function. Assuming that any particle on the free surface at the initial instant remains there at all instants, we have \(\frac{dM}{dt}=0\), where the operator \(\frac{d(.)}{dt}\) is defined by \(\frac{d(.)}{dt}=\frac{\partial(.)}{\partial t}+(\mathbf{u}\cdot\nabla)(.)\). If the free surface exchanges volume per unit time, this exchange is described by a function \(F_{u}(t)\). At the bottom surface or bed interface, one has \(z=Z_{b}(x,y,t)\). Therefore we can define the kinematic boundary conditions on both moving surfaces. On the free surface we have:

\[\frac{\partial\eta}{\partial t}+(\mathbf{u}\cdot\nabla)\eta-u_{3}(\eta)=\frac{dF_{u}}{dt}. \tag{4}\]

At the bed interface we have:

\[\frac{\partial Z_{b}}{\partial t}+\mathbf{u}(Z_{b})\cdot\nabla Z_{b}=\frac{dF_{b}}{dt}+u_{3}(Z_{b}), \tag{5}\]

where the term \(\frac{dF_{b}}{dt}\), with \(F_{b}(t)=x_{3}(t)-Z_{b}(t,\mathrm{x}(t))\), describes the erosion/deposition exchange, and \(\frac{dF_{u}}{dt}\) accounts for the effect of lateral contributions. In this work we assume \(\frac{dF_{u}}{dt}=0\). We also neglect the vertical transport at the bed interface, i.e. \(u_{3}(Z_{b})=0\). Denoting by \(V_{s}^{D}\) and \(V_{s}^{E}\) respectively the volume of sediment deposited and the volume of sediment eroded at the bed, one has:

\[dF_{b}=dV_{s}^{D}-dV_{s}^{E}+\phi^{*}dF_{b}\ \Rightarrow\ \frac{dF_{b}}{dt}=\frac{dV_{s}^{D}}{dt}-\frac{dV_{s}^{E}}{dt}+\phi^{*}\frac{dF_{b}}{dt}\ \Rightarrow\ (1-\phi^{*})\frac{dF_{b}}{dt}=D-E,\]

with \(D=\frac{dV_{s}^{D}}{dt}\), \(E=\frac{dV_{s}^{E}}{dt}\). To retrieve the proposed generalized shallow-water-based equations, we apply a depth average to equations (2) using Leibniz's formula, which yields simplified equations. We take an Eulerian approach for the sediment transport equations, rather than the more computationally expensive Lagrangian approach, and make a macroscopic assumption. We introduce into the model an alternative to the bedload equation. Note that, for simplicity, the diffusion effects of sediment are not integrated into the bedload equation, because near the bed the effect of advection dominates that of diffusion, due to turbulence and the presence of strong fluid-fluid interactions. Therefore, the bedload sediment transport must depend on the flow regime, the grain size and the characteristic velocity of the sedimentary body.
These parameters are not incorporated into the classical Exner equation through a sediment transport empirical formula. We recall that sediment transport formulae predict sediment transport from a given set of hydrodynamic and physical parameters related to sediment and fluid.

## The model

The final two-dimensional model (also named the alternative formulation of the sediment transport model) developed in this paper is given by the following system:

\[\frac{\partial h}{\partial t}+\frac{\partial(hu)}{\partial x}+\frac{\partial(hv)}{\partial y}=\frac{E-D}{1-p},\]
\[\frac{\partial(hu)}{\partial t}+\frac{\partial}{\partial x}\left(hu^{2}+\frac{1}{2}gh^{2}\right)+\frac{\partial(huv)}{\partial y}+gh\frac{\partial Z_{b}}{\partial x}+\frac{(\rho_{s}-\rho_{w})}{2\rho}gh\frac{\partial(hC)}{\partial x}-\frac{(\rho_{s}-\rho_{w})}{2\rho}ghC\frac{\partial h}{\partial x}=-C_{f}u\|\mathbf{u}\|-\frac{E-D}{1-p}u(Z_{b}),\]
\[\frac{\partial(hv)}{\partial t}+\frac{\partial(huv)}{\partial x}+\frac{\partial}{\partial y}\left(hv^{2}+\frac{1}{2}gh^{2}\right)+gh\frac{\partial Z_{b}}{\partial y}+\frac{(\rho_{s}-\rho_{w})}{2\rho}gh\frac{\partial(hC)}{\partial y}-\frac{(\rho_{s}-\rho_{w})}{2\rho}ghC\frac{\partial h}{\partial y}=-C_{f}v\|\mathbf{u}\|-\frac{E-D}{1-p}v(Z_{b}),\]
\[\frac{\partial(hC)}{\partial t}+\frac{\partial(huC)}{\partial x}+\frac{\partial(hvC)}{\partial y}=\frac{\partial}{\partial x}\left(f_{s}h\nu_{m}\frac{\partial C}{\partial x}\right)+\frac{\partial}{\partial y}\left(f_{s}h\nu_{m}\frac{\partial C}{\partial y}\right)+(E-D),\]
\[\frac{\partial Z_{b}}{\partial t}+u(Z_{b})\frac{\partial Z_{b}}{\partial x}+v(Z_{b})\frac{\partial Z_{b}}{\partial y}=-\frac{E-D}{1-p}. \tag{6}\]

Here, \(h\,[m]\) is the water depth; \(u\), \(v\) are the depth-averaged \(x\)- and \(y\)-velocities (with \(\mathbf{u}=(u,v)\,[m/s]\)); \(hu\), \(hv\) (with \(\mathbf{q}=(q_{1},q_{2})=(hu,hv)\,[m^{2}/s]\)) are the water discharges in the \(x\) and \(y\) directions; and \(Z_{b}\,[m]\) is the bed level. The friction source term is given by Manning's law: \(C_{f}=n^{2}gh^{-1/3}\), where \(n\,[s/m^{1/3}]\) is the Manning coefficient and \(g\,[m/s^{2}]\) is the gravitational constant. \(p\) is the bed porosity. \(\rho_{w},\rho_{s}\,[kg/m^{3}]\) and \(C\,[m^{3}/m^{3}]\) are the water density, the sediment density and the volumetric sediment concentration respectively. In the momentum equations (suspension zone), for the sake of simplicity we have taken \(\mathbf{u}(Z_{b})=\mathbf{u}\). The transport mode parameter \(f_{s}\) is given by:

\[f_{s}=\min\left(1,\ 2.5e^{-Z}\right), \tag{7}\]

where \(Z=\frac{W_{s}}{\kappa u_{*}}\) is the Rouse number, \(\kappa\) is the von Karman constant (\(\kappa=0.4\)), and \(u_{*}=\sqrt{C_{f}\|\mathbf{u}\|^{2}}\) is the shear-stress (friction) velocity. \(E\,[kg/m^{2}/s]\) and \(D\) are the erosion and deposition rates given by [1]:

\[E=\begin{cases}\varphi(\theta-\theta_{cr,50})h^{-1}\|\mathbf{u}\|d_{50}^{-0.2},&\text{if }\theta\geq\theta_{cr,50},\\ 0,&\text{otherwise};\end{cases}\qquad D=W_{s}(1-C_{a})^{m}C_{a}. \tag{8}\]

The deposition rate of sediments \(D\) is almost equal to the vertical flux of particles at the boundary.
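A minimal sketch of these empirical closures (Manning friction, the transport-mode parameter (7), and the erosion/deposition rates (8)) follows. The Shields parameters and the settling velocity \(W_s\) are taken as inputs here, since their expressions are specified just below in the text; all parameter values (`n`, `phi`, `m`) are illustrative assumptions.

```python
import numpy as np

# Illustrative evaluation of the closures (7)-(8). theta, theta_cr and
# W_s are inputs; their own formulas are given in the text that follows.
g, kappa = 9.81, 0.4

def friction_coefficient(h, n=0.03):
    """Manning friction C_f = n^2 * g * h^(-1/3)."""
    return n**2 * g * h**(-1.0 / 3.0)

def transport_mode_parameter(h, u, v, W_s, n=0.03):
    """f_s = min(1, 2.5 e^{-Z}) with Rouse number Z = W_s / (kappa * u_*)."""
    u_star = np.sqrt(friction_coefficient(h, n) * (u**2 + v**2))
    Z = W_s / (kappa * u_star)
    return np.minimum(1.0, 2.5 * np.exp(-Z))

def erosion_rate(h, u, v, theta, theta_cr, d50, phi=0.015):
    """E = phi*(theta - theta_cr)*||u||*d50^(-0.2)/h when theta >= theta_cr."""
    speed = np.sqrt(u**2 + v**2)
    return np.where(theta >= theta_cr,
                    phi * (theta - theta_cr) * speed * d50**(-0.2) / h,
                    0.0)

def deposition_rate(W_s, C_a, m=2.0):
    """D = W_s * (1 - C_a)^m * C_a, with hindered-settling exponent m."""
    return W_s * (1.0 - C_a)**m * C_a
```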
For the erosion rate, \(\theta=\frac{u_{*}^{2}}{g(s-1)d_{50}}\) is the Shields parameter and the critical Shields parameter is given by:

\[\theta_{cr,50}=\frac{0.3}{1+1.2D_{*}}+0.055\left(1-\exp(-0.02D_{*})\right),\]

where \(D_{*}\) is the dimensionless grain size parameter, which depends on the submerged specific gravity of the sediment. \(\varphi\,[m^{1.2}]\) is a coefficient that controls the erosion force. For sediment deposition, \(m\) represents the effect of hindered settling due to high sediment concentration and \(W_{s}\) is the settling (fall) velocity of the sediment, given by:

\[W_{s}=\sqrt{\left(\frac{13.95\nu}{d_{50}}\right)^{2}+1.09(s-1)gd_{50}}-\frac{13.95\nu}{d_{50}},\]

where \(\nu\) is the kinematic viscosity of water and \(s=\rho_{s}/\rho_{w}\). In coastal environments, several time scales coexist. The scales of wind waves (\(T_{w}=10\,s\)) and of the tide (\(T_{t}=12\) hours) are easy to identify. The scale of the mean current (\(T_{c}\approx 400\,s\)) is much shorter than the tidal one. After a flood, the creation of a sand dune can be estimated on a scale \(T_{sz}=3\) days, while the scale of its migration over a distance of \(400\,m\), with a velocity estimated at \(2\,m/day\), can be \(T_{sx}=200\) days. Then we have \(T_{w}\ll T_{c}\ll T_{t}\ll T_{sz}\ll T_{sx}\). Therefore it is important to distinguish the sediment velocity from the fluid velocity, i.e. \(\mathbf{u}_{b}(Z_{b})\neq\mathbf{u}_{s}\neq\mathbf{u}\) (where \(\mathbf{u}_{s}\) is the sediment velocity). The proposed bed equation is not heuristic. In particular, it is derived from the Shallow Water-Exner (SW-Exner) model by assuming \(T_{c}\ll T_{sz}\). This allows us to consider the hydrodynamic equations as stationary with respect to the bed evolution (Exner) equation, in order to find the quasi-stationary solution of the mean current. We consider the following SW-Exner system:

\[\nabla\cdot(h\mathbf{u})=0,\quad\text{in }\Omega, \tag{10}\]
\[\nabla\cdot\left[h\mathbf{u}\otimes\mathbf{u}+\frac{1}{2}gh^{2}\right]+gh\nabla Z_{b}=-C_{f}\mathbf{u}\|\mathbf{u}\|,\quad\text{in }\Omega,\]
\[\frac{\partial Z_{b}}{\partial t}+\frac{1}{1-p}\nabla\cdot Q_{b}=0,\quad\text{in }\Omega.\]

Here, the sediment flux at the bed is given by the following general formula:

\[Q_{b}=a\mathbf{u}^{b},\qquad(a,b)\in\mathbb{R}^{2},\]

which integrates a large range of sediment transport flux formulas. After writing the bed evolution equation in terms of the hydrodynamic variables, we integrate the energy equation of the stationary model (10):

\[\mathbf{u}\cdot\nabla\cdot\left(h\mathbf{u}\otimes\mathbf{u}+\frac{1}{2}gh^{2}\right)+gh\,\mathbf{u}\cdot\nabla Z_{b}=-C_{f}\,\mathbf{u}\cdot\mathbf{u}\,\|\mathbf{u}\|. \tag{11}\]
The quantity

\[\frac{1}{1-p}\frac{\partial Q_{b}}{\partial h}\]

describes the sensitivity of the sediment transport to the water depth; therefore, in general, \(\frac{1}{1-p}\frac{\partial Q_{b}}{\partial h}\neq\mathbf{u}\). The characteristic velocity of advection of the quantity \(\nabla Z_{b}\), given by (12), depends precisely on this sensitivity to the water depth and on the Froude number. Therefore, the movement of the sedimentary body is directed by the flow regime. The proposed model is one of the most general existing in the literature and has several advantages, such as the capability to integrate several sediment transport fluxes for \((a,b)\in\mathbb{R}^{2}\) and to differentiate the water velocity from the bed sediment velocity (phase lag). The classical Exner model uses empirical formulas which give approximate results only; these sediment flux formulae are designed under uniform flow assumptions and assume that the sediment velocity is equal to the fluid velocity. The system given by (6) proves more appropriate to describe the morphodynamic bed evolution with accuracy and does not require any empirical consideration.

## 3 Some properties of the model

### Rankine-Hugoniot relations

In the following, we will assume that \(W_{L}\), \(W_{R}\) are the left and right states in a Riemann problem. Let us define the jump and average operators by

\[\llbracket\cdot\rrbracket=(.)_{R}-(.)_{L}\quad\text{and}\quad\{\{\cdot\}\}=\frac{(.)_{R}+(.)_{L}}{2}.\]

The Rankine-Hugoniot relations are given by:

\[\llbracket hu\rrbracket=\sigma\llbracket h\rrbracket, \tag{13}\]
\[\llbracket hu^{2}+\tfrac{1}{2}gh^{2}\rrbracket+g\{\{h\}\}\llbracket Z_{b}\rrbracket+\frac{g\delta\rho}{2\overline{\rho}}\{\{h\}\}\llbracket hC\rrbracket-\frac{g\delta\rho}{2\overline{\rho}}\{\{hC\}\}\llbracket h\rrbracket=\sigma\llbracket hu\rrbracket,\]
\[\llbracket huv\rrbracket=\sigma\llbracket hv\rrbracket,\]
\[\llbracket F_{corr}hUC\rrbracket=\sigma\llbracket hC\rrbracket,\]
\[\{\{u_{b}\}\}\llbracket Z_{b}\rrbracket=\sigma\llbracket Z_{b}\rrbracket,\]

where \(\sigma\) is the speed of the discontinuity and \(\overline{\rho}=\{\{\rho\}\}\). One goal of this paper is to study the steady-state solutions of this new model, which are not trivial. The well-balanced scheme proposed here preserves the 1D "lake at rest" steady states. Indeed, for a smooth solution, we have:

\[h\equiv\text{constant},\ hu\equiv\text{constant in time},\ Z_{b}\equiv\text{constant in time},\ C\equiv\text{constant in time},\ \rho\equiv\text{constant in time}, \tag{14}\]

up to machine accuracy.
The structure of the 2D steady states is not simple, but it is possible to find quasi-1D steady-state solutions:

\[h\equiv\text{constant},\ hu\equiv\text{constant},\ hv\equiv 0,\ \partial_{x}Z_{b}\equiv\text{constant in time},\ \partial_{y}Z_{b}\equiv 0,\ \partial_{x}C\equiv\text{constant in time},\ \partial_{y}C\equiv 0,\ \rho\equiv\text{constant in time}, \tag{15}\]

or

\[h\equiv\text{constant},\ hv\equiv\text{constant},\ hu\equiv 0,\ \partial_{y}Z_{b}\equiv\text{constant in time},\ \partial_{x}Z_{b}\equiv 0,\ \partial_{y}C\equiv\text{constant in time},\ \partial_{x}C\equiv 0,\ \rho\equiv\text{constant in time}. \tag{16}\]

On the other hand, at a point of discontinuity, the steady solutions should verify the Rankine-Hugoniot jump conditions given by (13) with \(\sigma=0\), and the entropy dissipation is given by

\[\left\llbracket\left(g(h+Z_{b})+\frac{u^{2}}{2}+\frac{1}{2}\frac{(\rho-\rho_{w})}{\rho}gh\right)hu\right\rrbracket\leq 0.\]

The R-H relations allow us to conclude that in the whole domain (where the solution is regular and across discontinuities), the steady states are preserved. The well-balanced 2D PCCU scheme proposed here respects both the "lake at rest" and the "dry lake". Note that the dry lake is obtained when (14) and (15) reduce to

\[u=0,\ hu=0,\ v=0,\ hv=0. \tag{17}\]

## 4 Hyperbolicity study.

Let \(\mathbf{W}=(h,hu,hv,hC,Z_{b})^{T}\), with \(\mathbf{W}\in\mathbb{R}^{5}\), be the state vector of conservative variables and \(F=(F_{1},F_{2})^{T}\) the physical fluxes. We can rewrite the proposed model Eq. (6) in nonconservative form as follows:

\[\frac{\partial\mathbf{W}}{\partial t}+\frac{\partial F_{1}(\mathbf{W})}{\partial x}+\frac{\partial F_{2}(\mathbf{W})}{\partial y}+B_{1x}^{*}(\mathbf{W})\frac{\partial Z_{b}}{\partial x}+B_{2x}^{*}(\mathbf{W})\frac{\partial hC}{\partial x}+B_{3x}^{*}(\mathbf{W})\frac{\partial h}{\partial x}+B_{1y}^{*}(\mathbf{W})\frac{\partial Z_{b}}{\partial y}+B_{2y}^{*}(\mathbf{W})\frac{\partial hC}{\partial y}+B_{3y}^{*}(\mathbf{W})\frac{\partial h}{\partial y}=\hat{\mathbf{S}}(\mathbf{W}), \tag{18}\]

where \(\mathbf{x}=(x,y)\in\Omega\subset\mathbb{R}^{2}\), \(t\in(0,T)\). The vector of unknowns \(\mathbf{W}:\mathbb{R}^{2}\times\mathbb{R}^{+}\rightarrow\Upsilon\) is a function from space \((x,y)\in\mathbb{R}\times\mathbb{R}\) and time \(t\) to the system's state space \(\Upsilon\), and each component of the fluxes \(F_{1},F_{2}:\Upsilon\rightarrow\mathbb{R}^{5}\) is given by

\[F_{1}(\mathbf{W})=\begin{pmatrix}hu\\ hu^{2}+\frac{1}{2}gh^{2}\\ huv\\ huC\\ 0\end{pmatrix},\qquad F_{2}(\mathbf{W})=\begin{pmatrix}hv\\ huv\\ hv^{2}+\frac{1}{2}gh^{2}\\ hvC\\ 0\end{pmatrix}. \tag{19}\]
The vectors \(B_{1x}^{*},B_{2x}^{*},B_{3x}^{*},B_{1y}^{*},B_{2y}^{*},B_{3y}^{*}\) read

\[B_{1x}^{*}=\begin{pmatrix}0\\ gh\\ 0\\ 0\\ u_{b}\end{pmatrix},\ B_{2x}^{*}=\begin{pmatrix}0\\ \frac{gh\,\delta\rho}{2\rho}\\ 0\\ 0\\ 0\end{pmatrix},\ B_{3x}^{*}=\begin{pmatrix}0\\ -\frac{ghC\,\delta\rho}{2\rho}\\ 0\\ 0\\ 0\end{pmatrix},\ B_{1y}^{*}=\begin{pmatrix}0\\ 0\\ gh\\ 0\\ v_{b}\end{pmatrix},\ B_{2y}^{*}=\begin{pmatrix}0\\ 0\\ \frac{gh\,\delta\rho}{2\rho}\\ 0\\ 0\end{pmatrix},\ B_{3y}^{*}=\begin{pmatrix}0\\ 0\\ -\frac{ghC\,\delta\rho}{2\rho}\\ 0\\ 0\end{pmatrix}. \tag{20}\]

The source term reads:

\[\hat{\mathbf{S}}(\mathbf{W})=S_{e}+S_{F}+S_{D},\]

where \(S_{F},S_{e},S_{D}\) are respectively the friction source term, the sediment exchange source term and the diffusion source term, given respectively by:

\[S_{F}=\begin{pmatrix}0\\ -C_{f}u\|\mathbf{u}\|\\ -C_{f}v\|\mathbf{u}\|\\ 0\\ 0\end{pmatrix},\quad S_{e}=\begin{pmatrix}\frac{E-D}{1-p}\\ -\frac{(E-D)u}{(1-p)}\\ -\frac{(E-D)v}{(1-p)}\\ E-D\\ -\frac{E-D}{1-p}\end{pmatrix},\quad S_{D}=\begin{pmatrix}0\\ 0\\ 0\\ \frac{\partial}{\partial x}\left(f_{s}h\nu_{m}\frac{\partial C}{\partial x}\right)+\frac{\partial}{\partial y}\left(f_{s}h\nu_{m}\frac{\partial C}{\partial y}\right)\\ 0\end{pmatrix}. \tag{21}\]

The numerical solution of the nonconservative problem is completed with boundary conditions and initial conditions of the form \(W=W_{0}\) on \(\mathbb{R}^{5}\times\{t=0\}\). The system (18) can be written in quasilinear form and is hyperbolic if the Jacobian matrix has only real eigenvalues and a full set of linearly independent eigenvectors exists. The Jacobian matrix in the \(x\)-direction reads

\[\mathcal{A}_{1}(\mathbf{W})=\begin{pmatrix}0&1&0&0&0\\ -u^{2}+gh-\frac{\delta\rho}{2\rho}ghC&2u&0&\frac{\delta\rho}{2\rho}gh&gh\\ -uv&v&u&0&0\\ -uC&C&0&u&0\\ 0&0&0&0&u_{b}\end{pmatrix}. \tag{22}\]

The quasi-1D system has five eigenvalues:

\[\lambda_{1}=u_{b},\ \lambda_{2,3}=u,\ \lambda_{4,5}=u\pm\sqrt{gh}. \tag{23}\]

A 2D system is hyperbolic in the sense that for each state \(W\in\Omega\) and each outer unit normal vector \(\nu=(\nu_{1},\nu_{2})\), the matrix

\[\mathcal{A}_{\nu}(\mathbf{W})=\mathcal{A}(\mathbf{W},\nu)=\nu_{1}\mathcal{A}_{1}(\mathbf{W})+\nu_{2}\mathcal{A}_{2}(\mathbf{W}) \tag{24}\]

has \(N+1\) distinct eigenvalues. According to equation (24), the two-dimensional system has the following eigenvalues:

\[\lambda_{1}=\mathbf{u}_{b}\cdot\nu,\ \lambda_{2,3}=\mathbf{u}\cdot\nu,\ \lambda_{4,5}=\mathbf{u}\cdot\nu\pm\sqrt{gh}. \tag{25}\]

The eigenvectors associated with these eigenvalues in the 1D case are given by:

\[E_{1}=\begin{pmatrix}1\\ -u_{b}\\ 0\\ C\\ gh-u^{2}+2uu_{b}-u_{b}^{2}\end{pmatrix},\ E_{2}=\begin{pmatrix}\frac{\delta\rho}{2\rho}\\ \frac{\delta\rho}{2\rho}u\\ 0\\ \frac{\delta\rho}{2\rho}C-1\\ 0\end{pmatrix},\ E_{3}=\begin{pmatrix}\frac{\delta\rho}{2\rho}\\ \frac{\delta\rho}{2\rho}u\\ 1\\ \frac{\delta\rho}{2\rho}C-1\\ 0\end{pmatrix},\ E_{4}=\begin{pmatrix}1\\ u-\sqrt{gh}\\ 0\\ C\\ 0\end{pmatrix},\ E_{5}=\begin{pmatrix}1\\ u+\sqrt{gh}\\ 0\\ C\\ 0\end{pmatrix}. \tag{26}\]

The fourth and fifth eigenvalues correspond to genuinely nonlinear characteristic fields in the sense of Lax.
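To make the spectrum (23) concrete, here is a minimal Python sketch that evaluates the five eigenvalues and flags the resonance situation discussed in the remark below, where \((u-u_b)^2=gh\). The bed celerity `u_b` is an assumed input, not computed from the bedload closure here.

```python
import numpy as np

# Minimal sketch: eigenvalues (23) of the quasi-1D system and a check of
# the resonance condition (u - u_b)^2 = gh discussed in the remark below.
g = 9.81

def eigenvalues_1d(h, u, u_b):
    """Return [u_b, u, u, u - sqrt(gh), u + sqrt(gh)]."""
    c = np.sqrt(g * h)
    return np.array([u_b, u, u, u - c, u + c])

def near_resonance(h, u, u_b, tol=1e-8):
    """True when (u - u_b)^2 is within tol of gh (linearly degenerate hypersurface)."""
    return abs((u - u_b)**2 - g * h) < tol

lam = eigenvalues_1d(h=2.0, u=1.5, u_b=0.05)
print("eigenvalues:", lam)
print("resonant state:", near_resonance(2.0, 1.5, 0.05))
```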
The remaining eigenvalues correspond to linearly degenerate characteristic fields; \(\lambda_{4,5}\) are associated with shocks and rarefactions. Riemann invariants are constant across linearly degenerate waves and rarefaction waves, whereas for shock waves generalized jump conditions should be satisfied.

**Remark: A resonance condition**

From the eigenstructure of the proposed model, we can see that the conditions for resonance are satisfied if the free internal wavelength that satisfies the unforced equations coincides with the wavelength of the topography forcing. This situation appears in our case when [8]:

\[(u-u_{b})^{2}=gh,\qquad\text{in}\quad\Omega^{0}. \tag{27}\]

It is convenient to set

\[\mathcal{C}=\left\{\mathbf{W}\in\Omega^{0},\ (u_{b}-u)^{2}=gh\right\}, \tag{28}\]

which is the hypersurface on which all the characteristic fields are linearly degenerate. Therefore, the proposed model can predict the bed evolution even in the presence of resonance phenomena. In fact, during the evolution a characteristic wavelength can be observed in the bedforms, and for some waves the flow can be near resonance. In the case of floods with sediment transport, for example, resonance situations occur only when the flood decelerates slowly. In the presence of resonance, the above system can be weakly hyperbolic, and in this case the vectors \(E_{1},E_{2},E_{3},E_{4},E_{5}\) are linearly dependent.

## 5 An existence theorem of global weak solutions of the model

### Definition: Weak solution

Let \(\Omega\subset\mathbb{R}^{2}\), with \(\mathrm{x}=(x,y)\in\mathbb{R}^{2}\), be an open domain and let \(T>0\). We consider the system given by (6) with the following initial conditions (IC):

\[h(x,0)=h_{0},\ (hu)(x,0)=h_{0}u_{0},\ (hv)(x,0)=h_{0}v_{0},\ (hC)(x,0)=h_{0}C_{0},\ Z_{b}(x,0)=Z_{b0}. \tag{29}\]

These IC satisfy the following regularity:

\[h_{0}\in L^{2}(\Omega),\ \sqrt{h_{0}}\in L^{2}(\Omega),\ \nabla h_{0}\in(L^{2}(\Omega))^{2},\ Z_{b0}\in L^{2}(\Omega),\ \frac{(h_{0}u_{0})^{2}}{h_{0}}\in L^{1}(\Omega),\ u_{b0}\in L^{1}(\Omega),\ \nabla Z_{b0}\in(L^{2}(\Omega))^{2},\ \nabla C_{0}\in(L^{2}(\Omega))^{2},\ \nabla\sqrt{h_{0}}\in(L^{2}(\Omega))^{2}. \tag{30}\]

We say that \((h,hu,hv,hC,Z_{b})\) is a weak solution of model (6) if the equations are satisfied in the sense of distributions, i.e. against test functions in \(C_{c}^{\infty}((0,T)\times\Omega)\). A smooth solution of (6) satisfies the energy equation

\[\frac{dE}{dt}+gh\,\mathbf{u}\cdot\nabla(Z_{b}+h)+\frac{(\rho_{s}-\rho_{w})}{2\rho}gh^{2}\,\mathbf{u}\cdot\nabla C=0, \tag{32}\]

where \(E=\frac{1}{2}h|\mathbf{u}|^{2}+\frac{1}{2}gh^{2}+ghZ_{b}\) is the mechanical energy of the system, as well as the entropy inequality

\[\frac{dE}{dt}+\nabla\cdot G\leq 0, \tag{33}\]

where \(G=\left(2g\left(Z_{b}+\frac{h}{2}\right)+\frac{|\mathbf{u}|^{2}}{2}+\frac{(\rho_{s}-\rho_{w})}{2\rho}ghC\right)h\mathbf{u}\).

### Existence of global weak solution and convergence result.

_Theorem [existence of global weak solutions]_ There exists a global weak solution \((h,hu,hv,hC,Z_{b})\) of the model given by the system of equations (6) satisfying the energy equality (32) and the entropy inequality (33). Moreover, it also satisfies the following inequality:

\[\int_{\Omega}\frac{d}{dt}\left[h|\mathbf{u}|^{2}+\frac{3}{2}gh^{2}\right]+\int_{\Omega}gZ_{b}\,\partial_{t}h-\left[C_{f}\mathbf{u}^{2}\|\mathbf{u}\|-\frac{(E-D)}{(1-p)}\left(\mathbf{u}^{2}(Z_{b})-2g(h+Z_{b})\right)+\left(\frac{(\rho_{s}-\rho_{w})C}{2\rho}\right)S_{1}\right]\leq 0. \tag{34}\]
_Proposition [error estimates]_ According to relations (32) and (33), uniform a priori estimates hold: \(h\), \(Z_{b}\), \(hC\) and \(\sqrt{h}\,\mathbf{u}\) remain bounded, for almost every \(t\in(0,T)\), in the spaces prescribed by the initial regularity (30).

We consider a sequence of approximate solutions \((h_{n},(hu)_{n},(hv)_{n},(hC)_{n},Z_{b,n})\) of system (6) with source terms

\[S_{1}^{n}=\frac{E-D}{1-p},\quad S_{2}^{n}=-C_{f,n}\mathbf{u}_{n}\|\mathbf{u}_{n}\|-\frac{E-D}{1-p}\mathbf{u}_{n}(Z_{b}),\quad S_{3}^{n}=(1-p)S_{1}^{n},\quad S_{4}^{n}=-S_{1}^{n}.\]

We assume that its initial values satisfy (for a constant \(c\)):

\[C_{0}^{n}\to C_{0}\ \text{strongly in}\ L^{2}(\Omega);\quad h_{0}^{n}\to h_{0}\ \text{strongly in}\ L^{2}(\Omega);\quad Z_{b0}^{n}\to Z_{b0}\ \text{strongly in}\ L^{2}(\Omega);\]
\[h_{0}^{n}\mathbf{u}_{0}^{n}\to h_{0}\mathbf{u}_{0}\ \text{strongly in}\ L^{1}(\Omega);\quad h_{0}^{n}\mathbf{u}_{0}^{n}\mathbf{u}_{0}^{n}=\frac{(q_{0}^{n})^{2}}{h_{0}^{n}}\to h_{0}\mathbf{u}_{0}\mathbf{u}_{0}=\frac{(q_{0})^{2}}{h_{0}}\ \text{strongly in}\ L^{1}(\Omega).\]

The following relations hold:

\[\|h_{0}\|_{L^{2}(\Omega)}\leq c;\quad\|\nabla h_{0}\|_{(L^{2}(\Omega))^{2}}\leq c;\quad\|Z_{b0}\|_{L^{2}(\Omega)}\leq c;\quad\|h_{0}\mathbf{u}_{0}\|_{L^{2}(\Omega)}\leq c. \tag{36}\]

Moreover, these values verify the following inequality:

\[\int_{\Omega}h_{0}|\mathbf{u}_{0}|^{2}+\varepsilon|h_{0}|^{2}+|h_{0}Z_{b0}|\leq c. \tag{37}\]

**Theorem [convergence]** There exists a sequence of global weak solutions \((h_{n},(hu)_{n},(hv)_{n},(hC)_{n},Z_{b,n})\) of the system (6), with initial values satisfying (36), which satisfies (34) and (33).

## III Path-conservative based method for nonconservative equations.

This section presents some concepts related to the path-conservative method, widely used to solve nonconservative problems of the form (18).

**1. A simple classical path-conservative scheme without any intermediate wave (preliminaries)**

The path-conservative approach is used in this work, especially for non-conservative systems. The main idea of this approach is to split the fluctuation into two one-sided parts corresponding to the left-moving and right-moving waves arising in the Riemann fan solution.
This fluctuation is defined, \(\forall\,\mathbf{W}^{+},\mathbf{W}^{-}\in\Omega\), as:

\[\mathbf{D}(\mathbf{W}^{+},\mathbf{W}^{-},\nu)=\int_{0}^{1}\mathcal{A}(\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-},\nu))\frac{\partial\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-},\nu)}{\partial s}\,ds=\mathbf{D}^{-}(\mathbf{W}^{+},\mathbf{W}^{-},\nu)+\mathbf{D}^{+}(\mathbf{W}^{+},\mathbf{W}^{-},\nu), \tag{38}\]

where \(\nu=(\nu_{1},\nu_{2})\) is the outer normal of the edge and where \(\mathbf{D}^{-}(\mathbf{W}^{+},\mathbf{W}^{-},\nu)\) and \(\mathbf{D}^{+}(\mathbf{W}^{+},\mathbf{W}^{-},\nu)\) are two continuous functions satisfying:

\[\mathbf{D}^{-}(\mathbf{W},\mathbf{W},\nu)=\mathbf{D}^{+}(\mathbf{W},\mathbf{W},\nu)=0,\quad\mathbf{W}\in\Omega. \tag{39}\]

In Eq. (38), the integrand \(\mathcal{A}(\mathbf{W},\nu)\nabla\mathbf{W}\) includes the conservative and nonconservative fluxes and can be written as

\[\mathcal{A}(\mathbf{W},\nu)\nabla\mathbf{W}=\nu_{1}\left(\frac{\partial F_{1}}{\partial\mathbf{W}}+B_{1}(\mathbf{W})\right)\frac{\partial\mathbf{W}}{\partial x}+\nu_{2}\left(\frac{\partial F_{2}}{\partial\mathbf{W}}+B_{2}(\mathbf{W})\right)\frac{\partial\mathbf{W}}{\partial y},\]

where \(B_{1}\), \(B_{2}\) collect the nonconservative products of (18). Using a parametrization of the set \(\widetilde{\mathcal{W}}\) of stationary solutions, the numerical scheme for solving the sediment transport problem is said to be well-balanced if the following properties are satisfied:

- The scheme solves exactly any smooth stationary solution \(\mathbf{W}\in\widetilde{\mathcal{W}}\);
- The scheme solves up to order \(k\) any solution \(\mathbf{W}\in\widetilde{\mathcal{W}}\).

Note that these properties are strongly connected to the relationship between the paths and \(\widetilde{\mathcal{W}}\).

\(\bullet\) **Finite volume gridding for a path-conservative scheme**

Elementary computational cells centered at \((x_{i},y_{k})=(i\Delta x,k\Delta y)\), where \((i,k)\in\mathbb{Z}^{2}\), are denoted \(V_{ik}=[x_{i-1/2},x_{i+1/2}]\times[y_{k-1/2},y_{k+1/2}]\), where the corresponding cell interfaces are indexed by half integers. The numerical approximation \(\mathbf{W}^{\Delta t}\) is the piecewise constant function such that \(\forall\,i,k\in\mathbb{Z}^{2}\), \(\mathbf{W}^{\Delta t}(t^{n},x_{i},y_{k})=\mathbf{W}_{i,k}^{n}\) on each cell \(V_{ik}\), with \(t^{n}=n\Delta t\), \(n\in\mathbb{N}\).
The initial data of \(\mathbf{W}\) is denoted by \(\mathbf{W}(0,x_{i},y_{k})=\mathbf{W}_{i,k}^{0}\in L^{\infty}(\mathbb{R}^{2})\). Once such a grid has been designed, we can define at a given time level \(t\) the average value of \(\mathbf{W}\) over \(V_{ik}\) as:

\[\overline{\mathbf{W}}_{i,k}=\frac{1}{|V_{ik}|}\int_{V_{ik}}\mathbf{W}(x,y,t)\,dx\,dy, \tag{45}\]

where \(|V_{ik}|=\mathrm{mes}(V_{ik})=\Delta x\Delta y\). The set of all the cells of the domain \(\Omega\) is denoted by \(\mathcal{K}_{c}\), where the subscript 'c' stands for 'cell centered'. The set of all the edges of \(\mathcal{K}_{c}\) is denoted \(\mathcal{E}_{c}=\mathcal{E}_{c}^{\mathrm{ext}}\cup\mathcal{E}_{c}^{\mathrm{int}}\), where \(\mathcal{E}_{c}^{\mathrm{ext}}\) and \(\mathcal{E}_{c}^{\mathrm{int}}\) are respectively the exterior and interior edges.

## 2 Methodology to design a 2D path-conservative central-upwind (PCCU) scheme on structured meshes.

In this section, we propose a strategy to design a two-dimensional version of the PCCU scheme on structured meshes. This scheme is a new path-conservative-based scheme where the conservative flux is evaluated using a central-upwind technique and where the fluctuations are evaluated following Fig. (1a). To this end, we follow a concept developed in [22]. We start by developing a two-dimensional CU scheme in path-conservative form, using the definition of the path-conservative solution presented above. The semi-discrete two-dimensional path-conservative form is given by:

\[\frac{d\overline{\mathbf{W}}_{i,k}}{dt}=-\frac{1}{\Delta x}\left(\mathcal{F}_{i+1/2,k}-\mathcal{F}_{i-1/2,k}\right)-\frac{1}{\Delta y}\left(\mathcal{G}_{i,k+1/2}-\mathcal{G}_{i,k-1/2}\right)+\hat{\mathbf{S}}\left(\overline{\mathbf{W}}_{i,k}\right) \tag{46}\]
\[=-\frac{1}{\Delta x}\left(\mathcal{F}_{i+1/2,k}-F_{1}(\mathbf{W}_{i+1/2,k}^{-})-\mathcal{F}_{i-1/2,k}+F_{1}(\mathbf{W}_{i-1/2,k}^{+})+F_{1}(\mathbf{W}_{i+1/2,k}^{-})-F_{1}(\mathbf{W}_{i-1/2,k}^{+})\right)\]
\[\quad-\frac{1}{\Delta y}\left(\mathcal{G}_{i,k+1/2}-F_{2}(\mathbf{W}_{i,k+1/2}^{-})-\mathcal{G}_{i,k-1/2}+F_{2}(\mathbf{W}_{i,k-1/2}^{+})+F_{2}(\mathbf{W}_{i,k+1/2}^{-})-F_{2}(\mathbf{W}_{i,k-1/2}^{+})\right)+\hat{\mathbf{S}}\left(\overline{\mathbf{W}}_{i,k}\right)\]
\[=-\frac{1}{\Delta x}\left(D_{i+1/2,k}^{-}+D_{i-1/2,k}^{+}+\int_{0}^{1}A_{1}(\mathbf{P}_{i,k}(\mathbf{x}))\frac{d\mathbf{P}_{i,k}(\mathbf{x})}{d\mathbf{x}}\,d\mathbf{x}\right)-\frac{1}{\Delta y}\left(D_{i,k+1/2}^{-}+D_{i,k-1/2}^{+}+\int_{0}^{1}A_{2}(\mathbf{P}_{i,k}(\mathbf{x}))\frac{d\mathbf{P}_{i,k}(\mathbf{x})}{d\mathbf{x}}\,d\mathbf{x}\right)+\hat{\mathbf{S}},\]

where the fluctuations are defined by:

\[D_{i+1/2,k}^{\pm}=\left(\frac{1}{2}\pm\frac{1}{2}\,\frac{a_{i+1/2,k}^{+}+a_{i+1/2,k}^{-}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}}\right)\left(F_{1}(\mathbf{W}_{i+1/2,k}^{-})-F_{1}(\mathbf{W}_{i+1/2,k}^{+})\right)\pm\frac{1}{2}\left(\frac{-2a_{i+1/2,k}^{+}a_{i+1/2,k}^{-}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}}\left(\mathbf{W}_{i+1/2,k}^{+}-\mathbf{W}_{i+1/2,k}^{-}\right)\right) \tag{47}\]
\[=\frac{1\pm\lambda_{1}^{i+1/2,k}}{2}\int_{0}^{1}\left[A_{1}\left(\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-}),\nu\right)\right]\frac{\partial\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-})}{\partial s}\,ds\pm\frac{\lambda_{0}^{i+1/2,k}}{2}\left(\mathbf{W}_{i+1/2,k}^{+}-\mathbf{W}_{i+1/2,k}^{-}\right),\]
and similarly for the \(y\)-direction fluctuations \(D_{i,k+1/2}^{\pm}\) (48), where

\[\lambda_{1}^{i+1/2,k}=\frac{a_{i+1/2,k}^{+}+a_{i+1/2,k}^{-}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}},\qquad\lambda_{0}^{i+1/2,k}=\frac{-2a_{i+1/2,k}^{+}a_{i+1/2,k}^{-}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}}, \tag{49}\]
\[\lambda_{1}^{i,k+1/2}=\frac{b_{i,k+1/2}^{+}+b_{i,k+1/2}^{-}}{b_{i,k+1/2}^{+}-b_{i,k+1/2}^{-}},\qquad\lambda_{0}^{i,k+1/2}=\frac{-2b_{i,k+1/2}^{+}b_{i,k+1/2}^{-}}{b_{i,k+1/2}^{+}-b_{i,k+1/2}^{-}}.\]

Here, the numerical fluxes \(\mathcal{F}_{i+1/2,k}\), \(\mathcal{G}_{i,k+1/2}\) are given by the CU technique:

\[\mathcal{F}_{i+1/2,k}=\frac{a_{i+1/2,k}^{+}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}}F_{1}(\mathbf{W}_{i+1/2,k}^{-})-\frac{a_{i+1/2,k}^{-}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}}F_{1}(\mathbf{W}_{i+1/2,k}^{+})-\frac{1}{2}\left(\frac{-2a_{i+1/2,k}^{+}a_{i+1/2,k}^{-}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}}\left(\mathbf{W}_{i+1/2,k}^{+}-\mathbf{W}_{i+1/2,k}^{-}\right)\right), \tag{50}\]
\[\mathcal{G}_{i,k+1/2}=\frac{b_{i,k+1/2}^{+}}{b_{i,k+1/2}^{+}-b_{i,k+1/2}^{-}}F_{2}(\mathbf{W}_{i,k+1/2}^{-})-\frac{b_{i,k+1/2}^{-}}{b_{i,k+1/2}^{+}-b_{i,k+1/2}^{-}}F_{2}(\mathbf{W}_{i,k+1/2}^{+})-\frac{1}{2}\left(\frac{-2b_{i,k+1/2}^{+}b_{i,k+1/2}^{-}}{b_{i,k+1/2}^{+}-b_{i,k+1/2}^{-}}\left(\mathbf{W}_{i,k+1/2}^{+}-\mathbf{W}_{i,k+1/2}^{-}\right)\right).\]

The topography source term is discretized using the well-balanced discretization strategy proposed in [4]. In Eqs. (46)-(48), the functions \(\mathbf{W}^{-}=\mathbf{W}(x_{\sigma}^{-})=\mathbf{P}_{i,k}(x_{\sigma}^{-})\) and \(\mathbf{W}^{+}=\mathbf{W}(x_{\sigma}^{+})=\mathbf{P}_{i+1,k}(x_{\sigma}^{+})\) are such that \(\mathbf{W}^{+}\neq\mathbf{W}^{-}\), with \(\lim_{x\to x_{\sigma}^{-}}\mathbf{W}=\mathbf{W}^{-}\) and \(\lim_{x\to x_{\sigma}^{+}}\mathbf{W}=\mathbf{W}^{+}\), where \(x_{\sigma}\) is a discontinuity point. We denote by \(\mathbf{W}^{+}\) and \(\mathbf{W}^{-}\) the left and right intermediate values of the polynomial reconstruction:

\[\widetilde{\mathbf{W}}(x,y,t)=\sum_{i}\sum_{k}\mathbf{P}_{i,k}\,\mathcal{X}_{V_{ik}}(x),\qquad\mathbf{P}_{i,k}=\left(P_{i,k}^{(1)},P_{i,k}^{(2)},\ldots,P_{i,k}^{(N)}\right)^{T}. \tag{51}\]

Here, \(\mathcal{X}\) is the characteristic function and the \(P_{i,k}^{(j)}\) are polynomials of a certain degree satisfying the conservation and accuracy requirements, defined for all \(i,k\) by:

\[\frac{1}{|V_{ik}|}\int_{V_{ik}}\mathbf{P}_{i,k}(x,y)\,dx=\overline{\mathbf{W}}_{i,k},\quad\text{and}\quad P_{i,k}^{(j)}(x,y)=W^{(j)}(x,y)+O(|V_{ik}|^{s}),\quad x,y\in V_{ik}, \tag{52}\]

with \(s\) a (formal) order of accuracy, where \(\mathbf{W}(x,y)=(W^{(1)},\ldots,W^{(N)})^{T}\) is the exact smooth solution. We are interested in the left and right limiting values of the reconstruction polynomials, often called boundary extrapolated values. The polynomial reconstruction is used to improve the solution approximation in each cell \(V_{ik}\). The order of the scheme depends on the choice of the \(\mathbf{P}_{i,k}\) functions.
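As an illustration of the central-upwind flux (50), the sketch below evaluates \(\mathcal{F}_{i+1/2,k}\) at a single interface. It is a minimal sketch, not the full scheme: `flux_x` is the physical flux \(F_1\) of (19), and the one-sided speeds `a_plus`, `a_minus` are assumed to be provided (e.g. by Proposition III.1 below).

```python
import numpy as np

# Minimal sketch of the central-upwind interface flux (50) in the
# x-direction. W = (h, hu, hv, hC, Z_b); speeds a_plus >= 0 >= a_minus
# are assumed given (see Proposition III.1).
g = 9.81

def flux_x(W):
    """Physical flux F_1(W) of Eq. (19)."""
    h, hu, hv, hC, _ = W
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h**2, hv * u, hC * u, 0.0])

def cu_flux_x(W_minus, W_plus, a_minus, a_plus):
    """Central-upwind flux (50) at one interface."""
    if a_plus == a_minus:          # degenerate case: no wave fan
        return flux_x(W_minus)
    d = a_plus - a_minus
    return (a_plus * flux_x(W_minus) - a_minus * flux_x(W_plus)) / d \
        + (a_plus * a_minus / d) * (W_plus - W_minus)
```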
For some smooth solution \(\mathbf{W}\), we have:

\[\mathbf{W}^{\pm}=\mathbf{W}(\mathbf{x}_{\sigma})+\mathcal{O}(|V_{ik}|^{s}),\qquad\forall(i,k)\in\mathbb{Z}^{2}. \tag{53}\]

The design of the PCCU scheme requires the choice of sufficiently smooth paths

\[\mathbf{\Psi}_{i+1/2,k}(s)=\left(\Psi_{i+1/2,k}^{(1)},\Psi_{i+1/2,k}^{(2)},\ldots,\Psi_{i+1/2,k}^{(N)},\Psi_{i+1/2,k}^{(N+1)}\right):=\mathbf{\Psi}(s,\mathbf{W}^{-},\mathbf{W}^{+}) \tag{54}\]

connecting the two states \(\mathbf{W}^{-}\) and \(\mathbf{W}^{+}\) across the jump discontinuity at \(\mathbf{x}=\mathbf{x}_{0}\), such that the locally Lipschitz map \(\mathbf{\Psi}:[0,1]\times\Omega\times\Omega\rightarrow\Omega\) satisfies the following property:

\[\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-})=s\mathbf{W}^{+}+(1-s)\mathbf{W}^{-},\quad\forall\,\mathbf{W}^{+},\mathbf{W}^{-}\in\Omega. \tag{55}\]

In this scheme we have taken the simplest linear segment path in each direction:

\[\mathbf{\Psi}_{i+1/2,k}(s)=\mathbf{W}_{i+1/2,k}^{-}+s(\mathbf{W}_{i+1/2,k}^{+}-\mathbf{W}_{i+1/2,k}^{-}),\quad\mathbf{\Psi}_{i,k+1/2}(s)=\mathbf{W}_{i,k+1/2}^{-}+s(\mathbf{W}_{i,k+1/2}^{+}-\mathbf{W}_{i,k+1/2}^{-}),\quad s\in[0,1]. \tag{56}\]

The values of \(\mathbf{W}\) at the points \((i\pm 1/2,k)\) and \((i,k\pm 1/2)\) are given as follows:

\[\mathbf{W}_{i+1/2,k}^{+}=\mathbf{P}_{i+1,k}(x_{i+1/2}+0,y_{k}),\quad\mathbf{W}_{i+1/2,k}^{-}=\mathbf{P}_{i,k}(x_{i+1/2}-0,y_{k}), \tag{57}\]
\[\mathbf{W}_{i,k+1/2}^{+}=\mathbf{P}_{i,k+1}(x_{i},y_{k+1/2}+0),\quad\mathbf{W}_{i,k+1/2}^{-}=\mathbf{P}_{i,k}(x_{i},y_{k+1/2}-0).\]

Note that all the above quantities depend on time, but we simplify the notation by suppressing this dependence.

### Proposition III.1

The one-sided local speeds of propagation \(a_{i+1/2,k}^{\pm}\) and \(b_{i,k+1/2}^{\pm}\) are upper/lower bounds on the largest/smallest eigenvalues of the Jacobian matrices given above:

\[a_{i+1/2,k}^{+}=\max\left\{u_{i+1/2,k}^{-}+\sqrt{\mathcal{B}_{i+1/2,k}^{-}},\ u_{i+1/2,k}^{+}+\sqrt{\mathcal{B}_{i+1/2,k}^{+}},\ u_{i+1/2,k}^{+},\ u_{i+1/2,k}^{-},\ u_{b,i+1/2,k}^{+},\ u_{b,i+1/2,k}^{-},\ 0\right\}, \tag{58}\]
\[a_{i+1/2,k}^{-}=\min\left\{u_{i+1/2,k}^{-}-\sqrt{\mathcal{B}_{i+1/2,k}^{-}},\ u_{i+1/2,k}^{+}-\sqrt{\mathcal{B}_{i+1/2,k}^{+}},\ u_{i+1/2,k}^{+},\ u_{i+1/2,k}^{-},\ u_{b,i+1/2,k}^{+},\ u_{b,i+1/2,k}^{-},\ 0\right\},\]
\[b_{i,k+1/2}^{+}=\max\left\{v_{i,k+1/2}^{-}+\sqrt{\mathcal{B}_{i,k+1/2}^{-}},\ v_{i,k+1/2}^{+}+\sqrt{\mathcal{B}_{i,k+1/2}^{+}},\ v_{i,k+1/2}^{+},\ v_{i,k+1/2}^{-},\ v_{b,i,k+1/2}^{+},\ v_{b,i,k+1/2}^{-},\ 0\right\},\]
\[b_{i,k+1/2}^{-}=\min\left\{v_{i,k+1/2}^{-}-\sqrt{\mathcal{B}_{i,k+1/2}^{-}},\ v_{i,k+1/2}^{+}-\sqrt{\mathcal{B}_{i,k+1/2}^{+}},\ v_{i,k+1/2}^{+},\ v_{i,k+1/2}^{-},\ v_{b,i,k+1/2}^{+},\ v_{b,i,k+1/2}^{-},\ 0\right\}.\]

Moreover, the CFL condition reads:

\[\Delta t\leq CFL\,\min\left(\frac{\Delta x}{4a},\frac{\Delta y}{4b}\right),\quad 0<CFL<1,\quad a=\max(a_{i+1/2,k}^{+},-a_{i+1/2,k}^{-}),\quad b=\max(b_{i,k+1/2}^{+},-b_{i,k+1/2}^{-}), \tag{59}\]

where \(\Delta t\) is the time step.
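A minimal sketch of the one-sided speeds (58) and the CFL restriction (59) follows. It assumes \(\mathcal{B}^{\pm}=gh^{\pm}\), which is an interpretation consistent with the eigenvalues (23), and takes the bed celerities `ub_m`, `ub_p` as given inputs.

```python
import numpy as np

g, CFL = 9.81, 0.9

def one_sided_speeds_x(h_m, u_m, ub_m, h_p, u_p, ub_p):
    """Local speeds (58) at an x-interface; B^{+-} is assumed = g*h^{+-}."""
    c_m, c_p = np.sqrt(g * h_m), np.sqrt(g * h_p)
    a_plus = max(u_m + c_m, u_p + c_p, u_m, u_p, ub_m, ub_p, 0.0)
    a_minus = min(u_m - c_m, u_p - c_p, u_m, u_p, ub_m, ub_p, 0.0)
    return a_minus, a_plus

def time_step(dx, dy, a_max, b_max):
    """CFL condition (59): dt <= CFL * min(dx/(4a), dy/(4b))."""
    return CFL * min(dx / (4.0 * a_max), dy / (4.0 * b_max))

a_minus, a_plus = one_sided_speeds_x(2.0, 1.0, 0.05, 1.5, 0.8, 0.04)
print(a_minus, a_plus, time_step(0.01, 0.01, a_plus, a_plus))
```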
### Remark III.2

For conservative equations, the fluctuation terms given by equations (47) and (48) contain only the terms associated with the flux derivative:

\[\int_{0}^{1}A_{\zeta}(\mathbf{P}_{i,k}(\mathbf{x}))\frac{d\mathbf{P}_{i,k}(\mathbf{x})}{d\mathbf{x}}\,d\mathbf{x}=F_{\zeta}(\mathbf{W}_{i+1/2,k}^{-})-F_{\zeta}(\mathbf{W}_{i-1/2,k}^{+}),\quad\text{and}\]
\[\int_{0}^{1}\left[A_{\zeta}(\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-}),\nu)\right]\frac{\partial\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-})}{\partial s}\,ds=F_{\zeta}(\mathbf{W}_{i+1/2,k}^{+})-F_{\zeta}(\mathbf{W}_{i+1/2,k}^{-}),\quad\zeta=1,2. \tag{60}\]

For nonconservative systems, we write \(\mathcal{A}_{\zeta}\), as defined above, instead of \(A_{\zeta}\). When the fluxes are computed in the CU sense, the resulting path-conservative central-upwind scheme is a version of the path-conservative HLL Riemann solver.

## 3 The 2D PCCU scheme on structured meshes

For the two-dimensional path-conservative central-upwind method without topography source term, the fluctuation terms are now given by:

\[\mathbf{D}_{i,k+1/2}^{\pm}=\frac{1\pm\lambda_{1}^{i,k+1/2}}{2}\int_{0}^{1}\left[\mathcal{A}_{2}(\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-}),\nu)\right]\frac{\partial\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-})}{\partial s}\,ds\pm\frac{\lambda_{0}^{i,k+1/2}}{2}\left(\mathbf{W}_{i,k+1/2}^{+}-\mathbf{W}_{i,k+1/2}^{-}\right), \tag{61}\]
\[\mathbf{D}_{i+1/2,k}^{\pm}=\frac{1\pm\lambda_{1}^{i+1/2,k}}{2}\int_{0}^{1}\left[\mathcal{A}_{1}(\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-}),\nu)\right]\frac{\partial\mathbf{\Psi}(s,\mathbf{W}^{+},\mathbf{W}^{-})}{\partial s}\,ds\pm\frac{\lambda_{0}^{i+1/2,k}}{2}\left(\mathbf{W}_{i+1/2,k}^{+}-\mathbf{W}_{i+1/2,k}^{-}\right). \tag{62}\]

According to the scheme given by equation (46), a two-dimensional version of the PCCU scheme can easily be designed on structured meshes. The definition of the fluctuations given by (61) and (62) shows that the central-upwind scheme can be written in path-conservative form without major difficulty. The first-order semi-discrete PCCU scheme reads:

\[\frac{d\overline{\mathbf{W}}_{i,k}}{dt}=-\frac{1}{\Delta x}\left(\mathcal{F}_{i+1/2,k}-\mathcal{F}_{i-1/2,k}\right)-\frac{1}{\Delta y}\left(\mathcal{G}_{i,k+1/2}-\mathcal{G}_{i,k-1/2}\right)-\sum_{m=1}^{3}\left(B_{m}^{*}\right)_{i,k}^{\overline{\mathbf{\Psi}}}\]
\[+\frac{1}{\Delta x\Delta y}\left[\frac{a_{i+1/2,k}^{-}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}}\left(\sum_{m=1}^{3}\left(B_{mx}^{*}\right)_{i+1/2,k}^{\overline{\mathbf{\Psi}}}\right)-\frac{a_{i-1/2,k}^{+}}{a_{i-1/2,k}^{+}-a_{i-1/2,k}^{-}}\left(\sum_{m=1}^{3}\left(B_{mx}^{*}\right)_{i-1/2,k}^{\overline{\mathbf{\Psi}}}\right)\right] \tag{63}\]
\[+\frac{1}{\Delta x\Delta y}\left[\frac{b_{i,k+1/2}^{-}}{b_{i,k+1/2}^{+}-b_{i,k+1/2}^{-}}\left(\sum_{m=1}^{3}\left(B_{my}^{*}\right)_{i,k+1/2}^{\overline{\mathbf{\Psi}}}\right)-\frac{b_{i,k-1/2}^{+}}{b_{i,k-1/2}^{+}-b_{i,k-1/2}^{-}}\left(\sum_{m=1}^{3}\left(B_{my}^{*}\right)_{i,k-1/2}^{\overline{\mathbf{\Psi}}}\right)\right]+\hat{S}_{i,k},\]
where we have set

\[\left(B_{m}^{*}\right)_{i,k}^{\overline{\mathbf{\Psi}}}=\int_{\gamma_{i,k}}\left(B_{m}^{*}\right)(\mathbf{P}_{i,k}(\mathbf{x}))\left(\frac{dP_{i,k}^{(1)}}{d\mathbf{x}},\frac{dP_{i,k}^{(2)}}{d\mathbf{x}},\ldots,\frac{dP_{i,k}^{(N)}}{d\mathbf{x}}\right)^{T}d\mathbf{x}, \tag{64}\]
\[\left(B_{mx}^{*}\right)_{i+1/2,k}^{\overline{\mathbf{\Psi}}}=\int_{0}^{1}\left(B_{mx}^{*}\right)(\mathbf{\Psi}_{i+1/2,k}(s))\left(\frac{d\psi_{i+1/2,k}^{(1)}}{ds},\ldots,\frac{d\psi_{i+1/2,k}^{(N)}}{ds}\right)^{T}ds,\quad m=1,2,3,\quad\text{and} \tag{65}\]
\[\left(B_{my}^{*}\right)_{i,k+1/2}^{\overline{\mathbf{\Psi}}}=\int_{0}^{1}\left(B_{my}^{*}\right)(\mathbf{\Psi}_{i,k+1/2}(s))\left(\frac{d\psi_{i,k+1/2}^{(1)}}{ds},\ldots,\frac{d\psi_{i,k+1/2}^{(N)}}{ds}\right)^{T}ds,\quad m=1,2,3.\]

Using the linear path, a very accurate numerical approximation of the characteristic velocity of the sedimentary body can also be given by:

\[u_{b}^{*}=\int_{0}^{1}u_{b}(s)\,ds\approx\sum_{g=1}^{NGP}w_{g}\,u_{b}(s_{g}), \tag{66}\]

where \(NGP\) is the number of points of the Gauss quadrature rule, \(w_{g}\) are the weights and \(s_{g}\) are the positions distributed in the unit interval \([0,1]\). Here we have considered:

\[s_{1}=\frac{1}{2},\quad s_{2,3}=\frac{1}{2}\pm\frac{\sqrt{15}}{10},\quad w_{1}=\frac{8}{18},\quad w_{2,3}=\frac{5}{18}. \tag{67}\]

In all the numerical simulations a one-point Gauss quadrature is used, and therefore

\[u_{b}^{*}=u_{b}\left(\frac{1}{2}\right). \tag{68}\]

This choice allows us to ensure the achievement of second order of accuracy. The semi-discrete first-order PCCU scheme is given by Equations (50)-(58) and (63)-(65). To achieve second order, we use an AENO-type reconstruction technique in conjunction with ADER schemes for hyperbolic equations.

## 4 2D AENO nonlinear reconstruction procedure and properties of the scheme

### 2D AENO reconstruction

Here, we describe a new second-order extension of the PCCU scheme in space using a modified version of the averaging essentially non-oscillatory (AENO) procedure originally developed in one dimension by Toro et al. [27]. An original two-dimensional version of the AENO nonlinear reconstruction is developed to improve the numerical solutions, which allows the scheme to achieve second-order accuracy in space.
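Before detailing the 2D reconstruction operator, a brief aside on the quadrature (66)-(67) above: the sketch below averages an arbitrary interface quantity along the linear segment path using the three-point Gauss rule. The integrand is a placeholder; any component of \(\mathbf{\Psi}\) could be passed.

```python
import numpy as np

# Aside on (66)-(67): averaging a quantity along the linear segment path
# Psi(s) = W_minus + s*(W_plus - W_minus) with a 3-point Gauss rule on
# [0, 1]. Illustrative sketch only.
GAUSS_S = np.array([0.5, 0.5 - np.sqrt(15) / 10, 0.5 + np.sqrt(15) / 10])
GAUSS_W = np.array([8.0 / 18.0, 5.0 / 18.0, 5.0 / 18.0])  # sum to 1

def path_average(f, W_minus, W_plus):
    """Approximate int_0^1 f(Psi(s)) ds along the linear path."""
    return sum(w * f(W_minus + s * (W_plus - W_minus))
               for s, w in zip(GAUSS_S, GAUSS_W))

# Example: path-averaged bed celerity between two interface states.
ub_star = path_average(lambda w: w[0], np.array([0.04]), np.array([0.06]))
print(ub_star)  # ~0.05: the midpoint value, exact for a linear integrand
```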
We start by writing a 2D piecewise linear operator of the form:

\[\mathbf{P}_{i,k}(x,y)=\overline{\mathbf{W}}_{i,k}+\Delta_{i,k}^{x}(x-x_{i})+\Delta_{i,k}^{y}(y-y_{k}),\quad x,y\in V_{ik},\quad x_{i}=\frac{x_{i+1/2}+x_{i-1/2}}{2},\ y_{k}=\frac{y_{k+1/2}+y_{k-1/2}}{2}, \tag{69}\]

where \(\Delta_{i,k}=(\nabla\mathbf{W})_{i,k}\) are the slopes that approximate \(\nabla\mathbf{W}(x_{i},y_{k},t^{n})\) in a non-oscillatory manner, obtained by convex combinations of the one-sided slopes as follows:

\[(\nabla\mathbf{W})_{i,k}^{x}=\Delta_{i,k}^{x}=\beta^{x}\Delta_{i+1/2,k}^{n}+(1-\beta^{x})\Delta_{i-1/2,k}^{n},\quad\beta^{x}\in[0,1], \tag{70}\]
\[(\nabla\mathbf{W})_{i,k}^{y}=\Delta_{i,k}^{y}=\beta^{y}\Delta_{i,k+1/2}^{n}+(1-\beta^{y})\Delta_{i,k-1/2}^{n},\quad\beta^{y}\in[0,1], \tag{71}\]

where

\[\beta^{x}(r^{x})=\frac{r^{x}}{\sqrt{l^{2}+(r^{x})^{2}}},\quad\text{with}\quad r^{x}=\frac{|\Delta_{i-1/2,k}|}{|\Delta_{i+1/2,k}|+\epsilon},\qquad\beta^{y}(r^{y})=\frac{r^{y}}{\sqrt{l^{2}+(r^{y})^{2}}},\quad\text{with}\quad r^{y}=\frac{|\Delta_{i,k-1/2}|}{|\Delta_{i,k+1/2}|+\epsilon},\]

and where

\[\Delta_{i+1/2,k}=\frac{\overline{\mathbf{W}}_{i+1,k}-\overline{\mathbf{W}}_{i,k}}{\Delta x},\quad\Delta_{i-1/2,k}=\frac{\overline{\mathbf{W}}_{i,k}-\overline{\mathbf{W}}_{i-1,k}}{\Delta x},\quad\Delta_{i,k+1/2}=\frac{\overline{\mathbf{W}}_{i,k+1}-\overline{\mathbf{W}}_{i,k}}{\Delta y},\quad\Delta_{i,k-1/2}=\frac{\overline{\mathbf{W}}_{i,k}-\overline{\mathbf{W}}_{i,k-1}}{\Delta y}.\]

Here \(l\) is a positive parameter and \(\epsilon\) is a small positive tolerance to avoid division by zero. The resulting semi-discrete second-order two-dimensional PCCU-AENO scheme is then given by Equations (50)-(58) and (63)-(69).

### Well-balanced property

At the steady states, \(\forall i,k\):

\[\overline{h}_{i,k}^{n}=\overline{h}_{i+1/2,k}^{n,\pm}=h_{0},\ \overline{hC}_{i,k}^{n}=\overline{hC}_{i+1/2,k}^{n,\pm}=K_{0},\ \overline{Z}_{b,i,k}^{n}=\overline{Z}_{b,i+1/2,k}^{n,\pm}=b_{0},\ \overline{u}_{i,k}^{n}=\overline{u}_{i+1/2,k}^{n,\pm}=0,\ E-D=0,\ \rho_{i,k}^{n}=C_{1}. \tag{72}\]

Thus, \(\overline{h}_{i,k}^{n}\), \(\overline{(hu)}_{i,k}^{n}\), \(\overline{(hv)}_{i,k}^{n}\), \(\overline{hC}_{i,k}^{n}\), \(\overline{Z}_{b,i,k}^{n}\) are reconstructed for all time steps. Then we also have:

\[\overline{\eta}_{i,k}^{n}=\overline{\eta}_{i+1/2,k}^{n,\pm}=\overline{h}_{i,k}^{n}+\overline{Z}_{b,i,k}^{n}=\overline{h}_{i+1/2,k}^{n,\pm}+\overline{Z}_{b,i+1/2,k}^{n,\pm}=\eta_{1}, \tag{73}\]

i.e. \(\overline{\eta}_{i,k}^{n}\) is constant at the lake-at-rest steady states.
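Before continuing with the well-balanced analysis, here is a minimal 1D sketch of the AENO slope (70) along the x-direction; the parameters `l` and `eps` are the quantities named above, with illustrative values.

```python
import numpy as np

# Minimal 1D sketch of the AENO slope (70): a data-dependent convex
# combination of the two one-sided slopes.
def aeno_slope(w_left, w_center, w_right, dx, l=1.0, eps=1e-12):
    d_minus = (w_center - w_left) / dx    # Delta_{i-1/2}
    d_plus = (w_right - w_center) / dx    # Delta_{i+1/2}
    r = abs(d_minus) / (abs(d_plus) + eps)
    beta = r / np.sqrt(l**2 + r**2)       # beta in [0, 1)
    return beta * d_plus + (1.0 - beta) * d_minus

# Near a discontinuity the combination leans toward the smoother side:
print(aeno_slope(1.0, 1.0, 0.0, dx=0.1))   # downwind jump -> slope ~ 0
print(aeno_slope(0.0, 1.0, 2.0, dx=0.1))   # smooth data   -> slope ~ 10
```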
According to the steady states (72) and (73), we have:
\[\mathbf{W}_{i,k+1/2}^{+}-\mathbf{W}_{i,k+1/2}^{-}=0\quad\mathrm{and}\quad\mathbf{W}_{i+1/2,k}^{+}-\mathbf{W}_{i+1/2,k}^{-}=0. \tag{74}\]
At the steady states we have:
\[\begin{split}&\mathcal{F}_{i+1/2,k}^{(1)}-\mathcal{F}_{i-1/2,k}^{(1)}+S_{e,i,k}^{(1)}=0\\ &\mathcal{F}_{i+1/2,k}^{(2)}-\mathcal{F}_{i-1/2,k}^{(2)}-B_{1x,i,k}^{*(2)}-B_{2x,i,k}^{*(2)}+\mathcal{H}_{i+1/2,k}^{(2)}-\mathcal{H}_{i-1/2,k}^{(2)}=0\\ &\mathcal{F}_{i+1/2,k}^{(3)}-\mathcal{F}_{i-1/2,k}^{(3)}+S_{e,i,k}^{(3)}=0\\ &\mathcal{F}_{i+1/2,k}^{(4)}-\mathcal{F}_{i-1/2,k}^{(4)}+S_{e,i,k}^{(4)}=0\end{split} \tag{75}\]
We therefore have the following well-balanced discretization of the topography term:
\[B_{1x,i,k}^{*(2)}+B_{2x,i,k}^{*(2)}+B_{3x,i,k}^{*(2)}=\mathcal{F}_{i+1/2,k}^{(2)}-\mathcal{F}_{i-1/2,k}^{(2)}+\mathcal{H}_{i+1/2,k}^{(2)}-\mathcal{H}_{i-1/2,k}^{(2)}. \tag{76}\]
Here, the numerical flux \(\mathcal{F}_{i+1/2,k}^{(2)}\) is given in the CU sense and reads:
\[\mathcal{F}_{i+1/2,k}^{(2)}=\frac{a_{i+1/2,k}^{+}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}}\left(0.5g(h_{i+1/2,k}^{-})^{2}\right)-\frac{a_{i+1/2,k}^{-}}{a_{i+1/2,k}^{+}-a_{i+1/2,k}^{-}}\left(0.5g(h_{i+1/2,k}^{+})^{2}\right). \tag{77}\]
The nonconservative contribution \(\mathcal{H}_{i+1/2,k}^{(2)}\) is expressed in terms of the discrete topography term
\[B_{i+1/2,k}^{\Psi(2)}=-g\,\frac{\left(h_{i+1/2,k}^{+}+h_{i+1/2,k}^{-}\right)}{2}\left(h_{i+1/2,k}^{+}-h_{i+1/2,k}^{-}\right). \tag{78}\]
The well-balanced scheme is obtained by replacing the discrete topography term \(B_{i+1/2,k}^{\Psi(2)}\) given in (65) by that given by Eq. (78).

### Well-balanced discrete source terms

Here, we use the reconstructed unknowns to discretize the source terms in a well-balanced sense. The terms \(S_{e}\), \(S_{D}\) and \(S_{F}\) are discretized as follows:
\[S_{e,i,k}=\begin{pmatrix}\frac{E-D}{1-p}\\ -\frac{(E-D)}{(1-p)}u_{i,k}\\ -\frac{(E-D)}{(1-p)}v_{i,k}\\ E-D\\ -\frac{E-D}{1-p}\end{pmatrix}\quad\text{and}\quad S_{F,i,k}=\begin{pmatrix}0\\ -g\,\frac{\left(h_{i+1/2,k}^{-}+h_{i-1/2,k}^{+}\right)}{2}\,S_{F,i,k}\\ -g\,\frac{\left(h_{i,k+1/2}^{-}+h_{i,k-1/2}^{+}\right)}{2}\,S_{F,i,k}\\ 0\\ 0\end{pmatrix}, \tag{80}\]
\[S_{D,i,k}=\begin{pmatrix}0\\ 0\\ 0\\ \left(f_{s,i+1/2,k}h_{i+1/2,k}V_{f}\,\frac{C_{i+1,k}-C_{i,k}}{dx}\right)+\left(f_{s,i,k+1/2}h_{i,k+1/2}V_{f}\,\frac{C_{i,k+1}-C_{i,k}}{dy}\right)\\ 0\end{pmatrix}.\]
With (80), the proposed 2D AENO-PCCU scheme satisfies the C-property.

### Positivity-preserving reconstruction

Here we present a discretization strategy developed in [23] that preserves the positivity of the water depth. This procedure has been improved for one-dimensional total sediment transport in [8]. The 2D version of this methodology is presented as follows. We set \(\mathbf{W}=\left(\eta=h+Z_{b},hu,hv,hC,Z_{b}\right)\), where \(\eta\) is the free surface, and define
\[\mathcal{W}_{\tau}=\left\{\overline{\mathbf{W}}_{i,k}^{n}=\left(\overline{h}_{i,k}^{n},\overline{q}_{1,i,k}^{n},\overline{q}_{2,i,k}^{n},\overline{hC}_{i,k}^{n},\overline{Z}_{b,i,k}^{n}\right)\in\mathbb{R}^{5},\ \overline{h}_{i,k}^{n}\geq 0,\ h_{i\pm 1/2,k}^{\mp}\geq 0,\ h_{i,k\pm 1/2}^{\mp}\geq 0\right\}, \tag{81}\]
the discrete admissible space that preserves the positivity of the water depth.
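Before completing the positivity-preserving reconstruction, we illustrate the well-balanced interface flux (77) with a short sketch. The helper below is hypothetical (our naming, not the authors'); the one-sided speeds and reconstructed depths are assumed to be given.

```python
def momentum_flux_x(a_plus, a_minus, h_minus, h_plus, g=9.81):
    """CU momentum flux (77) at interface (i+1/2, k) for a zero-velocity state.

    h_minus / h_plus are the reconstructed depths h^-_{i+1/2,k} / h^+_{i+1/2,k};
    a_plus / a_minus are the local one-sided speeds, with a_plus > a_minus.
    """
    denom = a_plus - a_minus
    return (a_plus * 0.5 * g * h_minus**2 - a_minus * 0.5 * g * h_plus**2) / denom
```

At a lake-at-rest state, the jump of this flux between neighbouring interfaces is exactly compensated by the discrete topography terms, which is the content of (76).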
The left/right velocities and concentrations are calculated as:
\[u_{i+1/2,k}^{\pm}=\frac{\left(hu\right)_{i+1/2,k}^{\pm}}{h_{i+1/2,k}^{\pm}},\quad v_{i+1/2,k}^{\pm}=\frac{\left(hv\right)_{i+1/2,k}^{\pm}}{h_{i+1/2,k}^{\pm}}\quad\text{and}\quad C_{i+1/2,k}^{\pm}=\frac{\left(hC\right)_{i+1/2,k}^{\pm}}{h_{i+1/2,k}^{\pm}}. \tag{82}\]
The bottom reconstruction at left and right is given by:
\[\overline{Z}_{b,i+1/2,k}^{+}=\min\left(\max\left(\overline{Z}_{b,i,k},\overline{Z}_{b,i+1,k}\right),\overline{\eta}_{i+1,k}\right),\quad\overline{Z}_{b,i+1/2,k}^{-}=\min\left(\max\left(\overline{Z}_{b,i,k},\overline{Z}_{b,i+1,k}\right),\overline{\eta}_{i,k}\right).\]
Therefore, the reconstructed water depths satisfy \(h_{i\pm 1/2,k}^{\mp}\geq 0\) and \(h_{i,k\pm 1/2}^{\mp}\geq 0\), so the scheme preserves the discrete admissible space \(\mathcal{W}_{\tau}\).

Well-balanced test. We use zero-order extrapolation at all boundaries. The initial condition is displayed in Fig. 2. The domain of simulation is \(\Omega=[0,1]\times[0,1]\). We run the 2D PCCU method with AENO reconstruction using 400 structured cells, and the obtained results are displayed in Fig. 3 at time \(t=10\,s\). The results show that our well-balanced discretization of the bed slope terms preserves the "lake at rest" state exactly, which is physically significant.

Figure 2: Initial condition for the well-balanced test.

In nature, a situation where the water is completely still in a river or channel does not occur. In this test, very small variations of the sediment bed, velocity, sediment concentration and water depth are observed during the simulation, which is consistent with observations in nature: a perfectly constant water depth cannot be observed in reality. For the sediment concentration, a small variation is observed at the beginning of the simulation, after which the stable equilibrium is recovered. As expected, the water free surface remains practically constant and the sediment concentration stays practically zero during the simulation, since its variation scale is negligible compared to the other quantities. The water discharge varies only around zero during these exchanges. All these small variations are observed on a microscopic scale, in order to examine the behavior of the steady-state solutions over a long-time simulation. Therefore, the proposed PCCU-AENO method preserves the C-property to machine precision. We verify the convergence of the proposed method by measuring the difference between the solutions computed on two consecutive grids.
Figure 3: Computational solutions of the well-balanced test. Water height \(h\), bed level \(Z_{b}\), water discharge \(h\,u\) and deposit mass \(C\) profiles at time \(t=10\,s\).

The \(L^{1}\)-norm is given by:
\[\left\|\Phi^{N}-\Psi^{N}\right\|_{1}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{k=1}^{N}\left|\Phi_{i,k}^{N}-\Psi_{i,k}^{N}\right|, \tag{90}\]
where \(\Phi^{N}\coloneqq\left\{\Phi_{i,k}^{N}\right\}\) and \(\Psi^{N}\coloneqq\left\{\Psi_{i,k}^{N}\right\}\) are two functions prescribed on a structured mesh of \(N\times N\) cells. The rates of convergence are calculated as:
\[\mathcal{O}(L^{1})=\log_{2}\left(\frac{\left\|\varphi^{N/2}-\varphi^{N/4}\right\|_{1}}{\left\|\varphi^{N}-\varphi^{N/2}\right\|_{1}}\right), \tag{91}\]
where we recall that \(\log_{b}(x)=y\Rightarrow b^{y}=x\).

## 2 Experimental validation: 1D test

In this test, the 1D version of the model is solved and the results are compared with experimental data and with the classical Exner model. A similar test is performed in [11] using an explicit staggered finite volume scheme and using a 1D PCCU scheme. We test the capability of our model to reproduce sediment transport in an experimental channel. The initial conditions are given by:
\[h(x,0)=\begin{cases}0.1&\text{if }x\leq 0\\ 0&\text{if }x>0\end{cases},\quad u\left(x,0\right)=0,\quad Z_{b}\left(x,0\right)=0,\quad E-D=0. \tag{92}\]
For the classical Exner model, the sediment diameter is \(d_{50}=0.0032\), the sediment density is \(\rho_{s}=1.540\), and the domain of simulation is \(\Omega=\left[-1.25,1.25\right]\). The Grass formula is used for \(Q_{b}\). The free surface \(\eta=h+Z_{b}\) and bed level profiles at different times \(t=0.5\), \(t=0.7\), \(t=1\) using our proposed model are shown in Fig. 4, and those obtained using the classical Shallow Water Exner model are plotted in Fig. 5. They show a good agreement between the numerical computation and the experimental data (available in [28]; see also [29]) with respect to the water level and sediment profiles. We have used \(CFL=0.1\) and \(N=100\) cells in all the simulations.

\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline
 & \multicolumn{2}{c}{\(h\)} & \multicolumn{2}{c}{\(hu\)} & \multicolumn{2}{c}{\(hC\)} & \multicolumn{2}{c}{\(Z_{b}\)} \\
\hline
\(N\) & \(L^{1}\) & \(\mathcal{O}(L^{1})\) & \(L^{1}\) & \(\mathcal{O}(L^{1})\) & \(L^{1}\) & \(\mathcal{O}(L^{1})\) & \(L^{1}\) & \(\mathcal{O}(L^{1})\) \\
400 & 4.034E-4 & / & 2.87E-2 & / & 7.343E-4 & / & 2.044E-4 & / \\
800 & 1.018E-3 & 1.98 & 6.348E-3 & 2.07 & 1.547E-4 & 1.96 & 1.31E-3 & 1.92 \\
1600 & 2.448E-4 & 2.06 & 1.708E-3 & 2.06 & 4.001E-5 & 2.05 & 3.41E-4 & 1.79 \\
3200 & 6.082E-5 & 1.99 & 4.01E-4 & 1.95 & 9.457E-6 & 2.01 & 9.08E-3 & 2.03 \\
\hline
\end{tabular}
\end{table}
Table 2: Error estimates for the well-balanced test.

We observe that the water level and sediment bed profiles are better approximated using the PCCU-AENO scheme. The waves of the model are well captured during the simulation. These profiles are different from those obtained using the classical Shallow Water Exner model, as presented in Fig. 5. The classical Exner model coupled with a bed-load sediment flux formula, widely used to describe the morphodynamics of coastal environments, does not give good results when compared with the experimental data. Moreover, the main drawback of this model remains its lack of robustness. A similar observation is made in [8].

Figure 4: Computational solution of the proposed model using PCCU. Comparison with experimental data.

Figure 5: Computational solutions obtained by the Shallow Water Exner model with the Grass formula, \(CFL=0.5\), \(N=100\).
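The diagnostics (90)-(91) amount to a few lines of code. The sketch below is illustrative only; `restrict`, which projects a solution onto the next coarser grid, is an assumed helper.

```python
import numpy as np

def l1_difference(phi, psi):
    """Discrete L1 distance (90) between two N x N arrays of cell averages."""
    n = phi.shape[0]
    return np.abs(phi - psi).sum() / n**2

def convergence_order(sol_n4, sol_n2, sol_n, restrict):
    """Observed order (91) from solutions on meshes of N/4, N/2 and N cells."""
    e_coarse = l1_difference(restrict(sol_n2), sol_n4)  # ||phi^{N/2} - phi^{N/4}||_1
    e_fine = l1_difference(restrict(sol_n), sol_n2)     # ||phi^{N}  - phi^{N/2}||_1
    return np.log2(e_coarse / e_fine)
```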
Multiple grain sizes test: sediment diffusion effect. We now perform the previous test with the erosion/deposition effect (\(E-D\neq 0\)) and with the sediment diffusion effect. We use the same initial conditions as in the previous test (experimental validation test). We compare the profiles of sediment concentration obtained with the PCCU-AENO scheme for different sediment diameters \(d_{1}=0.002\), \(d_{2}=0.0032\), \(d_{3}=0.008\), \(d_{4}=0.02\) (mm). It is well known that the deposition/erosion exchange depends strongly on the sediment diameter (see the formulas for these functions in the appendix). The obtained results are plotted in Fig. 6. The test shows that the proposed model is able to simulate a wide range of sediment class sizes. As expected, our bed sediment model does not depend on the sediment diameter in the way the classical Exner model does. Classical models use empirical formulas that give approximate results only over a limited range of flow regimes and sediment diameters, and some of these formulas become unreliable when the sediment diameter grows. The diffusion effect is clearly visible in the profiles of sediment concentration. Higher concentrations are observed for fine grains, which are associated with low velocities due to the mixing of the flow. The presence of sediment in the water reduces its flow velocity. When the sediment size increases, the concentration becomes low and the fluid/sediment velocity behaves like the fluid velocity. The profiles obtained by our simulation are in agreement with what can be observed in nature or in an experimental channel. In particular, the profiles of sediment concentration are interesting and very close to the results obtained in [30], even though the conditions are not identical. The proposed shock-capturing scheme can serve to produce more realistic simulations in real environmental conditions.

Figure 6: Numerical solution of sediment concentration, bed level, water height and velocities using the PCCU-AENO scheme. Comparison between different sediment diameters. We have used \(N=100\) cells, \(t=0.25\), \(CFL=0.1\).

## 4 Bed evolution movement

We study here the bed evolution when the sediment bed is not fixed. The initial conditions are given by:
\[h(x,y,0)=1-Z_{b}(x,y,0),\quad\text{with}\quad Z_{b}(x,y,0)=0.02+0.1\exp\left(-(x-0.5)^{2}-(y-0.5)^{2}\right) \tag{93}\]
and \(u(x,y,0)=0\), \(v(x,y,0)=0\), \(C(x,y,0)=0.01\). These initial values are displayed in Fig. 7. The numerical solution obtained by applying the 2D well-balanced positivity-preserving PCCU scheme is plotted in Fig. 8.

Figure 7: Initial condition. Bed and water height profiles.

The movement of the bed is well described, and the water height profile is in agreement with the physics of the problem studied here. The movement of the bed and the water level are well computed, and the phase lag effect is well accounted for. Such profiles are observed in nature.

## 5 2D Riemann problem

We consider here the 2D Riemann problem with initial data given in Table 3. This Riemann problem consists of a dam break over an erodible bed with sediment transport. We recall that the initial condition for the local Riemann problem is given by:
\[\mathbf{W}\left(x,y,0\right)=\begin{cases}\mathbf{W}_{LU}&\text{if }x<0,\ y>0\\ \mathbf{W}_{RU}&\text{if }x>0,\ y>0\\ \mathbf{W}_{LD}&\text{if }x<0,\ y<0\\ \mathbf{W}_{RD}&\text{if }x>0,\ y<0\end{cases}\]
This test simulates rapid spatial and temporal 2D deformations of the free surface and sediment bed.
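The quadrant initial data can be encoded directly; the sketch below is purely illustrative, and the quadrant assignment follows the convention written above (itself a reconstruction, since the constant states are those of Table 3).

```python
def riemann_initial_state(x, y, w_lu, w_ru, w_ld, w_rd):
    """Assign the four constant states of the local 2D Riemann problem."""
    if x < 0.0:
        return w_lu if y > 0.0 else w_ld
    return w_ru if y > 0.0 else w_rd
```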
The boundary conditions are free, and the computational domain is \(\Omega=\left[-1,1\right]\times\left[-1,1\right]\).

\begin{table}
\begin{tabular}{l c c c c}
\hline
Domain & \(h(x,y,0)\) & \(u(x,y,0)\) & \(v(x,y,0)\) & \(Z_{b}(x,y,0)\) \\
\hline
\(x\in\left[-0.5,0.5\right];\ y\in\left[-0.5,0.5\right]\) & 2 & 0 & 0 & 2 \\
\hline
\(x\in\left[-1,1\right];\ y\in\left[-1,1\right]\) & 1 & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{table}
Table 3: Initial condition for the Riemann problem.

Figure 8: Movement of the sediment bed and free surface at \(t=0.3\,s\) with \(N_{x}=N_{y}=400\) cells.

The initial volume concentration is \(C=0.001\). The rest of the computational parameters are given in Table 1. We analyze the solution of this two-dimensional Riemann problem computed on a uniform grid of \(N_{x}=N_{y}=400\) cells. The initial condition is shown in Fig. 9. The computed solution of the Riemann problem using the 2D PCCU scheme is plotted in Fig. 10.

Figure 9: Initial conditions for the 2D Riemann problem.

We plot the sediment concentration and the bed evolution profiles over time in Fig. 11. Interesting physics related to the sediment dynamics is observed. As expected, after a long time the sediment deposition/erosion exchange becomes very high. The 2D profile of the sediment concentration is well captured during the simulation. It is very rare to find such a test in the literature showing a genuinely 2D behavior of sediment transport in regular channels.

\begin{table}
\begin{tabular}{c c c c c}
\hline
 & \multicolumn{2}{c}{\(h\)} & \multicolumn{2}{c}{} \\
\hline
\(N\) & \(L^{1}\) & \(\mathcal{O}(L^{1})\) & \(L^{1}\) & \(\mathcal{O}(L^{1})\) \\
100 & 9.818E-3 & / & 2.11E-2 & / \\
200 & 2.717E-3 & 1.73 & 5.83E-3 & 1.85 \\
400 & 7.188E-4 & 1.961 & 1.34E-3 & 1.972 \\
800 & 1.887E-4 & 1.922 & 3.58E-4 & 1.882 \\
1600 & 4.571E-5 & 1.990 & 8.91E-5 & 1.992 \\
\hline
\end{tabular}
\end{table}
Table 4: Error estimates for the 2D Riemann problem.

Figure 10: Computational solution of the Riemann problem. Bed level and free surface profiles at four simulation times. \(CFL=0.5\).

## 5 Conclusion and perspectives

A two-dimensional sediment transport model in nonhomogeneous shallow water equations has been proposed in this work. The model integrates a phase lag effect via a new alternative bedload equation which does not appear in other models existing in the literature. Moreover, the model captures the bed wave well, and the resonance condition can be easily expressed via this model. We proposed an existence theorem for global weak solutions of the model, and a convergence study is discussed. It was proved that with this alternative formulation of the bedload equation, the model is still hyperbolic and the full eigenstructure is easy to obtain. A new well-balanced positivity-preserving finite volume method for a 2D sediment transport model has been proposed to solve coastal engineering problems. This method can be applied to several 2D sediment transport models without major modifications. A 2D AENO nonlinear reconstruction and a second-order Strang method have been presented to obtain second-order accuracy for the fully discrete scheme. Higher-order accuracy can be obtained simply by increasing the order of the derivatives. Considerable attention is paid to the validation of the proposed model by comparing its solutions with experimental data found in the literature. It has been shown that our model gives the best results and deserves further attention.
It is shown that the proposed model describes sediment processes quite accurately, even for a large range of sediment diameters. The proposed shock-capturing method can be used for other nonconservative problems associated with other physics without any difficulty. The strategies developed to achieve second order can be adapted to other engineering applications or environmental contexts.

Figure 11: Solution of the Riemann problem: sediment concentration evolution and morphodynamic profiles, \(CFL=0.5\).

## Perspectives

* A multi-dimensional version of the PCCU scheme based on unstructured meshes with a mobile domain can be easily designed using the same approach.
* Another problem encountered here in the design of the 2D scheme is that the fluxes are computed only at the interfaces of the cells and do not account for the fluxes at the vertices of each cell. Therefore, it is necessary to design a two-dimensional PCCU scheme more general than the current one.
* A high-order scheme can also be obtained but remains an open problem.
* The hydrostatic reconstruction proposed here can be extended to other applications.
* The proposed scheme can be applied to simulate flooding with sediment deposition in the TONGO BASSA basin located in Douala, Cameroon.

## Data availability

The data that support the findings of this study are available on request from the corresponding author.

## Conflict of Interests

The author declares that there is no conflict of interest regarding the publication of this paper.

## Acknowledgment

The author would like to thank an anonymous referee for giving very helpful comments and suggestions that have greatly improved this paper.
2303.17447
Derived $δ$-Rings and Relative Prismatic Cohomology
We characterize the relative prismatic cohomology of Bhatt and Scholze by a universal property by endowing it with the additional structure of a ``derived $\delta$-ring". This involves introducing an analogue of prismatic envelopes in the setting of filtered derived commutative rings and applying this construction to the Hodge filtration on infinitesimal cohomology. In particular, we recover relative prismatic cohomology from infinitesimal cohomology via a purely algebraic process.
Adam Holeman
2023-03-30T15:19:59Z
http://arxiv.org/abs/2303.17447v1
# Derived \(\delta\)-Rings and Relative Prismatic Cohomology ###### Abstract We characterize the relative prismatic cohomology of Bhatt and Scholze by a universal property by endowing it with the additional structure of a "derived \(\delta\)-ring". This involves introducing an analogue of prismatic envelopes in the setting of filtered derived commutative rings and applying this construction to the Hodge filtration on infinitesimal cohomology. In particular, we recover relative prismatic cohomology from infinitesimal cohomology via a purely algebraic process. ###### Contents * 1 Introduction * 1.1 Cech-Alexander Complexes * 1.2 Outline * 1.3 Acknowledgements * 1.4 Conventions * 2 Derived \(\delta\)-Rings * 2.1 Recollections on Derived Commutative Rings * 2.2 Functors of \(k\)-Algebras * 2.3 The \(LSym^{\delta}\)-Monad * 2.4 Constructions and Examples * 3 Prismatic Cohomology * 3.1 Recollections on Infinitesimal Cohomology * 3.2 \(I\)-adic Envelopes * 3.3 Derived Prismatic Cohomology Introduction Fix a prime \(p\). To any \(p\)-adic formal scheme \(X\), one can attach a plethora of cohomology theories, each of which captures a combination of geometric and arithmetic information present in \(X\). One of the basic aims of \(p\)-adic Hodge theory is to compare these cohomology theories, and in doing so, explicate precisely what information is retained by each such theory. In [6], Bhatt and Scholze introduced the theory of _relative prismatic cohomology_, and established ample evidence that this theory occupies a privileged position within the landscape of such theories: most other known integral \(p\)-adic cohomology theories can be recovered from relative prismatic cohomology via a well-defined specialization procedure. Let \((A,I)\) be a prism, and denote by \(\overline{A}:=A/I\). For any \(\overline{A}\)-algebra \(R\), the relative prismatic cohomology \(\mathbb{A}_{R/A}\) is a commutative algebra object in the derived \(\infty\)-category of \(A\)-modules (which we will denote by \(\operatorname{Mod}_{A}\)), but in certain circumstances this object admits additional structures which universally characterize it. Namely, **Proposition 1.0.1**.: (Proposition 7.10 in [6]) Suppose \((A,I)\) is a perfect prism and \(R\) is a quasi-regular semi-perfectoid \(\overline{A}\)-algebra. Then \(\mathbb{A}_{R/A}\) is discrete and naturally admits the structure of a prism over \((A,I)\). Moreover, it is the final object in the relative (or absolute) prismatic site of \(R\). This paper generalizes Proposition 1.0.1 to arbitrary animated commutative rings \(R\) and arbitrary prisms \((A,I)\), yielding a universal characterization of relative prismatic cohomology in full generality. The universal property makes use of the notion of _derived commutative rings_ originally due to Akhil Mathew, and systematically studied by Arpon Raksit in [15]. For the remainder of this introduction, we will only discuss these objects informally, referring to the body of the paper for more details. ### Cech-Alexander Complexes In the case that \(R\) is a smooth \(\overline{A}\)-algebra, relative prismatic cohomology is defined as the derived global sections of the structure sheaf on the relative prismatic site \((R/A)_{\underline{\mathbb{A}}}\). A useful tool that features prominently in the study of \(\mathbb{A}_{R/A}\) is the notion of a Cech-Alexander complex. 
This is a special instance of the generality that for any ringed site \((\mathcal{C},\mathcal{O})\), given a cover of the final object \(\mathcal{F}\xrightarrow{f}\star\) in the associated topos \(Shv(\mathcal{C})\), one can compute the derived global sections of \(\mathcal{O}\) via the Cech complex of \(f\): \[R\Gamma(\mathcal{C},\mathcal{O})\simeq lim_{\Delta}Hom_{Shv(\mathcal{C})}( \check{\mathrm{C}}ech(f)^{\star},\mathcal{O}).\] The key insight in the case of prismatic cohomology is that there is a very simple procedure for producing a cover of the final object. 1. Choose a surjection \(P\to R\) where \(P\) is a polynomial \(A\)-algebra. Denote by \(J\) the kernel of this surjection. 2. Let \(F\) denote the free \(\delta\)-\(A\)-algebra on \(P\). 3. The pair \((F,J\cdot F)\) then receives a map of \(\delta\)-pairs from \((A,I)\), and so one can form the prismatic envelope \(F\{\frac{J\cdot F}{I}\}\). The resulting object \(F\{\frac{J\cdot F}{I}\}\) naturally resides within the prismatic site \((R/A)_{\underline{\delta}}\), and the sheaf represented by this object is a cover of the final object in the associated topos. In this way, one obtains Cech-theoretic access to relative prismatic cohomology. Our strategy for studying \(\mathbb{A}_{R/A}\) is to imitate the above framework, but with one key difference: we will never make a choice of resolution. Let us begin by explaining how the first step can be recast in an entirely choice-free way. The pair \(J\to P\) arising in the first step of the construction can be rewritten in terms of the (derived) infinitesimal cohomology of \(R\) with respect to \(P\) together with its Hodge filtration: \[(J\to P)\simeq(F^{1}_{H}\mathbb{I}_{R/P}\to\mathbb{I}_{R/P}).\] Hodge-filtered infinitesimal cohomology can be computed using a Cech-Alexander complex, and so the pair \(J\to P\) should be viewed as a mere approximation to the pair \(F^{1}_{H}\mathbb{I}_{R/A}\to\mathbb{I}_{R/A}\), the Hodge-filtered derived infinitesimal cohomology of \(R\) with respect to \(A\). We will take this as our choice-free avatar for the first step of the above construction. Of course, the derived infinitesimal cohomology \(\mathbb{I}_{R/A}\) is not generally a discrete commutative ring, rather it is a (typically non-connective) \(\mathbb{E}_{\infty}\)-ring object in the derived category \(\operatorname{Mod}_{A}\). We are thus confronted with the task of making sense of the notion of \(\delta\)-rings and prismatic envelopes in the context of higher algebra. We offer no definition of \(\delta\)-rings at the level of general \(\mathbb{E}_{\infty}\)-rings, but rather work in the setting of derived commutative rings. Forthcoming work of Benjamin Antieau ([1]) studies derived infinitesimal cohomology from this perspective, and we will adopt this viewpoint. Towards these ends, we introduce an \(\infty\)-category of derived \(\delta\)-\(A\)-algebras over any base \(\delta\)-ring \(A\), denoted by \(DAlg^{\delta}(\operatorname{Mod}_{A})\). This category generalizes the notion of animated \(\delta\)-rings (as in [2]) to the non-connective setting. The category \(DAlg^{\delta}(\operatorname{Mod}_{A})\) can be identified with the \(\infty\)-category of derived commutative \(A\)-algebras \(DAlg(\operatorname{Mod}_{A})\) (as in [15]) equipped with a lift of Frobenius modulo \(p\) (see Theorem 2.4.4). We then prove: **Theorem 1.1.1**.: The forgetful functor \[DAlg^{\delta}(\operatorname{Mod}_{A})\to DAlg(\operatorname{Mod}_{A})\] admits both left and right adjoints. 
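For orientation, we recall the classical analogue of this left adjoint, which is a standard fact in the theory of \(\delta\)-rings (see [6]) rather than something specific to the present paper: the free \(\delta\)-ring on one generator is the polynomial ring \[\mathbb{Z}\{x\}:=\mathbb{Z}[x_{0},x_{1},x_{2},\ldots],\qquad\delta(x_{i})=x_{i+1},\] so that freely adjoining a \(\delta\)-structure amounts to freely adjoining the infinitely many coordinates \(\delta^{i}(x)\). The left adjoint of Theorem 1.1.1 is a derived analogue of this construction.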
We will denote the left adjoint by \(Free^{\delta}_{A}\). It therefore makes mathematical sense to contemplate the free \(\delta\)-\(A\)-algebra on \(\mathbb{I}_{R/A}\). This will be refined to a filtered statement (see Notation 2.4.7) in the course of the paper, incorporating the Hodge filtration on infinitesimal cohomology. The final remaining input is to define a satisfactory analogue of prismatic envelopes in the setting of (filtered) derived commutative rings. Importing the prism condition into the non-connective setting presents many subtleties, but in the relative setting we can sidestep most of these technicalities by declaring that the usual rigidity condition on maps of prisms continues to hold. In particular, as soon as we specify that our 'prisms' receive a map from \((A,I)\), the datum of the Cartier divisor is completely determined by \(I\) itself, and we will take the \(\infty\)-category of \((p,I)\)-complete \(\delta\)-\(A\)-algebras, \(\widehat{DAlg^{\delta}}_{A}\), as our analogue of prisms. Recall that the prismatic envelope takes as input a \(\delta\)-pair \((B,J)\) receiving a map from \((A,I)\) and associates to it the universal prism under \((A,I)\) receiving a map from \((B,J)\). Viewing the ideal \(J\) as defining a filtration, we will recast the pair \((B,J)\) as a \(\delta\)-ring object in filtered modules over \(I^{\star}A\), denoted \(DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{I^{\star}A})\). Given the above discussion surrounding rigidity of maps of prisms, we thus arrive at our definition of prismatic envelopes:

**Definition 1.1.2**.: The functor \[I^{\star}\colon\widehat{DAlg^{\delta}}(\mathrm{Mod}_{A})\to DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{I^{\star}A})\] which endows an object \(B\in\widehat{DAlg^{\delta}}_{A}\) with the \(I\)-adic filtration admits a left adjoint, \(\mathrm{Env}_{I}^{\wedge}\), which we refer to as the derived \(I\)-adic envelope.

Our candidate construction for prismatic cohomology may then be described as \[L\mathbb{A}_{R/A}:=\mathrm{Env}_{I}^{\wedge}(Free_{A}^{\delta}(F_{H}^{\star}\mathbb{I}_{R/A})),\] which is a precise choice-free incarnation of the Cech-Alexander approach to computing relative prismatic cohomology. Using this construction, we prove the following generalization of Proposition 1.0.1:

**Theorem 1.1.3**.: The functor \[L\mathbb{A}_{-/A}\colon\widehat{DAlg}(\mathrm{Mod}_{\overline{A}})\to\widehat{DAlg^{\delta}}(\mathrm{Mod}_{A})\] described above is left adjoint to \(-\otimes_{A}\overline{A}\). Moreover, for any \(p\)-complete animated commutative \(\overline{A}\)-algebra \(R\), there is a canonical isomorphism \[L\mathbb{A}_{R/A}\to\mathbb{A}_{R/A}\] where \(\mathbb{A}_{R/A}\) is the derived prismatic cohomology of [6].

As an application of this result, we give a completely formal proof of affineness of the relative prismatization, as established in Section 7.3 of [3].

**Lemma 1.1.4**.: For any \(p\)-complete animated commutative \(\overline{A}\)-algebra \(R\), there is a canonical equivalence \[WCart_{Spf(R)/A}\simeq Spf(L\mathbb{A}_{R/A}).\]

### Outline

Our main task in Section 2 is to construct a satisfactory theory of derived \(\delta\)-rings. In Section 2.1, we review the necessary background on derived commutative rings and Goodwillie calculus following [15] and [7]. In Section 2.2, we investigate a non-linear enhancement of the results of Section 2.1 which applies to the problem of extending functors of polynomial rings rather than functors of finitely generated vector spaces.
This section is somewhat lengthy, so we refer the reader to Construction 2.2.17 for the main takeaway. In Section 2.3, we apply the results of the preceding sections to derive the free \(\delta\)-ring monad, thereby introducing the notion of a derived \(\delta\)-ring (see Definition 2.3.10). In Section 2.4, we extend several classical results about \(\delta\)-rings to the derived setting, and study filtrations and completions of derived \(\delta\)-rings. Our main task in Section 3 is to give a conceptual construction of relative prismatic cohomology utilizing the theory of derived \(\delta\)-rings. In Section 3.1, we review the theory of Hodge-filtered derived infinitesimal cohomology from the perspective of derived commutative rings. The Gauss-Manin connection on infinitesimal cohomology is studied, from which we construct the vertical filtration (see Theorem 3.1.11), which plays an important technical role later on. In Section 3.2, we introduce the notion of \(I\)-adic envelopes and relate them to prismatic envelopes in the case of a regular sequence (see Corollary 3.2.18). In Section 3.3, we apply the tools of the preceding sections to give a universal construction of relative prismatic cohomology, and compare it to the site-theoretic theory of [6].

### Acknowledgements

It is a pleasure to thank Ben Antieau, Deven Manam, Kirill Magidson, and Noah Riggenbach for many enlightening conversations on this work and relevant related topics. Noah Riggenbach and Ben Antieau also offered extensive feedback on early drafts of the paper which significantly improved both the mathematical content and exposition. I am also grateful for the hospitality of Northwestern University, where the author was supported by the National Science Foundation under Grant No. DMS-2102010 during the Winter and Spring quarters of 2021-2022.

### Conventions

Throughout the paper, we use the language of higher category theory and higher algebra as developed in [13], [12], and [14]. All statements and constructions should be interpreted in a homotopy-invariant sense. For a commutative ring \(A\), we will denote by \(\mathrm{Mod}_{A}\) the \(\infty\)-category of \(A\)-module spectra, and by \(CAlg(\mathrm{Mod}_{A})\) the \(\infty\)-category of \(\mathbb{E}_{\infty}\)-ring objects in \(\mathrm{Mod}_{A}\). If \(A\) is discrete and we wish to reference the \(1\)-category of \(A\)-modules or algebras, we will decorate our categories \(\mathrm{Mod}_{A}^{\heartsuit}\) or \(CAlg(\mathrm{Mod}_{A})^{\heartsuit}\) with the \(\heartsuit\) symbol to make this explicit.

## 2 Derived \(\delta\)-Rings

Fix a prism \((A,I)\) and an \(\overline{A}:=A/I\)-algebra \(R\). The relative prismatic cohomology \(\mathbb{A}_{R/A}\) enjoys not only the structure of an \(\mathbb{E}_{\infty}\)-ring in \(\operatorname{Mod}_{A}\), but also comes equipped with a (relative) Frobenius endomorphism \(\varphi\colon\mathbb{A}_{R/A}\otimes_{A,\varphi_{A}}A\to\mathbb{A}_{R/A}\). In the case that \(R\) is smooth over \(\overline{A}\), this extra piece of structure arises from the fact that the structure sheaf on the relative prismatic site \((R/A)_{\underline{\mathbb{A}}}\) takes values in \(\delta\)-rings. Since \(\delta\)-rings themselves come equipped with a notion of Frobenius, this structure assembles over the derived global sections. If \(R\) is not smooth, one can consider the left Kan extension.
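For later use, we recall the classical dictionary between \(\delta\)-structures and Frobenius lifts; this is standard in the theory of \(\delta\)-rings (see [6]) and is recorded here only for orientation. For a \(p\)-torsion-free commutative ring \(B\), a \(\delta\)-structure on \(B\) is the same datum as a ring endomorphism \(\varphi\) lifting the Frobenius modulo \(p\), via the formula \[\varphi(x)=x^{p}+p\,\delta(x).\] The task of this section is, in effect, to make sense of this dictionary when \(B\) is replaced by a derived commutative ring.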
When working in positive or mixed characteristic, there are two predominant paradigms in which to express commutative multiplicative structures: \(\mathbb{E}_{\infty}\)-rings and animated (i.e. simplicial) commutative rings. Each of these theories supports its own generalization of the Frobenius endomorphism, but neither of these generalizations are directly related to the endomorphism \(\varphi\) of prismatic cohomology. The setting of _derived commutative rings_ provides an intermediary between these two paradigms, and supports a natural extension of the animated Frobenius into the non-connective setting. It is in this context in which the endomorphism \(\varphi\) is most naturally expressed. Our goal in this section is to introduce the notion of a \(\delta\)-ring object in derived commutative rings. We will begin in Section 2.1 by reviewing some of the basic theory of derived commutative rings, referring to [15] and [7] for many proofs. The basic theme of this section is roughly to provide a suite of techniques for extending functors defined on the heart of a \(t\)-structure to the entire stable \(\infty\)-category in question. In Section 2.2, we turn our attention towards a generalization of the techniques encountered in Section 2.1 fit to handle extension of functors defined on algebra objects in the heart of the \(t\)-structure. Such techniques yield access to non-connective generalizations of truncated Witt vectors, as well as Frobenius endomorphisms. In Section 2.3, we will then apply these techniques to construct an \(\infty\)-category of derived \(\delta\)-rings. These will be seen to be equivalent to derived commutative rings equipped with a lift of the Frobenius modulo \(p\), generalizing the analogous statement for animated \(\delta\)-rings. ### Recollections on Derived Commutative Rings Throughout this paper we will need to work extensively with functors \(\operatorname{Mod}_{A}\to\mathcal{D}\) where \(\operatorname{Mod}_{A}\) is the derived category of a \(\delta\)-ring, and \(\mathcal{D}\) is some presentable \(\infty\)-category (often with additional structures). Naming such functors and the concomitant coherences can be rather unwieldy. In the favorable situations encountered in this work, we will often be able to begin by constructing a functor \[\operatorname{Mod}_{A}^{fpp}\to\mathcal{D}\] from the category of finitely presented projective \(A\)-modules, and then we will take an assortment of Kan extensions. The main example of this procedure we will encounter is the construction of the \(LSym^{\delta}\)-monad, where the reduction is made transparent by adopting the framework of _derived algebraic contexts_, first introduced in [15]. The purpose of this subsection is to review the basic facts from [15] and fix notation which will be in use throughout the paper. **Definition 2.1.1**.: A _derived algebraic context_ consists of a presentable symmetric monoidal stable \(\infty\)-category \(\mathcal{C}\), a \(t\)-structure \((\mathcal{C}_{\geq 0},\mathcal{C}_{\leq 0})\) compatible with the symmetric monoidal structure, and a small full subcategory \(\mathcal{C}^{0}\subset\mathcal{C}^{\heartsuit}\) satisfying * The \(t\)-structure is right complete. * The subcategory \(\mathcal{C}^{0}\) is a symmetric monoidal subcategory, closed under symmetric powers in \(\mathcal{C}^{\heartsuit}\). * The subcategory \(\mathcal{C}^{0}\) is closed under finite coproducts and \(\mathcal{P}_{\Sigma}(\mathcal{C}^{0})\simeq\mathcal{C}_{\geq 0}\). 
Compatibility of the \(t\)-structure with the symmetric monoidal structure means \(\mathcal{C}_{\leq 0}\) is closed under filtered colimits, the unit object is connective, and the tensor product of connective objects is again connective. We will primarily be interested in the following two examples.

**Example 2.1.2**.: Let \(k\) be a commutative ring. The derived \(\infty\)-category \(\mathcal{C}:=\mathrm{Mod}_{k}\) equipped with its usual symmetric monoidal structure and \(t\)-structure, along with the full subcategory \(\mathcal{C}^{0}:=\mathrm{Mod}_{k}^{fpp}\) of finitely presented projective \(k\)-modules, is a derived algebraic context.

The next example of interest takes place on the \(\infty\)-category of filtered complexes, whose definition we briefly recall. View \(\mathbb{Z}_{\geq 0}\) as a poset with the usual ordering and endow it with the monoid structure arising from addition. Given a commutative ring \(k\), we may endow the functor category \[F^{\geq 0}\mathrm{Mod}_{k}:=Fun(\mathbb{Z}_{\geq 0}^{op},\mathrm{Mod}_{k})\] with the Day convolution symmetric monoidal structure. Observe that the natural evaluation functors \[ev_{i}\colon F^{\geq 0}\mathrm{Mod}_{k}\to\mathrm{Mod}_{k},\quad i\in\mathbb{Z}_{\geq 0}\] admit fully faithful left adjoint _insertion_ functors \[ins^{i}\colon\mathrm{Mod}_{k}\to F^{\geq 0}\mathrm{Mod}_{k}.\] Concretely, these functors may be understood as follows: \[ins^{i}(M)^{j}=\begin{cases}M&j\leq i,\\ 0&j>i\end{cases}\] with structure maps the identity in weights less than or equal to \(i\).

**Example 2.1.3**.: We may endow \(F^{\geq 0}\mathrm{Mod}_{k}\) with the neutral \(t\)-structure defined by \[F^{\geq 0}\mathrm{Mod}_{k,\leq 0}=\{M\in F^{\geq 0}\mathrm{Mod}_{k}\big{|}M^{i}\in\mathrm{Mod}_{k,\leq 0}\}\] \[F^{\geq 0}\mathrm{Mod}_{k,\geq 0}=\{M\in F^{\geq 0}\mathrm{Mod}_{k}\big{|}M^{i}\in\mathrm{Mod}_{k,\geq 0}\}\] Equipped with this \(t\)-structure, \(F^{\geq 0}\mathrm{Mod}_{k}\) enjoys the structure of a derived algebraic context. The compact projective generators in this example are insertions of compact projective \(k\)-modules.

**Variant 2.1.4**.: In the above examples, we could replace \(\mathbb{Z}_{\geq 0}\) with \(\Delta^{1,op}\) (the opposite of the \(1\)-simplex), endowed with the symmetric monoidal structure \(\star\) given by \[0\star 0=0,\quad 0\star 1=1\star 0=1\star 1=1.\] Given a commutative ring \(k\), we may endow the functor category \[F^{\{0,1\}}\mathrm{Mod}_{k}:=Fun(\Delta^{1,op},\mathrm{Mod}_{k})\] with the Day convolution symmetric monoidal structure. Mimicking the above construction yields a derived algebraic context on \(F^{\{0,1\}}\mathrm{Mod}_{k}\).

**Variant 2.1.5**.: Replacing \(\mathbb{Z}_{\geq 0}\) with \(\mathbb{Z}_{\geq 0}^{op}\) in the definition of \(F^{\geq 0}\mathrm{Mod}_{k}\) yields _increasing_ filtrations, which we denote by \(F_{\geq 0}\mathrm{Mod}_{k}\). We may replicate the above definition in this setting as well.

**Variant 2.1.6**.: Replacing \(\mathbb{Z}_{\geq 0}\) with \(\mathbb{Z}_{\geq 0}^{disc}\) (the underlying discrete category) in the definition of \(F^{\geq 0}\mathrm{Mod}_{k}\) yields the \(\infty\)-category of _graded objects_ in \(\mathrm{Mod}_{k}\), denoted by \(Gr^{\geq 0}\mathrm{Mod}_{k}\). We may replicate the above definition in this setting as well.
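Concretely, the Day convolution on \(F^{\geq 0}\mathrm{Mod}_{k}\) of Example 2.1.3 is computed by the usual pointwise formula (a standard fact, recorded here for convenience): \[(M\otimes N)^{n}\simeq\operatorname*{colim}_{i+j\geq n}M^{i}\otimes_{k}N^{j},\] so that, for instance, \(ins^{a}(M)\otimes ins^{b}(N)\simeq ins^{a+b}(M\otimes_{k}N)\).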
To avoid confusion, we will distinguish graded insertions from filtered insertions notationally by \[(n):\mathrm{Mod}_{k}\to Gr^{\geq 0}\mathrm{Mod}_{k}\] and recall their explicit description: \[M(n)^{i}=\begin{cases}M&i=n,\\ 0&\text{else}.\end{cases}\] As above, the compact projective objects in the heart are precisely the insertions of the compact projective \(k\)-modules.

We now turn our attention towards the construction of a class of algebra objects in general derived algebraic contexts. These algebra objects are built by extending the symmetric algebra monad on the heart to the entire category, a process which makes use of Goodwillie calculus.

**Notation 2.1.7**.: For \(\infty\)-categories \(\mathcal{C},\mathcal{D}\), we denote by \(Fun_{\Sigma}(\mathcal{C},\mathcal{D})\) the full subcategory of \(Fun(\mathcal{C},\mathcal{D})\) spanned by those functors which preserve sifted colimits.

**Definition 2.1.8** (Definition 3.21 in [7]).: Let \((\mathcal{C},\mathcal{C}_{\leq 0},\mathcal{C}_{\geq 0})\) be a derived algebraic context, and denote by \(\operatorname{Perf}_{\mathcal{C},\leq 0}\) the subcategory of compact coconnective objects in \(\mathcal{C}\). Let \(\mathcal{D}\) be an \(\infty\)-category with sifted colimits and all limits. We say a functor \[F\colon\mathcal{C}^{0}\to\mathcal{D}\] is _right extendable_ if the right Kan extension \[F^{R}\colon\operatorname{Perf}_{\mathcal{C},\leq 0}\to\mathcal{D}\] preserves finite coconnective geometric realizations, i.e. those finite geometric realizations of objects in \(\operatorname{Perf}_{\mathcal{C},\leq 0}\) whose colimit in \(\mathcal{C}\) remains coconnective.

The utility of right-extendable functors is due to the following Theorem, which allows us to extend such functors to functors on the entire derived category of a ring.

**Theorem 2.1.9** (Proposition 3.14 in [7]).: Let \(k\) be a commutative ring, and \(\mathcal{D}\) an \(\infty\)-category with sifted colimits. Denote by \(Fun_{\sigma}(\operatorname{Perf}_{k,\leq 0},\mathcal{D})\) the full subcategory of \(Fun(\operatorname{Perf}_{k,\leq 0},\mathcal{D})\) spanned by those functors which preserve finite coconnective geometric realizations. Then the restriction functor \[Fun_{\Sigma}(\operatorname{Mod}_{k},\mathcal{D})\to Fun_{\sigma}(\operatorname{Perf}_{k,\leq 0},\mathcal{D})\] is an equivalence, with inverse given by left Kan extension. The source of this functor is as in Notation 2.1.7.

**Construction 2.1.10**.: Let \(\mathcal{D}\) be an \(\infty\)-category with all limits and colimits. Denote by \(Fun^{ext}(\operatorname{Mod}^{fpp}_{k},\mathcal{D})\) the full subcategory of \(Fun(\operatorname{Mod}^{fpp}_{k},\mathcal{D})\) spanned by the right extendable functors. Then composing right Kan extension along \(\operatorname{Mod}^{fpp}_{k}\to\operatorname{Perf}_{k,\leq 0}\) with left Kan extension along \(\operatorname{Perf}_{k,\leq 0}\to\operatorname{Mod}_{k}\) yields the _right-left extension functor_ \[Fun^{ext}(\operatorname{Mod}^{fpp}_{k},\mathcal{D})\xrightarrow{(-)^{RL}}Fun_{\Sigma}(\operatorname{Mod}_{k},\mathcal{D}).\] We will revisit this construction and generalize it in the next section. Our main tool for determining when a functor is right-extendable, and thus amenable to the previous construction, is the notion of excisively polynomial functors, which are always right extendable.
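As a preview of the definitions below, the basic example to keep in mind is a symmetric power: for objects \(X,Y\) of the heart there is the classical decomposition \[Sym^{2}(X\oplus Y)\simeq Sym^{2}(X)\oplus(X\otimes Y)\oplus Sym^{2}(Y),\] so the derivative \(D_{X}Sym^{2}=fib\big{(}Sym^{2}(X\oplus-)\to Sym^{2}(-)\big{)}\simeq Sym^{2}(X)\oplus(X\otimes-)\) is additively polynomial of degree \(1\), and \(Sym^{2}\) itself is additively polynomial of degree \(2\) in the sense of Definition 2.1.12 below.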
**Definition 2.1.11**.: For \(n\geq 0\), we denote by \(\mathbb{P}_{n}\) the power set of \(\{1,\ldots,n\}\), and for \(m\leq n\), by \(\mathbb{P}_{n}^{\leq m}\) (respectively \(\mathbb{P}_{n}^{\geq m}\)) the subset of \(\mathbb{P}_{n}\) consisting of those subsets of cardinality at most \(m\) (respectively at least \(m\)). Given an \(\infty\)-category \(\mathcal{C}\), an _\(n\)-cube_ in \(\mathcal{C}\) is a diagram \[\chi:\mathbb{P}_{n}\to\mathcal{C}.\] We say an \(n\)-cube \(\chi\) is

* _cocartesian_ if it is a colimit diagram.
* _cartesian_ if it is a limit diagram.
* _strongly cocartesian_ if it is left Kan extended from its restriction to \(\mathbb{P}_{n}^{\leq 1}\).

Let \(\mathcal{D}\) be a stable \(\infty\)-category. A functor \(F:\mathcal{C}\to\mathcal{D}\) is _\(n\)-excisive_ if it carries strongly cocartesian \((n+1)\)-cubes to cartesian cubes (which in the context of stable \(\infty\)-categories agree with cocartesian cubes).

**Definition 2.1.12**.: Let \(\mathcal{C}\) be a cocomplete \(\infty\)-category and \(\mathcal{D}\) an idempotent complete additive \(\infty\)-category. A functor \[F:\mathcal{C}\to\mathcal{D}\] is said to be _additively polynomial of degree \(0\)_ if it is constant. Inductively, we say that \(F\) is _additively polynomial of degree \(n\)_ if for every object \(X\) of \(\mathcal{C}\), the derivative functor \[D_{X}F:=fib(F(X\oplus-)\to F(-))\] is additively polynomial of degree \(n-1\). If we do not wish to specify the degree, we will speak of \(F\) as being _additively polynomial_. We say \(F\) is _excisively polynomial_ if it is additively polynomial and preserves finite geometric realizations.

**Theorem 2.1.13**.: Given a derived algebraic context \((\mathcal{C},\mathcal{C}_{\leq 0},\mathcal{C}_{\geq 0})\), and a cocomplete stable \(\infty\)-category \(\mathcal{D}\), the restriction functor \[res:Fun^{ep}_{\Sigma}(\mathcal{C},\mathcal{D})\to Fun^{ep}_{\Sigma}(\mathcal{C}_{\geq 0},\mathcal{D})\] is an equivalence. Here the superscript \(ep\) indicates we are only considering excisively polynomial functors, and the subscript \(\Sigma\) is as in Notation 2.1.7. Furthermore, left Kan extension along the inclusion \(\mathcal{C}^{\heartsuit}\to\mathcal{C}_{\geq 0}\) induces an equivalence \[Fun^{ap}_{\Sigma}(\mathcal{C}^{\heartsuit},\mathcal{D})\xrightarrow{\simeq}Fun^{ep}_{\Sigma}(\mathcal{C}_{\geq 0},\mathcal{D}).\] The superscript \(ap\) above refers to additively polynomial functors.

Proof.: See Proposition 4.2.15 in [15], and Proposition 3.35 in [7].

**Theorem 2.1.14**.: Fix a derived algebraic context \(\mathcal{C}\). Then the functors \[End^{ep}_{\Sigma}(\mathcal{C}_{\geq 0})\xrightarrow{F\mapsto\tau_{\leq 0}\circ F\circ\iota_{\mathcal{C}^{\heartsuit}}}End^{ap}_{\Sigma}(\mathcal{C}^{\heartsuit})\] and \[End^{ep}_{\Sigma}(\mathcal{C}_{\geq 0})\xrightarrow{i_{*}}Fun^{ep}_{\Sigma}(\mathcal{C}_{\geq 0},\mathcal{C})\simeq End^{ep}_{\Sigma}(\mathcal{C})\] are both monoidal left adjoints. The right adjoint to the first functor will be denoted by \(L\) (which stands for the 'derived approximation' following [1]).

Proof.: See Proposition 2.17 and Corollary 2.20 in [1].

**Definition 2.1.15** (Definition 4.1.2 in [15]).: Let \(\mathcal{C}\) be an \(\infty\)-category. A _filtered monad_ on \(\mathcal{C}\) is a lax monoidal functor \[\mathbb{Z}_{\geq 0}^{\times}\to End(\mathcal{C})\] where \(\mathbb{Z}_{\geq 0}^{\times}\) is viewed as a monoidal category via multiplication.
Given a full monoidal subcategory \(\mathcal{E}\subset End(\mathcal{C})\), we define a _filtered \(\mathcal{E}\)-monad_ to be a lax monoidal functor \[\mathbb{Z}_{\geq 0}^{\times}\to\mathcal{E}.\]

**Theorem 2.1.16**.: Let \(\mathcal{C}\) be a cocomplete \(\infty\)-category and \(F\) a filtered \(\mathcal{E}\)-monad on \(\mathcal{C}\), where \(\mathcal{E}\) is a subcategory such that every \(f\in\mathcal{E}\) commutes with sequential colimits. Then \(colim_{\mathbb{Z}_{\geq 0}}F\) is a monad on \(\mathcal{C}\).

Proof.: This is Proposition 4.1.4 in [15].

**Observation 2.1.17**.: Given a full monoidal subcategory \(\mathcal{E}\subset End(\mathcal{C})\), denote by \(F_{\geq 0}Alg(\mathcal{E})\) the \(\infty\)-category of filtered \(\mathcal{E}\)-monads (see Definition 2.1.15). The composite \[End_{\Sigma}^{ap}(\mathcal{C}^{\heartsuit})\xrightarrow{L}End_{\Sigma}^{ep}(\mathcal{C}_{\geq 0})\to End_{\Sigma}^{ep}(\mathcal{C})\] is lax monoidal, and thus induces a functor \[F_{\geq 0}Alg(End^{ap}(\mathcal{C}^{\heartsuit}))\xrightarrow{L}F_{\geq 0}Alg(End_{\Sigma}^{ep}(\mathcal{C}))\] on filtered monad objects therein.

**Example 2.1.18**.: Fix a derived algebraic context \((\mathcal{C},\mathcal{C}_{\leq 0},\mathcal{C}_{\geq 0})\), and consider the symmetric algebra functor on the heart, \(Sym_{\mathcal{C}}^{\heartsuit}\). We will choose to view this not as an endomorphism of \(\mathcal{C}^{\heartsuit}\), but rather as a functor \[\mathcal{C}^{\heartsuit}\to\mathcal{C}\] by postcomposing with the inclusion of the heart into \(\mathcal{C}\). This functor is not additively polynomial, but it has a filtration by additively polynomial subfunctors \[Sym_{\mathcal{C}}^{\heartsuit,\leq n}\in Fun_{\Sigma}^{ap}(\mathcal{C}^{\heartsuit},\mathcal{C}).\] These filtered pieces organize into a filtered monad structure refining the monad structure on \(Sym_{\mathcal{C}}^{\heartsuit}\), and applying the preceding observation yields a filtered monad on \(\mathcal{C}\), denoted by \(LSym_{\mathcal{C}}^{\leq\star}\). The colimit of this filtered monad is denoted by \(LSym_{\mathcal{C}}\), and referred to as the _derived symmetric algebra monad_. The \(\infty\)-category of algebras over this monad is denoted by \(DAlg(\mathcal{C})\), and objects therein are referred to as _derived commutative rings_ in \(\mathcal{C}\).

**Lemma 2.1.19**.: Let \(\mathcal{C}\) be a stable \(\infty\)-category which admits sifted colimits, and let \(T\) be a sifted colimit preserving monad thereon. Then sifted colimits in \(\operatorname{Mod}_{T}(\mathcal{C})\) commute with finite limits. In particular, this applies to \(DAlg(\mathcal{C})\) for any derived algebraic context \(\mathcal{C}\).

Proof.: It suffices to check that finite limits commute with geometric realizations, and since any simplicial object can be written as a filtered colimit of \(n\)-skeletal simplicial objects, it suffices to check that finite limits commute with finite geometric realizations. By construction, the forgetful functor \[F\colon\operatorname{Mod}_{T}(\mathcal{C})\to\mathcal{C}\] preserves and reflects sifted colimits and limits. Stability of \(\mathcal{C}\) implies that finite limits commute with finite colimits in \(\mathcal{C}\), from which we conclude.

**Lemma 2.1.20**.: Let \(\mathcal{A}\) be an \(\infty\)-category which admits finite colimits and a final object, and let \(\mathcal{D}\) be a cocomplete stable \(\infty\)-category. Suppose \(F:\mathcal{A}\to\mathcal{D}\) is an \(n\)-excisive functor which preserves filtered colimits.
Then \(F\) also preserves finite totalizations and all geometric realizations.

Proof.: The proof of Proposition 3.37 in [7] carries over verbatim to this more general setting. We recall the proof here. Recall (from Theorem 1.8 in [9]) that the inclusion of the \(n\)-excisive functors \(Fun^{ep,n}(\mathcal{A},\mathcal{D})\to Fun(\mathcal{A},\mathcal{D})\) admits a left adjoint \(P_{n}\), and as \(n\) varies, these adjoints organize into a tower \[\cdots\to P_{n+1}\to P_{n}\to P_{n-1}\to\cdots\] We will denote by \(D_{n}(F)\) the fiber of \(P_{n}(F)\to P_{n-1}(F)\), and recall that this is an \(n\)-homogeneous functor. The import of these recollections for us is two-fold:

* We can prove the claim inductively, and thus reduce to verifying the conclusion for \(D_{n}(F)\).
* Proposition 6.1.4.14 in [13] posits the existence of a symmetric functor \(G:(\mathcal{A}^{n}\times E\Sigma_{n})/\Sigma_{n}\to\mathcal{D}\) which preserves colimits in each variable such that \[D_{n}(F)\simeq G(X,...,X)_{h\Sigma_{n}}.\]

Passing to homotopy coinvariants is exact, and thus it suffices to prove that the functor \(\mathcal{A}\to\mathcal{D}\) given by \(X\mapsto G(X,...,X)\) preserves geometric realizations and finite totalizations. The first follows immediately from the fact that \(\Delta^{op}\to(\Delta^{op})^{n}\) is left cofinal (Lemma 5.5.8.4 in [12]). If \(X^{\star}\colon\Delta\to\mathcal{A}\) is right Kan extended from \(\Delta_{\leq m}\), then \((X^{\star},...,X^{\star})\colon\Delta^{n}\to\mathcal{A}^{n}\) is also right Kan extended from \((\Delta_{\leq m})^{n}\). Appealing to exactness in each variable of \(G\), \(G(X^{\star},...,X^{\star})\) is also right Kan extended from \((\Delta_{\leq m})^{n}\), and thus \[G(Tot(X^{\star}),...,Tot(X^{\star}))\simeq\lim_{\Delta^{n}}G(X^{\star},...,X^{\star})\simeq Tot(G(X^{\star},...,X^{\star})),\] where the last equivalence follows from right cofinality of \(\Delta\to\Delta^{n}\).

### Functors of \(k\)-Algebras

Recall that the truncated Witt vectors may be viewed as a functor \[W_{n}:CAlg^{\heartsuit}_{\mathbb{Z}_{p}}\to CAlg^{\heartsuit}_{\mathbb{Z}_{p}}.\] By Kan extending from polynomial rings, one obtains a notion of _animated_ Witt vectors, defined on animated commutative \(\mathbb{Z}_{p}\)-algebras. These animated Witt vectors play a useful role in the study of animated \(\delta\)-rings (see e.g. Appendix A in [3]). In our investigation of derived \(\delta\)-rings, we would thus like a notion of truncated Witt vectors for derived commutative rings, but the techniques of the preceding section no longer apply to this context. Indeed, all the results encountered thus far have dealt with the extension of functors defined on the compact projective objects in the heart of a derived algebraic context, not algebra objects therein. The goal of this section is to establish results which will allow us to extend functors of polynomial algebras to functors of derived algebras. Our strategy consists of reducing to the context of the previous section by precomposing with the free algebra functor. More precisely, fix a ring \(k\). Given a functor \[F:Poly_{k}\rightarrow\mathcal{D}\] we obtain a new functor \[G:=F\circ Sym^{\heartsuit}_{k}:\mathrm{Mod}^{fpp}_{k}\rightarrow\mathcal{D}\] which is now amenable to the techniques of the preceding section. Moreover, one can recover \(F\) from \(G\) by equipping \(G\) with a natural piece of extra structure: namely that of a right \(Sym^{\heartsuit}_{k}\)-module, where the action map is induced by the monad multiplication on \(Sym^{\heartsuit}_{k}\).
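Spelled out, this module structure is simply the composite \[G\circ Sym^{\heartsuit}_{k}=F\circ Sym^{\heartsuit}_{k}\circ Sym^{\heartsuit}_{k}\xrightarrow{\ F(\mu)\ }F\circ Sym^{\heartsuit}_{k}=G,\] where \(\mu\colon Sym^{\heartsuit}_{k}\circ Sym^{\heartsuit}_{k}\to Sym^{\heartsuit}_{k}\) denotes the monad multiplication; we record this elementary unwinding only for the reader's convenience.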
Indeed, when endowed with this extra structure, we obtain an identification (see the proof of Lemma 2.2.18) of functors \[F\simeq colim_{\Delta^{op}}Bar_{\star}(G,Sym^{\heartsuit}_{k},-).\] More generally, given a monad \(T\) on an \(\infty\)-category \(\mathcal{C}\), the two sided Bar construction yields a functor \[R\mathrm{Mod}_{T}(Fun(\mathcal{C},\mathcal{D}))\xrightarrow{G\to|Bar_{ \star}(G,T,-)|}Fun(L\mathrm{Mod}_{T}(\mathcal{C}),\mathcal{D})\] where we are appealing to the natural right tensoring of \(Fun(\mathcal{C},\mathcal{D})\) over \(End(\mathcal{C})\) in order to make sense of the left hand side. In this way, we reduce the problem of extending functors of polynomial algebras to that of understanding the preservation of right \(Sym_{k}^{\heartsuit}\)-modules under the extension procedures of the preceding section. We begin by briefly reviewing the necessary notions regarding right-tensorings of categories following [13]. **Definition 2.2.1** (See Definition 4.2.2.2 in [13]).: Let \(\mathcal{C}\) be an \(\infty\)-category. A _right action object_ is a natural transformation \[\alpha:M^{\prime}\to M\] in \(Fun(N(\Delta^{op}),\mathcal{C})\) satisfying the following two properties: * \(M\) is a monoid object in \(\mathcal{C}\) (see Definition 4.1.2.5 in [13]) * The maps \(M^{\prime}([n])\to M^{\prime}(\{n\})\) and \(M^{\prime}([n])\to M([n])\) witness \(M^{\prime}([n])\) as the product \[M^{\prime}([n])\simeq M^{\prime}(\{n\})\times M([n])\simeq M^{\prime}([0]) \times M([n]).\] Following the standard abuse of terminology, we will refer to the \(\infty\)-category \(M^{\prime}([0])\) as being endowed with a right action of \(M([1])\). We denote by \(RMon(\mathcal{C})\) the full subcategory of \(Fun(N(\Delta^{op})\times\Delta^{1},\mathcal{C})\) spanned by the right action objects. **Definition 2.2.2** (See Proposition 4.2.2.9 in [13]).: A _right-tensoring_ of an \(\infty\)-category \(\mathcal{M}\) over a monoidal category \(\mathcal{C}\) is the data of a right action object \(M^{\prime}\to M\) in \(Cat_{\infty}\) equipped with equivalences \[M\simeq\mathcal{C}\quad\text{ and }\quad M^{\prime}([0])\simeq\mathcal{M}.\] **Remark 2.2.3** (Variant 4.2.1.36 and Proposition 4.2.2.9 in [13]).: There exists an \(\infty\)-operad \(\mathcal{RM}^{\otimes}\) whose underlying \(\infty\)-category \(\mathcal{RM}\) is the discrete simplicial set \(\{\mathfrak{a},\mathfrak{m}\}\) such that for any \(\infty\)-category \(\mathcal{C}\) with finite products, there is a canonical equivalence \[Mon_{\mathcal{RM}}(\mathcal{C})\simeq RMon(\mathcal{C}).\] In particular, a right tensoring of an \(\infty\)-category \(\mathcal{M}\) over a monoidal category \(\mathcal{C}\) is equivalent to a co-Cartesian fibration of \(\infty\)-operads \[\mathcal{M}^{\otimes}\to\mathcal{RM}^{\otimes}\] equipped with equivalences \[\mathcal{M}_{\mathfrak{a}}\simeq\mathcal{C}\quad\text{ and }\quad\mathcal{M}_{ \mathfrak{m}}\simeq\mathcal{M}.\] For our purposes, all right-tensored (and left-tensored) categories will arise from (the obvious analogue of) the following construction. **Construction 2.2.4**.: Fix two quasi-categories \(\mathcal{C},\mathcal{M}\in sSet\), and suppose \(\mathcal{C}\) is endowed with the structure of a simplicial monoid and \(\mathcal{M}\) is equipped with a right action of \(\mathcal{C}\) in the category of simplicial sets. 
The monoid structure on the simplicial set \(\mathcal{C}\) yields a simplicial object in quasi-categories \[M_{\star}:=\left(\cdots\mathcal{C}\times\mathcal{C}\rightrightarrows\mathcal{C}\rightrightarrows[0]\right).\] Unstraightening this diagram yields a monoidal \(\infty\)-category \(\mathcal{C}^{\otimes}\) whose underlying \(\infty\)-category is \(\mathcal{C}\). Similarly, the right action of \(\mathcal{C}\) on \(\mathcal{M}\) yields a simplicial object in quasi-categories \[M^{\prime}_{\star}:=\left(\cdots\mathcal{C}\times\mathcal{C}\times\mathcal{M}\rightrightarrows\mathcal{C}\times\mathcal{M}\rightrightarrows\mathcal{M}\right)\] and the projection maps \(\mathcal{C}^{n}\times\mathcal{M}\rightarrow\mathcal{C}^{n}\) induce a map of simplicial quasi-categories \(M^{\prime}\to M\). This yields a right action object in \(Cat_{\infty}\) which we will denote by \(\mathcal{M}_{\mathcal{C}}^{\otimes}\). The right action object \(\mathcal{M}_{\mathcal{C}}^{\otimes}\) encodes a right tensoring of \(\mathcal{M}\) over \(\mathcal{C}\). **Definition 2.2.5**.: A _lax map of right tensored categories_ \(p,q\colon\mathcal{M}^{\otimes},\mathcal{N}^{\otimes}\rightarrow\mathcal{RM}^{\otimes}\) is a map of \(\infty\)-operads over \(\mathcal{RM}^{\otimes}\) \[\mathcal{M}^{\otimes}\rightarrow\mathcal{N}^{\otimes}.\] A _(strict) map of right tensored categories_ as above is a lax map of right tensored categories which takes \(p\)-coCartesian edges to \(q\)-coCartesian edges. **Example 2.2.6**.: For general \(\infty\)-categories \(\mathcal{C}\) and \(\mathcal{D}\), \(Fun(\mathcal{C},\mathcal{D})\) is right tensored over \(End(\mathcal{C})\), and this tensoring arises as in Construction 2.2.4. **Example 2.2.7**.: Let \(k\) be a commutative ring and \(\mathcal{D}\) a presentable \(\infty\)-category. Let \(End_{\Sigma}(\operatorname{Mod}_{k}^{\heartsuit})\) denote the full subcategory of \(End(\operatorname{Mod}_{k}^{\heartsuit})\) spanned by the endomorphisms which preserve \(1\)-sifted colimits. We denote by \(\mathcal{E}\) the full subcategory of \(End_{\Sigma}(\operatorname{Mod}_{k}^{\heartsuit})\) spanned by those endomorphisms \(f\) such that the image of \(f\) under the composite \[End_{\Sigma}(\operatorname{Mod}_{k}^{\heartsuit})\to Fun_{\Sigma}(\operatorname{Mod}_{k}^{\heartsuit},\operatorname{Mod}_{k})\simeq Fun(\operatorname{Mod}_{k}^{fpp},\operatorname{Mod}_{k})\] is right-extendable in the sense of Definition 2.1.8. Lemma 2.2.8 below exhibits \(\mathcal{E}\) as a monoidal subcategory of \(End_{\Sigma}(\operatorname{Mod}_{k}^{\heartsuit})\). In this context, Construction 2.2.4 endows the category of right-extendable functors \(Fun_{\Sigma}^{ext}(\operatorname{Mod}_{k}^{\heartsuit},\mathcal{D})\) with a right tensoring over \(\mathcal{E}\), \(Fun_{\sigma}(\operatorname{Perf}_{k,\leq 0},\mathcal{D})\) with a right tensoring over \(End_{\sigma}(\operatorname{Perf}_{k,\leq 0})\), and \(Fun_{\Sigma}(\operatorname{Mod}_{k},\mathcal{D})\) with a right tensoring over \(End_{\Sigma}(\operatorname{Mod}_{k})\). **Lemma 2.2.8**.: Let \(\mathcal{E}\), \(End_{\Sigma}(\operatorname{Mod}_{k}^{\heartsuit})\) be as in Example 2.2.7. Then the inclusion \(\mathcal{E}\rightarrow End_{\Sigma}(\operatorname{Mod}_{k}^{\heartsuit})\) is monoidal. Proof.: Both categories in question are \(1\)-categories, and so it suffices to show that the composite of two extendable endomorphisms is once again extendable. Suppose \(f,g\in\mathcal{E}\).
We must show that the right Kan extension \[(i\circ f\circ g)^{R}\colon\operatorname{Perf}_{k,\leq 0}\rightarrow\operatorname{Mod}_{k}\] preserves finite coconnective geometric realizations. Before verifying this, we make some preliminary observations. First observe that for any finite simplicial set \(K\), \(Fun(K,\mathrm{Mod}_{k})\) is compactly generated by \(Fun(K,\mathrm{Perf}_{k})\). Since limits and colimits are computed pointwise, any functor \(F\colon K\to\mathrm{Mod}_{k}\) which factors through \(\mathrm{Perf}_{k}\) is compact. Indeed, given a directed diagram \(\{G_{n}\}\), and a natural transformation \(F\to colim_{n}G_{n}\), for each vertex \(k\in K\) we obtain a factorization \(F(k)\to G_{n_{k}}(k)\) for some \(n_{k}\). Since \(K\) was assumed to be finite, we can find a uniform \(n\) so as to attain the desired factorization. To see that such functors generate the category, we appeal to stability to reduce to checking that any functor \(F\) such that \(Hom(G,F)\simeq 0\) for all \(G\in Fun(K,\mathrm{Perf}_{k})\) is equivalent to the zero functor. This follows immediately from the fact that evaluation at a vertex \(i\in K\), viewed as a functor \(Fun(K,\mathrm{Mod}_{k})\xrightarrow{ev_{i}}\mathrm{Mod}_{k}\) admits a left adjoint, which reduces us to a pointwise calculation. We next claim that given \(X\in\mathrm{Perf}_{k,\leq 0}\), for any presentation of \((i\circ g)^{R}(X)\) as a filtered colimit of coconnective compact objects \[(i\circ g)^{R}(X)\simeq colim_{n}Y^{n},\] the canonical map \[colim_{n}(i\circ f)^{R}(Y^{n})\to(i\circ f\circ g)^{R}(X)\] is an equivalence. Indeed, we can present \(X\) as a finite totalization of objects in \(\mathrm{Mod}_{k}^{\heartsuit}\): \(X\simeq Tot(X_{\star})\), and then witness (using the above observation) the diagram \(X_{\star}\) as a filtered colimit of finite cosimplicial objects in \(\mathrm{Perf}_{\mathrm{Mod}_{k}^{\heartsuit}}\), which we will denote by \(Z_{\star}^{\star}\). Then we have \[(i\circ f\circ g)^{R}(X)\simeq Tot(i\circ f\circ g(X_{\star}))\simeq colim_{n}Tot(i\circ f\circ g(Z_{\star}^{n}))\] and since \(f\) preserves \(1\)-sifted colimits, any presentation of \(g(Z_{\star}^{n})\) as a filtered colimit of compact objects is preserved by \(f\), whence the claim. We now return to the main thread of the proof. Fix a diagram \(X_{\star}\colon\Delta^{\leq m,op}\to\mathrm{Perf}_{k,\leq 0}\) such that \(|X_{\star}|\) remains coconnective. Our task is to show that the canonical map \[\big{|}(i\circ f\circ g)^{R}(X_{\star})\big{|}\to(i\circ f\circ g)^{R}(\big{|}X_{\star}\big{|})\] is an equivalence. Since \(g\) preserves discrete objects and \(\mathrm{Mod}_{k,\leq 0}\subset\mathrm{Mod}_{k}\) is closed under limits, \((i\circ g)^{R}(X_{\star})\) is coconnective and we can write the diagram \((i\circ g)^{R}(X_{\star})\) as a filtered colimit of diagrams \(Y_{\star}^{n}\) in \(\mathrm{Perf}_{k,\leq 0}\). Appealing to the claim of the preceding paragraph, we see that \[\big{|}(i\circ f\circ g)^{R}(X_{\star})\big{|}\simeq colim_{n}\big{|}(i\circ f)^{R}(Y_{\star}^{n})\big{|}\simeq colim_{n}(i\circ f)^{R}(\big{|}Y_{\star}^{n}\big{|})\simeq(i\circ f\circ g)^{R}(\big{|}X_{\star}\big{|})\] where the second equivalence follows from extendability of \(i\circ f\), and the third from extendability of \(i\circ g\).
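To see the monoidal structure on \(\mathcal{E}\) in action, here is a minimal illustration, assuming only the right-extendability of the symmetric algebra functor (verified for \(k=\mathbb{Z}\) in Example 2.2.21 below, and invoked again for \(k=\mathbb{F}_{p}\) in Construction 2.4.1): granting \(Sym^{\heartsuit}_{k}\in\mathcal{E}\), Lemma 2.2.8 and induction give \[(Sym^{\heartsuit}_{k})^{\circ n}\in\mathcal{E}\quad\text{for all }n\geq 1.\] This closure property is exactly what makes the terms \(G\circ(Sym^{\heartsuit}_{k})^{\circ n}\) of a Bar complex \(Bar_{\star}(G,Sym^{\heartsuit}_{k},-)\) amenable, term by term, to the extension procedures of the preceding section.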
**Remark 2.2.9**.: The reason we restrict our attention to \(\operatorname{Mod}_{k}\) as opposed to a general derived algebraic context is so that we may write any coconnective perfect object in \(\operatorname{Mod}_{k}\) as a finite totalization of finitely presented projective \(k\)-modules. The above proof relies on this fact, which does not hold in general derived algebraic contexts. This may be avoided by replacing \(\mathcal{E}\) with the category of filtered functors \(F^{\geq 0}End^{ext}(\mathcal{C}^{\heartsuit})\) consisting of endomorphisms \(\mathcal{C}^{\heartsuit}\rightarrow\mathcal{C}^{\heartsuit}\) with a filtration whose filtered pieces preserve \(\mathcal{C}^{0}\) and are extendable. Working with filtered endomorphisms introduces mild complications in the forthcoming constructions, so we content ourselves with the simpler setting of \(\operatorname{Mod}_{k}\) for this paper. **Lemma 2.2.10**.: Let \(\mathcal{C}^{\prime}\xrightarrow{i}\mathcal{C}\) be a full subcategory, and suppose that the right Kan extension of any functor \(F\colon\mathcal{C}^{\prime}\rightarrow\mathcal{C}\) exists. Then the composite \[End(\mathcal{C}^{\prime})\xrightarrow{i\circ-}Fun(\mathcal{C}^{\prime},\mathcal{C})\xrightarrow{\mathit{Ran}^{\mathcal{C}}_{\mathcal{C}^{\prime}}}End(\mathcal{C})\] is lax monoidal. Proof.: Consider the full subcategory \(End_{\mathcal{C}^{\prime}}(\mathcal{C})\xrightarrow{j}End(\mathcal{C})\) spanned by those endofunctors which preserve the subcategory \(\mathcal{C}^{\prime}\). The composite in question then factors through \(End_{\mathcal{C}^{\prime}}(\mathcal{C})\) as follows: \[End(\mathcal{C}^{\prime})\xrightarrow{R}End_{\mathcal{C}^{\prime}}(\mathcal{C})\xrightarrow{j}End(\mathcal{C}).\] By inspection \(End_{\mathcal{C}^{\prime}}(\mathcal{C})\) is a monoidal subcategory of \(End(\mathcal{C})\), and the natural restriction functor \[res\colon End_{\mathcal{C}^{\prime}}(\mathcal{C})\to End(\mathcal{C}^{\prime})\] is a map of simplicial monoids, and thus canonically refines to a strictly monoidal functor. It thus suffices to show that \(R\) is a right adjoint to \(res\) by Corollary 7.3.2.7 in [13]. Let us construct the counit of the adjunction. Right Kan extension is right adjoint to restriction, and so we have a counit natural transformation \[\hat{\epsilon}\colon res\circ Ran^{\mathcal{C}}_{\mathcal{C}^{\prime}}\to Id_{Fun(\mathcal{C}^{\prime},\mathcal{C})}.\] Since \(i\circ-\) is fully faithful, it suffices to construct a natural transformation of the form \[i\epsilon\colon(i\circ res\circ R)\rightarrow(i\circ-).\] We have identifications \(i\circ res\circ R\simeq res\circ j\circ R\simeq res\circ Ran^{\mathcal{C}}_{\mathcal{C}^{\prime}}\circ i\), and thus we declare \[i\epsilon:=\hat{\epsilon}\circ i.\] To verify that the natural transformation \(\epsilon\) is the counit of an adjunction between \(R\) and \(res\), we must verify that for any \(f\in End_{\mathcal{C}^{\prime}}(\mathcal{C})\) and \(g\in End(\mathcal{C}^{\prime})\), the composite \[Hom_{End_{\mathcal{C}^{\prime}}(\mathcal{C})}(f,R(g))\xrightarrow{res}Hom_{End(\mathcal{C}^{\prime})}(res(f),res(R(g)))\xrightarrow{\epsilon_{g}\circ-}Hom_{End(\mathcal{C}^{\prime})}(res(f),g)\] is a homotopy equivalence.
This composite fits into the following commutative diagram of spaces \[\begin{CD}Hom_{End_{\mathcal{C}^{\prime}}(\mathcal{C})}(f,R(g))@>{\epsilon\circ res(-)}>{}>Hom_{End(\mathcal{C}^{\prime})}(res(f),g)\\@V{}V{\simeq}V@V{}V{\simeq}V\\Hom_{End(\mathcal{C})}(j(f),j(R(g)))@>{\hat{\epsilon}\circ res(-)}>{}>Hom_{Fun(\mathcal{C}^{\prime},\mathcal{C})}(i(res(f)),i(g)).\end{CD}\] The bottom arrow is a homotopy equivalence since \(\hat{\epsilon}\) is the counit of an adjunction between \(Ran^{\mathcal{C}}_{\mathcal{C}^{\prime}}\) and \(res\), from which we conclude. **Lemma 2.2.11**.: Let \(\mathcal{C}^{\prime}\xrightarrow{i}\mathcal{C}\) be as in the preceding lemma, and let \(\mathcal{D}\) be an \(\infty\)-category such that the right Kan extension of any functor \(F\in Fun(\mathcal{C}^{\prime},\mathcal{D})\) along \(i\) exists. Then the right Kan extension \[Fun(\mathcal{C}^{\prime},\mathcal{D})\xrightarrow{Ran^{\mathcal{C}}_{\mathcal{C}^{\prime}}}Fun(\mathcal{C},\mathcal{D})\] refines to a lax map of right tensored categories, where we equip the domain and target with their natural tensorings over \(End(\mathcal{C}^{\prime})\) and \(End(\mathcal{C})\) respectively. This refinement is such that the induced lax monoidal map \(End(\mathcal{C}^{\prime})\to End(\mathcal{C})\) is precisely the composite of the preceding lemma. Proof.: First, observe that we may also endow \(Fun(\mathcal{C},\mathcal{D})\) with a right tensoring over \(End_{\mathcal{C}^{\prime}}(\mathcal{C})\) (see the notation in the proof of Lemma 2.2.10) via Construction 2.2.4. Since \(End_{\mathcal{C}^{\prime}}(\mathcal{C})\subset End(\mathcal{C})\) is a sub-simplicial monoid, the identity \(Fun(\mathcal{C},\mathcal{D})\to Fun(\mathcal{C},\mathcal{D})\) refines to a (strict) map of right tensored categories relating the two tensorings. It thus suffices to prove that right Kan extension refines to a lax map of right tensored categories where \(Fun(\mathcal{C},\mathcal{D})\) is equipped with the tensoring over \(End_{\mathcal{C}^{\prime}}(\mathcal{C})\). This is an immediate application of Corollary 7.3.2.7 in [13], and the proof of Lemma 2.2.10. The analogous situation for left Kan extensions is slightly more subtle, as the left adjoint to a monoidal functor is only oplax monoidal. Nevertheless, the following lemma will suffice for our purposes. **Lemma 2.2.12**.: Let \(\mathcal{C}\simeq Ind_{\kappa}(\mathcal{C}^{\prime})\) be a compactly generated \(\infty\)-category, and let \(i\colon\mathcal{C}^{\prime}\to\mathcal{C}\) denote the inclusion. Then the composite \[End(\mathcal{C}^{\prime})\xrightarrow{i\circ-}Fun(\mathcal{C}^{\prime},\mathcal{C})\xrightarrow{\mathit{Lan}^{\mathcal{C}}_{\mathcal{C}^{\prime}}}End(\mathcal{C})\] is strictly monoidal. Proof.: As in the proof of Lemma 2.2.10, we factor the composite through \(End_{\mathcal{C}^{\prime}}(\mathcal{C})\xrightarrow{j}End(\mathcal{C})\). This yields a functor \(L\colon End(\mathcal{C}^{\prime})\to End_{\mathcal{C}^{\prime}}(\mathcal{C})\), which is easily seen to be left adjoint to \(res\colon End_{\mathcal{C}^{\prime}}(\mathcal{C})\to End(\mathcal{C}^{\prime})\) via an argument analogous to that of Lemma 2.2.10. To establish the claim, we may reduce (by Corollary 7.3.2.12 in [13]) to checking that for any finite collection of elements \(f_{1},...,f_{n}\in End(\mathcal{C}^{\prime})\), the canonical map \[L(f_{1}\circ...\circ f_{n})\to L(f_{1})\circ...\circ L(f_{n})\] is an equivalence, which we can check pointwise.
By hypothesis, left Kan extension induces an equivalence \[Fun(\mathcal{C}^{\prime},\mathcal{C})\simeq End_{\omega}(\mathcal{C})\] and so in particular, each \(L(f_{i})\) preserves filtered colimits. Given any \(c\in\mathcal{C}\), the category \(\mathcal{C}^{\prime}\downarrow c\) is filtered, and thus we see \[L(f_{1})\circ...\circ L(f_{n})(c)\simeq colim_{\mathcal{C}^{\prime}\downarrow c }\left(f_{1}\circ...\circ f_{n}(c^{\prime})\right)\simeq L(f_{1}\circ...\circ f _{n})(c)\] as desired. **Lemma 2.2.13**.: Let \(\mathcal{C}\simeq Ind_{\kappa}(\mathcal{C}^{\prime})\) be as above and \(\mathcal{D}\) be an \(\infty\)-category such that the left Kan extension of any functor \(F\colon\mathcal{C}^{\prime}\to\mathcal{D}\) along \(i\) exists. Then left Kan extension \[Fun(\mathcal{C}^{\prime},\mathcal{D})\xrightarrow{\mathit{Lan}^{\mathcal{C}}_ {\mathcal{C}^{\prime}}}Fun(\mathcal{C},\mathcal{D})\] refines to a strict map of right tensored categories, where we equip the domain and target with their natural tensorings over \(End(\mathcal{C}^{\prime})\) and \(End(\mathcal{C})\) respectively. This refinement is such that the induced monoidal map \(End(\mathcal{C}^{\prime})\to End(\mathcal{C})\) is precisely the composite of the preceding lemma. Proof.: This is exactly the same as the proof of Lemma 2.2.11, except that rather than appealing to Corollary 7.3.2.7 in [13], we must appeal to Corollary 7.3.2.12 in loc. cit., which has an additional hypothesis. This reduces to checking that for any \(f\in Fun(\mathcal{C}^{\prime},\mathcal{D})\) and any \(g_{1},...,g_{n}\in End(\mathcal{C}^{\prime})\), the canonical map \[\mathit{Lan}^{\mathcal{C}}_{\mathcal{C}^{\prime}}(f\circ g_{1}\circ...\circ g _{n})\to\mathit{Lan}^{\mathcal{C}}_{\mathcal{C}^{\prime}}(f)\circ L(g_{1}) \circ...\circ L(g_{n})\] is an equivalence, which follows by arguing pointwise as in the proof of Lemma 2.2.12. **Proposition 2.2.14**.: Let \(\mathcal{C}\) be a derived algebraic context, and denote by \(\mathrm{Perf}_{\mathcal{C},\leq 0}\) the subcategory of coconnective compact objects. Let \(j:\mathrm{Perf}_{\mathcal{C},\leq 0}\to\mathcal{C}\) denote the inclusion. Then the functor \[End_{\sigma}(\mathrm{Perf}_{\mathcal{C},\leq 0})\xrightarrow{(j\circ-)^{L}}End_{ \Sigma}(\mathcal{C})\] is (strictly) monoidal, where the superscript \(L\) denotes left Kan extension along \(j\). Proof.: This is an immediate application of Lemma 2.2.12, noting that \(End_{\sigma}(\mathrm{Perf}_{\mathcal{C},\leq 0})\subset End(\mathrm{Perf}_{ \mathcal{C},\leq 0})\) and \(End_{\Sigma}(\mathcal{C})\subset End(\mathcal{C})\) are both monoidal subcategories and that left Kan extension identifies these subcategories (see Proposition 3.13 in [7]). Denote by \(\mathcal{E}:=End_{\Sigma}^{ext}(\mathrm{Mod}_{k}^{\heartsuit})\) the category of extendable endomorphisms, as defined in Example 2.2.7. Let \(\mathcal{D}\) be a presentable \(\infty\)-category, and denote by \(Fun_{\Sigma}^{ext}(\mathrm{Mod}_{k}^{\heartsuit},\mathcal{D})\) the full subcategory of \(Fun_{\Sigma}(\mathrm{Mod}_{k}^{\heartsuit},\mathcal{D})\) spanned by the right-extendable functors. Our task is to show that the right-left extension functor \[Fun_{\Sigma}^{ext}(\mathrm{Mod}_{k}^{\heartsuit},\mathcal{D})\to Fun_{ \Sigma}(\mathrm{Mod}_{k},\mathcal{D})\] respects the natural tensorings of these categories over \(\mathcal{E}\) and \(End_{\Sigma}(\mathrm{Mod}_{k})\) respectively. The primary observation of this section is captured in the following Proposition. 
**Proposition 2.2.15**.: Let \(\mathcal{D}\) be a complete presentable \(\infty\)-category. Then the right-left extension functor of Construction 2.1.10 \[(-)^{RL}:Fun_{\Sigma}^{ext}(\mathrm{Mod}_{k}^{\heartsuit},\mathcal{D})\to Fun_{\Sigma}(\mathrm{Mod}_{k},\mathcal{D})\] refines to a lax map of right-tensored \(\infty\)-categories \[(-)^{RL}:\Big{(}Fun_{\Sigma}^{ext}(\mathrm{Mod}_{k}^{\heartsuit},\mathcal{D})\Big{)}_{\mathcal{E}}^{\otimes}\to(Fun_{\Sigma}(\mathrm{Mod}_{k},\mathcal{D}))_{End_{\Sigma}(\mathrm{Mod}_{k})}^{\otimes}\] where we are using the notation of Construction 2.2.4 to depict the tensorings in question. Proof.: Lemmas 2.2.11 and 2.2.13 imply that the composite \[Fun(\mathrm{Mod}_{k}^{fpp},\mathcal{D})\xrightarrow{(-)^{R}}Fun(\mathrm{Perf}_{k,\leq 0},\mathcal{D})\xrightarrow{(-)^{L}}Fun(\mathrm{Mod}_{k},\mathcal{D})\] admits a canonical refinement to a lax map of right-tensored categories, and our task is to show that this refinement restricts to the subcategories in question. The inclusions \(Fun^{ext}(\mathrm{Mod}_{k}^{fpp},\mathcal{D})\to Fun(\mathrm{Mod}_{k}^{fpp},\mathcal{D})\) and \(Fun_{\Sigma}(\mathrm{Mod}_{k},\mathcal{D})\to Fun(\mathrm{Mod}_{k},\mathcal{D})\) both admit natural refinements to strict maps of right-tensored categories where the domains of these inclusions are tensored over the categories \(\mathcal{E}\) and \(End_{\Sigma}(\mathrm{Mod}_{k})\) respectively. We thus obtain a natural factorization over \(\mathcal{RM}^{\otimes}\) as indicated in the diagram \[\begin{CD}\left(Fun^{ext}(\mathrm{Mod}_{k}^{fpp},\mathcal{D})\right)_{\mathcal{E}}^{\otimes}\xrightarrow{(-)^{RL}}(Fun_{\Sigma}(\mathrm{Mod}_{k},\mathcal{D}))_{End_{\Sigma}(\mathrm{Mod}_{k})}^{\otimes}\\@V{}V{}V@V{}V{}V\\\left(Fun(\mathrm{Mod}_{k}^{fpp},\mathcal{D})\right)_{\mathcal{E}}^{\otimes}\xrightarrow{(-)^{RL}}(Fun(\mathrm{Mod}_{k},\mathcal{D}))_{End_{\Sigma}(\mathrm{Mod}_{k})}^{\otimes}\end{CD}\] and it follows formally that the dashed arrow is in fact a map of \(\infty\)-operads. It follows formally then that right-left extension induces a functor from right \(\mathcal{E}\)-module objects in \(Fun_{\Sigma}^{ext}(Mod_{k}^{\heartsuit},\mathcal{D})\) to right \(End_{\Sigma}(Mod_{k})\)-module objects in \(Fun_{\Sigma}(Mod_{k},\mathcal{D})\). The symmetric algebra monad on \(Mod_{k}^{\heartsuit}\) is an \(\mathcal{E}\)-monad, and so in particular, right-left extension induces a functor \[R\mathrm{Mod}_{Sym_{k}^{\heartsuit}}(Fun_{\Sigma}^{ext}(Mod_{k}^{\heartsuit},\mathcal{D}))\to R\mathrm{Mod}_{LSym_{k}}(Fun_{\Sigma}(Mod_{k},\mathcal{D})).\] This is our main tool for extending functors from \(Poly_{k}^{fg}\) to all of \(DAlg(Mod_{k})\). **Definition 2.2.16**.: Let \(\mathcal{D}\) be a presentable \(\infty\)-category. We will say that a functor \[F\colon Poly_{k}^{fg}\to\mathcal{D}\] is _right extendable_ if the composite \(F\circ Sym_{k}^{\heartsuit}\colon\mathrm{Mod}_{k}^{fpp}\to\mathcal{D}\) is right extendable in the sense of Definition 2.1.8. We will denote by \(Fun^{ext}(Poly_{k}^{fg},\mathcal{D})\) the full subcategory of \(Fun(Poly_{k}^{fg},\mathcal{D})\) spanned by the right extendable functors. **Construction 2.2.17** (Non-linear right-left extensions).: Let \(\mathcal{D}\) be a presentable \(\infty\)-category.
We construct a functor \(Fun^{ext}(Poly_{k}^{fg},\mathcal{D})\simeq Fun_{\Sigma}^{ext}(DAlg(\mathrm{Mod}_{k})^{\heartsuit},\mathcal{D})\to Fun_{\Sigma}(DAlg(\mathrm{Mod}_{k}),\mathcal{D})\) by tracing around the following diagram: \[Fun^{ext}(Poly_{k}^{fg},\mathcal{D})\xrightarrow{-\circ Sym_{k}^{\heartsuit}}R\mathrm{Mod}_{Sym_{k}^{\heartsuit}}(Fun_{\Sigma}^{ext}(\mathrm{Mod}_{k}^{\heartsuit},\mathcal{D}))\xrightarrow{(-)^{RL}}R\mathrm{Mod}_{LSym_{k}}(Fun_{\Sigma}(\mathrm{Mod}_{k},\mathcal{D}))\xrightarrow{G\to|Bar_{\star}(G,LSym_{k},-)|}Fun_{\Sigma}(DAlg(\mathrm{Mod}_{k}),\mathcal{D})\] where \(Bar_{\star}(-,LSym_{k},-)\) is the two-sided Bar construction. We refer to the resulting composite as the _non-linear right-left extension functor_. We now investigate the basic properties of this construction. Our first task is to justify the use of the term 'extension'. **Lemma 2.2.18**.: Suppose \(F\in Fun_{\Sigma}^{ext}(DAlg(\mathrm{Mod}_{k})^{\heartsuit},\mathcal{D})\) is a right extendable functor. Then there is a natural equivalence of functors \[F^{RL}\big{|}_{DAlg(\mathrm{Mod}_{k})^{\heartsuit}}\simeq F.\] Proof.: Denote by \(G=(F\circ Sym_{k}^{\heartsuit})^{RL}\), and note that \(G\big{|}_{\mathrm{Mod}_{k}^{\heartsuit}}\simeq F\circ Sym_{k}^{\heartsuit}\). In particular, for any \(P\in DAlg(\mathrm{Mod}_{k})^{\heartsuit}\), the right-left extension is the geometric realization of the simplicial object \[Bar_{\star}(F\circ Sym_{k}^{\heartsuit},Sym_{k}^{\heartsuit},P)\simeq F(Bar_{\star}(Sym_{k}^{\heartsuit},Sym_{k}^{\heartsuit},P)).\] The Bar resolution of \(P\) as a left \(Sym_{k}^{\heartsuit}\)-module admits a natural augmentation to \(P\) itself, and thus by applying \(F\) we obtain a natural transformation \[F^{RL}\big{|}_{DAlg(\mathrm{Mod}_{k})^{\heartsuit}}\to F.\] To check that it is an equivalence, it suffices to check on the subcategory \(Poly_{k}^{fg}\) since both functors preserve \(1\)-sifted colimits. Restricted to this subcategory, the augmentation of the Bar complex admits a natural splitting given by the unit of the \(Sym_{k}^{\heartsuit}\)-module structure on \(P\) (this splitting only exists on the level of underlying complexes since the unit is not multiplicative, but this is sufficient for our purposes), from which we conclude. **Observation 2.2.19**.: The non-linear right-left extension of a functor manifestly preserves sifted colimits. This property combined with the conclusion of the preceding lemma immediately implies that the restriction of \(F^{RL}\) to the connective objects is precisely the left Kan extension of \(F\) to animated commutative rings: \[F^{RL}\big{|}_{DAlg(\mathrm{Mod}_{k})_{\geq 0}}\simeq Lan_{Poly_{k}^{fg}}^{\mathfrak{a}CAlg_{k}}(F).\] In order to understand the behavior of the non-linear right-left extension on non-connective objects, it is useful to understand what types of limits are preserved by the extension. Our main result in this direction is the following: **Theorem 2.2.20**.: Suppose \(\mathcal{D}\) is a stable \(\infty\)-category and \(T\) is a sifted colimit preserving monad thereon. Suppose further that \(F\in Fun(Poly_{k}^{fg},L\mathrm{Mod}_{T}(\mathcal{D}))\) satisfies the property that \(F\circ Sym_{k}^{\heartsuit}\) admits an exhaustive filtration by excisively polynomial subfunctors. Then \(F^{RL}\) preserves finite totalizations. Proof.: Fix a diagram \[X_{\star}\colon\Delta^{\leq n}\to DAlg(\mathrm{Mod}_{k}).\] We first observe that the existence of the hypothesized filtration on \(F\circ Sym_{k}^{\heartsuit}\) guarantees that \[G:=(F\circ Sym_{k}^{\heartsuit})^{RL}\] preserves finite totalizations by Lemma 2.1.20, as does \(LSym_{k}\).
In particular, the canonical map of simplicial objects in \(L\mathrm{Mod}_{T}(\mathcal{D})\) \[Bar_{\star}(G,LSym_{k},Tot(X_{\star}))\to Tot(Bar_{\star}(G,LSym_{k},X_{\star}))\] is an equivalence, since the \(n\)-simplices of the Bar resolution are canonically identified with \(G\circ(LSym_{k})^{\circ n}(-)\). By Lemma 2.1.19, geometric realizations commute with finite limits in \(L\mathrm{Mod}_{T}(\mathcal{D})\), and thus we obtain \[F^{RL}(Tot(X_{\star})):=\left|Bar_{\star}(G,LSym_{k},Tot(X_{\star}))\right|\simeq\left|Tot(Bar_{\star}(G,LSym_{k},X_{\star}))\right|\simeq Tot\left(\left|Bar_{\star}(G,LSym_{k},X_{\star})\right|\right)\simeq Tot(F^{RL}(X_{\star}))\] as desired. **Example 2.2.21**.: Consider the \(n\)-truncated Witt vectors \[W_{n}\colon Poly_{\mathbb{Z}}^{fg}\to DAlg(\mathrm{Mod}_{\mathbb{Z}}).\] We claim that \(W_{n}\) is right extendable. Since the forgetful functor \(DAlg(\mathrm{Mod}_{\mathbb{Z}})\to\mathrm{Mod}_{\mathbb{Z}}\) reflects sifted colimits and limits, it suffices to verify right-extendability of the composite \[\mathrm{Mod}_{\mathbb{Z}}^{fpp}\xrightarrow{Sym^{\heartsuit}_{\mathbb{Z}}}Poly_{\mathbb{Z}}^{fg}\xrightarrow{W_{n}}DAlg(\mathrm{Mod}_{\mathbb{Z}})\to\mathrm{Mod}_{\mathbb{Z}}\] which we will denote by \(\mathscr{W}_{n}\). The Verschiebung fits into a fiber sequence \[\mathscr{W}_{1}\xrightarrow{V^{n-1}}\mathscr{W}_{n}\to\mathscr{W}_{n-1}\] in \(Fun(\mathrm{Mod}_{\mathbb{Z}}^{fpp},\mathrm{Mod}_{\mathbb{Z}})\). Right Kan extending yields a fiber sequence \[\mathscr{W}_{1}^{R}\to\mathscr{W}_{n}^{R}\to\mathscr{W}_{n-1}^{R}.\] Observe that we have a natural identification \(\mathscr{W}_{1}\simeq Sym^{\heartsuit}_{\mathbb{Z}}\), and thus \(\mathscr{W}_{1}\) is right extendable. Proceeding inductively, the above fiber sequence implies that \(\mathscr{W}_{n}^{R}\) preserves finite coconnective geometric realizations, and thus \(W_{n}\) is right extendable as desired. ### The \(LSym^{\delta}\)-Monad Let \(\mathfrak{a}CAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})\) denote the \(\infty\)-category of animated \(\delta\)-rings over \(\mathbb{Z}_{p}\) (see e.g. Appendix A of [3]). We will denote by \(\mathrm{Mod}_{\mathbb{Z}_{p}}\) the derived \(\infty\)-category of \(\mathbb{Z}_{p}\). Recall from Proposition A.20 of [3] that the forgetful functor \[\mathfrak{a}CAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})\to\mathrm{Mod}_{\mathbb{Z}_{p},\geq 0}\] admits a left adjoint, and in fact the resulting adjunction is monadic. Our goal in this section is to extend this monad to the entire derived category, using the results of Section 2.1. To accomplish this, we may restrict the monad to free \(\mathbb{Z}_{p}\)-modules, where we will denote the monad by \(Sym_{\mathbb{Z}_{p}}^{\delta}\). Our strategy is to introduce a filtration of \(Sym_{\mathbb{Z}_{p}}^{\delta}\) by subfunctors \(Sym_{\mathbb{Z}_{p}}^{\delta,\leq n}\) such that the following three conditions hold: * The filtration is exhaustive. * Each filtered piece \(Sym_{\mathbb{Z}_{p}}^{\delta,\leq n}\) is additively polynomial. * The assignment \(n\to Sym_{\mathbb{Z}_{p}}^{\delta,\leq n}\) may be endowed with the structure of a filtered monad refining the monad structure on \(Sym_{\mathbb{Z}_{p}}^{\delta}\). **Notation 2.3.1**.: Let \(S\) be a set. We will denote by \(\mathbb{Z}_{p}\left\langle S\right\rangle\) the free \(\mathbb{Z}_{p}\)-module generated by \(S\).
Evaluating \(Sym_{\mathbb{Z}_{p}}^{\delta}\) on \(\mathbb{Z}_{p}\left\langle S\right\rangle\) yields \[Sym_{\mathbb{Z}_{p}}^{\delta}(\mathbb{Z}_{p}\left\langle S\right\rangle)\simeq \mathbb{Z}_{p}\left[x_{s,i}\big{|}s\in S,i\in\mathbb{Z}_{\geq 0}\right]\] where the right hand side is the module underlying the polynomial algebra on the depicted generators. In this notation the \(\delta\)-operator acts by \(\delta(x_{s,i})=x_{s,i+1}\). **Definition 2.3.2**.: Given a monomial \[f=\lambda x_{s_{1},i_{1}}^{j_{1}}...x_{s_{n},i_{n}}^{j_{n}}\] in \(Sym_{\mathbb{Z}_{p}}^{\delta}(\mathbb{Z}_{p}\left\langle S\right\rangle)\), we define the \(\delta\)_-degree_ of \(f\) to be \[deg_{\delta}(f):=\sum_{k=1}^{n}p^{i_{k}}j_{k}.\] We extend this to general polynomials in the obvious way: \(deg_{\delta}(f_{1}+f_{2})=\max\{deg_{\delta}(f_{1}),deg_{\delta}(f_{2})\}\). **Definition 2.3.3**.: For a set \(S\), we define \[Sym_{\mathbb{Z}_{p}}^{\delta,\leq n}(\mathbb{Z}_{p}\left\langle S\right\rangle )\subset Sym_{\mathbb{Z}_{p}}^{\delta}(\mathbb{Z}_{p}\left\langle S\right\rangle)\] to be the submodule generated by those polynomials whose \(\delta\)-degree is less than or equal to \(n\). **Observation 2.3.4**.: Given a map of free \(\mathbb{Z}_{p}\)-modules \(f\colon M\to N\), the associated map of \(\delta\)-rings \(Sym_{\mathbb{Z}_{p}}^{\delta}(f)\) does not necessarily preserve \(\delta\)-degree (to see this, take any non-trivial element of the kernel of \(f\)). However, it does preserve the filtration by \(\delta\)-degree, since it can only decrease the \(\delta\)-degree. In particular, we may view the filtration as a functor \[Sym_{\mathbb{Z}_{p}}^{\delta,\leq\star}\colon\mathrm{Mod}_{\mathbb{Z}_{p}}^{ free}\to Fun(\mathbb{Z}_{\geq 0}^{op},\mathrm{Mod}_{\mathbb{Z}_{p}})=F_{\geq 0} \mathrm{Mod}_{\mathbb{Z}_{p}}\] from free \(\mathbb{Z}_{p}\)-modules into filtered complexes (with increasing filtrations). **Lemma 2.3.5**.: The functor \[Sym^{\delta,\leq\star}_{\mathbb{Z}_{p}}\colon\operatorname{Mod}^{free}_{ \mathbb{Z}_{p}}\to Fun(\mathbb{Z}_{\geq 0}^{op},\operatorname{Mod}_{\mathbb{Z}_{p}})\] satisfies \[Sym^{\delta,\leq n}_{\mathbb{Z}_{p}}(M\oplus N)\simeq\bigoplus_{i+j=n}Sym^{ \delta,\leq i}_{\mathbb{Z}_{p}}(M)\otimes_{\mathbb{Z}_{p}}Sym^{\delta,\leq j}_ {\mathbb{Z}_{p}}(N).\] Proof.: This is a simple unwinding of definitions. **Corollary 2.3.6**.: Let \(i\colon\operatorname{Mod}^{free}_{\mathbb{Z}_{p}}\to\operatorname{Mod}_{ \mathbb{Z}_{p}}\) be the inclusion. Then \(i\circ Sym^{\delta,\leq n}\) is additively polynomial of degree \(n\) for all \(n\) (where we view \(Sym^{\delta,\leq n}\) as an endomorphism of \(\operatorname{Mod}^{free}_{\mathbb{Z}_{p}}\)). Proof.: Appealing to the preceding lemma, we can express the derivative at \(X\) via \[D_{X}(i\circ Sym^{\delta,\leq n}_{\mathbb{Z}_{p}})(M) \mathrel{\mathop{:}}=fib(Sym^{\delta,\leq n}_{\mathbb{Z}_{p}}( X\oplus M)\to Sym^{\delta,\leq n}_{\mathbb{Z}_{p}}(M))\] \[\simeq fib\left(\bigoplus_{i+j=n}(Sym^{\delta,\leq i}_{\mathbb{Z} _{p}}(X)\otimes_{\mathbb{Z}_{p}}Sym^{\delta,\leq j}_{\mathbb{Z}_{p}}(M))\to Sym ^{\delta,\leq n}_{\mathbb{Z}_{p}}(M)\right)\] \[\simeq\bigoplus_{i+j=n,j\neq n}Sym^{\delta,\leq i}_{\mathbb{Z}_{p }}(X)\otimes_{\mathbb{Z}_{p}}Sym^{\delta,\leq j}_{\mathbb{Z}_{p}}(M)\] from which the claim now follows by induction on \(n\) along with the fact that tensoring is exact. **Proposition 2.3.7**.: Fix an element \(f\in Sym^{\delta}_{\mathbb{Z}_{p}}(\mathbb{Z}_{p}\left\langle S\right\rangle)\) of \(\delta\)-degree \(n\). 
Then \[deg_{\delta}(\delta(f))=pn.\] Proof.: We first reduce to the case that \(f\) is a monomial, which follows immediately from the \(\delta\)-ring relation \[\delta(f_{1}+f_{2})=\delta(f_{1})+\delta(f_{2})+\frac{f_{1}^{p}+f_{2}^{p}-(f_{1}+f_{2})^{p}}{p}.\] We may thus take \(f\) to be a monomial, in which case we will proceed by induction on \(n\). If \(f=x_{s,j}\) for some \(j\), the claim is immediate from the definition of \(\delta\)-degree. Otherwise we may write \[f=x_{s,j}\tilde{f}\] for some \(s\in S\) and some \(j\), where \(deg_{\delta}(\tilde{f})\leq n-1\). Applying the \(\delta\) operation then yields \[\delta(f)=x_{s,j}^{p}\delta(\tilde{f})+\tilde{f}^{p}\delta(x_{s,j})+p\delta(x_{s,j})\delta(\tilde{f}).\] Analyzing each of the summands individually and applying the induction hypothesis reveals that each term is precisely of degree \(pn\), as desired. **Corollary 2.3.8**.: The functor \[\mathbb{Z}_{\geq 0}\xrightarrow{Sym^{\delta,\leq\star}_{\mathbb{Z}_{p}}}End(\mathrm{Mod}^{free}_{\mathbb{Z}_{p}})\] is lax-monoidal. Furthermore, for any \(n\) and \(m\), the diagram of natural transformations \[\begin{CD}Sym^{\delta,\leq n}_{\mathbb{Z}_{p}}\circ Sym^{\delta,\leq m}_{\mathbb{Z}_{p}}@>>>Sym^{\delta,\leq nm}_{\mathbb{Z}_{p}}\\@VV{i}V@VV{i}V\\Sym^{\delta}_{\mathbb{Z}_{p}}\circ Sym^{\delta}_{\mathbb{Z}_{p}}@>{\mu}>>Sym^{\delta}_{\mathbb{Z}_{p}}\end{CD}\] commutes, where \(\mu\) is the monad structure. In particular, \(Sym^{\delta,\leq\star}_{\mathbb{Z}_{p}}\) enjoys the structure of a filtered monad which is furthermore compatible with the monad structure on \(Sym^{\delta}_{\mathbb{Z}_{p}}\). Proof.: Since the structure maps of the filtration \(i\colon Sym^{\delta,\leq n}_{\mathbb{Z}_{p}}\to Sym^{\delta}_{\mathbb{Z}_{p}}\) are pointwise monomorphisms, it suffices to prove that the composite \[Sym^{\delta,\leq n}_{\mathbb{Z}_{p}}\circ Sym^{\delta,\leq m}_{\mathbb{Z}_{p}}\xrightarrow{i}Sym^{\delta}_{\mathbb{Z}_{p}}\circ Sym^{\delta}_{\mathbb{Z}_{p}}\xrightarrow{\mu}Sym^{\delta}_{\mathbb{Z}_{p}}\] factors through \(Sym^{\delta,\leq nm}\). Indeed, if such a factorization exists, it is necessarily unique, and the lax monoidality follows formally. We can check the factorization pointwise, so fix \(M=\mathbb{Z}_{p}\left\langle S\right\rangle\). Define \[\tilde{S}:=\left\{\text{monic monomials of $\delta$-degree }\leq m\text{ in }Sym^{\delta}_{\mathbb{Z}_{p}}(M)\right\}\] so in particular, we obtain \[Sym^{\delta,\leq n}\circ Sym^{\delta,\leq m}(M)\simeq Sym^{\delta,\leq n}(\mathbb{Z}_{p}\left\langle\tilde{S}\right\rangle).\] Fix a monomial \(f=\lambda f^{j_{1}}_{s_{1},i_{1}}...f^{j_{n}}_{s_{n},i_{n}}\) in \(Sym^{\delta,\leq n}(\mathbb{Z}_{p}\left\langle\tilde{S}\right\rangle)\) (see Notation 2.3.1). Our task is to show that the degree of \(\mu\circ i(f)\) is bounded above by \(nm\). Unpacking definitions, we observe that \[\mu\circ i(f)=\lambda(\delta^{i_{1}}(f_{s_{1}}))^{j_{1}}...(\delta^{i_{n}}(f_{s_{n}}))^{j_{n}}\] whose degree is (courtesy of Proposition 2.3.7) given by \[deg_{\delta}(\mu\circ i(f))=\sum_{k=1}^{n}j_{k}p^{i_{k}}deg_{\delta}(f_{s_{k}}).\] By hypothesis, we know that \(\sum_{k=1}^{n}j_{k}p^{i_{k}}\leq n\), and \(deg_{\delta}(f_{s_{k}})\leq m\) for all \(k\), from which we conclude. **Theorem 2.3.9**.: Denote by \(LSym^{\delta,\leq n}_{\mathbb{Z}_{p}}\) the right-left extension of \(Sym^{\delta,\leq n}_{\mathbb{Z}_{p}}\) to \(\mathrm{Mod}_{\mathbb{Z}_{p}}\). Consider the colimit \[LSym^{\delta}_{\mathbb{Z}_{p}}:=colim_{n}LSym^{\delta,\leq n}\in End_{\Sigma}(\mathrm{Mod}_{\mathbb{Z}_{p}}).\] Then \(LSym^{\delta}_{\mathbb{Z}_{p}}\) inherits the structure of a monad. We refer to this as the _derived \(\delta\)-algebra monad_.
Proof.: We have already equipped \(LSym^{\delta,\leq\star}_{\mathbb{Z}_{p}}\) with the structure of a filtered monad. The theorem then follows by appealing to Proposition 4.1.4 of [15]. **Definition 2.3.10**.: We denote by \(DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})\) the \(\infty\)-category of \(LSym^{\delta}_{\mathbb{Z}_{p}}\)-algebras, and refer to this as the \(\infty\)-category of _derived \(\delta\)-rings_ over \(\mathbb{Z}_{p}\). **Observation 2.3.11**.: Denote by \[DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})_{\geq 0}:=DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})\times_{\mathrm{Mod}_{\mathbb{Z}_{p}}}\mathrm{Mod}_{\mathbb{Z}_{p},\geq 0}.\] It follows formally that the forgetful functor \(DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})_{\geq 0}\to\mathrm{Mod}_{\mathbb{Z}_{p},\geq 0}\) preserves sifted colimits and admits a left adjoint. In particular, the adjunction is monadic and by Proposition A.20 in [3], we obtain a canonical identification \[DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})_{\geq 0}\simeq\mathfrak{a}CAlg^{\delta}(Mod_{\mathbb{Z}_{p}})\] where the right hand side is the \(\infty\)-category of animated \(\delta\)-rings as introduced in Appendix A of [3]. **Proposition 2.3.12**.: The forgetful functor \[DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})\xrightarrow{F}CAlg(\mathrm{Mod}_{\mathbb{Z}_{p}})\] preserves small limits and colimits. Proof.: We follow the proof of Proposition 4.2.27 in [15] essentially verbatim. We first observe that \(F\) preserves limits and sifted colimits. Indeed, the composite \[DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})\xrightarrow{F}CAlg(\mathrm{Mod}_{\mathbb{Z}_{p}})\to\mathrm{Mod}_{\mathbb{Z}_{p}}\] preserves limits and sifted colimits by construction. On the other hand, the forgetful functor \[CAlg(\mathrm{Mod}_{\mathbb{Z}_{p}})\to\mathrm{Mod}_{\mathbb{Z}_{p}}\] is conservative, and thus \(F\) must also preserve limits and sifted colimits. It remains to show that \(F\) preserves binary coproducts. Since \(DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})\) is the category of algebras over the \(LSym^{\delta}_{\mathbb{Z}_{p}}\) monad, for any object \(A\in DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})\) we have a canonical Bar resolution \[A\simeq colim_{\Delta^{op}}(LSym^{\delta}_{\mathbb{Z}_{p}})^{\circ(\star+1)}(A)\] which reduces us to verifying that \(F\) preserves coproducts of free algebras. So fix free algebras \(A=LSym^{\delta}_{\mathbb{Z}_{p}}(M)\) and \(B=LSym^{\delta}_{\mathbb{Z}_{p}}(N)\). Our task is to verify that the natural map \[F(A)\otimes_{\mathbb{Z}_{p}}F(B)\to F(A\sqcup B)\] is an equivalence. Since \(A\) and \(B\) are free, we see that \[A\sqcup B\simeq LSym^{\delta}_{\mathbb{Z}_{p}}(M\oplus N).\] Under this identification, we can refine the comparison map of commutative algebra objects \[LSym^{\delta}_{\mathbb{Z}_{p}}(M)\otimes LSym^{\delta}_{\mathbb{Z}_{p}}(N)\to LSym^{\delta}_{\mathbb{Z}_{p}}(M\oplus N)\] to a map of filtered algebras \[LSym^{\delta,\leq\star}_{\mathbb{Z}_{p}}(M)\otimes_{\mathbb{Z}_{p}}LSym^{\delta,\leq\star}_{\mathbb{Z}_{p}}(N)\to LSym^{\delta,\leq\star}_{\mathbb{Z}_{p}}(M\oplus N)\] where the tensor product is now given by Day convolution. Since the filtered pieces are excisively polynomial, we can reduce to the case where \(M\) and \(N\) are connective, and thus to the case that \(M\) and \(N\) are free \(\mathbb{Z}_{p}\)-modules, where the claim is immediate.
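To make the final reduction concrete, here is a worked instance of the coproduct formula on free objects, written in the coordinates of Notation 2.3.1; the shorthand \(x_{i}\) and \(y_{i}\) for the generators attached to the rank one free modules \(M=\mathbb{Z}_{p}\left\langle x\right\rangle\) and \(N=\mathbb{Z}_{p}\left\langle y\right\rangle\) is ours: \[Sym^{\delta}_{\mathbb{Z}_{p}}(\mathbb{Z}_{p}\left\langle x\right\rangle)\otimes_{\mathbb{Z}_{p}}Sym^{\delta}_{\mathbb{Z}_{p}}(\mathbb{Z}_{p}\left\langle y\right\rangle)\simeq\mathbb{Z}_{p}[x_{0},x_{1},...]\otimes_{\mathbb{Z}_{p}}\mathbb{Z}_{p}[y_{0},y_{1},...]\simeq\mathbb{Z}_{p}[x_{0},x_{1},...,y_{0},y_{1},...]\simeq Sym^{\delta}_{\mathbb{Z}_{p}}(\mathbb{Z}_{p}\left\langle x\right\rangle\oplus\mathbb{Z}_{p}\left\langle y\right\rangle),\] with \(\delta(x_{i})=x_{i+1}\) and \(\delta(y_{i})=y_{i+1}\) on both sides.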
**Definition 2.3.13**.: The right adjoint to the forgetful functor \(DAlg^{\delta}(Mod_{\mathbb{Z}_{p}})\to DAlg(Mod_{\mathbb{Z}_{p}})\) will be denoted by \(W\), and referred to as the _derived Witt vectors_ functor. The left adjoint to the forgetful functor will be denoted by \(Free^{\delta}_{\mathbb{Z}_{p}}\) and referred to as the _free \(\delta\)-ring_ functor. **Observation 2.3.14**.: The conclusions of Proposition 2.3.12 apply equally well to connective objects, yielding left and right adjoints to the forgetful functor \[DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})_{\geq 0}\to\mathfrak{a}CAlg(Mod_{\mathbb{Z}_{p}}).\] The identification \[DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})_{\geq 0}\simeq\mathfrak{a}CAlg^{\delta}(Mod_{\mathbb{Z}_{p}})\] of Observation 2.3.11 then identifies the right adjoint to the above with the animated Witt vectors of [3]. ### Constructions and Examples In the connective setting (see Appendix A in [3]), there is an equivalence between animated \(\delta\)-rings and animated rings equipped with a lift of Frobenius. This is proven by appealing to the left Kan extension of the length \(2\) Witt vector functor and _defining_ animated \(\delta\)-rings to be animated rings equipped with a section of the canonical projection \(\pi\colon W_{2}(R)\to R\). The desired description then follows by appealing to the usual pullback square expressing the Frobenius \(W_{2}(R)\to R\) as the pullback of \(Frob\colon R\to R\otimes^{\mathbb{L}}\mathbb{F}_{p}\) along the reduction mod \(p\) map. An analogous description holds in the derived setting: derived \(\delta\)-rings are nothing other than derived commutative rings equipped with a lift of Frobenius (see Theorem 2.4.4 below). To prove this however, we will not follow the above outline. Rather, we will use our monadic definition of \(\delta\)-rings. For any category \(\mathcal{C}\), one can concoct a new \(\infty\)-category consisting of objects of \(\mathcal{C}\) equipped with an endomorphism by contemplating functors from the classifying space of the monoid \(\mathbb{N}\) into \(\mathcal{C}\): \(Fun(B\mathbb{N},\mathcal{C})\). Any functorial assignment of an endomorphism to an object of \(\mathcal{C}\) can then be expressed as a section of the evaluation map \(ev_{\star}\colon Fun(B\mathbb{N},\mathcal{C})\to\mathcal{C}\). **Construction 2.4.1**.: Consider the functor \[Frob\colon Poly_{\mathbb{F}_{p}}^{fg}\to Fun(B\mathbb{N},Poly_{\mathbb{F}_{p}})\xrightarrow{i\circ-}Fun(B\mathbb{N},DAlg(\mathrm{Mod}_{\mathbb{F}_{p}}))\] which assigns a polynomial \(\mathbb{F}_{p}\)-algebra its Frobenius endomorphism. We claim that \(Frob\) is right-extendable in the sense of Definition 2.2.16, which is to say that the right Kan extension of \(Frob\circ Sym_{\mathbb{F}_{p}}^{\heartsuit}\) to perfect coconnective \(\mathbb{F}_{p}\)-modules preserves finite coconnective geometric realizations. The composite \[Fun(B\mathbb{N},DAlg(\mathrm{Mod}_{\mathbb{F}_{p}}))\xrightarrow{ev_{\star}}DAlg(\mathrm{Mod}_{\mathbb{F}_{p}})\to\mathrm{Mod}_{\mathbb{F}_{p}}\] reflects sifted colimits and limits, and thus it suffices to prove that the composite \[\mathrm{Mod}_{\mathbb{F}_{p}}^{fpp}\xrightarrow{Sym_{\mathbb{F}_{p}}^{\heartsuit}}Poly_{\mathbb{F}_{p}}^{fg}\xrightarrow{Frob}Fun(B\mathbb{N},DAlg(\mathrm{Mod}_{\mathbb{F}_{p}}))\to\mathrm{Mod}_{\mathbb{F}_{p}}\] is right extendable, but this is precisely \(Sym_{\mathbb{F}_{p}}^{\heartsuit}\), which we know to be right-extendable.
Construction 2.2.17 thus yields an extension of \(Frob\) to a functor of the form \[DAlg(\mathrm{Mod}_{\mathbb{F}_{p}})\to Fun(B\mathbb{N},DAlg(\mathrm{Mod}_{\mathbb{F}_{p}}))\] which we will continue to denote by \(Frob\). **Remark 2.4.2**.: The functor \(Frob\) as defined above preserves small limits and colimits. Indeed, since \(B\mathbb{N}\) has only a single object, evaluation at this object reflects both limits and colimits \[ev_{\star}\colon Fun(B\mathbb{N},DAlg(\mathrm{Mod}_{\mathbb{F}_{p}}))\to DAlg(\mathrm{Mod}_{\mathbb{F}_{p}}).\] By construction, \(Frob\) is a section of \(ev_{\star}\), and so the claim follows at once. **Definition 2.4.3**.: Let \(DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}})^{Frob}\) denote the pullback \[\begin{CD}DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}})^{Frob}@>>>Fun(B\mathbb{N},DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}}))\\@V{}V{}V@VV{\pi_{\star}}V\\DAlg(\mathrm{Mod}_{\mathbb{F}_{p}})@>{Frob}>>Fun(B\mathbb{N},DAlg(\mathrm{Mod}_{\mathbb{F}_{p}}))\end{CD}\] where \(\pi\colon DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}})\to DAlg(\mathrm{Mod}_{\mathbb{F}_{p}})\) is reduction modulo \(p\) and \(\pi_{\star}\) denotes postcomposition with \(\pi\). **Theorem 2.4.4**.: The \(\infty\)-category \(DAlg^{\delta}\) is equivalent to the category of derived algebras equipped with a Frobenius lift. Proof.: Since both \(Frob\) and \(\pi_{\star}\) preserve small limits and colimits, so too does the top horizontal arrow \[i\colon DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}})^{Frob}\to Fun(B\mathbb{N},DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}})).\] Postcomposing with \(ev_{\star}\colon Fun(B\mathbb{N},DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}}))\to DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}})\) which also preserves such limits and colimits, we deduce that \(DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}})^{Frob}\to\mathrm{Mod}_{\mathbb{Z}_{(p)}}\) admits a left adjoint and the resulting adjunction is monadic by Barr-Beck-Lurie. Since \(DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}})^{Frob}\to DAlg(\mathrm{Mod}_{\mathbb{Z}_{(p)}})\) preserves small limits and colimits, the monad is right-left extended from its restriction to \(\mathrm{Mod}_{\mathbb{Z}_{(p)}}^{free}\), and thus it suffices to identify the monad restricted to \(\mathrm{Mod}_{\mathbb{Z}_{(p)}}^{free}\) with \(Sym^{\delta}\), but this is classical. It will be useful to contemplate not only \(\delta\)-algebras, but filtrations and completions on such objects. **Variant 2.4.5**.: We define the \(\infty\)-category of _filtered \(\delta\)-rings_ as the fiber product \[DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}}):=DAlg(F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}})\times_{DAlg(\mathrm{Mod}_{\mathbb{Z}_{p}})}DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}}).\] Similarly, the \(\infty\)-category of \(\delta\)_-pairs_ is denoted by \(DAlg^{\delta}(F^{\{0,1\}}\mathrm{Mod}_{\mathbb{Z}_{p}})\), and defined analogously. **Lemma 2.4.6**.: The forgetful functor \(F\colon DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}})\to F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}}\) preserves small limits and sifted colimits. In particular, it admits a left adjoint, denoted by \(LSym_{\mathbb{Z}_{p}}^{\delta,[0]}\), and the resulting adjunction is monadic. Proof.: Since the forgetful functor \(DAlg(F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}})\to F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}}\) is conservative and preserves limits and sifted colimits, it suffices to prove that the forgetful functor \[DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}})\to DAlg(F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}})\] preserves limits and sifted colimits.
This follows from the definition of \(DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}})\) as a fiber product since all the \(\infty\)-categories in question are presentable, and the functors in the defining diagram preserve both limits and colimits (see Proposition 2.3.12, as well as Propositions 5.5.3.13 and 5.5.3.18 in [12]). **Notation 2.4.7**.: The proof of Lemma 2.4.6 establishes the existence of a left adjoint to the forgetful functor \[DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}})\to DAlg(F^{\geq 0}\mathrm{Mod}_{\mathbb{Z}_{p}})\] which we will denote by \(Free^{\delta,[0]}_{\mathbb{Z}_{p}}\). **Definition 2.4.8**.: Let \(A\) be a (possibly derived) \(\delta\)-ring. We define the \(\infty\)-category of derived \(\delta\)-\(A\)-algebras to be the slice category \[DAlg^{\delta}(\mathrm{Mod}_{A}):=DAlg^{\delta}(\mathrm{Mod}_{\mathbb{Z}_{p}})_{A\downarrow}.\] Let \((A,I)\in DAlg^{\delta}(F^{\{0,1\}}\mathrm{Mod}_{\mathbb{Z}_{p}})\). We now wish to define \(I\)-complete \(\delta\)-rings. In order to phrase this at the level of generality that we need, we introduce some notation. **Notation 2.4.9**.: Let \(p_{!}\colon DAlg(F^{\{0,1\}}\mathrm{Mod}_{A})\to DAlg(F^{\geq 0}\mathrm{Mod}_{A})\) denote the left adjoint to restriction. We will denote by \(I^{\star}A\) the filtered algebra given by \(p_{!}(A,I)\) and refer to this as the \(I\)-adic filtration on \(A\). **Definition 2.4.10**.: We say a map of \(A\)-modules \(M\to N\) is an \(A/I\)-equivalence if \(M\otimes_{A}A/I\to N\otimes_{A}A/I\) is an equivalence. Let \(S\) denote the class of \(A/I\)-equivalences. We define the subcategory of \(I\)_-adically complete_ \(A\)-modules to be the full subcategory of \(\mathrm{Mod}_{A}\) spanned by the \(S\)-local objects. This subcategory will be denoted by \(\widehat{\mathrm{Mod}}_{A}^{I}\), or if \(I\) is clear from context, simply by \(\widehat{\mathrm{Mod}}_{A}\). **Observation 2.4.11**.: If \(M\) is an \(I\)-adically complete \(A\)-module, then the \(I\)-adic filtration \[ins^{0}(M)\otimes_{A}I^{\star}A\] is in fact filtration complete. Indeed, the functor \[I^{\star}\colon\mathrm{Mod}_{A}\to F^{\geq 0}\mathrm{Mod}_{A}\] sends \(A/I\)-equivalences to \(gr^{\star}\)-equivalences, and the complete filtered modules are precisely the \(gr^{\star}\)-local filtered objects. **Warning 2.4.12**.: If \((A,I)\) is a discrete commutative ring equipped with an ideal, what we refer to as the \(I\)-adic filtration on a discrete \(A\)-module \(M\) does not agree with the classical notion unless \(I\) is locally generated by a regular sequence. **Definition 2.4.13**.: Let \(A\) be a derived \(\delta\)-ring, and \(I\to A\) a generalized Cartier divisor (a tensor invertible \(A\)-module equipped with a map of \(A\)-modules to \(A\)). The \(\infty\)-category of \((p,I)\)-complete derived \(\delta\)-\(A\)-algebras is defined as the fiber product \[\widehat{DAlg^{\delta}}(\operatorname{Mod}_{A}):=DAlg^{\delta}(\operatorname{Mod}_{A})\times_{\operatorname{Mod}_{A}}\widehat{\operatorname{Mod}_{A}}.\] **Observation 2.4.14**.: The inclusions \(DAlg^{\delta}(\operatorname{Mod}_{A})\to DAlg^{\delta}(\operatorname{Mod}_{\mathbb{Z}_{p}})\) and \(\widehat{DAlg^{\delta}}(\operatorname{Mod}_{A})\to DAlg^{\delta}(\operatorname{Mod}_{A})\) preserve limits and sifted colimits.
It follows formally that the forgetful functors \(DAlg^{\delta}(\operatorname{Mod}_{A})\to\operatorname{Mod}_{A}\) and \(\widehat{DAlg^{\delta}}(\operatorname{Mod}_{A})\to\widehat{\operatorname{Mod}}_{A}\) preserve limits and sifted colimits, and thus the resulting adjunctions are monadic. We will denote the left adjoints by \(LSym_{A}^{\delta}\) and \(\widehat{LSym}_{A}^{\delta}\) respectively. **Example 2.4.15**.: Recall from Proposition 2.3.12 that the forgetful functor \(DAlg^{\delta}(\operatorname{Mod}_{\mathbb{Z}_{p}})\to CAlg(\operatorname{Mod}_{\mathbb{Z}_{p}})\) preserves limits. It follows that for any sheaf of (derived) \(\delta\)-rings \(\mathcal{F}\) on a site \(\mathcal{C}\), the derived global sections \(R\Gamma(\mathcal{C},\mathcal{F})\) can be canonically promoted to a derived \(\delta\)-ring. In particular, given a prism \((A,I)\), and an \(A/I\)-algebra \(R\), the derived prismatic cohomology \(\mathbb{A}_{R/A}\) may be endowed with the structure of a \((p,I)\)-complete derived \(\delta\)-\(A\)-algebra. **Example 2.4.16**.: (perfect \(\delta\)-rings) A derived \(\delta\)-ring \(A\) is said to be _perfect_ if the Frobenius endomorphism \[\varphi_{A}:A\to A\] of Theorem 2.4.4 is an isomorphism. Perfect \(p\)-complete animated \(\delta\)-rings are always isomorphic to the Witt vectors of a perfect \(\mathbb{F}_{p}\)-algebra, and in particular are discrete since perfect \(\mathbb{F}_{p}\)-algebras are always discrete (Proposition 11.6 of [5]). In the derived setting, there exist non-discrete perfect \(\mathbb{F}_{p}\)-algebras (see Remark 11.7 of [5]). Despite the increase in generality in the derived setting, the analogous characterization of perfect \(\delta\)-rings still holds, as stated below. **Proposition 2.4.17**.: Let \(R\) be a \(p\)-complete perfect derived \(\delta\)-ring. Then the canonical map \(R\to W(R\otimes_{\mathbb{Z}_{p}}\mathbb{F}_{p})\) is an equivalence. Proof.: The very argument which establishes this fact in the animated setting carries over to our more general setting. We will content ourselves with a brief sketch. Recall that the derived Witt vectors are defined as the right adjoint to the forgetful functor \[DAlg^{\delta}(\operatorname{Mod}_{\mathbb{Z}_{p}})\to DAlg(\operatorname{Mod}_{\mathbb{Z}_{p}}).\] It follows formally that upon restricting to \(\mathbb{F}_{p}\)-algebras, the derived Witt vector functor takes values in \(p\)-complete \(\delta\)-rings. For any perfect derived \(\mathbb{F}_{p}\)-algebra \(S\), we declare \[W_{n}(S):=W(S)\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{n}\mathbb{Z},\] which may be identified with the derived truncated Witt vectors of Example 2.2.21 in this case. The classification of perfect \(p\)-complete derived \(\delta\)-rings then follows from the series of observations: 1. Denote by \(\overline{R}\) the reduction mod \(p\) of \(R\): \(\overline{R}:=R\otimes_{\mathbb{Z}_{p}}\mathbb{F}_{p}\). Each projection \(W_{n}(\overline{R})\to W_{n-1}(\overline{R})\) is a square zero extension by \(I_{n}:=(p^{n-1})/(p^{n})\otimes_{\mathbb{Z}_{p}}W_{n}(\overline{R})\). 2. The uniqueness and existence of deformations across these thickenings are controlled by \(Ext^{i}(L_{\overline{R}/\mathbb{F}_{p}},I_{n})\) for \(i=1,2\) respectively. The cotangent complex for a map of derived commutative rings is studied in Section 4.4 of [15]. 3. Since \(\overline{R}\) is perfect, \[L_{\overline{R}/\mathbb{F}_{p}}\simeq 0,\] and hence these Ext-groups vanish for all \(n\). 4.
Combining the previous two facts, we see that \[R\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{n}\mathbb{Z}\simeq W_{n}(\overline{R})\] from which \(p\)-completeness of both sides allows us to deduce that the canonical map \[R\to W(\overline{R})\] is an isomorphism. **Example 2.4.18**.: The following example was suggested to the author by Benjamin Antieau. Let \(E\) be an ordinary elliptic curve over \(\mathbb{F}_{p}\). Then \(R\Gamma(E,\mathcal{O}_{E})\) is a perfect \(\mathbb{F}_{p}\)-algebra. By applying Witt vectors pointwise to the sheaf \(\mathcal{O}_{E}\), one obtains a sheaf of \(p\)-complete \(\delta\)-rings \(W\mathcal{O}_{E}\) on the elliptic curve \(E\). The resulting \(\delta\)-structure on \(R\Gamma(E,W\mathcal{O}_{E})\) is easily seen to be perfect, and since the derived global sections are automatically \(p\)-complete, Proposition 2.4.17 implies that we have a canonical equivalence of derived \(\delta\)-rings: \[R\Gamma(E,W\mathcal{O}_{E})\simeq W(R\Gamma(E,W\mathcal{O}_{E})\otimes_{\mathbb{Z}_{p}}\mathbb{F}_{p})\simeq W(R\Gamma(E,W\mathcal{O}_{E}\otimes\mathbb{F}_{p}))\simeq W(R\Gamma(E,\mathcal{O}_{E})).\] ## 3 Prismatic Cohomology Fix a \(\delta\)-ring \(A\). The results of the previous section allow us to contemplate the free (filtered) \(\delta\)-\(A\)-algebra on an arbitrary (filtered) derived \(A\)-algebra. In particular, for any \(A\)-algebra \(R\), we can apply this procedure to the Hodge-filtered derived infinitesimal cohomology \(F^{\star}_{H}\mathbb{I}_{R/A}\). In the case where \((A,I)\) is a prism and \(R\) is an \(\overline{A}:=A/I\)-algebra, the Hodge filtered derived infinitesimal cohomology \(F^{\star}_{H}\mathbb{I}_{R/A}\) is naturally an algebra over the \(I\)-adic filtration \(I^{\star}A\) on \(A\), and thus so is the free \(\delta\)-algebra thereon. We may view \[\left(Free^{\delta}_{A}(\mathbb{I}_{R/A}),F^{1}_{H}Free^{\delta}_{A}(\mathbb{I}_{R/A})\right)\] as a '\(\delta\)-pair' over \((A,I)\). Our task then becomes to formulate an appropriate notion of prism object in derived commutative rings, and to take the prismatic envelope of this \(\delta\)-pair. Following the Čech-Alexander approach to prismatic cohomology, we would then expect this prismatic envelope to compute prismatic cohomology, which we will see to be true. The condition on prisms that the Cartier divisor be locally generated by a distinguished element is difficult to make sense of in the nonconnective setting. To work around this difficulty, we will instead appeal to the notion of rigidity of maps between prisms to define an appropriate analogue of prisms in the derived category. Recall that any map of prisms \((A,I)\to(B,J)\) satisfies the property that \[I\otimes_{A}B\simeq J\] (see Lemma 3.5 in [6]). Furthermore, this property almost characterizes prisms among those pairs \((B,J)\) where \(B\) is a \((p,I)\)-complete \(\delta\)-\(A\)-algebra, \(J\subset B\) is an ideal, and the map \(I\to B\) factors through \(J\). In fact, the only further condition to check in order to guarantee \((B,J)\) is a prism is that \(B\) is \(I\)-torsion free, a constraint which we will ignore in the setting of derived \(\delta\)-\(A\)-algebras. In particular, as soon as we specify that our prisms receive a map from \((A,I)\), the datum of the Cartier divisor is completely determined by \(I\) itself, and we will take the \(\infty\)-category of \((p,I)\)-complete \(\delta\)-\(A\)-algebras, \(\widehat{DAlg}^{\delta}(\mathrm{Mod}_{A})\), as our analogue of prisms.
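As a simple illustration of the rigidity property (a standard example from the classical theory of [6], recorded here only for orientation): take \((A,I)=(\mathbb{Z}_{p},(p))\), the crystalline prism. For any map of prisms \((\mathbb{Z}_{p},(p))\to(B,J)\), rigidity yields \[J\simeq(p)\otimes_{\mathbb{Z}_{p}}B\simeq pB,\] so the divisor \(J\) carries no information beyond the structure map, in line with the discussion above.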
With this definition in play, the analogue of prismatic envelopes is manifest: it is left adjoint to the \(I\)-adic filtration functor. ### Recollections on Infinitesimal Cohomology In this subsection, we review some basic structural properties of derived infinitesimal cohomology following the perspective of [1]. Beyond the definition, the most important feature for us will be the notion of the vertical filtration (Theorem 3.1.11) on the associated graded of the Hodge filtration. It will play an important technical role in Section 3.3 (see Construction 3.3.10 and Theorem 3.3.12). **Definition 3.1.1**.: Recall that the functor \[DAlg(F^{\geq 0}\mathrm{Mod}_{A})\xrightarrow{gr^{0}}DAlg(\mathrm{Mod}_{A})\] admits a left adjoint (see [1], as well as [15] for the analogous description of de Rham cohomology). Let \(F^{\star}\mathbb{I}_{-/A}\) denote the left adjoint. We refer to this functor as _Hodge-filtered derived infinitesimal cohomology_. It is shown in [1] that the associated graded of the Hodge filtration may be identified with \[gr^{\star}_{H}\mathbb{I}_{R/A}\simeq LSym^{\star}_{R}(L_{R/A}[-1]).\] **Example 3.1.2**.: Suppose \(J\subset A\) is an ideal generated by a regular sequence. Then the Hodge filtered infinitesimal cohomology of \(A/J\) with respect to \(A\) may be identified with the \(J\)-adic filtration on \(A\): \[F^{\star}_{H}\mathbb{I}_{(A/J)/A}\simeq J^{\star}A.\] Proof.: This is (the categorical dual of) Proposition 5.2.5.1 in [12]. **Lemma 3.1.5**.: Let \(A\to B\to C\) be a triple of derived commutative rings. Then there is a canonical equivalence of filtered derived \(B\)-algebras \[F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{F^{\star}_{H}\mathbb{I}_{B/A}}ins^{0}(B)\simeq F^{\star}_{H}\mathbb{I}_{C/B}.\] Proof.: We identify the universal property of the left hand side with that of the right hand side. Fix a filtered \(B\)-algebra \(F^{\star}R\). Then \[Hom_{DAlg(F^{\geq 0}\mathrm{Mod}_{B})}(F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{F^{\star}_{H}\mathbb{I}_{B/A}}ins^{0}(B),F^{\star}R)\simeq Hom_{DAlg(F^{\geq 0}\mathrm{Mod}_{F^{\star}_{H}\mathbb{I}_{B/A}})}(F^{\star}_{H}\mathbb{I}_{C/A},F^{\star}R)\simeq Hom_{DAlg_{B}}(C,gr^{0}R)\simeq Hom_{DAlg(F^{\geq 0}\mathrm{Mod}_{B})}(F^{\star}_{H}\mathbb{I}_{C/B},F^{\star}R)\] as desired. **Construction 3.1.6**.: Fix \(F^{\star}R\in DAlg(F^{\geq 0}\mathrm{Mod}_{k})\).
**Construction 3.1.6**.: Fix \(F^{\star}R\in DAlg(F^{\geq 0}\mathrm{Mod}_{k})\). For each \(i\in\mathbb{Z}_{\geq 0}\) we associate a filtered \(F^{\star}R\)-module \(F^{\geq i}R\) by the formula \[F^{\geq i}R:=ins^{i}(F^{i}R)\otimes_{ins^{0}(F^{0}R)}F^{\star}R.\] Observe that this is concentrated in weights \(\geq i\) and in weight \(j\geq i\) agrees with \(F^{j}R\).

**Construction 3.1.7**.: Let \(A\to B\to C\) be a triple of rings. We define a filtration on \(F^{\star}_{H}\mathbb{I}_{C/A}\) by declaring \[F^{i}_{KO}(F^{\star}_{H}\mathbb{I}_{C/A}):=F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{F^{\star}_{H}\mathbb{I}_{B/A}}F^{\geq i}_{H}\mathbb{I}_{B/A}.\] Here we are taking the tensor product in \(F^{\geq 0}\mathrm{Mod}_{A}\). We refer to this as _the Katz-Oda filtration_ (following the terminology of [10]).

**Notation 3.1.8**.: The Katz-Oda filtration is evidently functorial in \(C\), and yields a functor \[F^{\star,\star}\mathbb{I}_{-/A}\colon DAlg(\mathrm{Mod}_{B})\to F^{\geq(0,0)}DAlg(\mathrm{Mod}_{A})\] to bi-filtered derived commutative \(A\)-algebras, where the first index is recording the Katz-Oda filtration and the second the internal filtration. It is helpful to unwind the notation explicitly: \[F^{i,\star}\mathbb{I}_{C/A}:=F^{i}_{KO}F^{\star}_{H}\mathbb{I}_{C/A}\] \[F^{i,j}\mathbb{I}_{C/A}:=F^{j}(F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{F^{\star}_{H}\mathbb{I}_{B/A}}F^{\geq i}_{H}\mathbb{I}_{B/A})\]

**Lemma 3.1.9** (compare with Lemma 3.13 in [10]).: Let \(A\to B\to C\) be a triple of rings. Then \(F^{\star,\star}\mathbb{I}_{C/A}\) enjoys the following properties:

1. For any \(j\), we have \(F^{0,j}\mathbb{I}_{C/A}\simeq F^{j}_{H}\mathbb{I}_{C/A}\).
2. For each \(0\leq j\leq i\), we have an identification \[F^{i,j}\mathbb{I}_{C/A}\simeq F^{0}_{H}\mathbb{I}_{C/A}\otimes_{F^{0}_{H}\mathbb{I}_{B/A}}F^{i}_{H}\mathbb{I}_{B/A}\simeq F^{i,0}\mathbb{I}_{C/A}.\]
3. For each \(0\leq i\leq j\), we have a natural identification \[Cofib(F^{i+1,j}\mathbb{I}_{C/A}\to F^{i,j}\mathbb{I}_{C/A})\simeq F^{j-i}_{H}\mathbb{I}_{C/B}\otimes_{B}LSym^{i}(L_{B/A}[-1]).\]

Proof.: For (1): This follows immediately from our definition. For (2): We can compute the tensor product defining \(F^{i,j}\mathbb{I}_{C/A}\) via the Bar resolution: \[F^{i,j}\mathbb{I}_{C/A}:=F^{j}(F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{F^{\star}_{H}\mathbb{I}_{B/A}}F^{\geq i}_{H}\mathbb{I}_{B/A})\simeq colim_{\Delta^{op}}\left(...\rightrightarrows F^{j}(F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{A}F^{\star}_{H}\mathbb{I}_{B/A}\otimes_{A}F^{\geq i}_{H}\mathbb{I}_{B/A})\rightrightarrows F^{j}(F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{A}F^{\geq i}_{H}\mathbb{I}_{B/A})\right)\] where the tensor products adorned by \(A\) are given by Day convolution. Unpacking the Day convolution expresses each term in the simplicial object as a colimit: \[F^{j}(F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{A}(F^{\star}_{H}\mathbb{I}_{B/A})^{\otimes_{A}k}\otimes_{A}F^{\geq i}_{H}\mathbb{I}_{B/A})\simeq colim_{n+m\geq j}F^{n}_{H}\mathbb{I}_{C/A}\otimes_{A}F^{m}\left((F^{\star}_{H}\mathbb{I}_{B/A})^{\otimes_{A}k}\otimes_{A}F^{\geq i}_{H}\mathbb{I}_{B/A}\right)\simeq F^{0}_{H}\mathbb{I}_{C/A}\otimes_{A}(F^{0}_{H}\mathbb{I}_{B/A})^{\otimes_{A}k}\otimes_{A}F^{i}_{H}\mathbb{I}_{B/A}\] where the colimit collapses thanks to the hypothesis that \(j\leq i\) (and \(F^{\geq i}_{H}\mathbb{I}_{B/A}\) is concentrated in weights \(\geq i\)). Inputting this back into the Bar resolution yields the claim.
For (3): By definition we have \[gr^{i}_{KO}(F^{\star}_{H}\mathbb{I}_{C/A})\simeq F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{F^{\star}_{H}\mathbb{I}_{B/A}}LSym^{i}(L_{B/A}[-1]).\] We can now appeal to Lemma 3.1.5 to rewrite the right hand side as \[F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{F^{\star}_{H}\mathbb{I}_{B/A}}LSym^{i}_{B}(L_{B/A}[-1])\simeq(F^{\star}_{H}\mathbb{I}_{C/A}\otimes_{F^{\star}_{H}\mathbb{I}_{B/A}}B)\otimes_{B}LSym^{i}_{B}(L_{B/A}[-1])\simeq F^{\star}_{H}\mathbb{I}_{C/B}\otimes_{B}LSym^{i}(L_{B/A}[-1]),\] where \(LSym^{i}(L_{B/A}[-1])\) has weight \(i\). The claim now follows by weight considerations.

**Construction 3.1.10**.: Keeping with the notation of the preceding lemma, let us denote by \[X^{i,j}:=Cone(F^{i,j+1}\mathbb{I}_{C/A}\to F^{i,j}\mathbb{I}_{C/A}).\] Observe that we have natural maps \[X^{i,j}\xrightarrow{\alpha_{i,j}}X^{i-1,j}\] arising from the structure maps of the filtration. In addition, the maps \(F^{i,j}\mathbb{I}_{C/A}\to F^{0,j}\mathbb{I}_{C/A}\simeq F^{j}_{H}\mathbb{I}_{C/A}\) induce canonical maps \[X^{i,j}\to gr^{j}_{H}\mathbb{I}_{C/A}.\] We define the _vertical filtration_ on \(gr^{\star}_{H}\mathbb{I}_{C/A}\) by \[F^{vert}_{n}(gr^{\star}_{H}\mathbb{I}_{C/A}):=\bigoplus_{k}X^{k,k+n}.\] The structure maps are given by the direct sum of the \(\alpha_{k,k+n}\), which yields an increasing filtration.

**Theorem 3.1.11** (compare with Lemma 4.7 in [11]).: The vertical filtration endows \(gr^{\star}_{H}\mathbb{I}_{C/A}\) with the structure of a derived filtered algebra object in graded modules over \(gr^{\star}_{H}(\mathbb{I}_{B/A})\) - i.e. as an object in the category \(DAlg(F_{\geq 0}(Gr^{\geq 0}\mathrm{Mod}_{gr^{\star}_{H}\mathbb{I}_{B/A}}))\). Furthermore, the filtration is exhaustive and the associated graded is given by \[gr^{vert}_{i}(gr^{\star}_{H}\mathbb{I}_{C/A})\simeq LSym^{i}_{C}(L_{C/B}[-1])\otimes_{B}LSym^{\star}_{B}(L_{B/A}[-1]).\]

Proof.: The filtration is exhaustive by inspection and the identification of the associated graded follows immediately by appealing to Lemma 3.1.9 (3), to identify the cone of the \(\alpha_{i,j}\): \[Cone(\alpha_{i,j})\simeq LSym^{j-i+1}_{C}(L_{C/B}[-1])\otimes_{B}LSym^{i-1}_{B}(L_{B/A}[-1]).\] It remains to show that the filtration may be endowed with a derived algebraic structure. We will denote by \[gr^{(-,\star)}\colon F^{\geq(0,0)}\mathrm{Mod}_{A}\to F^{\geq 0}Gr^{\geq 0}\mathrm{Mod}_{A}\] the functor which takes a doubly filtered object and passes to the associated graded in the second weight. This is obviously a morphism of derived algebraic contexts, and the vertical filtration is obtained by a simple reindexing of this construction. Namely, consider the symmetric monoidal functor \[p\colon\mathbb{Z}^{op}_{\geq 0}\times\mathbb{Z}^{disc}_{\geq 0}\xrightarrow{(i,k)\to(k,k+i)}\mathbb{Z}_{\geq 0}\times\mathbb{Z}^{disc}_{\geq 0}.\] Pullback along \(p\) yields a morphism of derived algebraic contexts \(F^{\geq 0}Gr^{\geq 0}\mathrm{Mod}_{A}\to F_{\geq 0}Gr^{\geq 0}\mathrm{Mod}_{A}\), and \[F^{vert}_{\star}(gr^{\star}_{H}\mathbb{I}_{C/A})\simeq p^{\star}gr^{(-,\star)}(F^{\star}_{KO}F^{\star}_{H}\mathbb{I}_{C/A}),\] from which we conclude.
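Before introducing envelopes, here is a degenerate-case check of Theorem 3.1.11 (our own illustration): take \(C=B\), so that \(L_{C/B}=0\).

```latex
% Vertical filtration for the triple A -> B -> C with C = B.
% Since L_{B/B} = 0, LSym^i_B(L_{B/B}[-1]) vanishes for i > 0, and the
% associated graded of the vertical filtration collapses to weight 0:
\[
  gr^{vert}_{i}\big(gr^{\star}_{H}\mathbb{I}_{B/A}\big)
  \;\simeq\;
  \begin{cases}
    LSym^{\star}_{B}(L_{B/A}[-1]) & i=0,\\[2pt]
    0 & i>0,
  \end{cases}
\]
% i.e. the filtration simply returns gr^*_H I_{B/A}, as expected when
% the middle term of the triple carries all of the geometry.
```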
### \(I\)-adic Envelopes

Fix a derived algebraic context \(\mathcal{C}\) and an algebra \(F^{\star}A\in DAlg(F^{\{0,1\}}\mathcal{C})\).

**Notation 3.2.1**.: We view \(F^{\star}A\) as a ring equipped with a generalized ideal, and adopt the following conventions which mimic standard notation in prismatic cohomology.

* We will denote by \(I:=ev^{1}(F^{\star}A)\), \(A:=ev^{0}(F^{\star}A)\), and often write \((A,I)\) in place of \(F^{\star}A\).
* We will denote by \(I^{\star}A\in DAlg(F^{\geq 0}\mathcal{C})\) the \(I\)-adic filtration associated to \((A,I)\) as defined in Notation 2.4.9.
* We will denote by \(\overline{A}:=gr^{0}(F^{\star}A)\simeq cofib(I\to A)\).
* Given an \(\overline{A}\)-module \(M\), we will denote by \(M\{n\}\) the tensor product \[M\otimes_{\overline{A}}gr^{n}(I^{\star}A).\]

**Definition 3.2.2**.: We define the \(I\)_-adic filtration functor_ \[I^{\star}\colon DAlg(\mathrm{Mod}_{A})\to DAlg(F^{\geq 0}\mathrm{Mod}_{I^{\star}A})\] as the composite \[DAlg(\mathrm{Mod}_{A})\xrightarrow{ins^{0}}DAlg(F^{\geq 0}\mathrm{Mod}_{A})\xrightarrow{-\otimes_{A}I^{\star}A}DAlg(F^{\geq 0}\mathrm{Mod}_{I^{\star}A}).\]

**Definition 3.2.3**.: An algebra \((A,I)\in DAlg(F^{\{0,1\}}\mathcal{C})\) is said to be a _Cartier pair_ if \(I\) is a tensor invertible \(A\)-module.

**Lemma 3.2.4**.: Let \((A,I)\) be a Cartier pair. Then the functor \(I^{\star}\) of 3.2.2 preserves small limits and colimits. In particular, it admits a left adjoint.

Proof.: Since the evaluation functors \[ev_{i}\colon DAlg(F^{\geq 0}\mathrm{Mod}_{A})\to\mathrm{Mod}_{A}\] are jointly conservative and preserve small limits and colimits, it suffices to check that \(ev_{i}\circ I^{\star}\) preserves small limits and colimits. But \[ev_{i}\circ I^{\star}\simeq-\otimes_{A}LSym^{i}_{A}(I)\simeq-\otimes_{A}I^{\otimes_{A}i}\] is given by tensoring with a tensor invertible \(A\)-module, from which the claim follows.

**Definition 3.2.5**.: The left adjoint to \(I^{\star}\) is denoted by \(\mathrm{Env}_{I}\). We will refer to this adjoint as the (uncompleted) _derived \(I\)-adic envelope_. We may also restrict attention to the \(\infty\)-category of \((p,I)\)-complete \(A\)-algebras, \(\widehat{DAlg}(\mathrm{Mod}_{A})\), in which case the \(I\)-adic filtration functor factors (by definition) through the \(\infty\)-category of complete filtered algebras, \(DAlg(\widehat{F^{\geq 0}}\mathrm{Mod}_{A})\). In this case, we denote the left adjoint by \(\widehat{\mathrm{Env}_{I}}\), and still refer to it as the derived \(I\)-adic envelope.

**Variant 3.2.6**.: Suppose \((A,I)\) is a Cartier pair such that the underlying algebra \(A\) is equipped with a \(\delta\)-structure. The same construction carries through for derived \(\delta\)-\(A\)-algebras, yielding a left adjoint \[\mathrm{Env}_{I}^{\mathbb{A}}\colon DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{A})\to DAlg^{\delta}(\mathrm{Mod}_{A})\] to the \(I\)-adic filtration functor. We will still refer to this as the derived \(I\)-adic envelope, but distinguish it notationally as depicted. It will also be useful to study the analogue of envelopes on graded algebras.

**Variant 3.2.7**.: The functor \[DAlg(\mathrm{Mod}_{\overline{A}})\xrightarrow{ins^{0}(-)\otimes_{\overline{A}}gr^{\star}(I^{\star}A)}DAlg(Gr^{\geq 0}\mathrm{Mod}_{gr^{\star}(I^{\star}A)})\] also admits a left adjoint. Indeed, the base change functor on graded algebras preserves all limits.
One can check such a claim pointwise on the level of underlying modules, where we see that \[(M^{\star}\otimes_{A/I}gr^{\star}(I^{\star}A))^{n}=\bigoplus_{i+j=n}M^{i}\otimes_{A/I}I^{j}/I^{j+1}.\] Since the direct sum is finite and \(I^{j}/I^{j+1}\) is an invertible \(A/I\)-module, the claim follows. We will denote the left adjoint by \(\mathrm{Env}_{I}^{Gr}\), and refer to this as the _graded \(I\)-adic envelope_. The relationship between \(\mathrm{Env}_{I}\) and \(\mathrm{Env}_{I}^{Gr}\) may be expressed as follows.

**Lemma 3.2.8**.: The composite \[DAlg(F^{\geq 0}\mathrm{Mod}_{I^{\star}A})\xrightarrow{\mathrm{Env}_{I}}DAlg(\mathrm{Mod}_{A})\xrightarrow{-\otimes_{A}\overline{A}}DAlg(\mathrm{Mod}_{\overline{A}})\] admits a canonical factorization: \[\begin{CD}DAlg(F^{\geq 0}\mathrm{Mod}_{I^{\star}A})@>{\mathrm{Env}_{I}}>>DAlg(\mathrm{Mod}_{A})\\@V{gr^{\star}}VV@VV{-\otimes_{A}\overline{A}}V\\DAlg(Gr^{\geq 0}\mathrm{Mod}_{gr^{\star}(I^{\star}A)})@>{\mathrm{Env}_{I}^{Gr}}>>DAlg(\mathrm{Mod}_{\overline{A}})\end{CD}\]

Proof.: Recall that the associated graded functor \[gr^{\star}\colon DAlg(F^{\geq 0}\mathrm{Mod}_{A})\to DAlg(Gr^{\geq 0}\mathrm{Mod}_{A})\] admits a right adjoint which sends a graded algebra \(R^{\star}\) to the filtered object \[...\xrightarrow{0}R^{n}\xrightarrow{0}R^{n-1}\xrightarrow{0}...\] We will denote this right adjoint by \(\beta\) for the remainder of this proof. Unwinding definitions, one verifies that the following diagram of functors commutes: \[\begin{CD}DAlg(\mathrm{Mod}_{A})@>{ins^{0}}>>DAlg(F^{\geq 0}\mathrm{Mod}_{A})@>{-\otimes_{A}I^{\star}A}>>DAlg(F^{\geq 0}\mathrm{Mod}_{I^{\star}A})\\@A{Id}AA@A{\beta}AA@A{\beta}AA\\DAlg(\mathrm{Mod}_{A})@>{ins^{0}}>>DAlg(Gr^{\geq 0}\mathrm{Mod}_{A})@>{-\otimes_{A}gr^{\star}(I^{\star}A)}>>DAlg(Gr^{\geq 0}\mathrm{Mod}_{gr^{\star}(I^{\star}A)})\end{CD}\] from which the claim follows by passing to left adjoints.

We now establish a basic naturality result for envelopes which will be used in the next section.

**Observation 3.2.9**.: Given a map \(f\colon\mathcal{C}\to\mathcal{D}\) of derived algebraic contexts, we can promote \(f\) to a colimit-preserving map \[\hat{f}\colon DAlg(F^{\{0,1\}}\mathcal{C})\to DAlg(F^{\{0,1\}}\mathcal{D})\] in the obvious way. Given a Cartier pair \((A,I)\) over \(\mathcal{C}\), \(\hat{f}(A,I)\) is also a Cartier pair. It is unclear whether or not envelopes commute with the functor \(f\) in the above generality, but we can say something about the case where \(f\) is the functor of taking a filtered object to its associated graded.

**Proposition 3.2.10**.: Let \((A,I)\) be a Cartier pair in \(\mathcal{C}^{\prime}:=F^{\geq 0}\mathcal{C}\), and let \((B,J):=(gr^{\star}(A),gr^{\star}(I))\) be the Cartier pair in \(\mathcal{D}:=Gr^{\geq 0}\mathcal{C}\) associated to the map of derived algebraic contexts \(gr^{\star}\colon\mathcal{C}^{\prime}\to\mathcal{D}\). Then the following diagram commutes \[\begin{CD}DAlg(F^{\geq 0}\mathrm{Mod}_{I^{\star}A})@>{\mathrm{Env}_{I}}>>DAlg(\mathrm{Mod}_{A})\\@V{F^{\geq 0}gr^{\star}}VV@VV{gr^{\star}}V\\DAlg(F^{\geq 0}\mathrm{Mod}_{J^{\star}B})@>{\mathrm{Env}_{J}}>>DAlg(\mathrm{Mod}_{B})\end{CD}\] The analogous diagram for graded envelopes also commutes.
Proof.: Since we understand the right adjoint to \(gr^{\star}\) (and thus to \(F^{\geq 0}gr^{\star}\)), we just check that the diagram of right adjoints commutes, which follows by exactly the same reasoning as employed in Lemma 3.2.8.

**Observation 3.2.11**.: Given any derived \(A\)-algebra, \(B\), the co-unit \[\operatorname{Env}_{I}(I^{\star}B)\to B\] is an equivalence, since \(I^{\star}\) is fully faithful.

**Observation 3.2.12**.: We could make an analogous definition of envelopes for \(\{0,1\}\)-filtered algebras instead of general filtered algebras, but the distinction between these two constructions is irrelevant for algebras whose filtration is induced from weight \(1\). Indeed, the two envelope functors fit into a canonical commutative diagram through the category of \(\{0,1\}\)-filtered \((A,I)\)-algebras. To see this, observe that the associated diagram of right adjoints also factors through \(\{0,1\}\)-filtered rings since the \(I\)-adic filtration on \(A\) is defined as a filtration induced from the generalized Cartier divisor \(I\). Many of the examples of interest will have filtration freely induced by weight \(1\) (the main example being the Hodge filtration on infinitesimal cohomology), and in such cases we will not distinguish between these two notions of envelope.

**Lemma 3.2.13**.: Fix a Cartier pair \((A,I)\) and let \(R\) be an \(\overline{A}\)-algebra. For any \(M\in\operatorname{Mod}_{R}\), there is a canonical equivalence \[\operatorname{Env}_{I}^{Gr}\left(LSym_{R}^{Gr^{\geq 0}}(M(n))\otimes_{\overline{A}}gr^{\star}(I^{\star}A)\right)\simeq LSym_{R}(M\{-n\})\] where \(\operatorname{Env}_{I}^{Gr}\) is the restriction of the graded envelope to \(R\)-algebras. That is, \(\operatorname{Env}_{I}^{Gr}\) is left adjoint to \[DAlg(\operatorname{Mod}_{R})\xrightarrow{ins^{0}(-)\otimes_{\overline{A}(0)}gr^{\star}(I^{\star}A)}DAlg(Gr^{\geq 0}\operatorname{Mod}_{R(0)\otimes_{\overline{A}(0)}gr^{\star}(I^{\star}A)}).\]

Proof.: We will only establish the claim in the case that \(R=\overline{A}\) and observe that the same proof carries through in general (with slightly more burdensome notation). We identify the universal properties of both sides. Fix an \(\overline{A}\)-algebra \(S\in DAlg_{\overline{A}}\). Recalling that \(ins^{0}\) is right adjoint to \(ev^{0}\) on graded objects, we obtain \[Hom_{DAlg(\mathrm{Mod}_{\overline{A}})}(\mathrm{Env}_{I}^{Gr}(LSym_{\overline{A}}^{Gr^{\geq 0}}(M(n))\otimes_{\overline{A}}gr^{\star}(I^{\star}A)),S)\] \[\simeq Hom_{Gr^{\geq 0}DAlg(\mathrm{Mod}_{gr^{\star}(I^{\star}A)})}(LSym_{\overline{A}}^{Gr^{\geq 0}}(M(n))\otimes_{\overline{A}}gr^{\star}(I^{\star}A),S(0)\otimes_{\overline{A}}gr^{\star}(I^{\star}A))\] \[\simeq Hom_{Gr^{\geq 0}\mathrm{Mod}_{\overline{A}}}(M(n),S(0)\otimes_{\overline{A}}gr^{\star}(I^{\star}A))\] where we have used the definition of \(\mathrm{Env}_{I}^{Gr}\) and \(LSym_{\overline{A}}^{Gr^{\geq 0}}\) as left adjoints to rewrite the mapping spaces in the first and second equivalences respectively. We now appeal to the fact that \(ev^{n}\) is also right adjoint to \(ins^{n}\) to identify the preceding mapping space with \(Hom_{\mathrm{Mod}_{\overline{A}}}(M,S\otimes_{\overline{A}}I^{n}/I^{n+1})\). Proceeding, we obtain \[Hom_{\mathrm{Mod}_{\overline{A}}}(M,S\otimes_{\overline{A}}I^{n}/I^{n+1})\simeq Hom_{\mathrm{Mod}_{\overline{A}}}(M\{-n\},S)\simeq Hom_{DAlg(\mathrm{Mod}_{\overline{A}})}(LSym_{\overline{A}}(M\{-n\}),S)\] as desired.
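To orient the reader, the following unwinding (ours; it assumes an orientation \(I=(d)\), which the statement does not require) makes the twists appearing in Lemma 3.2.13 concrete.

```latex
% Unwinding the twist of Notation 3.2.1 when I = (d) is oriented.
% Each graded piece gr^n(I^* A) = I^n/I^{n+1} is then free of rank one
% on the image of d^n, so
\[
  M\{n\} \;=\; M\otimes_{\overline{A}} I^{n}/I^{n+1}
         \;\simeq\; M\cdot d^{n} \;\cong\; M
  \qquad\text{(non-canonically)},
\]
% and in the oriented case the conclusion of Lemma 3.2.13 reads simply
\[
  \mathrm{Env}^{Gr}_{I}\big(LSym^{Gr^{\geq 0}}_{R}(M(n))
    \otimes_{\overline{A}} gr^{\star}(I^{\star}A)\big)
  \;\simeq\; LSym_{R}(M).
\]
% The twist only becomes visible when no orientation is chosen.
```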
We now turn our attention towards Variant 3.2.6, and relate the theory of envelopes in this case to the prismatic envelopes of [6].

**Observation 3.2.14**.: Given a map of prisms \((A,I)\to(B,J)\), we have a canonical equivalence of functors \[\mathrm{Env}_{I}^{\mathbb{A}}(-)\otimes_{A}B\simeq\mathrm{Env}_{J}^{\mathbb{A}}(-\otimes_{I^{\star}A}J^{\star}B).\] To see this, it suffices to show that the associated diagram of _right_ adjoints commutes, which is true by rigidity of maps of prisms. Indeed, for any \(\delta\)-\(B\)-algebra \(R\), we see that \[J^{\star}R:=ins^{0}(R)\otimes_{B}J^{\star}B\simeq ins^{0}(R)\otimes_{B}(ins^{0}(B)\otimes_{A}I^{\star}A)\simeq I^{\star}R.\]

**Construction 3.2.15**.: Fix a \(\delta\)-Cartier pair \((A,I)\). Given a complex \(N\in\mathrm{Mod}_{A}\), we define a map of \(\delta\)-\(A\)-algebras \[\gamma_{N}\colon LSym_{A}^{\delta}(N)\to LSym_{A}^{\delta}(N\otimes_{A}I^{-1})\] as the map induced by the map of \(A\)-modules \[N\simeq I\otimes_{A}N\otimes_{A}I^{-1}\xrightarrow{id\otimes i}I\otimes_{A}LSym_{A}^{\delta}(N\otimes_{A}I^{-1})\xrightarrow{\mu}LSym_{A}^{\delta}(N\otimes_{A}I^{-1}).\] Here, \(\mu\) is the multiplication arising from the \(A\)-algebra structure on \(LSym_{A}^{\delta}(N\otimes_{A}I^{-1})\).

**Example 3.2.16**.: Suppose \((A,I)\) is a prism and \(I=(d)\) is endowed with an orientation. The free \(\delta\)-\(A\)-algebra on a single generator is then given by \(LSym^{\delta}(A)\simeq A\{x\}\) and the map \(\gamma_{A}\) of the above construction may be identified with \[A\{x\}\xrightarrow{x\mapsto d\cdot y}A\{y\}.\]

**Lemma 3.2.17**.: Let \((A,I)\) be a \(\delta\)-Cartier pair. Fix an object \((N\to M)\in F^{\{0,1\}}\mathrm{Mod}_{A}\), and denote by \(LSym^{\delta}_{A}(M)\{\frac{N}{I}\}\) the pushout in the category of \(\delta\)-\(A\)-algebras \[\begin{CD}LSym^{\delta}_{A}(N)@>{\gamma_{N}}>>LSym^{\delta}_{A}(N\otimes_{A}I^{-1})\\@VVV@VVV\\LSym^{\delta}_{A}(M)@>>>LSym^{\delta}_{A}(M)\{\frac{N}{I}\}.\end{CD}\] Then there is a canonical equivalence of \(\delta\)-\(A\)-algebras \[\mathrm{Env}^{\mathbb{A}}_{I}(LSym^{\delta,\{0,1\}}_{A}(N\to M)\otimes_{A}I^{\star}A)\xrightarrow{\simeq}LSym^{\delta}_{A}(M)\{\tfrac{N}{I}\}.\]

Proof.: Fix a \(\delta\)-\(A\)-algebra \(R\), and denote by \((B,J)\) the filtered \(\delta\)-algebra \[LSym^{\delta,\{0,1\}}_{A}(N\to M)\otimes_{A}I^{\star}A.\] It suffices to construct a functorial identification of mapping spaces \[Hom_{DAlg^{\delta}(\mathrm{Mod}_{A})}(LSym^{\delta}_{A}(M)\{\tfrac{N}{I}\},R)\xrightarrow{\simeq}Hom_{DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{I^{\star}A})}((B,J),I^{\star}R).\] The definition of \(LSym^{\delta}_{A}(M)\{\frac{N}{I}\}\) as a pushout identifies \(Hom_{DAlg^{\delta}(\mathrm{Mod}_{A})}(LSym^{\delta}_{A}(M)\{\frac{N}{I}\},R)\) as \[Hom_{DAlg^{\delta}(\mathrm{Mod}_{A})}(LSym^{\delta}_{A}(M),R)\times_{Hom_{DAlg^{\delta}(\mathrm{Mod}_{A})}(LSym^{\delta}_{A}(N),R)}Hom_{DAlg^{\delta}(\mathrm{Mod}_{A})}(LSym^{\delta}(N\otimes_{A}I^{-1}),R)\] \[\simeq Hom_{\mathrm{Mod}_{A}}(M,R)\times_{Hom_{\mathrm{Mod}_{A}}(N,R)}Hom_{\mathrm{Mod}_{A}}(N\otimes I^{-1},R)\] \[\simeq Hom_{\mathrm{Mod}_{A}}(M,R)\times_{Hom_{\mathrm{Mod}_{A}}(N,R)}Hom_{\mathrm{Mod}_{A}}(N,I\otimes R)\] \[\simeq Hom_{F^{\{0,1\}}\mathrm{Mod}_{A}}((N\to M),I^{\star}R)\] \[\simeq Hom_{DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{A})}(LSym^{\delta,\{0,1\}}_{A}(N\to M),I^{\star}R)\] \[\simeq Hom_{DAlg^{\delta}(F^{\geq 0}\mathrm{Mod}_{I^{\star}A})}((B,J),I^{\star}R)\] as desired. Note that the third equivalence follows from the definition of mapping spaces in the filtered derived category.
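A degenerate instance of Lemma 3.2.17 (our check, not stated in the text): take \(N=0\).

```latex
% Degenerate case N = 0 in Lemma 3.2.17. Both horizontal maps of the
% defining pushout square become the unit LSym^delta_A(0) = A, so the
% pushout is LSym^delta_A(M) itself and the lemma reduces to
\[
  \mathrm{Env}^{\mathbb{A}}_{I}\big(LSym^{\delta,\{0,1\}}_{A}(0\to M)
    \otimes_{A} I^{\star}A\big)
  \;\simeq\; LSym^{\delta}_{A}(M)\{\tfrac{0}{I}\}
  \;\simeq\; LSym^{\delta}_{A}(M),
\]
% consistent with the expectation that adjoining no fractions along I
% leaves the free delta-algebra on M unchanged.
```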
**Corollary 3.2.18**.: Let \((A,I)\) be a prism. Fix an ideal \(J=(I,x_{1},...,x_{n})\) where the \(x_{i}\) form a \((p,I)\)-completely regular sequence in \(A\). Then the canonical map \[\widehat{\operatorname{Env}}{}^{\mathbb{A}}_{I}(J\to A)\xrightarrow{\simeq}A\{\tfrac{J}{I}\}_{(p,I)}^{\wedge}\] is an equivalence, where \(A\{\frac{J}{I}\}_{(p,I)}^{\wedge}\) is the prismatic envelope of [6].

Proof.: By Lemma 3.2.17 and Remark 2.7 in [11], it suffices to exhibit the filtered \(\delta\)-ring \((J\to A)\) as the pushout in \((p,I)\)-complete filtered \(\delta\)-\(A\)-algebras of the following diagram: Since each term in the pushout is free, it suffices to identify the pushout of the following diagram in filtered derived \(A\)-algebras: Each vertex of this diagram can be computed as the pushout of the corresponding row in the following diagram: Indeed, appealing to Lemma 3.1.5 and Example 3.1.2, the pushout of the first row is \(F_{H}^{\star}\mathbb{I}_{\overline{A}/A[\hat{x}_{1},...,\hat{x}_{n}]}\simeq((I,\hat{x}_{1},...,\hat{x}_{n})\to A[\hat{x}_{1},...,\hat{x}_{n}])\) whereas the pushout of the second is \(F_{H}^{\star}\mathbb{I}_{\overline{A}[\overline{x}_{1},...,\overline{x}_{n}]/A[\hat{x}_{1},...,\hat{x}_{n}]}\simeq I^{\star}A[\overline{x}_{1},...,\overline{x}_{n}]\). The colimit of this larger diagram can be computed by first pushing out along the rows and then taking the colimit of the resulting span, or first pushing out the columns and then taking the colimit of the resulting span (see Lemma 1.13 in [8]). The first procedure thus yields the colimit of the diagram we are interested in analyzing. The pushout of the columns results in applying \(F_{H}^{\star}\mathbb{I}_{-/A}\) to the pushout diagram \[\begin{CD}\overline{A}[\hat{x}_{1},...,\hat{x}_{n}]@>{\hat{x}_{i}\mapsto 0}>>\overline{A}\\@V{\hat{x}_{i}\mapsto x_{i}}VV\\\overline{A}\end{CD}\] and by regularity of the sequence \(x_{i}\), the pushout of this diagram is \(A/J\). In particular Example 3.1.2 implies that the colimit of the big diagram is precisely \((J\to A)\), as desired.

### Derived Prismatic Cohomology

We now formulate the universal property of derived prismatic cohomology. Throughout this section \((A,I)\) will always denote a prism unless explicitly indicated otherwise.

**Observation 3.3.1**.: Let \((A,I)\) be a \(\delta\)-Cartier pair. Since \(I\to A\) is a Cartier divisor, the composite \[\widehat{DAlg^{\delta}}(\mathrm{Mod}_{A})\xrightarrow{Forget}\widehat{DAlg}(\mathrm{Mod}_{A})\xrightarrow{-\otimes_{A}\overline{A}}\widehat{DAlg}(\mathrm{Mod}_{\overline{A}})\] admits a left adjoint.

**Definition 3.3.2**.: Let \((A,I)\) be any \(\delta\)-Cartier pair. We denote by \[L\mathbb{A}_{-/A}\colon\widehat{DAlg}(\mathrm{Mod}_{\overline{A}})\to\widehat{DAlg^{\delta}}(\mathrm{Mod}_{A})\] the left adjoint of 3.3.1. Given \(R\in\widehat{DAlg}(\mathrm{Mod}_{\overline{A}})\), we refer to the object \(L\mathbb{A}_{R/A}\) as the _derived prismatic cohomology_ of \(R\) relative to \(A\). This terminology is justified by Theorem 3.3.7 below.

**Variant 3.3.3**.: Similarly, the functor \[\widehat{DAlg}(\mathrm{Mod}_{A})\xrightarrow{-\otimes_{A}\overline{A}}\widehat{DAlg}(\mathrm{Mod}_{\overline{A}})\] admits a left adjoint, which we will denote by \(L\Theta_{-/A}\).
Observe that for any \(\overline{A}\)-algebra \(R\), there is a canonical equivalence \[L\mathbb{A}_{R/A}\simeq Free_{A}^{\delta}\circ L\Theta_{R/A}.\]

**Example 3.3.4**.: Both \(L\Theta_{R/A}\) and \(L\mathbb{A}_{R/A}\) can be made slightly more explicit in the case that \(R=LSym_{\overline{A}}(\overline{A})\simeq\overline{A}\left\langle x\right\rangle\) is a \(p\)-complete polynomial algebra over \(\overline{A}\). For simplicity, we will assume that \(I=(d)\) is principal. Indeed, in this case we have a commutative diagram of right adjoints \[\begin{CD}\widehat{DAlg}(\mathrm{Mod}_{A})@>{-\otimes_{A}\overline{A}}>>\widehat{DAlg}(\mathrm{Mod}_{\overline{A}})\\@V{Forget}VV@VV{Forget}V\\\widehat{\mathrm{Mod}}_{A}@>{-\otimes_{A}\overline{A}}>>\widehat{\mathrm{Mod}}_{\overline{A}}\end{CD}\] which implies that, denoting by \(L\) the left adjoint to the bottom horizontal arrow, we have \[L\Theta_{R/A}\simeq LSym_{A}(L(\overline{A})).\] It thus suffices to identify \(L(\overline{A})\), which is obtained by appealing to universal properties: \[Hom_{\widehat{\mathrm{Mod}}_{A}}(L(\overline{A}),M)\simeq Hom_{\widehat{\mathrm{Mod}}_{\overline{A}}}(\overline{A},M\otimes_{A}\overline{A})\simeq cofib(Hom_{\widehat{\mathrm{Mod}}_{A}}(A,M)\xrightarrow{d^{*}}Hom_{\widehat{\mathrm{Mod}}_{A}}(A,M))\simeq Hom_{\widehat{\mathrm{Mod}}_{A}}(fib(d),M)\] where \(d\) indicates the multiplication map \(d\colon A\to A\). In particular, \(L(\overline{A})\simeq\overline{A}[-1]\) and we see that \[L\Theta_{\overline{A}\langle x\rangle/A}\simeq LSym_{A}(\overline{A}[-1])\] and \[L\mathbb{A}_{\overline{A}\langle x\rangle/A}\simeq LSym_{A}^{\delta}(\overline{A}[-1]).\]

The starting point for our investigation into derived prismatic cohomology is the following expression of \(L\Theta_{-/A}\).

**Lemma 3.3.5**.: There is a natural equivalence of functors \[L\Theta_{-/A}\simeq\widehat{\operatorname{Env}_{I}}\circ F_{H}^{\star}\widehat{\mathbb{I}}_{-/A}\] where the symbol \(\widehat{\mathbb{I}}\) refers to Hodge-complete infinitesimal cohomology.

Proof.: The functor \(-\otimes_{A}\overline{A}\) factors as \[\widehat{DAlg}(\mathrm{Mod}_{A})\xrightarrow{I^{\star}}DAlg(\widehat{F^{\geq 0}}\mathrm{Mod}_{I^{\star}A})\xrightarrow{gr^{0}}DAlg(\mathrm{Mod}_{\overline{A}}).\] Recall from Lemma 3.1.3 that the left adjoint to \(gr^{0}\) is precisely \(F_{H}^{\star}\widehat{\mathbb{I}}_{-/A}\). The Lemma now follows from the definition of \(\widehat{\operatorname{Env}_{I}}\) as the left adjoint to \(I^{\star}\).

**Observation 3.3.6**.: Following Example 2.4.15, we see that for any smooth \(A/I\)-algebra \(R\), \(\mathbb{A}_{R/A}\) may be viewed as an object of \(\widehat{DAlg^{\delta}}(\mathrm{Mod}_{A})\). Furthermore, the Hodge-Tate complex \(\overline{\mathbb{A}}_{R/A}\simeq\mathbb{A}_{R/A}\otimes_{A}\overline{A}\) is naturally an \(R\)-algebra, by definition. By adjunction, we thus obtain a canonical map of derived \(\delta\)-rings \[comp_{R/A}\colon L\mathbb{A}_{R/A}\to\mathbb{A}_{R/A}.\] By left Kan extension, we obtain a comparison map for any animated \(\overline{A}\)-algebra \(R\) (where the right hand side is now understood to be the left Kan extension of the usual prismatic cohomology).

**Theorem 3.3.7**.: For any \(p\)-complete animated \(\overline{A}\)-algebra \(R\), the map \[comp_{R/A}\colon L\mathbb{A}_{R/A}\to\mathbb{A}_{R/A}\] is an equivalence.

The proof of Theorem 3.3.7 will take some preliminaries. We begin by establishing the theorem in a special case:

**Lemma 3.3.8**.: Suppose \(R=A/J\) where \(J=(I,x_{1},...,x_{n})\) is generated by a \((p,I)\)-completely regular sequence.
Then \(comp_{R/A}\) is an equivalence.

Proof.: Combine the description \(L\mathbb{A}_{R/A}\simeq\widehat{\operatorname{Env}}_{I}(Free^{\delta}_{A}(F^{\star}_{H}\mathbb{I}_{R/A}))\) with Example 3.1.2 and Observation 3.2.12 to identify \(L\mathbb{A}_{R/A}\) with \[L\mathbb{A}_{R/A}\simeq\widehat{\operatorname{Env}}{}^{\mathbb{A}}_{I}(J\to A).\] Corollary 3.2.18 then yields \(L\mathbb{A}_{R/A}\simeq A\{\frac{J}{I}\}_{(p,I)}^{\wedge}\). The result then follows from Example 7.9 in [6] which identifies the derived prismatic cohomology of \(R\) over \(A\) with the prismatic envelope.

Prismatic cohomology (of smooth algebras) can be computed via an appropriate Cech-Alexander complex, where each of the terms in the complex fits into the context of the preceding lemma. It therefore suffices to prove that \(L\mathbb{A}_{R/A}\) can be accessed via Cech-Alexander complexes in the same way. To establish this, we will use the factorization \(L\mathbb{A}_{R/A}=Free^{\delta}_{A}\circ L\Theta_{R/A}\) to break the problem up into two stages. We will begin by proving that \(L\Theta_{R/A}\) can be computed via a Cech-Alexander complex (see Proposition 3.3.12 below), and then we will appeal to the filtration on \(Free^{\delta}_{A}\) by polynomial subfunctors from Section 2 to deduce the analogous claim for \(L\mathbb{A}_{R/A}\).

**Lemma 3.3.9**.: Denote by \(L\overline{\Theta}_{-/A}:=L\Theta_{-/A}\otimes_{A}\overline{A}\). Then there is a canonical factorization \[L\overline{\Theta}_{-/A}\simeq\mathrm{Env}^{Gr}_{I}\circ gr^{\star}_{H}\mathbb{I}_{-/A}.\]

Proof.: Recall from Variant 3.2.7 that the base change functor \(ins^{0}(-)\otimes_{\overline{A}(0)}gr^{\star}(I^{\star}A)\) admits a left adjoint given by the graded \(I\)-adic envelope \(\mathrm{Env}^{Gr}_{I}\). We appeal to Lemma 3.2.8 to conclude: \[\mathrm{Env}^{Gr}_{I}(gr^{\star}_{H}\mathbb{I}_{-/A})\simeq\mathrm{Env}_{I}(F^{\star}_{H}\mathbb{I}_{-/A})\otimes_{A}\overline{A}\simeq L\overline{\Theta}_{-/A}.\]

**Construction 3.3.10**.: (The Conjugate Filtration on \(L\overline{\Theta}\)) Recall from Theorem 3.1.11 that we can endow \(gr^{\star}_{H}\mathbb{I}_{R/A}\) with a functorial exhaustive increasing filtration, denoted by \(F^{v}_{\star}gr^{\star}_{H}\mathbb{I}_{R/A}\), and thus view it as an object of the \(\infty\)-category \(DAlg(F_{\geq 0}Gr^{\geq 0}\mathrm{Mod}_{gr^{\star}(I^{\star}A)})\). To be clear, we are viewing \(gr^{\star}(I^{\star}A)\) as a filtered graded object concentrated in filtration weight \(0\) (so the relevant derived algebraic context is \(F_{\geq 0}Gr^{\geq 0}\mathrm{Mod}_{A}:=Fun(\mathbb{Z}_{\geq 0}\times\mathbb{Z}_{\geq 0}^{disc},\mathrm{Mod}_{A})\)). We define the _conjugate filtration_ on \(L\overline{\Theta}_{-/A}\) to be the composite \[DAlg(\mathrm{Mod}_{\overline{A}})\xrightarrow{F^{v}_{\star}gr^{\star}_{H}\mathbb{I}_{-/A}}DAlg(Gr^{\geq 0}F_{\geq 0}\mathrm{Mod}_{A})_{gr^{\star}(I^{\star}A)}\xrightarrow{\mathrm{Env}^{Gr}_{I}}DAlg(F_{\geq 0}\mathrm{Mod}_{A})\] which we denote by \(F^{conj}_{\star}L\overline{\Theta}_{-/A}\). A simple unwinding of definitions combined with Lemma 3.3.9 identifies \(und(F^{conj}_{\star}L\overline{\Theta}_{R/A})\simeq L\overline{\Theta}_{R/A}\). We now turn our attention towards identifying the associated graded.
**Theorem 3.3.11**.: There is a functorial isomorphism of graded algebras \[gr^{conj}_{\star}L\overline{\Theta}_{R/A}\simeq LSym^{\star}_{R}(L_{R/ \overline{A}}[-1]\{-1\}(1)).\] Proof.: We know from Proposition 3.2.10 that we can express the left hand side as \[gr^{conj}_{\star}L\overline{\Theta}_{R/A} \simeq gr^{conj}_{\star}\mathrm{Env}^{Gr}_{I}\circ F^{v}_{\star }(gr^{\star}_{H}\mathbb{I}_{R/A})\] \[\simeq\mathrm{Env}^{Gr}_{I}gr^{vert}_{\star}(gr^{\star}_{H} \mathbb{I}_{R/A})\] Now appealing to Theorem 3.1.11, we can explicitly identify the associated graded of the vertical filtration and thus rewrite the last line as \[\mathrm{Env}^{Gr}_{I}(LSym^{\star}_{R}(L_{R/\overline{A}}[-1](1))\otimes_{ \overline{A}}LSym_{A}(I/I^{2}(1)))\simeq LSym_{R}(L_{R/\overline{A}}[-1]\{-1 \}(1))\] where the last equivalence follows from Lemma 3.2.13. **Proposition 3.3.12**.: Let \(R\) be a smooth \(A/I\)-algebra. Denote by \(Poly_{A\downarrow R}\) the \(1\)-category consisting of \((p,I)\)-complete polynomial \(A\)-algebras \(P\) equipped with a map \(\alpha\colon P\to R\). Then the canonical map \[L\Theta_{R/A}\to\lim_{Poly_{A\downarrow R}}L\Theta_{R/P}\] is an equivalence. Proof.: It suffices to check that the map \[L\Theta_{R/A}\to\lim_{Poly_{A\downarrow R}}L\Theta_{R/P}\] is an equivalence after reduction modulo \(I\), as both sides are \(I\)-adically complete. Since \(I\subset A\) is a Cartier divisor, reduction modulo \(I\) preserves the totalization on the right hand side, reducing us to verifying that the map \[L\overline{\Theta}_{R/A}\to\lim_{Poly_{A\downarrow R}}L\overline{\Theta}_{R/P}\] is an equivalence. Since \(R\) is assumed to be \(p\)-completely smooth, we may choose a surjection \(P\to R\) in \(Poly_{A\downarrow R}\), and restrict our attention to the cofinal subdiagram given by the Cech nerve \[L\overline{\Theta}_{R/A}\to\lim_{\Delta}L\overline{\Theta}_{R/P^{\otimes \cdot+1}}.\] At this point, we can appeal to the conjugate filtration to reduce to showing that \(LSym_{R}^{i}(L_{R/A}[-1])\) is computed by the relevant totalization. The cotangent complex satisfies the desired descent properties by combining Proposition 3.1 of [4] with the (\(p\)-completed) conormal sequence of the cotangent complex. Furthermore, since \(P\to R\) is a surjection whose kernel is generated by a \(p\)-completely regular sequence (since \(R\) is \(p\)-completely smooth, one can always choose the surjection to satisfy this), \(L_{R/P}^{\wedge}\)[-1] is in fact a finitely presented projective \(R\)-module. Hence \(LSym_{R}\) preserves the desired totalization, as it is the left-right extension of its restriction to the finitely presented projective \(R\)-modules. **Theorem 3.3.13**.: Denote by \(Poly_{A\downarrow R}\) the \(1\)-category consisting of \((p,I)\)-complete polynomial \(A\)-algebras \(P\) equipped with a map \(\alpha\colon P\to R\). Let \(F_{P}:=Free^{\delta}_{A}(P)\) and \(S_{P}:=F_{P}\otimes_{P}R\). Then the canonical map \[L\mathbb{A}_{R/A}\to\lim_{Poly_{A\downarrow R}}L\mathbb{A}_{S_{P}/F_{P}}\] is an equivalence. Proof.: Recall that \(L\mathbb{A}_{R/A}\simeq Free^{\delta}_{A}\circ L\Theta_{R/A}\), and observe that \(Free^{\delta}_{A}\circ L\Theta_{R/P}\simeq Free^{\delta}_{A}\circ L\Theta_{S_ {P}/F_{P}}\). 
Appealing to Proposition 3.3.12, we are thus reduced to verifying that \[Free^{\delta}_{A}(\lim_{Poly_{A\downarrow R}}L\Theta_{R/P})\simeq\lim_{Poly_{A\downarrow R}}(L\mathbb{A}_{S_{P}/F_{P}}).\] Fix \(P\in Poly_{A\downarrow R}\) such that \(P\to R\) is a surjection and the kernel is generated by a regular sequence \((f_{1},...,f_{k})\) (this is always possible locally on \(R\), and the claim in question is local). We then obtain an identification \(F^{0}:=L\Theta_{R/P}\simeq A\left\langle x_{1},...,x_{n}\right\rangle\left[\frac{f_{i}}{I}\right]_{(p,I)}^{\wedge}\) via the same argument as Corollary 3.2.18. In particular, \(F^{0}\) and all terms in the Cech nerve of \(A\to F^{0}\) are discrete, since \(F^{0}\) is \((p,I)\)-completely flat over \(A\). We will denote the Cech nerve by \(F^{\star}\). It suffices to show that \(Free^{\delta}_{A}\) preserves the totalization of \(F^{\star}\) - i.e. the canonical map \[Free^{\delta}_{A}(Tot(F^{\star}))\to Tot(Free^{\delta}_{A}(F^{\star}))\] is an equivalence. Since each \(F^{n}\) is discrete, we may view \(F^{\star}\) as a cosimplicial commutative ring (as opposed to a cosimplicial derived commutative ring), and since the inclusion \[CAlg^{\heartsuit}_{A}\to DAlg_{A}\] preserves filtered colimits, we can pass to the cosimplicial skeleta to rewrite \[F^{\star}\simeq colim_{n}sk_{n}F^{\star}\] where the skeleta are taken in cosimplicial \(A\)-algebras, and so in particular remain pointwise discrete. We now claim that we can commute the filtered colimit and the totalization: \[Tot(F^{\star})\simeq Tot(colim_{n}sk_{n}(F^{\star}))\simeq colim_{n}Tot(sk_{n}(F^{\star})).\] Indeed, since the forgetful functor \(CAlg(\mathrm{Mod}_{A})\to\mathrm{Mod}_{A}\) is conservative and preserves filtered colimits and all limits, it suffices to check on the level of underlying \(A\)-modules. Since each term in the cosimplicial object is discrete, the \(m^{th}\) truncation (as a cosimplicial \(A\)-module) \(\tau_{\geq m}(Tot(F^{\star}))\) depends only on the \(m^{th}\)-coskeleton of the appearing totalizations, which reduces us to the case of commuting a filtered colimit with a finite limit. Recall that \(Free^{\delta}_{A}\circ Sym^{\heartsuit}_{A}=Sym^{\delta,\heartsuit}_{A}\) admits an exhaustive filtration by excisively polynomial subfunctors, and thus \(Free^{\delta}_{A}\) is the non-linear right-left extension of its restriction to polynomial \(A\)-algebras. In particular, Theorem 2.2.20 guarantees that \(Free^{\delta}_{A}\) preserves finite totalizations of \(A\)-algebras. Hence we obtain \[Free^{\delta}_{A}(Tot(F^{\star}))\simeq colim_{n}Free^{\delta}_{A}(Tot(sk_{n}(F^{\star})))\simeq colim_{n}Tot(Free^{\delta}_{A}(sk_{n}(F^{\star})))\simeq Tot(Free^{\delta}_{A}(F^{\star}))\] as desired.

Proof of Theorem 3.3.7.: It suffices to verify the theorem in the case that \(R\) is a \(p\)-completely smooth \(\overline{A}\)-algebra, since both \(L\mathbb{A}_{-/A}\) and \(\mathbb{A}_{-/A}\) preserve sifted colimits (as always, \(\mathbb{A}_{-/A}\) is viewed as the left Kan extension of the site-theoretic cohomology from the smooth case). The comparison map \[comp_{R/A}\colon L\mathbb{A}_{R/A}\to\mathbb{A}_{R/A}\] is functorial both in \(R\) and in \(A\).
In particular, given any \((p,I)\)-complete polynomial \(A\)-algebra \(P\) with a surjection \[P\to R\] \(comp_{R/A}\) refines to a map of Cech-Alexander complexes \[comp_{S^{\star}_{P}/F^{\star}_{P}}\colon L\mathbb{A}_{S^{\star}_{P}/F^{\star}_{ P}}\to\mathbb{A}_{S^{\star}_{P}/F^{\star}_{P}}\] which is a termwise equivalence by Corollary 3.2.18. We know from Construction 4.18 in [6] that \[\mathbb{A}_{R/A}\simeq lim_{\Delta}\mathbb{A}_{S^{\star}_{P}/F^{\star}_{P}}\] from which we conclude. As an application of these definitions, we can establish affineness of the relative prismatization of an affine formal scheme. Recall, for any derived \(p\)-adic formal \(\overline{A}\)-scheme \(X\), the relative prismatization is the presheaf on \((p,I)\)-nilpotent animated \(A\)-algebras defined by \[WCart_{X/A}(B)=X(\overline{W(B)}).\] Together with Theorem 3.3.7, the following Lemma recovers Theorem 7.17 in [3]. **Lemma 3.3.14**.: For any \(p\)-complete animated commutative \(\overline{A}\)-algebra \(R\), there is a canonical equivalence \[WCart_{Spf(R)/A}\simeq Spf(L\mathbb{A}_{R/A}).\] Proof.: Recall, the derived Witt vectors functor (Definition 2.3.13) is right adjoint to the forgetful functor \(DAlg^{\delta}_{\mathbb{Z}_{p}}\to DAlg_{\mathbb{Z}_{p}}\), and on connective objects this recovers the animated Witt vectors (Observation 2.3.14). It follows from Lemma 3.1.4 that the forgetful functor \(DAlg^{\delta}_{A}\to DAlg_{A}\) also admits a right adjoint which is once again given by derived Witt vectors. Unwinding the various universal properties in play, we see \[Spf(L\mathbb{A}_{R/A})(S) \simeq Map_{DAlg_{A}}(L\mathbb{A}_{R/A},S)\] \[\simeq Map_{DAlg^{\delta}_{A}}(L\mathbb{A}_{R/A},W(S))\] \[\simeq Map_{DAlg_{\overline{A}}}(R,\overline{W(S)})\] \[\simeq WCart_{Spf(R)/A}(S)\] from which we conclude.
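As a final sanity check on these identifications (ours; the computation is standard but not carried out in the text), take \(R=\overline{A}\) in Lemma 3.3.14.

```latex
% Sanity check: R = A/I = \overline{A}. The prismatic site of
% \overline{A} relative to (A,I) has (A,I) itself as initial object,
% so L\mathbb{A}_{\overline{A}/A} \simeq A, and Lemma 3.3.14 gives
\[
  WCart_{Spf(\overline{A})/A}
  \;\simeq\; Spf\big(L\mathbb{A}_{\overline{A}/A}\big)
  \;\simeq\; Spf(A).
\]
% On points: WCart_{Spf(\overline{A})/A}(B) =
% Hom_{DAlg_{\overline{A}}}(\overline{A}, \overline{W(B)}) is
% contractible for every (p,I)-nilpotent animated A-algebra B, exactly
% as for the formal spectrum of A.
```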
2309.02777
LightNeuS: Neural Surface Reconstruction in Endoscopy using Illumination Decline
We propose a new approach to 3D reconstruction from sequences of images acquired by monocular endoscopes. It is based on two key insights. First, endoluminal cavities are watertight, a property naturally enforced by modeling them in terms of a signed distance function. Second, the scene illumination is variable. It comes from the endoscope's light sources and decays with the inverse of the squared distance to the surface. To exploit these insights, we build on NeuS, a neural implicit surface reconstruction technique with an outstanding capability to learn appearance and an SDF surface model from multiple views, but currently limited to scenes with static illumination. To remove this limitation and exploit the relation between pixel brightness and depth, we modify the NeuS architecture to explicitly account for it and introduce a calibrated photometric model of the endoscope's camera and light source. Our method is the first one to produce watertight reconstructions of whole colon sections. We demonstrate excellent accuracy on phantom imagery. Remarkably, the watertight prior combined with illumination decline allows us to complete the reconstruction of unseen portions of the surface with acceptable accuracy, paving the way to automatic quality assessment of cancer screening explorations by measuring the global percentage of observed mucosa.
Víctor M. Batlle, José M. M. Montiel, Pascal Fua, Juan D. Tardós
2023-09-06T06:41:40Z
http://arxiv.org/abs/2309.02777v1
# LightNeuS: Neural Surface Reconstruction in Endoscopy using Illumination Decline

###### Abstract

We propose a new approach to 3D reconstruction from sequences of images acquired by monocular endoscopes. It is based on two key insights. First, endoluminal cavities are watertight, a property naturally enforced by modeling them in terms of a signed distance function. Second, the scene illumination is variable. It comes from the endoscope's light sources and decays with the inverse of the squared distance to the surface. To exploit these insights, we build on NeuS [25], a neural implicit surface reconstruction technique with an outstanding capability to learn appearance and an SDF surface model from multiple views, but currently limited to scenes with static illumination. To remove this limitation and exploit the relation between pixel brightness and depth, we modify the NeuS architecture to explicitly account for it and introduce a calibrated photometric model of the endoscope's camera and light source. Our method is the first one to produce watertight reconstructions of whole colon sections. We demonstrate excellent accuracy on phantom imagery. Remarkably, the watertight prior combined with illumination decline allows us to complete the reconstruction of unseen portions of the surface with acceptable accuracy, paving the way to automatic quality assessment of cancer screening explorations by measuring the global percentage of observed mucosa.

Keywords: Reconstruction · Photometric multi-view · Endoscopy

## 1 Introduction

Colorectal cancer (CRC) is the third most commonly diagnosed cancer and is the second most common cause of cancer death [23]. Early detection is crucial for a good prognosis. Despite the existence of other techniques, such as virtual colonoscopy (VC), optical colonoscopy (OC) remains the gold standard for CRC screening and the removal of precursor lesions. Unfortunately, we do not yet have the ability to reconstruct densely the 3D shape of large sections of the colon. This would usher in exciting new developments, such as post-intervention diagnosis, measuring polyps and stenosis, and automatically evaluating exploration thoroughness in terms of the surface percentage that has been observed. This is the problem we address here. It has been shown that the colon 3D shape can be estimated from single images acquired during human colonoscopies [3]. However, to model large sections of it while increasing the reconstruction accuracy, multiple images must be used. As most endoscopes contain a single camera, the natural way to do this is to use video sequences acquired by these cameras in the manner of structure-from-motion algorithms. An important first step in that direction is to register the images from the sequences. This can now be done reliably using either batch [21] or SLAM techniques [8]. Unfortunately, this solves only half the problem because these techniques provide very sparse reconstructions and going from there to dense ones remains an open problem. And occlusions, specularities, varying albedos, and specificities of endoscopic lighting make it a challenging one. To overcome these difficulties, we rely on two properties of endoscopic images:

* Endoluminal cavities such as the gastrointestinal tract, and in particular the human colon, are watertight surfaces. To account for this, we represent the surface in terms of a signed distance function (SDF), which by its very nature presents continuous watertight surfaces.
* In endoscopy, the light source is co-located with the camera. It illuminates a dark scene and is always close to the surface. As a result, the irradiance decreases rapidly with distance \(t\) from camera to surface; more specifically, it is a function of \(1/t^{2}\). In other words, there is a strong correlation between light and depth, which so far remains unexploited.

To take advantage of these specificities, we build on the success of Neural implicit Surfaces (NeuS) [25], which have been shown to be highly effective at deriving surface 3D models from sets of registered images. Like the Neural Radiance Fields (NeRFs) [15] that inspired them, they were designed to operate on regular images taken around a scene, sampling fairly regularly the set of possible viewing directions. Furthermore, the lighting is assumed to be static and distant so that the brightness of a pixel and its distance to the camera are unrelated. Unfortunately, none of these conditions hold in endoscopies. The camera is inside a cavity (in the colon, a roughly cylindrical tunnel) that limits viewing directions. The light source is co-located with the camera and close to the surface, which results in a strong correlation between pixel brightness and distance to the camera. In this paper, we show that, far from being a handicap, this correlation is key information for neural network self-supervision. As shown in Fig. 1, unlike in the original architecture, we feed to the NeuS renderer the distance from the light source to each surface point, and the renderer explicitly uses it to reproduce illumination decline. We also introduce and calibrate a photometric model for the endoscope light and camera, so that the inverse square law discussed above actually holds. Our results show that exploiting the illumination is key to unlocking implicit neural surface reconstruction in endoscopy. It delivers accuracy in the range of 2 to 3 mm, comparable to state-of-the-art monocular dense methods, as opposed to an unmodified NeuS, which is 5 times less accurate or fails to reconstruct any surface at all. This makes us the first to show accurate reconstructions of extended 3D watertight surfaces from monocular endoscopy images.

## 2 Related Works

**3D Reconstruction from Endoscopic Images.** It can help with the effective localization of lesions, such as polyps and adenomas, by providing a complete representation of the observed surface. Unfortunately, many state-of-the-art SLAM techniques based on feature matching [5] or direct methods [7, 6] are impractical for dense endoscopic reconstruction due to the lack of texture and the inconsistent lighting that moves along with the camera. Nevertheless, sparse reconstructions by classical Structure-from-Motion (SfM) algorithms can be good starting points for refinement and densification based on Shape-from-Shading (SfS) [24, 28]. However, classical multi-view and SfS methods require strong sub-optimal priors on colon surface shape and reflectance. In monocular dense reconstructions, it is common practice to encode shape priors in terms of smooth rigid surfaces [17, 20, 14]. Recently, [22] proposes a tubular topology prior for NRSfM aimed at processing endoluminal cavities where these tubular shapes are prevalent. In contrast, for the same environments, we propose the watertight prior coded by implicit SDF representations.
Recent methods for dense reconstruction rely on neural networks to predict per-pixel depth in the 2D space of each image and fuse the depth maps by using multi-view stereo (MVS) [2] or a SLAM pipeline [12, 13]. However, holes in the reconstruction appear due to failures in triangulation and inaccurate depth estimation, or in areas not observed in any image. Wang et al. [27] show the potential of neural rendering in reconstruction from medical images, although they use a binocular static camera with a fixed light source, which is not feasible in endoluminal endoscopy.

Figure 1: **From NeuS to LightNeuS**. The original NeuS architecture is depicted by the black arrows. In LightNeuS, when training the network with a sampled surface point, we provide its distance \(t\) to the renderer, which takes illumination decline into account. We also incorporate a calibrated photometric endoscope model that is used to correctly compute the photometric loss. The changes are shown in red.

**Neural Radiance Fields (NeRFs)** were first proposed to reconstruct novel views of non-Lambertian objects [15]. This method provides an _implicit neural representation_ of a scene in terms of local densities and associated colors. In effect, the scene representation is stored in the weights of a neural network, usually a multilayer perceptron (MLP), that learns its shape and reflectance for any coordinate and viewing direction. NeRFs use volume rendering [9], based on ray-tracing from multiple camera positions. The volume density \(\sigma(\mathbf{x})\) can be interpreted as the differential probability of a ray terminating at an infinitesimal particle at location \(\mathbf{x}\). The expected color \(C(\mathbf{r})\) of the pixel with camera ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\) is the integration of the radiance emitted by the field at every traveled distance \(t\) from near to far bounds \(t_{n}\) and \(t_{f}\), such that \[C(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\:\sigma(\mathbf{r}(t))\:\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,\mathrm{d}t\quad\text{where }T(t)=\exp\left(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))\,\mathrm{d}s\right) \tag{1}\] where \(\mathbf{c}\) stands for the color. The function \(T\) denotes the accumulated transmittance along the ray from \(t_{n}\) to \(t\), that is, the probability that the ray travels from \(t_{n}\) to \(t\) without hitting any other particle. The authors propose two MLPs to estimate the volume density function \(\sigma:\mathbf{x}\rightarrow[0,1]\) and the directional emitted color function \(\mathbf{c}:(\mathbf{x},\mathbf{d})\rightarrow[0,1]^{3}\), so the density of a point does not depend on the viewing direction \(\mathbf{d}\), but the color does. This allows them to model non-Lambertian reflectance. In addition, they propose a positional encoding for location \(\mathbf{x}\) and direction \(\mathbf{d}\), which allows high-frequency details in the reconstruction.

**Neural Implicit Surfaces (NeuS)** were introduced in [25] to improve the quality of the NeRF representation by modelling watertight surfaces. For that, the volume density \(\sigma\) is computed so as to be maximal at the zero-crossings of a signed distance function (SDF) \(f\): \[\sigma(\mathbf{r}(t))=\max\left(\frac{\Phi_{s}^{\prime}(f(\mathbf{r}(t)))}{\Phi_{s}(f(\mathbf{r}(t)))},0\right)\quad\text{where }\Phi_{s}(x)=\frac{1}{1+e^{-sx}} \tag{2}\] The SDF formulation makes it possible to estimate the surface normal as \(\mathbf{n}=\nabla f(\mathbf{x})\).
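To make Eqs. (1)–(2) concrete, here is a minimal NumPy sketch of the discretized rendering step for a single ray. It is our illustration only: the function names, the sharpness value and the simple Riemann-sum discretization are assumptions, not details taken from the NeuS or LightNeuS implementations.

```python
import numpy as np

def neus_weights(sdf_vals, t_vals, s=64.0):
    """Discretized Eqs. (1)-(2): SDF samples along one ray -> rendering weights.

    sdf_vals: (n,) signed distances f(r(t_k)) predicted by the SDF network.
    t_vals:   (n,) sample depths t_k along the ray, increasing.
    s:        sharpness of the logistic CDF Phi_s (a hypothetical setting).
    """
    phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))                 # Phi_s(f)
    dphi = s * phi * (1.0 - phi)                              # Phi_s'(f)
    sigma = np.maximum(dphi / np.clip(phi, 1e-8, None), 0.0)  # Eq. (2)
    dt = np.diff(t_vals, append=t_vals[-1])                   # step sizes
    accum = np.concatenate(([0.0], np.cumsum(sigma * dt)[:-1]))
    transmittance = np.exp(-accum)                            # T(t_k), Eq. (1)
    return transmittance * sigma * dt

def render_pixel(colors, sdf_vals, t_vals):
    """Expected pixel color C(r) as the weighted sum of per-sample colors."""
    w = neus_weights(sdf_vals, t_vals)
    return (w[:, None] * colors).sum(axis=0)  # colors: (n, 3)
```

In these terms, the LightNeuS modification of Eq. (4) below amounts to a one-line change: replacing `colors` with `colors / t_vals[:, None] ** 2` before the weighted sum.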
The reflectance of a material is usually determined as a function of the incoming and outgoing light directions with respect to the surface normal. Therefore, the normal is added as an input to the MLP that estimates color \(\mathbf{c}:(\mathbf{x},\mathbf{d},\mathbf{n})\), as shown in Fig. 1. ## 3 LightNeuS In this section, we present the key contributions that make _LightNeuS_ a neural implicit reconstruction method suitable for endoscopy in endoluminal cavities. In this context, the light source is located next to the camera and moves with it. Furthermore, it is close to the surfaces to be modeled. As a result, for any surface point \(\mathbf{x}=\mathbf{o}+t\mathbf{d}\), the irradiance decreases with the square of the distance to the camera \(t\). Hence, we can write the color of the corresponding pixel as [3]: \[\mathcal{I}(\mathbf{x})=\left(\frac{L_{e}}{t^{2}}\ \mathrm{BRDF}(\mathbf{x}, \mathbf{d})\ \cos\left(\theta\right)\ g\right)^{1/\gamma} \tag{3}\] where \(L_{e}\) is the radiance emitted from the light source towards the surface point. The bidirectional reflectance distribution function (BRDF) determines how much light is reflected to the camera, and the cosine term \(\cos\left(\theta\right)=-\mathbf{d}\cdot\mathbf{n}\) weights the incoming radiance with respect to the surface normal \(\mathbf{n}\). Equation (3) also takes into account the camera gain \(g\) and gamma correction \(\gamma\). ### Using Illumination Decline as a Depth Cue The NeuS formulation of Section 2 assumes distant and fixed lighting. However, in endoscopy inverse-square light decline is significant, as quantified in Eq. (3). Accounting for this is done by modifying the original NeuS formulation as follows. Fig. 1 depicts the original NeuS network in black. It uses a SDF network--shown in orange--to estimate a view-independent geometry and only the final RGB color depends on the viewing direction \(\mathbf{d}\). It is estimated by the network shown in green. Thus, this second network \(\mathbf{c}(\mathbf{x},\mathbf{d},\mathbf{n})\) may learn to model non-Lambertian BRDF\((\mathbf{x},\mathbf{d})\), including specular highlights, and the cosine term of Eq. (3). However, if the distance \(t\) from the light to the point \(\mathbf{x}\) is not provided to the color network, the \(1/t^{2}\) dependency cannot be learned, and surface reconstruction will fail. Our key insight is to explicitly supply this distance as input to the volume rendering algorithm, as shown in red in Fig. 1 and reformulate Eq. (1) as \[C(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\ \sigma(\mathbf{r}(t))\ \frac{\mathbf{c}( \mathbf{r}(t),\mathbf{d},\mathbf{n})}{t^{2}}\,\mathrm{d}t \tag{4}\] This conceptually simple change, using illumination decline while training, unlocks all the power of neural surface reconstruction in endoscopy. ### Endoscope Photometric Model Apart from illumination decline, there are several significant differences between the images captured by endoscopes and those conventionally used to train NeRFs and NeuS: fish-eye lenses, strong vignetting, uneven scene illumination, and post-processing. Endoscopes use fisheye lenses to cover a wide field of view, usually close to 170 degrees. These lenses produce strong deformations, making it unwise to use the standard pinhole camera model. Instead, specific models [19, 10] must be used. Hence, we also modified the original NeuS implementation to support these models. The light sources of endoscopes behave like spotlights. 
In other words, they do not emit with the same intensity in all directions, so \(L_{e}\) in Eq. (3) is not constant for all image pixels. This effect is similar to the vignetting effect caused by conventional lenses, which is aggravated in fisheye lenses. Fortunately, they can be accurately calibrated [1, 16] and compensated for. The post-processing software of medical endoscopes is designed to always display well-exposed images, so that physicians can see details correctly. An adaptive gain factor \(g\) is applied by the endoscope's internal logic, and gamma correction is also used to adapt to non-linear human vision, achieving better contrast perception in mid tones and dark areas. Endoscope manufacturers know the post-processing logic of their devices, but this information is proprietary and not available to users. Again, gamma correction can be calibrated assuming it is constant [3], and the gain change between successive images can be estimated, for example, by sparse feature matching. All these factors must be taken into account during network training. Thus, our photometric loss is computed using a normalized image: \[I^{\prime}=\left(\frac{I^{\gamma}}{L_{e}g}\right)^{1/\gamma} \tag{5}\]

## 4 Experiments

To validate our method, we selected four sequences of the C3VD dataset [4], covering different sections of the colon anatomy. This dataset contains sequences recorded with a medical video colonoscope, Olympus Evis Exera III CF-HQ190L. The images were recorded inside a _phantom_, a model of a human colon made of silicone. The intrinsic camera parameters are provided. The camera extrinsics for each frame are estimated by 2D-3D registration against the known 3D model. In an operational setting, we could use a structure-from-motion approach such as COLMAP [21] or a SLAM technique such as [8], which have been shown to work well in endoscopic settings. The gain values were easily estimated from the dataset itself. For vignetting, we use the calibration obtained from a colonoscope of the same brand and series from the EndoMapper dataset [1].

| | Sequence | Surveyed MedAE | Surveyed MAE | Surveyed RMSE | Extended MedAE | Extended MAE | Extended RMSE |
|---|---|---|---|---|---|---|---|
| NeuS | Cecum 1 a | 4.53 | 5.07 | 6.40 | 4.68 | 6.24 | 8.77 |
| Ours | Cecum 1 a | 0.95 | 1.48 | 2.01 | 0.83 | 1.26 | 1.72 |
| Ours | Descending 4 a | 2.66 | 3.26 | 4.08 | 4.50 | 6.61 | 9.32 |
| Ours | Transcending 1 a | 3.43 | 3.47 | 4.07 | 3.38 | 3.31 | 3.86 |
| Ours | Transcending 4 a | 1.15 | 2.31 | 3.79 | 1.29 | 2.22 | 3.32 |
| Ours | **Mean** | **2.05** | **2.63** | **3.49** | **2.50** | **3.35** | **4.56** |

Table 1: **Reconstruction error [mm] on the C3VD dataset.** **Surveyed:** evaluated on all points seen at least once. **Extended:** evaluated on points within 20 mm of a visible point.

For each sequence, we train both the vanilla NeuS and our LightNeuS using 20 frames each time. They are extracted uniformly over the duration of the video.
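Eq. (5) summarizes the per-frame normalization applied before computing the photometric loss. The following sketch illustrates it (our code; the array shapes, the default gamma and the function name are hypothetical, not taken from the paper or the dataset):

```python
import numpy as np

def normalize_frame(img, vignetting, gain, gamma=2.2):
    """Apply Eq. (5): undo gamma, divide out the calibrated spotlight /
    vignetting pattern L_e and the frame's adaptive gain g, re-apply gamma.

    img:        (H, W, 3) float image in [0, 1] decoded from the video.
    vignetting: (H, W) calibrated per-pixel emitted radiance L_e.
    gain:       scalar adaptive gain g estimated for this frame.
    gamma:      calibrated gamma correction (2.2 is a placeholder value).
    """
    linear = img ** gamma                            # undo gamma correction
    linear = linear / (vignetting[..., None] * gain)  # divide out L_e and g
    return np.clip(linear, 0.0, None) ** (1.0 / gamma)
```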
## 4 Experiments To validate our method, we selected four sequences of the C3VD dataset [4], covering different sections of the colon anatomy. This dataset contains sequences recorded with a medical video colonoscope, an Olympus Evis Exera III CF-HQ190L. The images were recorded inside a _phantom_, a model of a human colon made of silicone. The intrinsic camera parameters are provided. The camera extrinsics for each frame are estimated by 2D-3D registration against the known 3D model. In an operational setting, we could use a structure-from-motion approach such as COLMAP [21] or a SLAM technique such as [8], which have been shown to work well in endoscopic settings. The gain values were easily estimated from the dataset itself. For vignetting, we use the calibration obtained from a colonoscope of the same brand and series from the EndoMapper dataset [1]. \begin{table} \begin{tabular}{c|l|c c c|c c c} & **Sequence** & \multicolumn{3}{c|}{**Surveyed**} & \multicolumn{3}{c}{**Extended**} \\ & & MedAE & MAE & RMSE & MedAE & MAE & RMSE \\ \hline NeuS & Cecum 1 a & 4.53 & 5.07 & 6.40 & 4.68 & 6.24 & 8.77 \\ \hline Ours & Cecum 1 a & 0.95 & 1.48 & 2.01 & 0.83 & 1.26 & 1.72 \\ & Descending 4 a & 2.66 & 3.26 & 4.08 & 4.50 & 6.61 & 9.32 \\ & Transcending 1 a & 3.43 & 3.47 & 4.07 & 3.38 & 3.31 & 3.86 \\ & Transcending 4 a & 1.15 & 2.31 & 3.79 & 1.29 & 2.22 & 3.32 \\ & **Mean** & **2.05** & **2.63** & **3.49** & **2.50** & **3.35** & **4.56** \\ \end{tabular} \end{table} Table 1: **Reconstruction error [mm] on the C3VD dataset. Surveyed:** evaluated on all points seen at least once. **Extended:** evaluated on points within 20 mm of a visible point. For NeuS, we provide a single set of numbers because the optimization failed on the three other sections. For each sequence, we train both the vanilla NeuS and our LightNeuS using 20 frames each time. They are extracted uniformly over the duration of the video. We use the same batch size and number of iterations as in the original NeuS paper, 512 and 300k respectively. Once the network is trained, we can extract triangulated meshes from the reconstruction. Since the C3VD dataset comprises a ground-truth triangle mesh, we compute point-to-triangle distances from all the vertices in the reconstruction to the closest ground-truth triangle. In the left column of Table 1, we report median (MedAE), mean (MAE), and root mean square (RMSE) values of these distances for all vertices seen in at least one image, for each one of the four sections of the colon. Note that we do not have values for three of them in the case of NeuS, because the optimization failed to converge. For the fourth one, our accuracy numbers are much better than the NeuS ones. We report a mean error below 3 mm and a median of 2 mm for colon sections of \(\approx\) 100 mm in size and a low-parallax camera motion. This is in the range of accuracies reported in the literature for monocular dense non-watertight depth estimation: 1.1 mm in [14] for high-parallax geometry in laparoscopy, which is a much more favorable geometry than the one we have here, or 0.85 mm for the significantly smaller-size cavities of endoscopic endonasal surgery (ESS) [11].
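For reference, the three error statistics of Table 1 can be computed from per-vertex distances in a few lines. The sketch below approximates the point-to-triangle distance by the distance to the nearest ground-truth vertex, which is adequate when the ground-truth mesh is densely sampled; the paper itself uses exact point-to-triangle distances.

```python
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_errors(verts_rec, verts_gt):
    """MedAE, MAE and RMSE of per-vertex distances, as reported in Table 1."""
    dist, _ = cKDTree(verts_gt).query(verts_rec)   # nearest-GT-vertex distance
    return np.median(dist), dist.mean(), np.sqrt(np.mean(dist**2))
```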
We provide a qualitative result in Fig. 2 and additional ones in the supplementary material. Note that the watertight prior inherent to an SDF allows the network to hallucinate unseen areas. Remarkably, these unsurveyed areas continue the tubular shape of the colon, and we found them to be mostly accurate when compared to the ground truth. For example, the curved areas of the colon where a wall is occluded behind the corner of the curve are reconstructed, as shown in Fig. 3. This ability to "fill in" observation gaps may be useful in providing the endoscopist with an estimate of the percentage of unsurveyed area during a procedure. Figure 2: **Benefits of illumination decline**. Result on the _"Cecum 1 a"_ sequence. **Top:** The NeuS reconstruction exhibits multiple artifacts that make it unusable. **Bottom:** Our reconstruction is much closer to the ground truth shape. The error is shown in blue if the reconstruction is inside the surface, and in red otherwise. A fully saturated red or blue denotes an error of more than 1 cm and grey denotes no error at all. We hypothesize that this desirable behavior stems from the fact that the network learns an empirical shape prior from the observed anatomy of the colon. However, we do not expect this behavior to hold for distant unseen parts, but only for regions closer than 20 mm to one observation. In the right column of Table 1, we compute accuracy metrics for this _extended_ region. It includes not only surveyed areas, but also neighbouring areas that were not observed. ## 5 Conclusion We have presented a method for 3D dense multi-view reconstruction from endoscopic images. We are the first to show that neural radiance fields can be used to obtain accurate dense reconstructions of colon sections of significant length. At the heart of our approach is exploiting the correlation between depth and brightness. We have observed that, without it, neural reconstruction fails. Currently our method is limited to offline applications, but real-time performance could be achieved in the future [26]. Similar to other reconstruction methods, for now our approach works in areas of the colon where there is little deformation. Several sub-maps of non-deformed areas can be created if necessary. However, this limitation could be overcome by adopting the deformable NeRF formalism [18]. Figure 3: **Reconstructing partially observed regions**. Results on the _"Transcending 4 a"_ sequence. The camera performs a short trajectory from (a) to (b). In (c) we represent both frames and intermediate camera poses. (d) Number of frames seeing each surface point, with GT unobserved areas shown in gray. (e) We managed to reconstruct a curved section of the colon. (f) Our method plausibly estimates the wall of the colon at the right of camera (b), although it was never seen in the images. ## Acknowledgement This work was supported by EU-H2020 grant 863146: ENDOMAPPER, Spanish government grants PID2021-127685NB-I00 and FPU20/06782 and by Aragon government grant DGA_T45-17R.
2305.15860
On the swelling properties of pom-pom polymers: impact of backbone length
The present work continues our previous studies of the pom-pom molecule [K. Haydukivska, O. Kalyuzhnyi, V. Blavatska, and J. Ilnytskyi, J. Mol. Liq. 328, 115456 (2021); Condens. Matter Phys. 25, 23302 (2022)]. The molecule consists of a linear backbone with two branching points at both ends, with functionalities $f_1$ and $f_2$. Here, the main attention is concentrated on studying the impact of the central backbone length on the configurational characteristics of the complex molecule, such as the size and shape ratios. We apply both a direct polymer renormalization scheme based on the continuous chain model and the alternative Wei's method to analyze a set of size and shape properties of pom-pom polymers in dilute solution. The size ratio of a pom-pom and a chain polymer of the same total molecular mass is calculated with the excluded volume interaction taken into account, and estimates for the asphericity are found in the Gaussian approximation. For the size ratio we find a monotonous dependence on the length of the backbone at different functionalities of the side arms, whereas the results for the asphericity show a non-trivial behaviour.
K. Haydukivska, V. Blavatska
2023-05-25T08:51:47Z
http://arxiv.org/abs/2305.15860v1
###### Abstract The present work continues our previous studies of the pom-pom molecule [K. Haydukivska, O. Kalyuzhnyi, V. Blavatska, and J. Ilnytskyi, J. Mol. Liq. **328**, 115456 (2021); Condens. Matter Phys. **25**, 23302 (2022)]. The molecule consists of a linear backbone with two branching points at both ends, with functionalities \(f_{1}\) and \(f_{2}\). Here, the main attention is concentrated on studying the impact of the central backbone length on the configurational characteristics of the complex molecule, such as the size and shape ratios. We apply both a direct polymer renormalization scheme based on the continuous chain model and the alternative Wei's method to analyze a set of size and shape properties of pom-pom polymers in dilute solution. The size ratio of a pom-pom and a chain polymer of the same total molecular mass is calculated with the excluded volume interaction taken into account, and estimates for the asphericity are found in the Gaussian approximation. For the size ratio we find a monotonous dependence on the length of the backbone at different functionalities of the side arms, whereas the results for the asphericity show a non-trivial behaviour. **Keywords:** polymers, shape characteristics, continuous chain model, Wei's method # On the swelling properties of pom-pom polymers: impact of backbone length ## 1 Introduction In recent decades a number of methods were developed that permit the synthesis of complex polymers with a desired number of branching points and their functionalities, the length of individual branches, the presence of loops, etc. [1, 2, 3, 4]. Such an interest has arisen due to the strong impact of the architecture of individual macromolecules on the expected properties of their melts or solutions [5, 6, 7]. The simplest representative of the multiple-branching polymer architecture is the so-called H-polymer, which was not only synthesised by a number of different strategies but was also thoroughly studied [8, 9]. As a generalization of this structure, the pom-pom architecture, containing two branching points of functionalities \(f_{1}\) and \(f_{2}\), was synthesised [10, 11, 12, 13, 7, 14]. A number of studies were dedicated to the properties of such macromolecules in melts [15, 16, 17, 18]. It is important to notice that solutions of multibranched polymers also exhibit significantly different viscoelastic properties in comparison with molecules of simpler topologies [12]. On the other hand, branched polymers are used as viscosity modifiers in solution, for example in lubricants, because it is a well known fact that branched polymers are characterised by a lower intrinsic viscosity than their linear counterparts of the same total molecular mass. Quantitatively, this is described by the shrinking factor, which is equal to the ratio between the viscosities of the branched and linear polymers [12, 19, 20]. Some experimental data show a decrease of this ratio with an increase of the branching parameter for pom-pom polymers [12]. The decrease in viscosity is traditionally related to the decrease of the effective polymer size. This relation between intrinsic viscosity and effective size is described by the Flory-Fox equation [21]. The properties of polymer melts and dense solutions strongly depend on the conformational characteristics of individual macromolecules.
The study of such properties can be conducted much more easily by considering polymers in dilute solutions, when the interaction between different molecules is insignificant and thus the properties of a single molecule can be analyzed [22]. In this scenario, in the statistical description of molecules one can find a number of properties which do not depend on any microscopic details of the macromolecules but rather on the so-called global characteristics, such as the space dimension, the quality of the solvent and the topology of the macromolecules [23, 24]. As typical examples of such universal properties, one considers, e.g., the so-called size ratio \(g\) of the mean-squared gyration radii of the complex molecule \(\langle R_{g}^{2}\rangle_{\rm complex}\) and of the simplest linear polymer chain \(\langle R_{g}^{2}\rangle_{\rm chain}\) of the same total molecular weight [25], describing the effective extension of the shape of the complex polymer architecture in solution, as compared with the linear one. More subtle size characteristics, specific to branched polymers, are the individual backbone and side branch swelling ratios as well as the backbone-to-side-branch ratio, studied in detail in our previous works [26, 27]. The elongation of the macromolecule may also be characterised by considering universal shape characteristics like the asphericity \(A_{d}\). It describes the deviation of the shape from a spherical one, being equal to 0 for a sphere and reaching the value of 1 for the rod-like state, and can be defined as [28, 29]: \[\langle A_{d}\rangle=\frac{1}{d(d-1)}\left\langle\frac{{\rm Tr}\,\hat{\mathbf{S}}^{2}}{({\rm Tr}\,\mathbf{S})^{2}}\right\rangle. \tag{1.1}\] Here, \(\mathbf{S}\) is the gyration tensor and \(\hat{\mathbf{S}}=\mathbf{S}-\overline{\mu}\mathbf{I}\), with \(\overline{\mu}\) being the average eigenvalue and \(\mathbf{I}\) the unity matrix. In our previous studies [26, 27], we analyzed the case of a pom-pom polymer where both the backbone and the side arms are of equal length \(L\). Here, we consider a central backbone of variable length \(aL\) (with variable \(a\)), which is assumed to be longer than the side chains. This structure is expected to have a more elongated shape than a star-like structure. The layout of the paper is as follows. We start with calculations of the size and shape characteristics in terms of the continuous chain model in section 2, followed by the application of Wei's method in section 3. Results obtained by both methods are compared and discussed in section 4, before we finish with some final remarks in the conclusions. ## 2 Continuous chain model ### The model Within the frames of the continuous chain model [30], linear polymer chains are described as trajectories of length \(L_{i}\), parameterised by a radius vector \(\mathbf{r}_{i}(s)\) with \(s\) changing from 0 to \(L_{i}\). The Hamiltonian of the pom-pom architecture can thus be presented as: \[H=\frac{1}{2}\sum_{i=0}^{F}\int\limits_{0}^{L_{i}}\mathrm{d}s\,\left(\frac{\mathrm{d}\mathbf{r}_{i}(s)}{\mathrm{d}s}\right)^{2}+\frac{u}{2}\sum_{i,j=0}^{F}\int\limits_{0}^{L_{i}}\mathrm{d}s^{\prime}\int\limits_{0}^{L_{j}}\mathrm{d}s^{\prime\prime}\,\delta(\mathbf{r}_{i}(s^{\prime})-\mathbf{r}_{j}(s^{\prime\prime})), \tag{2.1}\] with the first term representing the chain connectivity, the second one describing the excluded volume interaction with coupling constant \(u\), \(L_{i}=L\) for the branches of both side stars, \(L_{0}=aL\) for the backbone, and \(F=f_{1}+f_{2}\).
Within this model, the polymer topology is introduced in the definition of the partition function according to: \[Z_{f_{1},f_{2}}^{\rm pom-pom}=\frac{1}{Z_{0}^{\rm pom-pom}}\int\,D\mathbf{r}(s)\prod_{i=1}^{f_{1}}\prod_{j=1}^{f_{2}}\,\delta(\mathbf{r}_{i}(0)-\mathbf{r}_{0}(0))\delta(\mathbf{r}_{j}(0)-\mathbf{r}_{0}(L_{0}))\,\mathrm{e}^{-H}, \tag{2.2}\] with \(Z_{0}^{\rm pom-pom}\) being the partition function of the Gaussian model (corresponding to the absence of the second term with the excluded volume interaction in the Hamiltonian (2.1)), the trajectory \({\bf r}_{0}(s)\) being the backbone chain, and the \(\delta\)-functions stating that there are \(f_{1}\) and \(f_{2}\) chains starting at its end points. For the further analysis we put \(f_{1}=f_{2}=f\). Observables calculated on the basis of the continuous chain model can be presented as functions of the dimensionless coupling constant \(u_{0}=(2\pi)^{-d/2}uL^{2-d/2}\). In the limit \(L\to\infty\), this constant also tends to infinity and an observable becomes divergent. In order to remove these divergences, it was proposed to replace this coupling constant with a renormalized one, \(u_{R}\) [23], which in the same limit reaches a fixed value: \(\lim_{L\to\infty}u_{R}(u_{0})=u_{R}^{*}\). For the model under consideration, the values of the fixed points are well known and in the first order of the \(\epsilon=4-d\) expansion read [23]: \[{\rm Gaussian:}\;u_{R}^{*}=0,\quad{\rm at}\quad d\geqslant 4, \tag{2.3}\] \[{\rm Pure:}\quad\quad u_{R}^{*}=\frac{\epsilon}{8},\quad{\rm at}\quad d<4, \tag{2.4}\] where (2.3) describes an idealized Gaussian model, and (2.4) is the coupling constant for the model with excluded volume interaction. ### Universal characteristics: size ratio We start our discussion by considering the size ratio of the pom-pom structure and a linear chain of the same molecular mass: \[g_{c}=\frac{\langle R_{g,{\rm pom-pom}}^{2}\rangle}{\langle R_{g,{\rm chain}}^{2}\rangle}, \tag{2.5}\] with the gyration radius defined in terms of the continuous chain model as: \[\langle R_{g}^{2}\rangle=\frac{1}{2(LF+La)^{2}}\sum_{i,j=0}^{F}\int\limits_{0}^{L_{j}}\int\limits_{0}^{L_{i}}{\rm d}s_{1}\;{\rm d}s_{2}\langle({\bf r}_{i}(s_{2})-{\bf r}_{j}(s_{1}))^{2}\rangle. \tag{2.6}\] We make use of the identity: \[\langle({\bf r}_{i}(s_{2})-{\bf r}_{j}(s_{1}))^{2}\rangle=-2\frac{{\rm d}}{{\rm d}|{\bf k}|^{2}}\xi({\bf k})_{{\bf k}=0},\] \[\xi({\bf k})\equiv\langle{\rm e}^{-{\rm i}{\bf k}({\bf r}_{i}(s_{2})-{\bf r}_{j}(s_{1}))}\rangle. \tag{2.7}\] To calculate \(\xi({\bf k})\) within the path integration approach, we use a diagrammatic technique, for which the diagrams in the Gaussian approximation are given in figure 1. These diagrams should be accounted for with some prefactors, as was discussed in our previous works [26, 27]. Figure 1: (Colour online) Diagrammatic presentations of contributions into \(\xi({\bf k})\) in the Gaussian approximation. The solid lines represent polymer paths and arrows represent the so-called restriction points \(s_{1}\) and \(s_{2}\). The Gaussian approximation for the pom-pom structure thus reads: \[\langle R_{g,\text{pom-pom}}^{2}\rangle_{0}=\frac{dL}{(2f+a)^{2}}\left[f^{2}(2+a)+f\left(a^{2}+a-\frac{2}{3}\right)+\frac{a^{3}}{6}\right]\,. \tag{2.8}\] Note that the result depends on the relative length of the backbone and side chains.
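As a quick numerical illustration (a sketch, assuming the standard Gaussian chain value \(\langle R_{g}^{2}\rangle_{0}=dL_{c}/6\) for a chain of total length \(L_{c}=(2f+a)L\)), the Gaussian size ratio implied by equation (2.8) can be evaluated and checked against its known limits:

```python
def g_c_gaussian(f, a):
    """Gaussian size ratio of Eq. (2.5): Eq. (2.8) for the pom-pom divided by
    <R_g^2>_0 = d*(2f+a)*L/6 for a linear chain of the same total length."""
    pom = (f**2 * (2 + a) + f * (a**2 + a - 2.0/3.0) + a**3 / 6.0) / (2*f + a)**2
    chain = (2*f + a) / 6.0
    return pom / chain

# a = 0 reproduces the star ratio (3m - 2)/m^2 with m = 2f arms,
# and for large a the ratio approaches 1 (the backbone dominates):
for f in (2, 3, 4):
    m = 2 * f
    print(f, g_c_gaussian(f, 0), (3*m - 2) / m**2, g_c_gaussian(f, 1e4))
```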
Taking into account the excluded volume interaction governed by the coupling constant \(u_{0}\), we develop the first order of perturbation theory in the form: \[\langle R_{g}^{2}\rangle=\langle R_{g}^{2}\rangle_{0}\left[1-u_{0}A(f_{1},f_{2},a,d)\right]\,, \tag{2.9}\] with the expression for \(A(f_{1},f_{2},a,d)\) provided in the appendix. It is a well known practice in renormalization group approaches to evaluate \(\epsilon\)-expansions for the observable of interest. In order to get a reliable result on their basis, which can be compared with the data of computer simulations or experiments, we need to consider at least the second order of perturbation theory, which is usually a tricky task from the mathematical point of view. One of the possibilities is to make use of the Douglas-Freed (DF) approximation (details are provided in the appendix), which gives a good quantitative agreement with simulations [26, 27] and experiment [31]. To establish the influence of the excluded volume on the size ratio (2.5), we compare the results of the two approximations for a few fixed values of \(f\); the data are presented in figure 2. Note that the influence of the excluded volume for a range of values of \(a\) is rather small, and the general behaviour, tendencies and limits are much the same. In general, let us summarize the main features of the size ratio, as observed in figure 2: * for \(a=0\) (backbone of zero length), the ratio transforms into the well known result for a star with \(2f\) branches versus a linear chain [32]. * in the limit \(a\to\infty\), the ratio tends to \(1\). Indeed, in this case the presence of the side stars plays practically no role in the behaviour of the infinitely long central backbone chain. * in the case of \(a=1\) (backbone and side branches of equal lengths), we recover the results of our previous work [26]. Figure 2: (Colour online) Size ratio \(g_{c}\) for the symmetric case \(f_{1}=f_{2}=f\) as a function of the parameter \(a\). ### Universal characteristics: asphericity We continue our consideration by calculating the asphericity of the pom-pom structure. An analytical calculation of the asphericity as defined by equation (1.1) is rarely possible even in simple cases. Thus, it was proposed in [28] to consider a slightly different quantity: \[\overline{A_{d}}=\frac{1}{d(d-1)}\frac{\langle\text{Tr}\,\hat{\mathsf{S}}^{2}\rangle}{\langle(\text{Tr}\,\mathsf{S})^{2}\rangle}. \tag{2.10}\] This can be presented in terms of the components of the gyration tensor: \[\overline{A_{d}}=\frac{\langle S_{\alpha\alpha}S_{\alpha\alpha}\rangle+d\langle S_{\alpha\beta}S_{\alpha\beta}\rangle-\langle S_{\alpha\alpha}S_{\beta\beta}\rangle}{\langle S_{\alpha\alpha}S_{\alpha\alpha}\rangle+(d-1)\langle S_{\alpha\alpha}S_{\beta\beta}\rangle}, \tag{2.11}\] with \(S_{\alpha\beta}\) in terms of the continuous chain model given by: \[S_{\alpha\beta}=\frac{1}{2(LF+aL)^{2}}\sum_{i,j=0}^{F}\int\limits_{0}^{L_{j}}\int\limits_{0}^{L_{i}}(r_{i}^{\alpha}(s_{2})-r_{j}^{\alpha}(s_{1}))(r_{i}^{\beta}(s_{2})-r_{j}^{\beta}(s_{1}))\,\text{d}s_{1}\,\text{d}s_{2}.
\tag{2.12}\] In order to calculate the contributions into (2.11), we introduce an identity similar to the one utilized in the case of the gyration radius: \[\langle(r_{i}^{\alpha}(s_{2})-r_{j}^{\alpha}(s_{1}))(r_{i}^{\beta}(s_{2})-r_{j}^{\beta}(s_{1}))(r_{l}^{\alpha}(s_{4})-r_{m}^{\alpha}(s_{3}))(r_{l}^{\beta}(s_{4})-r_{m}^{\beta}(s_{3}))\rangle=\frac{\text{d}}{\text{d}k_{1}^{\alpha}}\frac{\text{d}}{\text{d}k_{1}^{\beta}}\frac{\text{d}}{\text{d}k_{2}^{\alpha}}\frac{\text{d}}{\text{d}k_{2}^{\beta}}\zeta(\mathbf{k}_{1},\mathbf{k}_{2})|_{\mathbf{k}_{1}=\mathbf{k}_{2}=0} \tag{2.13}\] with \(\zeta(\mathbf{k}_{1},\mathbf{k}_{2})=\langle\text{e}^{-\text{i}\mathbf{k}_{1}(\mathbf{r}_{i}(s_{2})-\mathbf{r}_{j}(s_{1}))}\,\text{e}^{-\text{i}\mathbf{k}_{2}(\mathbf{r}_{l}(s_{4})-\mathbf{r}_{m}(s_{3}))}\rangle\). The contributions into this function are calculated using the diagrams presented in figure 3 for the Gaussian case. Performing the calculations, we receive the expression: \[\overline{A_{d}}=\frac{2(2+d)C_{1}(f_{1},f_{2},a)}{5dC_{2}(f_{1},f_{2},a)+4C_{1}(f_{1},f_{2},a)}, \tag{2.14}\] with \(C_{1}(f_{1},f_{2},a)\) and \(C_{2}(f_{1},f_{2},a)\) given by: \[C_{1}(f_{1},f_{2},a)=f_{2}^{2}(15f_{2}-14)+f_{1}^{2}(15f_{1}-14)+f_{1}a(6a^{4}+15a^{3}f_{1}+20a^{2}+15a+30f_{1}-24)\] \[+a^{6}+f_{2}a(6a^{4}+15a^{3}f_{2}+20a^{2}+15a+30f_{2}-24)+f_{1}f_{2}(45f_{2}+45f_{1}-28\] \[+30a(2f_{1}+2f_{2}+2+(6f_{1}f_{2}-3f_{1}-3f_{2}+7)a+(f_{1}+f_{2}+2)a^{2}+a^{3})), \tag{2.15}\] \[C_{2}(f_{1},f_{2},a)=f_{2}^{2}(3f_{2}-2)^{2}+f_{1}^{2}(3f_{1}-2)^{2}+36f_{1}^{2}f_{2}(f_{1}-1)+6f_{1}^{2}(3f_{1}-2)a+3f_{1}^{2}(6f_{1}-1)a^{2}\] \[+4f_{1}(6f_{1}-1)a^{3}+3f_{1}(3f_{1}+2)a^{4}+6a^{5}f_{1}+36f_{1}f_{2}^{2}(f_{2}-1)+6f_{2}^{2}(3f_{2}-2)a+3f_{2}^{2}(6f_{2}-1)a^{2}\] \[+4f_{2}(6f_{2}-1)a^{3}+3f_{2}(3f_{2}+2)a^{4}+6a^{5}f_{2}+a^{6}+2f_{1}f_{2}(27f_{1}f_{2}+4+(18(f_{1}+f_{2})^{2}+6f_{1}\] \[+6f_{2}+6)a+(18f_{1}f_{2}+27f_{1}+27f_{2}+33)a^{2}+(9f_{1}+9f_{2}+42)a^{3}+15a^{4}). \tag{2.16}\] Figure 3: (Colour online) Diagrammatic presentation of contributions into \(\zeta(\mathbf{k}_{1},\mathbf{k}_{2})\) in the Gaussian approximation. The solid lines represent the polymer paths and arrows represent the so-called restriction points \(s_{1},\,s_{2},\,s_{3}\) and \(s_{4}\). Again, at \(f_{1}=f_{2}=1\) and any value of \(a\), we recover the result for the linear chain [29], whereas for \(f_{2}=0,f_{1}=f,a=0\), the asphericity of a single star is recovered [32]. Following the same idea as for the size ratio in the previous subsection, let us introduce the asphericity ratio \[p_{c}=\frac{\overline{A_{d\,\text{pom-pom}}}}{\overline{A_{d\,\text{chain}}}}, \tag{2.17}\] which is useful in comparing the shape properties of the complex polymer and the corresponding linear chain molecule. Since in the present work we are interested in the influence of the relative length of the backbone, we provide some results for fixed values of the branching parameters in figure 4. Since within the continuous chain model we are restricted to the calculation of (2.10) rather than (1.1), we can obtain only a qualitative description with this approach. And since the behaviour of the Gaussian model and the model with excluded volume are rather similar (see figure 2), we restrict ourselves to the Gaussian case in the calculation of the asphericity. An additional bonus is the possibility to compare the different averaging schemes with Wei's method described in the next section. Figure 4: (Colour online) Asphericity ratio \(p_{c}\) for the symmetric case \(f_{1}=f_{2}=f\) as a function of the parameter \(a\).
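The difference between the two averaging schemes (1.1) and (2.10) is easy to probe numerically. The following sketch estimates both quantities for ideal (random-walk) pom-pom configurations; it is an illustration under the Gaussian approximation only, not the analytical calculation performed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def pom_pom_walk(f, n_arm, n_bb, d=3):
    """Bead positions of one ideal pom-pom: a random-walk backbone of n_bb
    steps with f random-walk arms of n_arm steps grafted at each end."""
    bb = np.cumsum(rng.standard_normal((n_bb + 1, d)), axis=0)
    beads = [bb]
    for anchor in (bb[0], bb[-1]):
        for _ in range(f):
            beads.append(anchor + np.cumsum(rng.standard_normal((n_arm, d)), axis=0))
    return np.concatenate(beads)

def asphericities(configs, d=3):
    """<A_d> of Eq. (1.1) (average of the ratio) and bar{A}_d of Eq. (2.10)
    (ratio of the averages) over the same set of configurations."""
    num = den = 0.0
    ratios = []
    for r in configs:
        rc = r - r.mean(axis=0)
        S = rc.T @ rc / len(r)                  # gyration tensor
        Shat = S - np.trace(S) / d * np.eye(d)  # traceless part
        q, t2 = np.trace(Shat @ Shat), np.trace(S) ** 2
        ratios.append(q / t2)
        num += q
        den += t2
    c = 1.0 / (d * (d - 1))
    return c * np.mean(ratios), c * num / den

confs = [pom_pom_walk(f=3, n_arm=50, n_bb=100) for _ in range(2000)]
print(asphericities(confs))
```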
## 3 Wei's method and an analytical approach to the eigenvalue problem of the Kirchhoff matrix Any complex polymer structure can be described as a mathematical graph (network), where the individual monomers are presented as vertices, and the chemical bonds between monomers as links between them. The chemical functionalities of the monomers are then equal to the degrees of the corresponding vertices. Wei's method [33] is applicable to evaluating the size and shape properties of a polymer network of any topology, once the Kirchhoff matrix and its eigenvalues are defined. For a polymer structure with a total number of \(M\) monomers, the \(M\times M\) Kirchhoff matrix \(\mathbf{K}\) is defined as follows. Its diagonal elements \(K_{ii}\) equal the degree of vertex \(i\), whereas the non-diagonal elements \(K_{ij}\) equal \(-1\) when the vertices \(i\) and \(j\) are adjacent and \(0\) otherwise. Let \(\lambda_{2},\ldots,\lambda_{M}\) be the \((M-1)\) non-zero eigenvalues of the \(M\times M\) Kirchhoff matrix \[\mathbf{K}\mathbf{Q}_{i}=\lambda_{i}\mathbf{Q}_{i},\quad i=1\ldots M \tag{3.1}\] (\(\lambda_{1}\) is always \(0\)). The asphericity in \(d\) dimensions is then given by [33, 34]: \[\langle A_{d}\rangle=\frac{d(d+2)}{2}\int\limits_{0}^{\infty}\mathrm{d}y\sum\limits_{j=2}^{M}\frac{y^{3}}{(\lambda_{j}+y^{2})^{2}}\left[\prod\limits_{k=2}^{M}\frac{\lambda_{k}}{\lambda_{k}+y^{2}}\right]^{d/2}. \tag{3.2}\] To evaluate the expressions for the set of eigenvalues of the pom-pom structure, we follow the scheme developed in the original work by Zimm and Kilb [35] for the case of star-like polymers. We represent the components of the eigenvectors by continuous eigenfunctions. Let us introduce the notations \(Ql_{i}(s)\), \(Qr_{i}(s)\), with \(s=0,\ldots,L\), \(i=1,\ldots,f\), for the eigenfunctions corresponding to the "left-hand" and "right-hand" stars, and \(Qc(s)\), with \(s=0,\ldots,aL\), for the central backbone chain. The total number of eigenvalues is thus given by \(M=(2f+a)L\). Taking into account the structure of the Kirchhoff matrix of the considered structure, we may write the eigenvalue equations for the internal points of each branch of the side stars in the form: \[\mathbf{K}Ql_{i}(s)=2Ql_{i}(s)-Ql_{i}(s-\delta)-Ql_{i}(s+\delta),\] \[\mathbf{K}Qr_{i}(s)=2Qr_{i}(s)-Qr_{i}(s-\delta)-Qr_{i}(s+\delta),\] for the end points: \[\mathbf{K}Ql_{i}(L)=Ql_{i}(L)-Ql_{i}(L-\delta),\] \[\mathbf{K}Qr_{i}(L)=Qr_{i}(L)-Qr_{i}(L-\delta),\] and for the central branching points of the two side stars: \[\mathbf{K}Ql_{i}(0)=\sum\limits_{j=1}^{f}Ql_{j}(0)+Qc(0)-\sum\limits_{j=1}^{f}Ql_{j}(\delta)-Qc(\delta),\] \[\mathbf{K}Qr_{i}(0)=\sum\limits_{j=1}^{f}Qr_{j}(0)+Qc(0)-\sum\limits_{j=1}^{f}Qr_{j}(\delta)-Qc(\delta),\] where \(i=1,\ldots,f\) and \(\delta\) is a small parameter. The eigenvalue problem is thus reduced to the general equation \[-\delta^{2}\frac{\mathrm{d}^{2}Q(s)}{\mathrm{d}s^{2}}=\lambda Q(s), \tag{3.3}\] whose solution can be presented in the form \[Q(s)=A_{1}\cos(ks)+A_{2}\sin(ks),\;\;k=\sqrt{\lambda}. \tag{3.4}\] For convenience of the following evaluation, let us consider the middle of the central backbone as the reference point, so that the branching points have coordinates \(aL/2\) and \(-aL/2\), and the end points \((a+2)L/2\) and \(-(a+2)L/2\), correspondingly (see figure 5). The boundary conditions are thus imposed as: 1. \(\frac{\mathrm{d}Qr_{i}(s)}{\mathrm{d}s}|_{s=(a+2)L/2}=0,\ \ \frac{\mathrm{d}Ql_{i}(s)}{\mathrm{d}s}|_{s=-(a+2)L/2}=0,\) 2.
\(Qc(aL/2)=Qr_{i}(aL/2)\), \(Qc(-aL/2)=Ql_{i}(-aL/2)\), 3. \(\sum\limits_{i=1}^{f}\frac{\mathrm{d}Qr_{i}(s)}{\mathrm{d}s}\big{|}_{s=aL/2}=\frac{\mathrm{d}Qc(s)}{\mathrm{d}s}\big{|}_{s=aL/2},\ \ \sum\limits_{i=1}^{f}\frac{\mathrm{d}Ql_{i}(s)}{\mathrm{d}s}\big{|}_{s=-aL/2}=\frac{\mathrm{d}Qc(s)}{\mathrm{d}s}\big{|}_{s=-aL/2}.\) **Antisymmetric eigenfunctions** Let us start with the antisymmetric solutions with \(Qc(s)=-Qc(-s)\), \(Qr_{i}((a+2)L/2+s)=-Ql_{i}(-(a+2)L/2-s)\). We use the following ansatz: \[Qc(s)=\sin(ks),\] \[Ql_{i}(s)=-B\cos(k((a+2)L/2+s)),\,i=1,\ldots,f,\] \[Qr_{i}(s)=B\cos(k((a+2)L/2-s)),\,i=1,\ldots,f. \tag{3.5}\] Solving the boundary conditions we find \[B\cos(kL)=\sin(k\ aL/2), \tag{3.6}\] \[fB\sin(kL)=\cos(k\ aL/2). \tag{3.7}\] These equations are solved by \[f\tan(kL)\tan(k\ aL/2)=1. \tag{3.8}\] In the case when \(a=1\), equation (3.8) is easily solved giving two branches of solutions: \[k_{i}=\frac{2}{L}\left(\arctan\frac{1}{\sqrt{2f+1}}+n\pi\right),\quad i=1,\ldots,L/2, \tag{3.9}\] \[k_{i}=-\frac{2}{L}\left(\arctan\frac{1}{\sqrt{2f+1}}+n\pi\right),\quad i=0,\ldots,L/2, \tag{3.10}\] Figure 5: Schematic presentation of possible vibration modes in the pom-pom molecule: antisymmetric (1), symmetric (2), independent antisymmetric modes of the side stars (3). thus resulting in total in \(L\) eigenvalues. Otherwise, there are \(a\) branches in the case of even \(a\) and \((a+1)\) branches for odd \(a\). One more family of solutions of equation (3.7) is obtained from the condition that the derivatives of the eigenfunctions at the branching points equal zero, thus leading to: \[\cos(k\;aL/2)=0\to k=\frac{(2n+1)\pi}{aL},\quad n=0,1,2,3\ldots, \tag{3.11}\] \[\sin(kL)=0\to k=\frac{n\pi}{L},\quad n=1,2,3\ldots. \tag{3.12}\] The simultaneous solution of these equations results in \[k_{n}=\frac{(2n+1)\pi}{aL}, \tag{3.13}\] with \(n\) values obeying the condition that \((2n+1)/a\) is an integer; note that this condition can hold only for odd \(a\). This gives an additional \(L/2\) eigenvalues. **Symmetric eigenfunctions** Here, we are looking for the symmetric solutions with \(Qc(s)=Qc(-s)\), \(Qr_{i}((a+2)L/2+s)=Ql_{i}(-(a+2)L/2-s)\). We use the following ansatz: \[Qc(s)=\cos(ks),\] \[Ql_{i}(s)=B\cos(k((a+2)L/2+s)),\quad i=1,\ldots,f,\] \[Qr_{i}(s)=B\cos(k((a+2)L/2-s)),\quad i=1,\ldots,f. \tag{3.14}\] Solving the boundary conditions we find \[B\cos(kL)=\cos(k\;aL/2), \tag{3.15}\] \[fB\sin(kL)=-\sin(k\;aL/2). \tag{3.16}\] These equations are solved by \[f\frac{\tan(k\,L)}{\tan(k\;aL/2)}=-1. \tag{3.17}\] In the case when \(a=1\), equation (3.17) is easily solved giving two branches of solutions: \[k_{i}=\frac{2}{L}\left(\arctan\sqrt{2f+1}+n\pi\right),\quad i=1,\ldots,L/2, \tag{3.18}\] \[k_{i}=-\frac{2}{L}\left(\arctan\sqrt{2f+1}+n\pi\right),\quad i=0,\ldots,L/2, \tag{3.19}\] thus giving \(L\) eigenvalues. Otherwise, there are \(a\) branches in the case of even \(a\) and \(a+1\) branches for odd \(a\). One more family of solutions of equation (3.16) is obtained from the condition that the derivatives of the eigenfunctions at the branching points equal zero, thus leading to: \[\sin(k\;aL/2)=0\to k=\frac{2n\pi}{aL},\quad n=1,2,3\ldots, \tag{3.20}\] \[\sin(k\,L)=0\to k=\frac{n\pi}{L},\quad n=1,2,3\ldots. \tag{3.21}\] The simultaneous solution of these equations results in \[k_{n}=\frac{2n\pi}{aL}, \tag{3.22}\] with \(n\) values obeying the condition that \(2n/a\) is an integer, thus giving an additional \(L/2\) eigenvalues.
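These mode equations admit a hedged numerical cross-check: one can diagonalize the Kirchhoff matrix of a finely discretized pom-pom directly and compare its smallest non-zero eigenvalue with \(k^{2}\) for the lowest root of equation (3.8). The sketch below does this for \(a=2\); small deviations of order \(k^{4}\) coming from the discretization are expected.

```python
import numpy as np
from scipy.optimize import brentq

def kirchhoff(f, L, a):
    """Kirchhoff matrix of a discretized pom-pom: a backbone of a*L bonds
    with f arms of L bonds grafted at each of its two end vertices."""
    aL = a * L
    edges = [(i, i + 1) for i in range(aL)]   # backbone vertices 0..aL
    nxt = aL + 1
    for anchor in (0, aL):                    # the two branching points
        for _ in range(f):
            prev = anchor
            for _ in range(L):
                edges.append((prev, nxt))
                prev = nxt
                nxt += 1
    K = np.zeros((nxt, nxt))
    for i, j in edges:
        K[i, i] += 1; K[j, j] += 1
        K[i, j] -= 1; K[j, i] -= 1
    return K

f, L, a = 3, 100, 2
lam = np.sort(np.linalg.eigvalsh(kirchhoff(f, L, a)))[1:]   # drop lambda_1 = 0

anti = lambda k: f * np.tan(k * L) * np.tan(k * a * L / 2) - 1   # Eq. (3.8)
k1 = brentq(anti, 1e-9, min(np.pi / (2 * L), np.pi / (a * L)) - 1e-9)
print(k1**2, lam[0])   # continuum lambda = k^2 vs. the discrete spectrum
```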
**Independent asymmetric eigenfunctions of two side stars** There are additional functions, having nodes at the branching points of either the side stars, when only two branches of the corresponding star are excited in antisymmetric manner. In this way, the corresponding eigenfunctions coincide with those of star polymer derived in [35]. There are \(2(f-1)\) independent eigenfunctions (with degenerate values of \(k\)) given by: \[Q(s)=\sin(k(s-aL/2)),Q(s)=-\sin(k(s-aL/2)). \tag{3.23}\] They correspond to \[k=\frac{(2n+1)\pi}{2L},n=0,1,2,\ldots, \tag{3.24}\] giving \(2(f-1)L\) eigenvalues. Thus, the complete set of eigenvalues of Kirchhoff matrix of pom-pom polymer structures is given as \(\lambda_{i}=k_{i}^{2},\ i=1,\ \ldots,(2f+a)L\), with \((a+1)L/2\) values of \(k\) obtained on the base of equation (3.8), \(L/2\) of equation (3.13), \((a+1)L/2\) values of equation (3.17), \(L/2\) of equation (3.22) and \(2(f-1)L\) values given by equation (3.24). An estimate for asphericity ratio based on substituting this set of eigenvalues in equation (3.2) is presented in figure 6. ## 4 Results and discussion The aim of the present study was to describe the impact of the backbone length on the universal properties of the pom-pom polymer. For those purposes, we considered the size and shape properties, such as size ratio and asphericity. Our results for the size ratio as defined by equation (2.5) are received in the framework of the continuous chain model with excluded volume interaction accounted for through the usage of the Douglas-Freed approximation. The results at some fixed values of the branching parameter \(f\) are plotted in figure 2. Apart from the very small differences in values between the Gaussain polymers and polymers with excluded volume interactions, it is interesting to note that the side branches become unimportant rather quickly with increasing the parameter \(a\) (the ratio tends to 1), reflecting the increasing influence of the backbone on the pom-pom characteristic size. The asphericity for the case of ideal Gaussain chain is calculated using two different averaging schemes within the frameworks of two different approaches. For the continuous chain model as was mentioned above we used the averaging in as given by (2.10) and for the Wei's method by (1.1). Due to the different averaging definitions, an absolute value comparison is impossible, thus we consider a ratio: \[\alpha(a)=\frac{A_{d}^{\text{pom-pom}}}{A_{d}^{\text{chain}}}, \tag{4.1}\] Figure 6: Asphericity ratio \(p_{c}\) for symmetric case \(f_{1}=f_{2}=f\) as function of the parameter \(a\) based on on substituting the analytically derived set of eigenvalues in equation (3.2). with \(A_{d}\) being ether \(\overline{A_{d}}\) or \(\langle A_{d}\rangle\). This allows us to provide a relative comparison of not only the averaging methods but also the topologies. In both calculation schemes (see figures 4, 6) we can see two distinct regions: at small values of parameter \(a\), the ratio is smaller than 1 (the shape of pom-pom structure is more symmetric than that of a chain), whereas at \(a\) larger than some critical value, the situation is the opposite, while it tends to 1 with \(a\rightarrow\infty\). It is interesting to note that for the large values of \(a\), the asphericity of pom-pom structure is only slightly (under 10%) larger than the corresponding value of the linear chain in both averaging schemes. This indicates a non-trivial influence of the side arms even in the cases where \(g_{c}\) is around 1. 
A larger value of the asphericity corresponds to the elongation of the macromolecule under the influence of the side arms. Note that the calculations of the shape characteristics were conducted only for the ideal Gaussian chain, which can be treated as an approximation of the so-called \(\Theta\)-solution [23], while the behaviour of polymers in good solvents (taking the excluded volume effect into account) is expected to be qualitatively similar. ## 5 Conclusions This work is a continuation of a cycle of our studies devoted to the analysis of the universal conformational properties of pom-pom polymers. Our previous works [26, 27] were mainly concentrated on the influence of the branching parameters of the side stars on the size and shape properties of the macromolecule, whereas here we finally address the question of the impact of the backbone length. Within the frames of the continuous chain model, we evaluated estimates for the size ratio \(g_{c}\) that compares the effective size of the pom-pom topology in a solvent to that of a linear chain with the same molecular mass. We find that as the length of the backbone increases (with an increase of the parameter \(a\)), the ratio monotonously tends toward 1; thus, for large values \(a>5\), the characteristic sizes of the chain and the pom-pom are rather similar. The analysis of the shape characteristics was conducted only for the ideal Gaussian case, because in this case it was possible to obtain exact results for both approaches used: the continuous chain model framework with application of the path integration method, and Wei's method based on the evaluation of the set of eigenvalues of the Kirchhoff matrix of the graph corresponding to the topology of the complex polymer under consideration. In both cases we find that the pom-pom molecules are more elongated and asymmetric as compared to the chain-like topology when the backbone length considerably exceeds the length of the side branches. These differences may play an important role in the rheological properties of the polymer solution. ## Acknowledgements K.H. would like to acknowledge the support from the National Science Center, Poland (Grant No. 2018/30/E/ST3/00428). V.B. is grateful for support from the U.S. National Academy of Sciences (NAS) and the Polish Academy of Sciences (PAS) to scientists from Ukraine. ## Appendix Here, we give the expressions for the contributions into the partition function, taking into account the excluded volume interaction. The corresponding diagrammatic presentations are given in figure 7. Diagram \(Z_{1}\) is accounted for with prefactor \(2f\), diagram \(Z_{2}\) comes with \(f(f-1)\), diagram \(Z_{3}\) with \(f^{2}\), diagram \(Z_{5}\) with \(2f\), and diagram \(Z_{4}\) enters only once. Figure 7: (Colour online) Diagrammatic presentations of contributions into the partition function up to the first order approximation in the coupling constant \(u\). The solid lines are schematic presentations of polymer paths and the dashed line represents a two-monomer excluded volume interaction.
The analytical expressions read: \[Z_{1}=\frac{u(2\pi)^{-d/2}L^{2-d/2}}{(1-d/2)(2-d/2)},\] (A.1) \[Z_{2}=\frac{u(2\pi)^{-d/2}L^{2-d/2}(2^{2-d/2}-2)}{(1-d/2)(2-d/2)},\] (A.2) \[Z_{3}=\frac{u(2\pi)^{-d/2}L^{2-d/2}}{(1-d/2)(2-d/2)}\left[(2L+L_{c})^{2-d/2}-2(L+L_{c})^{2-d/2}+L_{c}^{2-d/2}\right]\,,\] (A.3) \[Z_{4}=\frac{u(2\pi)^{-d/2}L_{c}^{2-d/2}}{(1-d/2)(2-d/2)},\] (A.4) \[Z_{5}=\frac{u(2\pi)^{-d/2}L^{2-d/2}}{(1-d/2)(2-d/2)}\left[(L+L_{c})^{2-d/2}-2(L)^{2-d/2}-L_{c}^{2-d/2}\right].\] (A.5) Introducing the dimensionless coupling constant \(u_{0}=u(2\pi)^{-d/2}L^{2-d/2}\), and taking into account that \(L_{c}=aL\), we express the partition function in the one-loop approximation as: \[Z_{f,f}^{\text{pom-pom}}=1-\frac{u_{0}}{(d-2)(d-4)}\left[4(2+a)^{2-\frac{d}{2}}f_{1}f_{2}+4a^{2-\frac{d}{2}}(f_{2}-1)(f_{1}-1)\right.\] \[\left.-4(2f_{1}f_{2}-f_{1}-f_{2})(a+1)^{2-\frac{d}{2}}+4(f_{1}^{2}+f_{2}^{2}-f_{1}-f_{2})(2^{1-\frac{d}{2}}-1)\right].\] (A.6) This expression is used in the calculations of all the averaged values that follow below, with the averaging defined as: \[\langle(\dots)\rangle=\frac{1}{Z_{f,f}^{\text{pom-pom}}}\prod_{i=1}^{f}\prod_{j=1}^{f}\int\;D\mathbf{r}(s)\;\delta(\mathbf{r_{i}}(0)-\mathbf{r_{0}}(0))\delta(\mathbf{r_{j}}(0)-\mathbf{r_{0}}(aL))\,\text{e}^{-H}(\dots).\] (A.7) In order to calculate the contribution to the gyration radius in the one-loop approximation, we have to consider all possible combinations of the diagrams in figures 7 and 1. In general, the expression can be presented as: \[\langle R_{g}^{2}\rangle=\langle R_{g}^{2}\rangle_{0}\left(1-u_{0}A(f_{1},f_{2},a,d)\right),\] (A.8) with \(A(f_{1},f_{2},a,d)\) given by the expression: \[A(f_{1},f_{2},a,d)=-2((f_{1}+f_{2})(3f_{1}+3f_{2}-2)+(3(2f_{1}f_{2}+f_{1}+f_{2}))a+(3(f_{1}+f_{2}))a^{2}+a^{3})^{-1}\] \[\times\left(\frac{96(f_{1}+f_{2})(f_{1}-1)(f_{2}-1)a^{3-\frac{d}{2}}}{(d-4)(d-2)d(d-6)}+\frac{12f_{2}f_{1}(f_{1}-1)(f_{2}-1)a^{3-\frac{d}{2}}}{d(d-2)}\right.\] \[+\frac{4a^{4-\frac{d}{2}}(f_{2}-1)(f_{1}-1)(f_{1}+f_{2})}{(d-6)d(d-8)(d-2)(d-4)}\times(d^{3}-18d^{2}+80d-192)\] \[+\frac{a^{5-\frac{d}{2}}(f_{2}-1)(f_{1}-1)(d^{2}-26d+136)}{(d-10)(d-6)(d-8)(d-4)}-\frac{12(a+1)^{1-\frac{d}{2}}(f_{2}-1)(f_{1}-1)2f_{1}f_{2}}{d(d-2)}\] \[+\frac{12(a+2)^{1-\frac{d}{2}}(f_{2}-1)(f_{1}-1)(4f_{1}f_{2}-f_{2})}{d(d-2)}-\frac{12(a+1)^{3-\frac{d}{2}}(2f_{1}^{2}f_{2}^{2}+f_{1}^{2}+f_{2}^{2}-f_{1}-f_{2})}{d(d-2)}\] \[-\frac{12(a+1)^{3-\frac{d}{2}}(4d^{2}-40d+80)f_{2}f_{1}}{((d-6)d(d-2)(d-4))}+\frac{12(a+1)^{3-\frac{d}{2}}f_{1}f_{2}(3d^{2}-30d+64)(f_{1}+f_{2})}{(d-6)d(d-2)(d-4)}\] \[-\frac{4(a+1)^{4-\frac{d}{2}}(f_{1}+f_{2}-1)(2f_{1}f_{2}-f_{1}-f_{2})}{(d-6)d(d-8)(d-2)(d-4)}(d^{3}-18d^{2}+80d-192)\]
\[-\frac{(a+1)^{4-\frac{d}{2}}(d^{2}-26d+136)(2f_{1}f_{2}-f_{1}-f_{2})}{(d-10)(d-6)(d-8)(d-8)(d-4)}+\frac{(2+a)^{1-\frac{d}{2}}f_{1}f_{2}(48d-480)(f_{2}-1)(f_{1}-1)}{(d-10)d(d-2)}\] \[-\frac{12(2+a)^{2-\frac{d}{2}}f_{1}f_{2}(4f_{1}f_{2}-5f_{1}-5f_{2}+6)}{d(d-2)}+\frac{12(2+a)^{3-\frac{d}{2}}f_{1}f_{2}(f_{1}f_{2}-2f_{1}-2f_{2}+3)}{d(d-2)}\] \[+\frac{4(2+a)^{4-\frac{d}{2}}f_{1}f_{2}(f_{1}+f_{2}-2)}{(d-6)d(d-8)(d-2)(d-4)}\times(d^{3}-18d^{2}+80d-192)+\frac{(2+a)^{5-\frac{d}{2}}f_{1}f_{2}(d^{2}-26d+136)}{(d-10)(d-6)(d-8)(d-4)}\] \[+\frac{2^{3-\frac{d}{2}}(f_{1}^{2}+f_{2}^{2}-f_{1}-f_{2})}{(d-10)(d-6)d(d-8)(d-2)(d-4)}((d-10)(d^{3}-18d^{2}+80d-192)(a+f_{1}+f_{2})-3840)\] \[+\frac{(3(f_{1}^{2}+f_{2}^{2}-f_{1}-f_{2}))}{(d-10)(d-6)d(d-8)(d-8)(d-2)(d-4)}\left(d^{4}-28d^{3}+348d^{2}-2384d+7680\right)\] \[-\frac{4(d^{3}-18d^{2}+128d-576)}{(d-2)d(d-6)(d-4)(d-8)}(f_{1}f_{2}(f_{1}+f_{2}-2)+(f_{1}^{2}+f_{2}^{2}-f_{1}-f_{2})a)\] \[-\frac{4(d-12)(d^{2}-6d+32)}{(d-2)d(d-6)(d-4)(d-8)}\left(f_{2}^{2}(f_{2}-1)+f_{1}^{2}(f_{1}-1)\right)\Big{)}\.\] (A.9) In order to obtain the value of the size ratio (2.5) from the above expression, one can follow one of two strategies: to consider an \(\epsilon=4-d\) expansion using the fixed point (2.4), or to use the Douglas-Freed approximation [31]. The first strategy provides only qualitative values for the ratio, and in order to get quantitative values one needs to consider higher orders in \(u_{0}\). The second method allows one to obtain values that are comparable with experimental [31] and numerical [26, 27] results.
2310.00765
Semi-Generalized co-Bassian Groups
As a common non-trivial generalization of the notion of a generalized co-Bassian group, recently defined by the third author, we introduce the notion of a semi-generalized co-Bassian group and initiate its comprehensive study. Specifically, we give a complete characterization of these groups in the cases of p-torsion groups and groups of finite torsion-free rank, showing that these groups can be completely determined in terms of generalized finite p-ranks and also depend on their quotients modulo the maximal torsion subgroup. Surprisingly, for p-primary groups, the concept of a semi-generalized co-Bassian group is closely related to that of a generalized co-Bassian group.
Andrey R. Chekhlov, Peter V. Danchev, Patrick W. Keef
2023-10-01T19:09:08Z
http://arxiv.org/abs/2310.00765v1
# Semi-generalized co-Bassian groups ###### Abstract. As a common non-trivial generalization of the notion of a generalized co-Bassian group, recently defined by the third author, we introduce the notion of a _semi-generalized co-Bassian_ group and initiate its comprehensive study. Specifically, we give a complete characterization of these groups in the cases of \(p\)-torsion groups and groups of finite torsion-free rank by showing that these groups can be completely determined in terms of generalized finite \(p\)-ranks and also depend on their quotients modulo the maximal torsion subgroup. Surprisingly, for \(p\)-primary groups, the concept of a semi-generalized co-Bassian group is closely related to that of a generalized co-Bassian group. Key words and phrases:Bassian groups, (generalized) Bassian groups, (generalized) co-Bassian groups 2010 Mathematics Subject Classification: 20K10, 20K20, 20K30 ## 1. Introduction and Motivation Throughout the rest of the paper, unless specified otherwise, all groups will be additively written Abelian groups. We will primarily use the notation and terminology of [6, 7, 8], but we will somewhat follow those from [10] and [9] as well. We just recall that an arbitrary subgroup \(H\) of a group \(G\) is _essential_ in \(G\) if, for any non-zero subgroup \(S\) of \(G\), the intersection between \(H\) and \(S\) is also non-zero. It is an elementary exercise to see that every subgroup is essential as a subgroup of itself and thus, in particular, the zero subgroup \(\{0\}\) is essential in itself too. We begin with a brief review of some of the most important concepts which motivate the writing of the present article. Mimicking [1], a group \(G\) is said to be _Bassian_ if the existence of an injective homomorphism \(\phi:G\to G/N\) for some subgroup \(N\) of \(G\) implies that \(N=\{0\}\). More generally, imitating [2], if the existence of such an injection \(\phi\) implies that \(N\) is a direct summand of \(G\), then \(G\) is said to be _generalized Bassian_. Note that Bassian groups were completely characterized in [1]. A crucial example of a generalized Bassian group which is _not_ Bassian is the infinite _elementary \(p\)-group_, that is, an infinite direct sum of copies of the cyclic group \(\mathbb{Z}_{p}\) for some fixed prime \(p\). Unfortunately, the class of generalized Bassian groups is _not_ fully characterized, due to the problem, unsettled at this stage, of whether or _not_ a subgroup of a generalized Bassian group remains so. In this vein, for some other interesting properties of subgroups of (generalized) Bassian groups, we refer the interested reader to [4]. Taking into account all of the information we have so far, and in order to refine the group property of being generalized Bassian, we defined in [3] a group \(G\) to be _semi-generalized Bassian_ if, for any of its subgroups \(N\), the existence of an injective homomorphism \(\phi:G\to G/N\) implies that \(N\) is essential in a direct summand of \(G\). Clearly, any generalized Bassian group is semi-generalized Bassian. In contrast to the case of generalized Bassian groups, it is quite curious that semi-generalized Bassian groups were totally classified in the cases where \(G\) is _not_ truly mixed. In fact, we succeeded in classifying torsion, torsion-free and splitting mixed semi-generalized Bassian groups.
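Before proceeding, it may help to see the Bassian property on the simplest possible examples. The sketch below is only an illustration for finite abelian \(p\)-groups, written in terms of their lists of exponents; it relies on the classical fact that one such group embeds in another precisely when its Young diagram of exponents fits inside the other's (equivalently, when all of its \(p^{k}\)-ranks are dominated), and it makes visible on invariants why every finite group is Bassian.

```python
def embeds(mu, lam):
    """Whether the finite abelian p-group with exponent list mu (a direct sum
    of Z_{p^{m_i}}) embeds in the one with exponent list lam: classical
    criterion via containment of the sorted exponent lists (Young diagrams)."""
    mu, lam = sorted(mu, reverse=True), sorted(lam, reverse=True)
    return len(mu) <= len(lam) and all(m <= l for m, l in zip(mu, lam))

# G = Z_p + Z_{p^2} has exponents [2, 1]; its proper quotients have types
# [2], [1, 1], [1] and [] -- and G embeds in none of them, so no injective
# homomorphism G -> G/N exists unless N = 0: G is Bassian (as is every
# finite group, since an injection would force |G| <= |G|/|N|).
for q in ([2], [1, 1], [1], []):
    print(q, embeds([2, 1], q))    # False in every case
```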
Reciprocally, in [11] were defined the so-called _(generalized) co-Bassian_ groups, in the following manner: a group \(G\) is termed co-Bassian if, for all subgroups \(H\leq G\), whenever \(\varphi:G\to G/H\) is an injective homomorphism, then \(\varphi(G)=G/H\). More generally, if \(\varphi(G)\) is always a direct summand of \(G/H\), the group \(G\) is termed generalized co-Bassian. Fortunately, these two classes of groups were completely described. Furthermore, expanding generalized co-Bassian groups in the way of semi-generalized Bassian groups, we shall say that a group \(G\) is _semi-generalized co-Bassian_ if, for all \(H\leq G\), the existence of an injection \(\varphi:G\to G/H\) forces \(\varphi(G)\) to be essential in a direct summand of \(G/H\). Our motivating goal is to explore the structure of the so-defined semi-generalized co-Bassian groups and to show that their full description is rather more complicated than in the case of generalized Bassian groups. The essential difficulty is the situation of such groups having infinite torsion-free rank and, especially, the lack of a workable idea and approach for proving, or even disproving, that if \(G\) is a group whose torsion subgroup \(T\) is a direct sum of a divisible group and an elementary group such that the quotient \(G/T\) is divisible, then \(G\) is a semi-generalized co-Bassian group. The rest of our work is structured thus: in the present first section, we give a short retrospection of the basic notions. In the subsequent second section, we formulate our chief results and provide their complete proofs. Precisely, the paper's main goal is to characterize in detail semi-generalized co-Bassian \(p\)-groups as well as semi-generalized co-Bassian groups having finite torsion-free rank. Concretely, the most important of these results read as follows: _A \(p\)-group \(G\) is semi-generalized co-Bassian if, and only if, its subgroup \(pG\) has generalized finite \(p\)-rank_ (see Theorem 2.7); _Suppose \(G\) is a group of finite torsion-free rank. Then, \(G\) is semi-generalized co-Bassian if, and only if, for each prime \(p\), the \(p\)-component \(T_{p}\) of the torsion subgroup is a semi-generalized co-Bassian group such that either (a) \(T_{p}\) possesses finite \(p\)-rank, or (b) \(T_{p}\) is divisible, or (c) the quotient-group \(G/T\) is \(p\)-divisible_ (see Theorem 2.9). We also examine the case of groups with infinite torsion-free rank, proving that _if a group \(G\) is semi-generalized co-Bassian of infinite torsion-free rank, then the factor-group \(G/T\) is divisible and the torsion part \(T\) is a direct sum of a divisible group and an elementary group_ (see Proposition 2.11). Finally, we end our work with certain comments and also state several open problems which quite logically arise and, hopefully, will stimulate further exploration of the subject. ## 2. Main Results and Their Proofs We first begin with the following elementary but useful observation, which was _not_ stated and proved in [11], and which shows to what extent the classes of co-Bassian and generalized co-Bassian groups differ from each other. **Proposition 2.1**.: _If \(G\) is a Hopfian group, then \(G\) is generalized co-Bassian if, and only if, \(G\) is co-Bassian._ Proof.: One direction being trivial, we deal with the opposite one. In fact, assume by way of contradiction that \(G\) is generalized co-Bassian but not co-Bassian. Then, there exist a subgroup \(N\) of \(G\) and an embedding \(f:G\to G/N\) with \(f(G)\neq G/N\).
Since \(G\) is generalized co-Bassian, one has that \(f(G)\) is a direct summand of \(G/N\), say \(G/N=f(G)\oplus H/N\) for some \(H/N\neq\{\overline{0}\}\). Now, there is an isomorphism \[G/H\cong(G/N)/(H/N)\cong f(G),\] say \(\varphi:G/H\to f(G)\) and, moreover, if \(\pi:G\to G/H\) is the canonical epimorphism, then the composition \(\varphi^{-1}\pi\) is obviously an epimorphism of \(G\) which is not an isomorphism, since \(\ker(\varphi^{-1}\pi)=H\neq\{0\}\), contrary to the condition that \(G\) is Hopfian. So, \(G\) is really co-Bassian, as required. Our next incidental assertion curiously states as follows. **Proposition 2.2**.: _A free group \(G\) is semi-generalized co-Bassian if, and only if, \(G\) is Bassian._ Proof.: If we assume for a moment that \(G\) has infinite rank, then there exists a subgroup \(N\leq G\) such that the quotient-group \(G/N\) is a torsion-free indecomposable group with \(\operatorname{rank}(G)=\operatorname{rank}(G/N)\). Thus, there is an injection \(f:G\to G/N\) such that \(f(G)\) is _not_ essential in \(G/N\), whence for the purification of \(f(G)\) we have \(\langle f(G)\rangle_{*}\neq G/N\) and, consequently, in view of the indecomposability of \(G/N\), the group \(G\) is _not_ semi-generalized co-Bassian, as expected. Therefore, \(G\) has finite rank and hence, applying [1], the group \(G\) must be Bassian, as claimed. We continue our work with two critical constructions as follows: **Example 2.3**.: (i) Every divisible group is semi-generalized co-Bassian. (ii) The group \(G=\bigoplus_{\alpha}\mathbb{Z}_{p^{n}}\) is semi-generalized co-Bassian for all ordinals \(\alpha\) and integers \(n\geq 1\). Proof.: (i) This follows automatically, because any factor-group of a divisible group is also divisible and, for each divisible group \(D\), any of its subgroups is essential in some direct summand of \(D\) (see [7, Theorem 24.4]), as needed. (ii) Since, for every injection \(f:G\to G/H\), the image \(f(G)\) is a direct sum of cyclic groups of order exactly \(p^{n}\) and \(G/H\) is a direct sum of cyclic groups of order \(\leq p^{n}\), we can infer that \(f(G)\) is a direct summand of \(G/H\) (see [7, Proposition 27.1]), as required. The following technicality is helpful for our presentation below. **Lemma 2.4**.: _The class of semi-generalized co-Bassian groups is closed under taking direct summands._ Proof.: Suppose \(G\) is a semi-generalized co-Bassian group and \(H\) is a summand of \(G\); say \(G=H\oplus G^{\prime}\) for some \(G^{\prime}\leq G\). Assume now \(N\) is a subgroup of \(H\) and \(f:H\to H/N:=\overline{H}\) is an injective homomorphism. Extending \(f\) to \(f^{\prime}:G\to G/N:=\overline{G}\cong\overline{H}\oplus G^{\prime}\) by letting it equal the identity on \(G^{\prime}\), it remains an injective homomorphism. We, therefore, have \(\overline{G}=K\oplus A\), with \(A\) containing \(f^{\prime}(G)=f(H)\oplus G^{\prime}\) as an essential subgroup. It readily follows that \(A\cap\overline{H}\subseteq\overline{H}\) contains \(f(H)\) as an essential subgroup. And since \(A=(A\cap\overline{H})\oplus G^{\prime}\), we can also conclude that \(A\cap\overline{H}\) is a summand of \(\overline{G}\), and hence of \(\overline{H}\), as required. We now continue with a series of statements necessary for the successful establishment of our two main results presented in what follows. Before doing that, we need to review some notation used in [11]: as above, when \(G\) is some group, \(T\) will always be its torsion subgroup and \(T_{p}\) the \(p\)-component of \(T\).
If we have occasion to refer to some other group \(A\), we will denote its torsion subgroup by \(T_{A}\). For the group \(G\), suppose \(\phi\) and \(\pi\) are, respectively, injective and surjective homomorphisms \(G\to\overline{G}\); when necessary, we may assume \(\overline{G}=G/N\) for some \(N\leq G\) and that \(\pi\) is the usual epimorphism. If \(A\) is any subgroup of \(G\), we will let \(\hat{A}=\phi(A)\) and \(\overline{A}=\pi(A)\). **Lemma 2.5**.: _If \(p\) is a prime and \(T_{p}\) is a reduced unbounded \(p\)-group, then \(G\) is not a semi-generalized co-Bassian group. In particular, any reduced unbounded \(p\)-group \(G\) is not a semi-generalized co-Bassian group._ Proof.: Suppose \(T_{p}=\langle b\rangle\oplus T_{p}^{\prime}\), where \(b\) has order at least \(p^{2}\), and \(G=\langle b\rangle\oplus G^{\prime}\), where \(T_{p}^{\prime}\subseteq G^{\prime}\). If \(D\) is a divisible hull for \(T_{p}^{\prime}\) and \(Z:=\mathbb{Z}_{p^{\infty}}\), then there is a surjective homomorphism \(T_{p}^{\prime}\to Z\oplus D\) which extends to a surjective homomorphism \(\tau:G^{\prime}\to Z\oplus D\). This, in turn, extends to a surjective homomorphism \[\pi:G=\langle b\rangle\oplus G^{\prime}\to\overline{G}:=\langle b\rangle \oplus Z\oplus D\oplus(G^{\prime}/T_{p}^{\prime})\] by setting it equal to the identity on \(\langle b\rangle\) and, for \(x\in G^{\prime}\), letting \(\pi(x)=\tau(x)+(x+T_{p}^{\prime})\). Let \(c\in Z\) have the same order as \(b\).
It is readily checked that \(\phi\) is injective. If, however, \(\hat{G}\) were an essential subgroup of a summand of \(\overline{G}\), it would easily follow that \(\langle\phi(b)\rangle\) would be an essential subgroup of a summand of \((\langle b\rangle/N)\oplus\mathbb{Z}_{p^{\alpha}}\). Since this summand would have to be cyclic, this cannot be true, because \(|\phi(b)|_{p}=0\), \(p\phi(b)=pc\neq 0\) and \(|p\phi(b)|_{p}=|pc|_{p}>1\), as required. Following [11], the \(p\)-group \(G\) is said to have _generalized finite \(p\)-rank_ if \[G\cong\mathbb{Z}_{p^{\sigma_{1}}}^{(\rho_{1})}\oplus\cdots\oplus\mathbb{Z}_{p^ {\sigma_{n}}}^{(\rho_{n})},\] where all ordinals of the increasing sequence \(\sigma_{1}<\cdots<\sigma_{n}\) are in \(\omega\cup\{\infty\}\), and \(\rho_{1},\ldots,\rho_{n}\) are cardinals with \(\rho_{j}\) finite whenever \(j>1\). We are now prepared to prove our first major assertion, which sounds quite surprising. **Theorem 2.7**.: _The \(p\)-group \(G\) is semi-generalized co-Bassian if, and only if, \(pG\) is generalized co-Bassian, i.e., \(pG\) has generalized finite \(p\)-rank._ Proof.: It is easily verified that \(G\) satisfies this condition exactly when there is a decomposition \(G=E\oplus H\), where \(E\) is a \(p\)-high subgroup of \(G\) and \(H\) is a group that has generalized finite \(p\)-rank with no elementary summands. Firstly, suppose \(G\) is semi-generalized co-Bassian. Using Lemmas 2.4 and 2.5, \(G\) must be of the form: \[G=\mathbb{Z}_{p^{\alpha_{1}}}^{(\kappa_{1})}\oplus\mathbb{Z}_{p^{\alpha_{2}}}^ {(\kappa_{2})}\oplus\cdots\oplus\mathbb{Z}_{p^{\alpha_{k}}}^{(\kappa_{k})},\] where the \(\kappa\)s are all (non-zero) cardinals and \(\alpha_{1}<\alpha_{2}<\cdots<\alpha_{k}\) are either positive integers or \(\infty\). The result, therefore, follows directly from the above Proposition 2.6. Conversely, suppose that \(G=E\oplus H\) possesses the above form. By definition, \(H\cong\mathbb{Z}_{p^{\alpha}}^{(\kappa)}\oplus F\), where \(\alpha>1\) is either an integer or \(\infty\), \(\kappa\) is a cardinal, \(F\) has finite \(p\)-rank and \(F[p]=p^{\alpha}H[p]\) when \(\alpha\) is an integer and \(F=0\) when \(\alpha=\infty\). If \(\kappa\) is finite, then \(G\) itself is generalized co-Bassian, and hence semi-generalized co-Bassian, as desired. So, we may assume hereafter that \(\kappa\) is infinite. Let \(N\) be a subgroup of \(G\), and set \(\overline{G}:=G/N\) and \(\phi:G\to\overline{G}\) is an injective homomorphism. As in [10], for any subgroup \(A\) of \(G\), we let \(\hat{A}=\phi(A)\) and \(\overline{A}\) be the image of \(A\) under the canonical homomorphism \(\pi:G\to\overline{G}\). We need to show \(\hat{G}=\phi(G)\) is an essential subgroup of a summand of \(\overline{G}\). To that goal, suppose first that \(\alpha=\infty\); so, \(H=\mathbb{Z}_{p^{\infty}}^{(\kappa)}\) is the maximal divisible subgroup of \(G\) and \(F=0\). Thus, \(\overline{H}\) is divisible and \(p(\overline{G}/\overline{H})=0\), so that \(\overline{G}=E^{\prime}\oplus\overline{H}\), where \(E^{\prime}\) is also elementary. Clearly, \(\hat{H}\) will also be divisible, so that \(\overline{H}=D\oplus\hat{H}\), where \(D\) is divisible. If \[E^{\prime\prime}:=\hat{G}\cap(E^{\prime}\oplus D)\subseteq(E^{\prime}\oplus D )[p],\] it follows that \(\hat{G}=E^{\prime\prime}\oplus\hat{H}\). But it also plainly follows that any subsocle of a direct sum of an elementary group and a divisible group is supported by a summand of the containing group (see [7] as well). 
In particular, this means that \(\hat{G}=E^{\prime\prime}\oplus\hat{H}\) is an essential subgroup of a summand of \(\overline{G}\), as required.

The proof where \(\alpha\) is a positive integer is similar; for clarity, denote \(\alpha\) by \(n\). Since \(p^{n}G=p^{n}H\) is co-Bassian and \(\phi\) and \(\pi\) restrict to an injection and surjection, respectively, on \(p^{n}G\), it follows that \(p^{n}\hat{H}=p^{n}\hat{G}=p^{n}\overline{G}\). Since \(H\) has no summands isomorphic to \(\mathbb{Z}_{p^{m}}\) for \(m<n\), this readily implies that \(\overline{G}=\hat{H}\oplus K\), where \(p^{n}K=0\). It follows that \(\hat{G}\cap K\subseteq K[p]\). And since \(K\) is bounded, \(\hat{G}\cap K\) supports a summand \(K^{\prime}\) of \(K\). Consequently, \(K^{\prime}\oplus\hat{H}\) will be a summand of \(\overline{G}\) containing \(\hat{G}\) as an essential subgroup, as wanted.

The pivotal instrument needed to establish our second major assertion, listed below, is the following.

**Proposition 2.8**.: _If \(G\) is a semi-generalized co-Bassian group and \(p\) is a prime, then \(T_{p}\) is semi-generalized co-Bassian._

Proof.: Suppose \(G\) is a semi-generalized co-Bassian group, \(p\) is a prime, and \(T_{p}=R\oplus D\), where \(R\) is reduced and \(D\) is divisible. If \(D\) has infinite \(p\)-rank, then by Proposition 2.6 and Lemma 2.4, \(R=\{0\}\), so that \(T_{p}\) is semi-generalized co-Bassian. So, we may assume that \(D\) has finite rank. If \(G=G^{\prime}\oplus D\), where \(R=T_{G^{\prime}}\), then by Lemma 2.4, \(G^{\prime}\) is semi-generalized co-Bassian, which by Lemma 2.5 implies that \(R\) is bounded. The fact that \(T_{p}\) is semi-generalized co-Bassian then quickly follows from Theorem 2.7, Proposition 2.6, and Lemma 2.4.

We next present a full characterization of the semi-generalized co-Bassian groups of finite torsion-free rank.

**Theorem 2.9**.: _If \(G\) has finite torsion-free rank, then \(G\) is semi-generalized co-Bassian if, and only if, for every prime \(p\), \(T_{p}\) is semi-generalized co-Bassian and either (a) \(T_{p}\) has finite \(p\)-rank, or (b) \(T_{p}\) is divisible, or (c) \(G/T\) is \(p\)-divisible._

Proof.: Suppose \(G\) is semi-generalized co-Bassian and \(p\) is a prime. By Proposition 2.8, \(T_{p}\) is semi-generalized co-Bassian. Assume \(G/T\) is not \(p\)-divisible and \(T_{p}\) has infinite \(p\)-rank; we need to show that \(T_{p}\) is divisible. Assume not, so that it has a non-zero cyclic summand \(C=\langle c\rangle\) of minimal order \(p^{k}\). It is straightforward to verify that there is a decomposition \(T_{p}=C\oplus K\) and an injection \(\phi:T_{p}\to K\) such that \(\phi(T_{p}[p])=K[p]\). Since \(T_{p}\) is the direct sum of a bounded and a divisible group, there is a decomposition \(G=T_{p}\oplus A=C\oplus K\oplus A\); we are therefore assuming that multiplication by \(p\) is an injective, but not surjective, endomorphism on \(A\). It follows that \(A/p^{k}A\) is isomorphic to a direct sum of copies of \(\mathbb{Z}_{p^{k}}\). So, if \(a\in A\) has \(p\)-height \(0\), then \(a+p^{k}A\) will have order \(p^{k}\) in \(A/p^{k}A\), and hence will generate a summand. Thus, there is a composite of natural homomorphisms \[\gamma:A\to A/p^{k}A\to\langle a+p^{k}A\rangle\to C\] such that \(\gamma(a)=c\). Extend \(\phi\) to an injection \(\phi:G\to G\) by setting, for all \(x\in A\), \(\phi(x)=\gamma(x)+px\).
Let \(\pi:G\to\overline{G}:=G\) be the identity. Since \(G\) is semi-generalized co-Bassian, we can conclude that \(\hat{G}=\phi(G)\) is an essential subgroup of some summand \(S\) of \(G\). Since \(\hat{G}[p]=K[p]\), \(p^{k-1}c\) does not represent an element of the Ulm factor \(U_{k-1}(S)\subseteq U_{k-1}(G)\). On the other hand, considering \(p\)-heights, \[|p^{k-1}\phi(a)|=|p^{k-1}c+p^{k}a|=k-1\] and \[|p^{k}\phi(a)|=|p^{k+1}a|=k+1,\] so there is an element \(s\in S\) such that \(|s|=k\) and \(ps=p^{k}\phi(a)\). Therefore, \(p^{k-1}\phi(a)-s\) does represent an element of \(U_{k-1}(S)\). But since \[|(p^{k-1}\phi(a)-s)-p^{k-1}c|=|p^{k}a-s|\geq k,\] we can conclude that \(p^{k-1}\phi(a)-s\) and \(p^{k-1}c\) represent the same element of \(U_{k-1}(G)\), giving a contradiction. Therefore, \(T_{p}\) must be divisible, completing the proof of necessity.

Conversely, suppose that for all primes \(p\), \(T_{p}\) is semi-generalized co-Bassian and, if \(G/T\) is not \(p\)-divisible, then either \(T_{p}\) has finite \(p\)-rank or \(T_{p}\) is divisible. To show \(G\) is semi-generalized co-Bassian, suppose \(\pi\) and \(\phi\) are, respectively, surjective and injective homomorphisms \(G\to\overline{G}\). Let \(\mathcal{P}_{1}\) be the collection of primes such that \(G/T\) is \(p\)-divisible, let \(\mathcal{P}_{2}\) be those primes \(p\not\in\mathcal{P}_{1}\) for which \(T_{p}\) has finite \(p\)-rank, and let \(\mathcal{P}_{3}\) be the remaining primes; so if \(p\in\mathcal{P}_{3}\), then \(G/T\) is not \(p\)-divisible and \(T_{p}\) is divisible (of infinite rank). If \(N\) is the kernel of \(\pi\), then the fact that \(G\) has finite torsion-free rank implies that \(N\subseteq T\) and \(T_{\overline{G}}=\overline{T}\). Therefore, for each prime \(p\), \(\pi\) and \(\phi\) restrict to surjective and injective homomorphisms \(T_{p}\to\overline{T}_{p}\). Since we are assuming \(T_{p}\) is semi-generalized co-Bassian, we can conclude that there is a decomposition \(\overline{T}_{p}=L_{p}\oplus B_{p}\), where \(L_{p}\) contains \(\hat{T}_{p}\) as an essential subgroup. Note that if \(p\in\mathcal{P}_{2}\), then \(L_{p}=\hat{T}_{p}=\overline{T}_{p}\) and \(B_{p}=0\); and if \(p\in\mathcal{P}_{3}\), then both \(L_{p}=\hat{T}_{p}\) and \(B_{p}\) are divisible. Letting \(L=\oplus_{p}L_{p}\) and \(B=\oplus_{p}B_{p}\), we have that \(\hat{T}\) is essential in \(L\) and \(\overline{T}=L\oplus B\). Observe that there are isomorphisms of torsion-free finite rank groups \(\hat{G}/\hat{T}\cong G/T\cong\overline{G}/\overline{T}\). Therefore, since \(\hat{G}/\hat{T}\) clearly embeds in \(\overline{G}/\overline{T}\) and they are isomorphic, \[\overline{G}/[\overline{T}+\hat{G}]\cong(\overline{G}/\overline{T})/(\hat{G}/ \hat{T}):=X\] is a finite group. And since both groups in this quotient are \(p\)-divisible for all primes \(p\in\mathcal{P}_{1}\), it follows that \(X\) has no \(p\)-torsion whenever \(p\in\mathcal{P}_{1}\). Suppose \(x_{0},\ldots,x_{k}\) in \(\overline{G}\) project onto a linearly independent generating set for \(X\). If \(n_{i}\) is the order of the image of \(x_{i}\) in \(X\), then \(n_{i}\) is not divisible by any prime from \(\mathcal{P}_{1}\). So, there is a finite set of primes \(F\subseteq\mathcal{P}_{2}\cup\mathcal{P}_{3}\) such that each \(n_{i}\) is only divisible by primes from \(F\). For each \(i\), there is a \(y_{i}\in\hat{G}\) and \(z_{i}\in\overline{T}\) such that \(n_{i}x_{i}=y_{i}+z_{i}\).
For each \(i\) there is an \(m_{i}\) not divisible by any prime of \(F\) such that \(m_{i}z_{i}\) has order divisible only by primes in \(F\). Clearly, the \(m_{i}x_{i}\) still project to a linearly independent generating set for \(X\), so replacing \(x_{i},y_{i}\) and \(z_{i}\) by \(m_{i}x_{i},m_{i}y_{i}\) and \(m_{i}z_{i}\), respectively, we may assume that each \(z_{i}\) is in \(\oplus_{p\in F}\overline{T}_{p}=\oplus_{p\in F}(\hat{T}_{p}\oplus B_{p})\). We may clearly absorb each component of \(z_{i}\) in this decomposition that lives in \(\hat{T}_{p}\) into the corresponding element \(y_{i}\in\hat{G}\). So, we may assume \(z_{i}\in\oplus_{p\in F}B_{p}\). But since \(F\subseteq\mathcal{P}_{2}\cup\mathcal{P}_{3}\), the direct sum \(\oplus_{p\in F}B_{p}\) is divisible. Let \(w_{i}\in\oplus_{p\in F}B_{p}\) satisfy \(n_{i}w_{i}=z_{i}\). So, \(x_{i}^{\prime}:=x_{i}-w_{i}\) represent the exact same elements of \(X\), but now, \(n_{i}x_{i}^{\prime}=y_{i}\). Replacing \(x_{i}\) by \(x_{i}^{\prime}\) and letting \[S=\hat{G}+\langle x_{0},\ldots,x_{k}\rangle+L\subseteq\overline{G},\] we claim that \(T_{S}=L\): the containment \(\supseteq\) being obvious, suppose \(s\in T_{S}\); we need to show \(s\in L\). By definition, there are \(u\in\hat{G}\), \(v\in L\), and integers \(j_{0},\ldots,j_{k}\) such that \[s=u+j_{0}x_{0}+\cdots+j_{k}x_{k}+v.\] If we map this relation into the quotient \(X\), it readily follows that, for each \(i\), \(j_{i}x_{i}\in\hat{G}\). Therefore, \[u+j_{0}x_{0}+\cdots+j_{k}x_{k}=s-v\in\hat{G}\cap\overline{T}=\hat{T}\subseteq L.\] This gives \(s=v+(s-v)\in L\), as required. Notice also that the fact that the \(x_{i}\) project onto a generating set for \(X\) implies that \(\overline{G}=S+\overline{T}\). We have \(\overline{T}=L\oplus B=T_{S}\oplus B\), so that \(S+B=S+\overline{T}=\overline{G}\) and \(S\cap B=T_{S}\cap B=0\). Therefore, \(\overline{G}=S\oplus B\). And finally, since \(\hat{T}\) is essential in \(L=T_{S}\) and \(S/\hat{G}\) is torsion, it readily follows that \(\hat{G}\) is an essential subgroup of \(S\), as required.

The next result follows immediately from Theorem 2.9.

**Corollary 2.10**.: _The finite torsion-free rank group \(G\) is semi-generalized co-Bassian if, and only if, for every prime \(p\), its localization \(G_{(p)}\) is semi-generalized co-Bassian._

Having characterized the semi-generalized co-Bassian groups of finite rank, we turn to a consideration of those with infinite rank. We will say a group is _D+E_ if it is isomorphic to \(D\oplus E\), where \(D\) is divisible and \(E\) is elementary.

**Proposition 2.11**.: _If \(G\) is semi-generalized co-Bassian of infinite torsion-free rank, then \(G/T\) is divisible and \(T\) is D+E._

Proof.: Let \(\kappa\) be the rank of \(G\). Since each \(T_{p}\) is semi-generalized co-Bassian, \(T\) will be a direct sum of cocyclic groups. This easily implies that there is a decomposition \(G=G^{\prime}\oplus T^{\prime}\), where \(T^{\prime}\) is torsion, \(G^{\prime}\) has cardinality \(\kappa\), and \(T\) is D+E if, and only if, that condition holds for \(T_{G^{\prime}}\). So, by Lemma 2.4, there is no loss of generality in assuming that \(G\) has cardinality and rank \(\kappa\). Suppose first that \(G/T\) is not \(p\)-divisible for some \(p\). Let \(A\) be a subgroup of \(G\) containing \(T\) such that \(G/A\) has order \(p\), and let \(\tau:G\to G/A\) be the usual epimorphism. If \(D\) is a divisible hull for \(G\), then there is clearly a surjective homomorphism \(A\to D\) which extends to a homomorphism \(\gamma:G\to D\).
The homomorphism \(\pi:G\to\overline{G}:=(G/A)\oplus D\) given by \(\pi(x)=(x+A)+\gamma(x)\) is clearly onto, and the homomorphism \(\phi:G\to(G/A)\oplus D\) given by \(\phi(x)=(x+A)+x\) is clearly injective. If \(G\) were semi-generalized co-Bassian, then \(\hat{G}\) would be contained as an essential subgroup of a summand \(S\subseteq\overline{G}\). Since \(S[p]=0\oplus T[p]\subseteq D\subseteq\overline{G}\), we could conclude that the \(p\)-torsion subgroup of \(S\) is divisible. If \(x\not\in A\), then \(|\phi(x)|_{S}=|\phi(x)|_{\overline{G}}=0\) and \(|p\phi(x)|_{S}=|p\phi(x)|_{\overline{G}}=\infty\), which cannot happen when the \(p\)-torsion of \(S\) is divisible.

Suppose now that \(G\) has a cyclic summand, \(C=\langle c\rangle\), of order \(p^{k}\), where \(k>1\). Let \(G=C\oplus A\). If \(D\) is a divisible hull for \(A\) and \(Z\cong\mathbb{Z}_{p^{\infty}}\), then there is clearly a surjection \[\pi:A\to Z\oplus D\] and an injection \[\phi:A\subseteq D\subseteq Z\oplus D.\] Extending \(\pi\) to \(G\) by setting it equal to the identity on \(C\) makes it a surjection \[G\to\overline{G}:=C\oplus Z\oplus D.\] Choosing \(z\in Z\) of order \(p^{k}\) and extending \(\phi\) by setting \(\phi(c)=pc+z\) turns it into an injection \(\phi:G\to\overline{G}\). Again \(\hat{G}[p]\subseteq(Z\oplus D)[p]\), so that all elements of \(\hat{G}[p]\) have infinite height. So, if \(\hat{G}\) were contained as an essential subgroup of a summand \(S\), then the \(p\)-torsion subgroup of \(S\) would be divisible. This, however, contradicts the fact that \(|\phi(c)|_{\overline{G}}=1\neq\infty\).

As a direct consequence, we obtain:

**Corollary 2.12**.: _Suppose \(G\) is a semi-generalized co-Bassian group of infinite torsion-free rank and \(G\cong D\oplus R\), where \(D\) is divisible and \(R\) is reduced. Then, \(T_{R}\) is elementary and \(R/T_{R}\) is divisible._

The following is a partial converse to Proposition 2.11.

**Proposition 2.13**.: _If the group \(G\) itself is D+E, then \(G\) is semi-generalized co-Bassian._

Proof.: It is easily seen that a homomorphic image of a D+E-group is also D+E, and that any D+E-subgroup of a D+E-group will be essential in a summand. These two facts imply the desired result.

So, if \(G\) is a torsion-splitting group, i.e., \(T\) is a summand of \(G\), then the converse of Proposition 2.11 holds. However, it is not clear that this converse holds when \(G\) is not torsion-splitting. In fact, the problem may be quite difficult, and we leave it unsettled at this stage. Thus, we explicitly pose the following:

**Problem 2.14**.: Does it follow that a group \(G\) is semi-generalized co-Bassian, provided its torsion part \(T\) is a direct sum of a divisible group and an elementary group and the quotient \(G/T\) is a divisible group?

**Funding:** The work of the first-named author A.R. Chekhlov was supported by the Ministry of Science and Higher Education of Russia (agreement No. 075-02-2023-943). The work of the second-named author P.V. Danchev was partially supported by the Bulgarian National Science Fund under Grant KP-06 No. 32/1 of December 07, 2019, as well as by the Junta de Andalucia under Grant FQM 264, and by the BIDEB 2221 of TUBITAK.
2310.17219
Scalable Verification of Strategy Logic through Three-valued Abstraction
The model checking problem for multi-agent systems against Strategy Logic specifications is known to be non-elementary. Several fragments of this logic have been defined to tackle this issue, but at the expense of expressiveness. In this paper, we propose a three-valued semantics for Strategy Logic upon which we define an abstraction method. We show that the latter semantics is an approximation of the classic two-valued one for Strategy Logic. Furthermore, we extend MCMAS, an open-source model checker for multi-agent specifications, to incorporate our abstraction method and present some promising experimental results.
Francesco Belardinelli, Angelo Ferrando, Wojciech Jamroga, Vadim Malvone, Aniello Murano
2023-10-26T08:15:29Z
http://arxiv.org/abs/2310.17219v1
# Scalable Verification of Strategy Logic through Three-valued Abstraction

Francesco Belardinelli\({}^{1}\) Angelo Ferrando\({}^{2}\) Wojciech Jamroga\({}^{3}\) Vadim Malvone\({}^{4}\) Aniello Murano\({}^{5}\)

\({}^{1}\)Imperial College London, United Kingdom \({}^{2}\)University of Genoa, Italy \({}^{3}\)SnT, University of Luxembourg & Institute of Computer Science, Polish Academy of Sciences \({}^{4}\)Telecom Paris, France \({}^{5}\)University of Naples Federico II, Italy

[email protected], [email protected], [email protected], [email protected], [email protected]

###### Abstract

The model checking problem for multi-agent systems against Strategy Logic specifications is known to be non-elementary. Several fragments of this logic have been defined to tackle this issue, but at the expense of expressiveness. In this paper, we propose a three-valued semantics for Strategy Logic upon which we define an abstraction method. We show that the latter semantics is an approximation of the classic two-valued one for Strategy Logic. Furthermore, we extend MCMAS, an open-source model checker for multi-agent specifications, to incorporate our abstraction method and present some promising experimental results.

## 1 Introduction

In multi-agent systems, logics for strategic reasoning play a key role. In this domain, one of the success stories is Alternating-time Temporal Logic (ATL\({}^{*}\)) [1], which can express cooperation and competition among teams of agents in order to achieve temporal goals, such as fairness, liveness, and safety requirements. In fact, ATL\({}^{*}\) extends the well-known branching-time temporal logic CTL\({}^{*}\) [12] by generalizing the existential \(E\) and universal \(A\) path quantifiers of CTL\({}^{*}\) with the strategic modalities \(\langle\!\langle C\rangle\!\rangle\) and \(\llbracket C\rrbracket\), where \(C\) is a coalition of agents. However, it has been observed that ATL\({}^{*}\) suffers from a number of limitations that, on the one hand, make the model-checking and satisfiability problems decidable (both are 2ExpTime-complete), but, on the other hand, make the logic too weak to express key game-theoretic concepts, such as Nash equilibria [16]. To overcome these limitations, _Strategy Logic_ (SL) [16, 17] has been put forward. A key aspect of SL is to consider strategies as first-order objects that can be existentially or universally quantified over by means of the strategy quantifiers \(\exists x\) and \(\forall x\), respectively. Then, by means of a binding operator \((a,x)\), a strategy \(x\) can be associated to a specific agent \(a\). This allows strategies to be reused as well as shared among different agents. Since its introduction, SL has proved to be a powerful formalism: it can express complex solution concepts, including Nash equilibria, and subsumes all previously introduced logics for strategic reasoning, including ATL\({}^{*}\). The high expressivity of SL has spurred its analysis in a number of directions and extensions, such as prompt [1], graded [1], fuzzy [18], probabilistic [1], and imperfect [1, 1] strategic reasoning. As one may expect, the high expressivity of SL comes at a price. Indeed, its model-checking problem turns out to be non-elementary [16].
Moreover, the model checking procedure is not immune to the well-known state-space explosion, as faithful models of real-world systems are intrinsically complex and often infeasible even to generate, let alone verify. These issues call for techniques to make model checking SL amenable at least in practice. A technique that has been increasingly used in industrial settings to verify hardware and software systems is state abstraction, which makes it possible to reduce the state space to a manageable size by clustering "similar" concrete states into abstract states. Abstraction was first introduced for stand-alone systems [11], then extended to two-agent system verification [1, 16, 15]. Recently, abstraction approaches have been investigated for multi-agent systems w.r.t. ATL\({}^{*}\) specifications [...]. Our contribution is as follows. First, in Sec. 3 we introduce a three-valued semantics for SL. Second, in Sec. 4 we introduce an abstraction procedure for SL, which can reduce significantly the size of the state space of SL models, although at the cost of making some formulas undefined.
The main theoretical result is the Preservation Theorem 4.2, which allows us to model check SL formulas in the three-valued abstraction and then lift any defined answer to the original two-valued model. Third, in Sec. 6 we evaluate empirically the trade-off between state-space reduction and definiteness, by applying our abstraction procedure to a scheduling scenario. What we observe empirically is a significant reduction of the model size, which allows us to verify instances that are not amenable to current model checking tools.

Related Work. The present contribution is inspired by a long tradition of works on the abstraction of MAS models, including through three-valued semantics. An abstraction-refinement framework for the temporal logic CTL over a three-valued semantics was first studied in [23, 24], and then extended to the full \(\mu\)-calculus [15] and hierarchical systems [1]. Three-valued abstractions for the verification of Alternating-time Temporal Logic have been put forward in [1, 1, 18, 19]. In [1, 17], the logic is interpreted under perfect information, while [1, 18, 19] consider _non-uniform_ strategies [10]. Finally, [1, 18] introduce a multi-valued semantics for ATL\({}^{*}\) that is a conservative extension of the classical two-valued variant. Related to this line, three-valued logics have been extensively applied to system verification, including [1, 1, 19]. Clearly, we build on this long line of works, but the expressiveness of SL raises specific challenges that the authors of the contributions above did not need to tackle. We briefly mention them here and refer to the specific sections for further details. First, we have to introduce individual \(must\) and \(may\) actions and strategies as under- and over-approximations of the behaviours of our agents. Second, the loosely-coupled nature of agents requires considering non-deterministic transitions in the abstraction (Sec. 4). Third, the arbitrary alternation of existential and universal strategy quantifiers makes proving the Preservation Theorem 4.2 significantly more challenging, and complicates our experiments in verifying three-valued SL in the two-valued model-checking tool MCMAS (Sec. 6).

## 2 Reasoning about Strategies

In this section we recall the definitions of basic notions for Strategy Logic [25].

### Syntax

_Strategy Logic (SL)_ syntactically extends LTL with two _strategy quantifiers_, the existential \(\exists x\) and the universal \(\forall x\), and an _agent binding_ \((a,x)\), where \(a\) is an agent and \(x\) a variable. Intuitively, these additional elements can be respectively read as _"there exists a strategy \(x\)"_, _"for all strategies \(x\)"_, and _"bind agent \(a\) to the strategy associated with the variable \(x\)"_. Since negated quantifiers often prove problematic in many-valued settings, we restrict the syntax of SL to formulas in Negation Normal Form (NNF), without loss of expressiveness. In that case, the universal strategy quantifier \(\forall x\) and the temporal operator "Release" \(R\) are added as primitives, and negation is allowed only at the level of literals. Note that every formula of SL can be equivalently transformed into one in NNF, with at most a linear blowup [25].
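For concreteness, the NNF transformation just mentioned can be sketched as a structural rewrite. The Python fragment below is our own illustration: the AST node names are assumptions, not from the paper; the rewrite pushes negation down to literals via De Morgan's laws and the dualities \(\neg\exists x\varphi\equiv\forall x\neg\varphi\), \(\neg X\varphi\equiv X\neg\varphi\), \(\neg(\varphi\,U\,\psi)\equiv\neg\varphi\,R\,\neg\psi\), and \(\neg(\varphi\,R\,\psi)\equiv\neg\varphi\,U\,\neg\psi\).

```python
from dataclasses import dataclass

# A minimal AST for SL formulas ('Formula' stands for the union of these node types).
@dataclass
class Atom: name: str
@dataclass
class Not: sub: 'Formula'
@dataclass
class And: left: 'Formula'; right: 'Formula'
@dataclass
class Or: left: 'Formula'; right: 'Formula'
@dataclass
class Exists: var: str; sub: 'Formula'
@dataclass
class Forall: var: str; sub: 'Formula'
@dataclass
class Bind: agent: str; var: str; sub: 'Formula'
@dataclass
class Next: sub: 'Formula'
@dataclass
class Until: left: 'Formula'; right: 'Formula'
@dataclass
class Release: left: 'Formula'; right: 'Formula'

def nnf(f):
    """Push negations down to literals (a linear rewrite of the formula)."""
    if isinstance(f, Atom):
        return f
    if isinstance(f, Not):
        g = f.sub
        if isinstance(g, Atom):    return f                       # negated literal: already NNF
        if isinstance(g, Not):     return nnf(g.sub)              # double negation elimination
        if isinstance(g, And):     return Or(nnf(Not(g.left)), nnf(Not(g.right)))
        if isinstance(g, Or):      return And(nnf(Not(g.left)), nnf(Not(g.right)))
        if isinstance(g, Exists):  return Forall(g.var, nnf(Not(g.sub)))
        if isinstance(g, Forall):  return Exists(g.var, nnf(Not(g.sub)))
        if isinstance(g, Bind):    return Bind(g.agent, g.var, nnf(Not(g.sub)))
        if isinstance(g, Next):    return Next(nnf(Not(g.sub)))
        if isinstance(g, Until):   return Release(nnf(Not(g.left)), nnf(Not(g.right)))
        if isinstance(g, Release): return Until(nnf(Not(g.left)), nnf(Not(g.right)))
    if isinstance(f, And):     return And(nnf(f.left), nnf(f.right))
    if isinstance(f, Or):      return Or(nnf(f.left), nnf(f.right))
    if isinstance(f, Exists):  return Exists(f.var, nnf(f.sub))
    if isinstance(f, Forall):  return Forall(f.var, nnf(f.sub))
    if isinstance(f, Bind):    return Bind(f.agent, f.var, nnf(f.sub))
    if isinstance(f, Next):    return Next(nnf(f.sub))
    if isinstance(f, Until):   return Until(nnf(f.left), nnf(f.right))
    if isinstance(f, Release): return Release(nnf(f.left), nnf(f.right))
    raise TypeError(f)
```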
**Definition 2.1** (SL Syntax).: _Given the set AP of atoms, variables Var, and agents Ag, the formal syntax of SL is defined as follows, where \(p\in\textit{AP}\), \(x\in\textit{Var}\), and \(a\in\textit{Ag}\):_ \[\varphi::=p\mid\neg p\mid\varphi\land\varphi\mid\varphi\lor\varphi\mid\exists x\varphi\mid\forall x\varphi\mid(a,x)\varphi\mid X\varphi\mid\varphi\,U\,\varphi\mid\varphi\,R\,\varphi\]

We introduce the derived temporal operators as usual: \(F\varphi=\top U\varphi\) ("eventually") and \(G\varphi=\bot R\varphi\) ("always"). Usually, predicate logics need the concepts of free and bound _placeholders_ in order to formally define their semantics. In SL, since strategies can be associated to both agents and variables, we introduce the set of _free agents/variables_ \(\textit{free}(\varphi)\) as the subset of \(\textit{Ag}\cup\textit{Var}\) containing _(i)_ all agents \(a\) for which there is no binding \((a,x)\) before the occurrence of a temporal operator, and _(ii)_ all variables \(x\) for which there is a binding \((a,x)\) but no quantification \(\exists x\) or \(\forall x\). A formula \(\varphi\) without free agents (resp., variables), i.e., with \(\textit{free}(\varphi)\cap\textit{Ag}=\emptyset\) (resp., \(\textit{free}(\varphi)\cap\textit{Var}=\emptyset\)), is called _agent-closed_ (resp., _variable-closed_). If \(\varphi\) is both agent- and variable-closed, it is a _sentence_.

### Two-valued Semantics

We now provide a formal semantics to Strategy Logic.

Models. To model the behaviour of multi-agent systems, we use a variant of concurrent game structures [1].

**Definition 2.2** (Cgs).: _A concurrent game structure (CGS) is a tuple \(G=\langle Ag,St,s_{0},Act,\tau,AP,V\rangle\) such that (i) \(Ag\) is a finite, non-empty set of agents. (ii) \(St\) is a finite, non-empty set of states, with initial state \(s_{0}\in St\). (iii) \(Act\) is a finite, non-empty set of actions. We use \(ACT=Act^{|Ag|}\) for the set of all joint actions (a.k.a. action profiles), i.e., tuples of individual actions, played synchronously by all agents. (iv) \(\tau:St\times ACT\to 2^{St}\) is the transition function assigning successor states \(\{s^{\prime},s^{\prime\prime},\dots\}=\tau(s,\vec{\alpha})\) to each state \(s\in St\) and joint action \(\vec{\alpha}\in ACT\). We assume that the transitions in a CGS are deterministic, i.e., \(\tau(s,\vec{\alpha})\) is always a singleton.1 (v) \(AP\) is a set of atomic propositions, and (vi) \(V:St\times AP\to\{\top,\bot\}\) is a two-valued labelling function._

Footnote 1: The deterministic transitions in a CGS are usually defined by a function of type \(\tau:St\times ACT\to St\). We use a slightly different (but equivalent) formulation. This will make it easier for us to extend it to nondeterministic transitions in three-valued models (see Def. 3.1).

By Def. 2.2, a CGS describes the interactions of a group \(Ag\) of agents, starting from the initial state \(s_{0}\in St\), according to the transition function \(\tau\). We use \(G\) as a subscript for \(\textit{Ag}_{G}\), \(\textit{St}_{G}\), etc., whenever the model is not clear from the context. Note that the CGSs used in the semantics of Strategy Logic assume that all the actions are available to every agent at every state [13]. This is because a strategy assigned to variable \(x\) can later be associated with any agent \(a\) by means of the binding operator \((a,x)\). As a consequence, the available strategies (and hence also the available actions) are the same for every agent.
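To make Def. 2.2 concrete, here is a minimal Python encoding of a CGS; this is our own sketch, and the toy model and all identifiers are illustrative. Successor sets are stored as singletons, matching the deterministic reading of \(\tau\).

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

JointAction = Tuple[str, ...]  # one action per agent, in a fixed agent order

@dataclass
class CGS:
    agents: Tuple[str, ...]
    states: Set[str]
    initial: str
    actions: Set[str]                               # the same Act for every agent
    tau: Dict[Tuple[str, JointAction], Set[str]]    # singleton sets: deterministic
    valuation: Dict[Tuple[str, str], bool]          # (state, atom) -> True/False

    def successors(self, state: str, joint: JointAction) -> Set[str]:
        return self.tau[(state, joint)]

# A toy two-agent CGS: the system moves from s0 to s1 exactly when the two
# agents play the same action, and s1 is a sink.
toy = CGS(
    agents=("1", "2"),
    states={"s0", "s1"},
    initial="s0",
    actions={"a", "b"},
    tau={("s0", (x, y)): ({"s1"} if x == y else {"s0"})
         for x in ("a", "b") for y in ("a", "b")}
        | {("s1", (x, y)): {"s1"} for x in ("a", "b") for y in ("a", "b")},
    valuation={("s0", "goal"): False, ("s1", "goal"): True},
)
```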
**Tracks, Paths, Strategies.** We denote the \(i\)-th element of a tuple \(v\) as \(v_{i}\), the prefix of \(v\) of length \(i\) as \(v_{\leq i}\), and the last element of \(v\) as \(last(v)\). A _track_ is a _finite nonempty_ sequence of states \(\rho\in St^{+}\) such that, for all \(0\leq i<|\rho|-1\), there is an action profile \(\vec{\alpha}\in ACT\) with \((\rho)_{i+1}\in\tau((\rho)_{i},\vec{\alpha})\). Similarly, a _path_ is an _infinite_ sequence of states \(\pi\in St^{\omega}\) such that, for all \(i\in\mathbb{N}\), there is \(\vec{\alpha}\in ACT\) with \((\pi)_{i+1}\in\tau((\pi)_{i},\vec{\alpha})\). The set \(\mathit{Trk}\subseteq St^{+}\) contains all the tracks in the model, and \(\mathit{Trk}(s)\) the tracks starting at state \(s\in St\). The sets \(\mathit{Pth}\) and \(\mathit{Pth}(s)\) are defined analogously. We denote the prefix of a path \(\pi\) up to position \(i\in\mathbb{N}\) as \(\pi_{\leq i}\). A _strategy_ is a partial function \(f:\mathit{Trk}\to Act\) that maps each track in its domain to an action. Intuitively, a strategy is a conditional plan that, for some tracks of \(G\), prescribes an action to be executed. A strategy is _memoryless_ (or positional) if \(last(\rho)=last(\rho^{\prime})\) implies \(f(\rho)=f(\rho^{\prime})\), that is, the strategy only depends on the last state. The set \(\mathit{Str}=\mathit{Trk}\to Act\) (resp., \(\mathit{Str}(s)=\mathit{Trk}(s)\to Act\)) contains all strategies (resp., all strategies starting from \(s\)).

**Assignments.** Let \(\mathit{Var}\) be the set of variables. An _assignment_ is a partial function \(\chi:\mathit{Var}\cup Ag\to Str\) mapping variables and agents in its domain to strategies. An assignment \(\chi\) is _complete_ if it is defined on all agents, i.e., \(\mathit{Ag}\subseteq dom(\chi)\). The set \(\mathit{Asg}=\mathit{Var}\cup\mathit{Ag}\to Str\) contains all assignments. Moreover, \(\mathit{Asg}(X)=X\to Str\) indicates the subset of \(X\)-_defined_ assignments, i.e., assignments defined on \(X\subseteq\mathit{Var}\cup\mathit{Ag}\). As in first-order logic, in order to quantify over strategies or bind a strategy to an agent, we update an assignment \(\chi\) by associating an agent or a variable \(l\) with a new strategy \(f\). Let \(\chi\in Asg\) be an assignment, \(f\in Str\) a strategy, and \(l\in\mathit{Var}\cup Ag\) either an agent or a variable. Then, \(\chi[l\mapsto f]\in Asg\) denotes the new assignment that returns \(f\) on \(l\) and the same value that \(\chi\) would return on the rest of its domain.

**Outcome Plays of a Strategy.** A _play_ is the unique outcome of the game settled by all agent strategies engaged in it. Formally, given a state \(s\in St\) and a complete assignment \(\chi\in Asg(s)\), the function \(\mathit{play}(\chi,s)\) returns the path \(\pi\in\mathit{Pth}(s)\) such that, for all \(i\in\mathbb{N}\), it holds that \(\{\pi_{i+1}\}=\tau(\pi_{i},\vec{\alpha})\), where \(\vec{\alpha}(a)=\chi(a)(\pi_{\leq i})\) for each \(a\in Ag\). We now define the translation of an assignment together with a related path (resp., state). It is used to keep track, at a certain stage of the play, of the current state and its updated assignment. For a path \(\pi\) and an assignment \(\chi\in Asg\), the \(i\)-_th global translation_ of \((\chi,\pi)\), with \(i\in\mathbb{N}\), is the pair \((\chi,\pi)^{i}=(\chi_{\pi_{\leq i}},\pi_{i})\) of an assignment and a state. Moreover, for a state \(s\in St\), we define \((\chi,s)^{i}=(\chi,\mathit{play}(\chi,s))^{i}\).
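The outcome function \(\mathit{play}(\chi,s)\) can be computed step by step: at each position every agent queries its strategy on the current track, and the deterministic transition function yields the unique next state. Below is a small sketch reusing the `CGS` class and `toy` model from the previous snippet; a complete assignment is a dict from agents to strategies, and since plays are infinite we only unroll a finite prefix.

```python
from typing import Callable, Dict, Tuple

Strategy = Callable[[Tuple[str, ...]], str]   # maps a track to an action

def play_prefix(g: CGS, chi: Dict[str, Strategy], s: str, n: int) -> Tuple[str, ...]:
    """Return the first n+1 states of play(chi, s); chi must be complete."""
    track = (s,)
    for _ in range(n):
        joint = tuple(chi[a](track) for a in g.agents)   # alpha(a) = chi(a)(pi_{<=i})
        (nxt,) = g.successors(track[-1], joint)          # deterministic: singleton set
        track = track + (nxt,)
    return track

# Example: both agents follow the memoryless strategy "always play 'a'".
always_a = lambda track: "a"
print(play_prefix(toy, {"1": always_a, "2": always_a}, "s0", 3))
# -> ('s0', 's1', 's1', 's1')
```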
As in the case of the components of a model, in order to avoid any ambiguity, we sometimes use the name of the model as a subscript of the sets and functions introduced above.

**Satisfaction.** The (two-valued) satisfaction relation for SL is defined as follows.

**Definition 2.3** (Two-valued Satisfaction).: _Given a model \(G\), for all SL formulas \(\varphi\), states \(s\in St\), and assignments \(\chi\in Asg\) with \(\text{free}(\varphi)\subseteq dom(\chi)\), the satisfaction relation \((G,\chi,s)\models^{2}\varphi\) is inductively defined as follows:_
* \((G,\chi,s)\models^{2}p\) _iff_ \(V(s,p)=\top\)_, for_ \(p\in\mathit{AP}\)_._
* _Boolean operators are interpreted as usual._
* \((G,\chi,s)\models^{2}\exists x\varphi\) _iff for some strategy_ \(f\in Str(s)\)_,_ \((G,\chi[x\mapsto f],s)\models^{2}\varphi\)_._
* \((G,\chi,s)\models^{2}\forall x\varphi\) _iff for all strategies_ \(f\in Str(s)\)_,_ \((G,\chi[x\mapsto f],s)\models^{2}\varphi\)_._
* \((G,\chi,s)\models^{2}(a,x)\varphi\) _iff_ \((G,\chi[a\mapsto\chi(x)],s)\models^{2}\varphi\)_._
* _Finally, if the assignment_ \(\chi\) _is also complete, it holds that:_
* \((G,\chi,s)\models^{2}X\varphi\) _iff_ \((G,(\chi,s)^{1})\models^{2}\varphi\)_;_
* \((G,\chi,s)\models^{2}\varphi_{1}U\varphi_{2}\) _iff for some index_ \(i\in\mathbb{N}\)_,_ \((G,(\chi,s)^{i})\models^{2}\varphi_{2}\) _and, for all_ \(j<i\)_, it holds that_ \((G,(\chi,s)^{j})\models^{2}\varphi_{1}\)_;_
* \((G,\chi,s)\models^{2}\varphi_{1}R\varphi_{2}\) _iff, for all_ \(i\in\mathbb{N}\)_,_ \((G,(\chi,s)^{i})\models^{2}\varphi_{2}\) _or there is_ \(j\leq i\) _such that_ \((G,(\chi,s)^{j})\models^{2}\varphi_{1}\)_._

Due to the semantics of the Next \(X\), Until \(U\), and Release \(R\) operators, the LTL semantics is clearly embedded into the SL one. Furthermore, since the satisfaction of a sentence \(\varphi\) does not depend on assignments, we omit them and write \((G,s)\models^{2}\varphi\), where \(s\) is a generic state in \(St\), and \(G\models^{2}\varphi\) when \(s=s_{0}\). Note that we can easily define the memoryless variant of Strategy Logic by restricting the clauses for the operators \(\exists x\), \(\forall x\), and \((a,x)\) to memoryless strategies. Finally, we define the (two-valued) model checking problem for SL as determining whether an SL formula \(\phi\) holds in a CGS \(G\), that is, whether \(G\models^{2}\phi\). We conclude this section by stating the related complexity result.

**Theorem 2.4** ([13]).: _The model checking problem for Strategy Logic is non-elementary._
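Over a finite CGS, the play induced by memoryless strategies is ultimately periodic, so the Until and Release clauses of Def. 2.3 can be checked effectively. The sketch below is our illustration (not the paper's model-checking procedure): it evaluates \(p\,U\,q\) and \(p\,R\,q\) at position 0 of such a lasso, given as a finite list of label sets whose tail repeats, following the paper's \(j\leq i\) convention for Release.

```python
from typing import List, Set

def holds(labels: List[Set[str]], loop_start: int, i: int, p: str) -> bool:
    """Truth of atom p at position i of the lasso; labels[loop_start:] repeats forever."""
    if i < len(labels):
        return p in labels[i]
    loop_len = len(labels) - loop_start
    return p in labels[loop_start + (i - loop_start) % loop_len]

def until(labels, loop_start, p, q) -> bool:
    # exists i with q at i, and p at all j < i; one loop unfolding suffices
    for i in range(len(labels) + (len(labels) - loop_start)):
        if holds(labels, loop_start, i, q):
            return all(holds(labels, loop_start, j, p) for j in range(i))
        if not holds(labels, loop_start, i, p):
            return False
    return False

def release(labels, loop_start, p, q) -> bool:
    # for all i: q at i, or p at some j <= i (the j <= i convention of Def. 2.3)
    released = False
    for i in range(len(labels) + (len(labels) - loop_start)):
        released = released or holds(labels, loop_start, i, p)
        if not released and not holds(labels, loop_start, i, q):
            return False
    return True

# Lasso s0 (s1 s2)^omega with p everywhere and q only in s2:
labels = [{"p"}, {"p"}, {"p", "q"}]
print(until(labels, 1, "p", "q"), release(labels, 1, "q", "p"))  # -> True True
```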
## 3 Three-Valued Strategy Logic

In this section we introduce a novel three-valued semantics for Strategy Logic, starting by extending CGSs.

### Three-Valued CGSs

We extend (two-valued) CGSs with \(\mathit{must}\) and \(\mathit{may}\) transitions as under- and over-approximations of the strategic abilities of agents.

**Definition 3.1** (Three-valued CGS).: _A three-valued CGS is a tuple \(G=\langle Ag,St,s_{0},Act^{\mathit{may}},Act^{\mathit{must}},\tau^{\mathit{may}},\tau^{\mathit{must}},AP,V\rangle\), where:_
* \(Ag,St,s_{0},AP\) are defined as in Def. 2.2.
* \(Act^{\mathit{may}}\) and \(Act^{\mathit{must}}\) provide, respectively, the upper and lower approximation of the available actions. We assume that \(Act^{\mathit{must}}\subseteq Act^{\mathit{may}}\). The sets of \(may\) and \(must\) action profiles are given by \(ACT^{\mathit{may}}=(Act^{\mathit{may}})^{|Ag|}\) and \(ACT^{\mathit{must}}=(Act^{\mathit{must}})^{|Ag|}\), respectively.
* \(\tau^{\mathit{may}}:St\times ACT^{\mathit{may}}\to 2^{St}\) is the \(may\) transition function, and \(\tau^{\mathit{must}}:St\times ACT^{\mathit{may}}\to 2^{St}\) is the \(must\) transition function. Note that both functions are possibly non-deterministic and are defined on all the potential action profiles in the system, i.e., \(ACT^{\mathit{may}}\). However, we only require that they return nonempty successor sets on their respective action profiles. That is, \(\tau^{\mathit{may}}(s,\vec{\alpha})\neq\emptyset\) for every state \(s\in St\) and action profile \(\vec{\alpha}\in ACT^{\mathit{may}}\), and \(\tau^{\mathit{must}}(s,\vec{\alpha})\neq\emptyset\) for every state \(s\in St\) and action profile \(\vec{\alpha}\in ACT^{\mathit{must}}\).2 Moreover, it is required that \(\tau^{\mathit{must}}(s,\vec{\alpha})\subseteq\tau^{\mathit{may}}(s,\vec{\alpha})\) for every \(s\in St\) and \(\vec{\alpha}\in ACT^{\mathit{may}}\). In other words, every \(must\) transition is also a \(may\) transition, but not necessarily vice versa.

Footnote 2: Note that the function \(\tau^{\mathit{must}}\) is total because we assume the empty set as an element of the co-domain.

* The labelling function \(V:St\times AP\to\{\bot,\top,\mathbf{u}\}\) now maps each pair of a state and an atom to a truth value of "true", "false", or "undefined".

The notions of tracks and paths, and the definitions of the sets \(\mathit{Trk},\mathit{Trk}(s),\mathit{Pth},\mathit{Pth}(s)\), carry over from Section 2.2.

**May/Must Strategies and their Outcomes.** A \(may\)-strategy (resp., \(must\)-strategy) is a function \(f:\mathit{Trk}\to Act^{\mathit{may}}\) (resp., \(Act^{\mathit{must}}\)) that maps each track to a \(may\) (resp., \(must\)) action. Note that each \(must\)-strategy is a \(may\)-strategy, but not necessarily the other way around. Moreover, we can define memoryless \(may\)- and \(must\)-strategies in the standard way. The sets \(Str^{\mathit{may}}\) and \(Str^{\mathit{must}}\) are defined analogously to Section 2.2. Given a state \(s\in St\) and a profile of (\(may\) and/or \(must\)) strategies, represented by a complete assignment \(\chi\in\mathit{Asg}\), we define two kinds of outcome sets, \(\mathit{plays}^{\mathit{may}}(\chi,s)\) and \(\mathit{plays}^{\mathit{must}}(\chi,s)\). The former over-approximates the set of paths that can really occur when executing \(\chi\) from \(s\), while the latter under-approximates it. Typically, we will use \(\mathit{plays}^{\mathit{may}}\) to establish that the value of a temporal formula \(\varphi\) is \(\top\) (if \(\varphi\) holds in all such paths), and \(\mathit{plays}^{\mathit{must}}\) for \(\bot\) (if \(\varphi\) is false in at least one path). Formally, the function \(\mathit{plays}^{\mathit{may}}(\chi,s)\) returns the paths \(\pi\in\mathit{Pth}(s)\) such that, for all \(i\in\mathbb{N}\), it holds that \(\pi_{i+1}\in\tau^{\mathit{may}}(\pi_{i},\vec{\alpha})\), where \(\vec{\alpha}(a)=\chi(a)(\pi_{\leq i})\) for each \(a\in\mathit{Ag}\). The definition of \(\mathit{plays}^{\mathit{must}}(\chi,s)\) is analogous, only with \(\tau^{\mathit{must}}\) being used instead of \(\tau^{\mathit{may}}\).
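As an illustration of Def. 3.1, the following Python sketch (our own encoding; all identifiers are ours) stores the two transition functions side by side and checks the well-formedness conditions: \(Act^{\mathit{must}}\subseteq Act^{\mathit{may}}\), nonemptiness of the successor sets on the respective profiles, and \(\tau^{\mathit{must}}(s,\vec{\alpha})\subseteq\tau^{\mathit{may}}(s,\vec{\alpha})\).

```python
from dataclasses import dataclass
from itertools import product
from typing import Dict, Set, Tuple

JointAction = Tuple[str, ...]

@dataclass
class ThreeValuedCGS:
    agents: Tuple[str, ...]
    states: Set[str]
    initial: str
    act_may: Set[str]
    act_must: Set[str]
    tau_may: Dict[Tuple[str, JointAction], Set[str]]    # possibly nondeterministic
    tau_must: Dict[Tuple[str, JointAction], Set[str]]   # may be empty on may-only profiles
    valuation: Dict[Tuple[str, str], str]               # (state, atom) -> 'T', 'F', or 'U'

    def check(self) -> None:
        """Enforce the constraints of Def. 3.1."""
        assert self.act_must <= self.act_may
        n = len(self.agents)
        for s, joint in product(self.states, product(self.act_may, repeat=n)):
            may = self.tau_may.get((s, joint), set())
            must = self.tau_must.get((s, joint), set())
            assert may, "tau_may must be nonempty on every may profile"
            assert must <= may, "every must transition is also a may transition"
            if all(a in self.act_must for a in joint):
                assert must, "tau_must must be nonempty on must profiles"
```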
### Three-valued Semantics

We now define the three-valued satisfaction relation for Strategy Logic.

**Definition 3.2** (Three-valued Satisfaction).: _Given a 3-valued model \(G\), for all SL formulas \(\varphi\), states \(s\in St\), and assignments \(\chi\in\mathit{Asg}(s)\) with \(\mathit{free}(\varphi)\subseteq\mathit{dom}(\chi)\), the satisfaction value \((G,\chi,s\models^{3}\varphi)\in\{\top,\bot,\mathbf{u}\}\) is inductively defined as follows._
* \((G,\chi,s\models^{3}p)=V(s,p)\), for \(p\in\mathit{AP}\).
* Boolean operators are interpreted as in Łukasiewicz's three-valued logic [12, 10] (see the sketch after this definition).
* For \(\phi=\exists x\varphi\),
* \((G,\chi,s\models^{3}\phi)=\top\) iff \((G,\chi[x\mapsto f],s\models^{3}\varphi)=\top\) for some \(must\)-strategy \(f\in Str^{\mathit{must}}(s)\);
* \((G,\chi,s\models^{3}\phi)=\bot\) iff \((G,\chi[x\mapsto f],s\models^{3}\varphi)=\bot\) for all \(may\)-strategies \(f\in Str^{\mathit{may}}(s)\);
* otherwise, \((G,\chi,s\models^{3}\phi)=\mathbf{u}\).
* Dually, for \(\phi=\forall x\varphi\),
* \((G,\chi,s\models^{3}\phi)=\top\) iff \((G,\chi[x\mapsto f],s\models^{3}\varphi)=\top\) for all \(may\)-strategies \(f\in Str^{\mathit{may}}(s)\);
* \((G,\chi,s\models^{3}\phi)=\bot\) iff \((G,\chi[x\mapsto f],s\models^{3}\varphi)=\bot\) for some \(must\)-strategy \(f\in Str^{\mathit{must}}(s)\);
* otherwise, \((G,\chi,s\models^{3}\phi)=\mathbf{u}\).
* \((G,\chi,s\models^{3}(a,x)\varphi)=(G,\chi[a\mapsto\chi(x)],s\models^{3}\varphi)\).
* Finally, if the assignment \(\chi\) is also complete, we define:
* \((G,\chi,s\models^{3}X\varphi)=\top\) iff for all \(\pi\in\mathit{plays}^{\mathit{may}}(\chi,s)\), we have \((G,(\chi,\pi)^{1}\models^{3}\varphi)=\top\);
* \((G,\chi,s\models^{3}X\varphi)=\bot\) iff for some \(\pi\in\mathit{plays}^{\mathit{must}}(\chi,s)\), we have \((G,(\chi,\pi)^{1}\models^{3}\varphi)=\bot\);
* otherwise, \((G,\chi,s\models^{3}X\varphi)=\mathbf{u}\).
* \((G,\chi,s\models^{3}\varphi_{1}U\varphi_{2})=\top\) iff for all \(\pi\in\mathit{plays}^{\mathit{may}}(\chi,s)\), there is \(i\in\mathbb{N}\) such that \((G,(\chi,\pi)^{i}\models^{3}\varphi_{2})=\top\) and, for all \(j<i\), we have \((G,(\chi,\pi)^{j}\models^{3}\varphi_{1})=\top\);
* \((G,\chi,s\models^{3}\varphi_{1}U\varphi_{2})=\bot\) iff for some \(\pi\in\mathit{plays}^{\mathit{must}}(\chi,s)\) and all \(i\in\mathbb{N}\), we have \((G,(\chi,\pi)^{i}\models^{3}\varphi_{2})=\bot\) or there is \(j<i\) such that \((G,(\chi,\pi)^{j}\models^{3}\varphi_{1})=\bot\);
* otherwise, \((G,\chi,s\models^{3}\varphi_{1}U\varphi_{2})=\mathbf{u}\).
* \((G,\chi,s\models^{3}\varphi_{1}R\varphi_{2})=\top\) iff for all \(\pi\in\mathit{plays}^{\mathit{may}}(\chi,s)\) and all \(i\in\mathbb{N}\), we have \((G,(\chi,\pi)^{i}\models^{3}\varphi_{2})=\top\) or there exists \(j\leq i\) such that \((G,(\chi,\pi)^{j}\models^{3}\varphi_{1})=\top\);
* \((G,\chi,s\models^{3}\varphi_{1}R\varphi_{2})=\bot\) iff for some \(\pi\in\mathit{plays}^{\mathit{must}}(\chi,s)\) and some \(i\in\mathbb{N}\), we have \((G,(\chi,\pi)^{i}\models^{3}\varphi_{2})=\bot\) and, for all \(j\leq i\), \((G,(\chi,\pi)^{j}\models^{3}\varphi_{1})=\bot\);
* otherwise, \((G,\chi,s\models^{3}\varphi_{1}R\varphi_{2})=\mathbf{u}\).

Again, we can define the memoryless, three-valued satisfaction relation for SL by restricting the clauses for the operators \(\exists x\), \(\forall x\), and \((a,x)\) to memoryless strategies.
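On the Boolean fragment used here (conjunction, disjunction, and negation on literals only), Łukasiewicz's connectives coincide with Kleene's strong three-valued ones: conjunction is the minimum and disjunction the maximum under the order \(\bot<\mathbf{u}<\top\), and negation swaps \(\top\) and \(\bot\). A tiny sketch of these truth tables:

```python
# Truth values ordered F < U < T; 'and' is min, 'or' is max, 'not' swaps T and F.
ORDER = {"F": 0, "U": 1, "T": 2}

def and3(x: str, y: str) -> str:
    return min(x, y, key=ORDER.get)

def or3(x: str, y: str) -> str:
    return max(x, y, key=ORDER.get)

def not3(x: str) -> str:
    return {"T": "F", "F": "T", "U": "U"}[x]

assert and3("T", "U") == "U" and or3("F", "U") == "U" and not3("U") == "U"
```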
Similarly to Section 2, if \(\varphi\) is a sentence, then \((G,s\models^{3}\varphi)=(G,\chi,s\models^{3}\varphi)\) for any assignment \(\chi\), and \((G\models^{3}\varphi)=(G,s_{0}\models^{3}\varphi)\). We now show that our three-valued semantics in Def. 3.2 is a conservative extension of the standard two-valued interpretation in Sec. 2.

**Theorem 3.3** (Conservativeness).: _Let \(G\) be a standard CGS, that is, \(Act^{\mathit{may}}=Act^{\mathit{must}}\), \(\tau^{\mathit{may}}=\tau^{\mathit{must}}\) are functions, and the truth value of every atom is defined (i.e., it is equal to either \(\top\) or \(\bot\)). Then, for every formula \(\phi\) in SL,_ \[(G,\chi,s\models^{3}\phi)=\top\ \text{iff}\ (G,\chi,s)\models^{2}\phi,\qquad(G,\chi,s\models^{3}\phi)=\bot\ \text{iff}\ (G,\chi,s)\not\models^{2}\phi.\]

**Remark 3.4** (Model checking).: _For any syntactic fragment \(\mathcal{L}\) of SL, model checking of \(\mathcal{L}\) with 3-valued semantics can be reduced to 2-valued model checking of \(\mathcal{L}\) by a construction similar to [3, Theorem 4]. Note also that 2-valued model checking for \(\mathcal{L}\) is a special case of its 3-valued counterpart, due to Theorem 3.3. Thus, the decidability and complexity for 2-valued model checking in fragments of SL carry over to 3-valued verification._

## 4 Three-valued Abstraction for SL

Here, we define the 3-valued state abstraction for CGSs. The idea is to cluster the states of a CGS (called the _concrete model_) according to a given equivalence relation \(\approx\), e.g., one provided by a domain expert. Typically, two states are deemed equivalent if they agree on the evaluation of atoms, possibly just the atoms appearing in a given formula \(\phi\) to be checked. In some cases, such an equivalence relation might be too coarse, and therefore more domain-dependent information could be taken into account. Then, the sets of \(may\) (resp., \(must\)) actions and the \(may\) (resp., \(must\)) transitions are computed in such a way that they always over-approximate (resp., under-approximate) the actions and transitions in the concrete model. Formally, the abstraction is defined as follows.

**Definition 4.1** (Abstraction).: _Let \(G=\langle Ag,St,s_{0},Act,\tau,AP,V\rangle\) be a CGS, and \(\approx\subseteq St\times St\) an equivalence relation. We write \([s]\) for the equivalence class of \(\approx\) that contains \(s\)._
_The abstract model of \(G\) w.r.t. \(\approx\) is defined as the 3-valued CGS \(\mathcal{A}(G)=\langle\mathcal{A}(Ag),\mathcal{A}(St),\mathcal{A}(s_{0}),\mathcal{A}^{\mathit{may}}(Act),\mathcal{A}^{\mathit{must}}(Act),\mathcal{A}^{\mathit{may}}(\tau),\mathcal{A}^{\mathit{must}}(\tau),\mathcal{A}(AP),\mathcal{A}(V)\rangle\), with:_
* \(\mathcal{A}(Ag)=\mathit{Ag}\) and \(\mathcal{A}(AP)=AP\).
* \(\mathcal{A}(St)=\{[s]\mid s\in St\}\), with \(\mathcal{A}(s_{0})=[s_{0}]\).
* \(\mathcal{A}^{\mathit{may}}(Act)=Act\).
* \(\mathcal{A}^{\mathit{may}}(\tau)=\tau^{\mathit{may}}:\mathcal{A}(St)\times(\mathcal{A}^{\mathit{may}}(Act))^{|\mathcal{A}(Ag)|}\to 2^{\mathcal{A}(St)}\) such that \(\tau^{\mathit{may}}([s],\vec{\alpha})=\{[s_{succ}]\mid\exists s^{\prime}\in[s]\ \exists s^{\prime}_{succ}\in[s_{succ}]\,.\ s^{\prime}_{succ}\in\tau(s^{\prime},\vec{\alpha})\}\).
* \(\mathcal{A}^{\mathit{must}}(\tau)=\tau^{\mathit{must}}:\mathcal{A}(St)\times(\mathcal{A}^{\mathit{may}}(Act))^{|\mathcal{A}(Ag)|}\to 2^{\mathcal{A}(St)}\) such that \(\tau^{\mathit{must}}([s],\vec{\alpha})=\{[s_{succ}]\mid\forall s^{\prime}\in[s]\ \exists s^{\prime}_{succ}\in[s_{succ}]\,.\ s^{\prime}_{succ}\in\tau(s^{\prime},\vec{\alpha})\}\).
* \(\mathcal{A}^{\mathit{must}}(Act)\) is a maximal3 set \(Act^{\mathit{must}}\subseteq Act\) such that \(\forall s\in St\ \forall\vec{\alpha}\in(Act^{\mathit{must}})^{|\mathcal{A}(Ag)|}\,.\ \tau^{\mathit{must}}([s],\vec{\alpha})\neq\emptyset\). Note that a unique maximal set does not always exist. In such cases, a natural heuristic is to choose a maximal subset of actions with the largest cardinality, breaking ties lexicographically in case there are still multiple solutions.

Footnote 3: with respect to set inclusion.

* \(\mathcal{A}(V)([s],p)=\left\{\begin{array}{ll}\top&\text{if }V(s^{\prime},p)=\top\text{ for all }s^{\prime}\in[s]\\ \bot&\text{if }V(s^{\prime},p)=\bot\text{ for all }s^{\prime}\in[s]\\ \mathbf{u}&\text{otherwise.}\end{array}\right.\)

Note that \(\mathcal{A}(G)\) can be computed in polynomial time w.r.t. the size of \(G\), assuming the above heuristic for \(\mathcal{A}^{\mathit{must}}(Act)\) (see the sketch below). We now prove that the abstraction preserves classical truth values. Given a strategy \(f\) in \(G\), we define the set of corresponding \(may\)-strategies in \(\mathcal{A}(G)\) by \(abstr^{\mathit{may}}(f)=\{f^{\dagger}\mid f^{\dagger}([s_{0}],\ldots,[s_{n}])=f(s^{\prime}_{0},\ldots,s^{\prime}_{n})\text{ for some }s^{\prime}_{0}\in[s_{0}],\ldots,s^{\prime}_{n}\in[s_{n}]\}\). Moreover, \(abstr^{\mathit{must}}(f)=abstr^{\mathit{may}}(f)\cap Str^{\mathit{must}}\). Note that \(abstr^{\mathit{may}}(f)\) is always nonempty, while \(abstr^{\mathit{must}}(f)\) is either empty or a singleton. Conversely, given a (\(may\) or \(must\)) strategy \(f\) in \(\mathcal{A}(G)\), we define the set of corresponding concrete strategies in \(G\) by \(concr(f)=\{f^{*}\mid f^{*}(s_{0},\ldots,s_{n})=f([s_{0}],\ldots,[s_{n}])\}\). Notice that \(concr(f)\) is always a singleton for \(must\)-strategies, and either empty or a singleton for \(may\)-strategies. We lift \(abstr^{\mathit{may}}\), \(abstr^{\mathit{must}}\), and \(concr\) to sets of strategies in the standard way. Clearly, \(f\in concr(abstr^{\mathit{may}}(f))\) for any concrete strategy \(f\), and \(f\in abstr^{\mathit{must}}(concr(f))\) for any \(must\)-strategy \(f\). We lift the notation to assignments analogously. Observe that, in every \(\chi^{*}\in concr(\chi[x\mapsto f])\), \(x\) is assigned a strategy \(f^{*}\in concr(f)\).
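Def. 4.1 can be implemented directly. The sketch below is ours: it reuses the `CGS` class from Sec. 2, takes a key function inducing \(\approx\), and computes the abstract state space, the may/must transition relations, and the three-valued labelling; the selection of a maximal \(Act^{\mathit{must}}\) is omitted for brevity, and the valuation is assumed total.

```python
from collections import defaultdict
from itertools import product

def abstract(g: CGS, key) -> dict:
    """Build the 3-valued abstraction of Def. 4.1; key(s) induces the equivalence."""
    cls = {s: key(s) for s in g.states}           # state -> its class [s]
    blocks = defaultdict(set)
    for s, c in cls.items():
        blocks[c].add(s)

    n = len(g.agents)
    tau_may, tau_must = defaultdict(set), defaultdict(set)
    for c, joint in product(blocks, product(sorted(g.actions), repeat=n)):
        # may: SOME concrete state in [s] reaches some state in [s_succ]
        for s in blocks[c]:
            for t in g.tau[(s, joint)]:
                tau_may[(c, joint)].add(cls[t])
        # must: EVERY concrete state in [s] reaches some state in [s_succ]
        for d in set(tau_may[(c, joint)]):
            if all(any(cls[t] == d for t in g.tau[(s, joint)]) for s in blocks[c]):
                tau_must[(c, joint)].add(d)

    def label(c, p):
        vals = {g.valuation[(s, p)] for s in blocks[c]}
        return "T" if vals == {True} else "F" if vals == {False} else "U"

    atoms = {p for (_, p) in g.valuation}
    return {"states": set(blocks), "initial": cls[g.initial],
            "tau_may": dict(tau_may), "tau_must": dict(tau_must),
            "valuation": {(c, p): label(c, p) for c in blocks for p in atoms}}

# e.g., collapsing the toy model from Sec. 2 into a single abstract state:
# abstract(toy, key=lambda s: "all")
```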
**Theorem 4.2** (Preservation).: _Let \(G\) be a CGS and \(\mathcal{A}(G)\) its abstraction induced by the equivalence relation \(\approx\). Then, for every formula \(\phi\) in SL, every (may or must) assignment \(\chi\) and state \(s\) in \(\mathcal{A}(G)\), every assignment \(\chi^{*}\in concr(\chi)\), and every state \(t\in s\) in \(G\), it holds that:_

\[((\mathcal{A}(G),\chi,s)\models^{3}\phi)=\top \Rightarrow (G,\chi^{*},t)\models^{2}\phi \tag{3}\]
\[((\mathcal{A}(G),\chi,s)\models^{3}\phi)=\bot \Rightarrow (G,\chi^{*},t)\not\models^{2}\phi \tag{4}\]

Proof.: The proof is by induction on the structure of \(\phi\).

Induction base (\(\phi=p\)): \(((\mathcal{A}(G),\chi,s)\models^{3}\phi)=\top\) iff \(\mathcal{A}(V)(s,p)=\top\), iff for all \(t\in s\), \(V(t,p)=\top\), that is, \((G,\chi^{*},t)\models^{2}\phi\). The case for \(\bot\) is proved similarly. The case of \(\phi=\neg p\) is analogous.

Case \(\phi=\psi_{1}\wedge\psi_{2}\): \(((\mathcal{A}(G),\chi,s)\models^{3}\phi)=\top\) iff \(((\mathcal{A}(G),\chi,s)\models^{3}\psi_{1})=\top\) and \(((\mathcal{A}(G),\chi,s)\models^{3}\psi_{2})=\top\). By induction, for all \(\chi^{*}\in concr(\chi)\) and \(t\in s\), \((G,\chi^{*},t)\models^{2}\psi_{1}\) and \((G,\chi^{*},t)\models^{2}\psi_{2}\). Thus, \((G,\chi^{*},t)\models^{2}\psi_{1}\wedge\psi_{2}\). Further, \(((\mathcal{A}(G),\chi,s)\models^{3}\phi)=\bot\) iff \(((\mathcal{A}(G),\chi,s)\models^{3}\psi_{1})=\bot\) or \(((\mathcal{A}(G),\chi,s)\models^{3}\psi_{2})=\bot\). By induction, for all \(\chi^{*}\in concr(\chi)\) and \(t\in s\), \((G,\chi^{*},t)\not\models^{2}\psi_{1}\), or for all \(\chi^{*}\in concr(\chi)\) and \(t\in s\), \((G,\chi^{*},t)\not\models^{2}\psi_{2}\). Thus, for all \(\chi^{*}\in concr(\chi)\) and \(t\in s\), \((G,\chi^{*},t)\not\models^{2}\psi_{1}\wedge\psi_{2}\). The case of \(\phi=\psi_{1}\vee\psi_{2}\) is analogous.

Case \(\phi=\exists x\psi\): \((\mathcal{A}(G),\chi,s\models^{3}\phi)=\top\) iff for some \(must\)-strategy \(f\in Str^{\mathit{must}}(s)\), \((\mathcal{A}(G),\chi[x\mapsto f],s\models^{3}\psi)=\top\). By induction, for all \(\chi^{*}\in concr(\chi[x\mapsto f])\) and \(t\in s\), it holds that \((G,\chi^{*},t)\models^{2}\psi\). Assume that \(concr(\chi[x\mapsto f])\) is nonempty, and consider the sole concrete strategy \(f^{*}\in concr(f)\). Clearly, every such \(\chi^{*}\) is of the form \(\chi^{\prime}[x\mapsto f^{*}]\) for some \(\chi^{\prime}\in concr(\chi)\), so \((G,\chi^{\prime},t)\models^{2}\exists x\psi\) for all \(\chi^{\prime}\in concr(\chi)\) and \(t\in s\); if \(concr(\chi[x\mapsto f])\) is empty, the statement holds vacuously. Further, \((\mathcal{A}(G),\chi,s\models^{3}\phi)=\bot\) iff \((\mathcal{A}(G),\chi[x\mapsto g^{\dagger}],s\models^{3}\psi)=\bot\) for every \(may\)-strategy \(g^{\dagger}\in Str^{\mathit{may}}(s)\). Take any concrete strategy \(g\) in \(G\) and any \(g^{\dagger}\in abstr^{\mathit{may}}(g)\), so that \((\mathcal{A}(G),\chi[x\mapsto g^{\dagger}],s\models^{3}\psi)=\bot\).
Take any state \(t^{\prime}\in s\) and assignment \(\chi^{\prime}\) in \(G\) such that \(\chi^{\prime}_{\pi^{*}_{\leq 1}}=\chi^{*}\) for some \(\pi^{*}\in\mathit{plays}(\chi^{\prime},t^{\prime})\). Since \(\mathit{may}\) paths in \(\mathcal{A}(G)\) overapproximate paths in \(G\), we get that \((G,\chi^{\prime},t^{\prime})\models^{2}X\psi\). Further, \((\mathcal{A}(G),\chi,s\models^{3}\phi)=\)\(\perp\) iff for some \(\pi\in\mathit{plays}^{\mathit{must}}(\chi,s)\), we have \((G,(\chi,\pi)^{1}\models^{3}\varphi)=\)\(\perp\). By induction, there is \(\pi\in\mathit{plays}^{\mathit{must}}(\chi,s)\) such that \((G,\chi^{*},t)\not\models^{2}\psi\) for every \(\chi^{*}\in\mathit{concr}(\chi_{\pi_{\leq 1}})\) and \(t\in(\pi)_{1}\). Take any state \(t^{\prime}\in s\) and assignment \(\chi^{\prime}\) in \(G\). Since \(\mathit{must}\) paths in \(\mathcal{A}(G)\) underapproximate paths in \(G\), there must be a path \(\pi^{*}\in\mathit{plays}(\chi^{\prime},t^{\prime})\) such that \(\chi^{\prime}_{\pi^{*}_{\leq 1}}=\chi^{*}\). Thus, \((G,\chi^{\prime},t^{\prime})\not\models^{2}X\psi\). The cases \(\underline{\phi}=\psi_{1}U\psi_{2}\) and \(\underline{\phi}=\psi_{1}R\psi_{2}\) are analogous. **Corollary 4.3**.: _For any CGS \(G\) and SL formula \(\phi\):_ \[(\mathcal{A}(G)\models^{3}\phi)=\top \Rightarrow G\models^{2}\phi\] \[(\mathcal{A}(G)\models^{3}\phi)=\bot \Rightarrow G\not\models^{2}\phi\] It is easy to see that the above results hold also for the semantic variant of SL based on memoryless strategies. ## 5 Implementation We implemented a prototype tool in Java4, which accepts CGSs and SL properties as input, on top of MCMAS, the _de facto_ standard model checker for MAS [16]. Specifically, our tool exploits MCMAS as a black-box, for performing the actual verification step. In fact, our tool focuses on the abstraction procedure for the verification of SL formulas (as presented in this paper), while their verification is obtained through MCMAS. Footnote 4: [https://github.com/AngeloFerrando/3-valuedSL](https://github.com/AngeloFerrando/3-valuedSL) From a practical perspective, there are various aspects to report, that can be summarised as (i) input/output of the tool; (ii) abstraction of the CGS; (iii) verification in MCMAS. (i) The implementation allows for the definition of CGSs as external JSON5 formatted input files. In this way, any end user may easily interact with the tool, independently from the CGS's internal representation (_i.e._, the corresponding data structures). As CGSs, also the definition of the SL formula to check is handled as an external parameter to the tool. Once the verification ends, the outcome is returned to the user. Footnote 5: [https://www.json.org/](https://www.json.org/) (ii) As presented in the paper, in order to improve the verification performance, the CGS is first abstracted. The abstraction is obtained by clustering multiple states into a single abstract state of the CGS. This step is based on an equivalence relation (\(\approx\)), as presented in Definition 4.1. An abstract state may be labeled by atoms. As presented in Definition 4.1, an atom holds (resp. does not hold) in the abstract state iff it holds (resp. does not hold) in all the concrete states which have been collapsed into the abstract state. Otherwise, the atom is considered undefined. Note that, since atoms can hold, not hold, or being undefined in a state, they are explicitly labeled in each state. 
In practice, this is obtained by duplicating each atom \(p\) into atoms \(p_{\top}\) and \(p_{\bot}\), which correspond to \(p\) holding or not holding in a certain state of the abstract CGS; undefinedness is marked by having neither \(p_{\top}\) nor \(p_{\bot}\) present in the abstract state.

(iii) The abstract CGS is then verified in MCMAS against an SL formula. In more detail, our tool exploits the MCMAS extension for SL[1G], i.e., the _one goal_ fragment [3], and the MCMAS extension for SLK, i.e., an epistemic extension of SL [3]. Note that, to make use of the MCMAS model checker, our CGSs need first to be translated into Interpreted Systems [10]. In fact, MCMAS does not support CGSs, and it expects Interpreted Systems expressed using a domain-specific language called the Interpreted Systems Programming Language (ISPL). Thus, a pre-processing step before calling MCMAS is always required, where the CGS of interest is first automatically translated into its ISPL representation. This is only a technical detail, since CGSs and Interpreted Systems are equally expressive [1, 2, 1]. It is important to report that the ISPL generation is performed on standard CGSs, not on their abstraction. Indeed, the abstract CGSs as described in Definition 4.1 cannot be used in MCMAS straight away, but need to be reprocessed first. To generate a CGS which can then be verified in MCMAS, the tool splits the 3-valued CGS into two CGSs. Such a split is determined by the SL formula under evaluation; that is, given an SL formula \(\varphi\), we extract two sets of agents, \(E\) and \(U\), whose strategies are only existentially and universally quantified in \(\varphi\), respectively (see the sketch below). By using these two sets, we split the 3-valued CGS into two CGSs: one CGS where agents in \(E\) use \(must\)-strategies, while agents in \(U\) use \(may\)-strategies; and one CGS where agents in \(E\) use \(may\)-strategies, while agents in \(U\) use \(must\)-strategies. The first CGS can be used to prove the satisfaction of \(\varphi\), while the second CGS can be used to prove the violation of \(\varphi\). This follows from Definition 3.2, third and fourth bullet points.

As a consequence of how the verification is performed in practice, we remark on an important difference between the theory presented in this paper and its implementation: the implementation handles SL formulas with arbitrary alternation of universal (\(\forall x\)) and existential (\(\exists x\)) quantifiers, as long as for each agent \(a\) in the formula there is one single binding \((a,x)\). Even though at the theoretical level our abstraction method can handle all SL formulas, at the implementation level this is not the case. In fact, our tool is based on MCMAS, and because of that, we cannot handle formulas where the agents need to swap between universally and existentially quantified strategies. This would require modifying MCMAS internally, which we leave as future work.
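A minimal sketch of the splitting step just described, under our own simplified representation (not the tool's actual data structures): a formula's quantifier prefix is a list of ('E'|'A', variable) pairs, and bindings map each agent to a variable, one binding per agent as the tool requires.

```python
def split_agents(prefix, bindings):
    """prefix: [('E', 'x'), ('A', 'y1'), ...]; bindings: {agent: variable}."""
    quant = {v: q for q, v in prefix}
    E = {a for a, v in bindings.items() if quant[v] == "E"}  # must-strategies to prove sat
    U = {a for a, v in bindings.items() if quant[v] == "A"}  # may-strategies to prove sat
    return E, U

# For a two-process instance of the scheduler formula of Sec. 6, where every
# variable is universally quantified, all agents end up in U:
E, U = split_agents([("A", "x"), ("A", "y1"), ("A", "y2")],
                    {"Arbiter": "x", "P1": "y1", "P2": "y2"})
print(sorted(E), sorted(U))   # -> [] ['Arbiter', 'P1', 'P2']
```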
## 6 Experiments

We carried out the experiments on a machine with the following specifications: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, 4 cores 8 threads, 16 GB RAM DDR4. The case study we experimented on consists of a scheduler, where \(N\) agents, _i.e._, processes (called \(P_{i}\) for \(1\leq i\leq N\)) compete for CPU time, while an \(Arbiter\) agent decides which process to grant access to (one at a time). The full description of the example can be found in [3]. The corresponding CGS can be parameterised over the number \(N\) of processes. Naturally, such a parameter largely influences the size and complexity of the resulting CGS. Table 1 reports the experimental results we obtained by applying our tool to the scheduler case study. We considered the verification of the same SL formula \(\varphi\) verified in [3], that is:

\[\varphi=\forall x,\vec{y}\,(Arbiter,x)(P_{1},y_{1})\ldots(P_{n},y_{n})\,G\,\neg\bigvee_{i=1}^{n}\bigvee_{j=i+1}^{n}(rs_{i}\wedge rs_{j})\]

Intuitively, \(\varphi\) asserts that at most one process (\(P_{i}\)) owns the resource (\(rs\)) at any given point in time. In Table 1, each row refers to a fixed number of processes, from 2 to 9, used to generate the corresponding CGS. Each row also reports the number of states and transitions of the CGS, and the time required to perform its verification in MCMAS, both on the original CGS and on its 3-valued abstraction, for comparison. For the latter, the time required to generate the abstraction is also reported. For the experiments with the scheduler, the abstraction is assumed to be guided by an expert of the system. In more detail, all states where at least one process is waiting to be selected by the scheduler are clustered together. This choice, as apparent in Table 1, largely reduces the number of states and transitions of the CGS. Nonetheless, this does not prevent the verification process from correctly concluding the satisfaction of \(\varphi\) on both the CGS and its 3-valued version, _i.e._, the abstraction does not remove any information necessary to determine the satisfaction of \(\varphi\). Table 1 also reports the execution time required for the actual verification of both the CGS and its 3-valued abstraction. As we can observe, without the abstraction step, the verification of the CGS times out as soon as 4 processes are considered. In fact, MCMAS cannot model check \(\varphi\) in less than 3 hours, which was set as the time out (both for the SL[1G] and SLK extensions of MCMAS). Instead, thanks to the abstraction, the verification can be performed for up to 9 processes (a more realistic number). Note that the verification of the 3-valued CGS could have been performed for even larger numbers of processes. However, the CGS with 10 processes did not fit into the available memory of the machine used for the experiments, so it was not possible to apply our technique to generate its 3-valued abstraction. Nonetheless, on a machine with more memory, we expect the tool to handle the case with \(10\) processes via abstraction as well. Figure 1 reports the data compression obtained in the scheduler case study. The compression obtained via abstraction is immediately apparent: the larger the number of processes involved, the more significant the compression. Note that, for more than 6 processes, the abstraction produces a CGS with \(\sim\)\(99\%\) fewer states and transitions. Besides \(\varphi\), we experimented with other specifications as well. Specifically, we carried out experiments over a large set of randomly generated SL formulas. The goal of these experiments is to understand how often our tool returns a conclusive answer (_i.e._, not \(\mathbf{u}\)). We automatically synthesised 10,000 different SL formulas and verified them in the scheduler case study, keeping the same abstraction as for Table 1. Over these 10,000 SL formulas, the tool was capable of providing a defined truth value (either true or false) in 83% of the cases.
Of course, this is a preliminary evaluation, which needs to be corroborated through additional experiments, also involving further real-world scenarios. Nonetheless, the results we obtained are promising, and allow us to empirically show the effectiveness of our approach, not only from a data-compression perspective, but also from a computational one.

## 7 Conclusion

The high complexity of the verification problem for Strategy Logic hinders the development of practical model checking tools and therefore its application in critical, real-life scenarios. As a consequence, it is of utmost importance to develop techniques to alleviate this computational burden and allow the use of Strategy Logic in concrete use cases, such as the scheduler scenario analysed here. This contribution is meant to be a first step in this direction.

Table 1: Experimental results for the scheduler case study (T.O. stands for Time Out).

| Processes | CGS states | CGS transitions | CGS verif. time [sec] (SL[1G]) | CGS verif. time [sec] (SLK) | 3-valued states | 3-valued transitions (must) | 3-valued transitions (may) | Abstraction time [sec] | 3-valued verif. time [sec] (SL[1G]) | 3-valued verif. time [sec] (SLK) |
|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 2 | 9 | 40 | 0.48 | 0.18 | 5 | 16 | 18 | 0.01 | 0.10 | 0.03 |
| 3 | 21 | 232 | 3725.56 | 2863.75 | 6 | 24 | 27 | 0.03 | 0.12 | 0.13 |
| 4 | 49 | 1376 | T.O. | T.O. | 7 | 34 | 38 | 0.10 | 0.24 | 0.19 |
| 5 | 113 | 7994 | T.O. | T.O. | 8 | 46 | 51 | 0.31 | 0.72 | 0.69 |
| 6 | 257 | 43520 | T.O. | T.O. | 9 | 60 | 66 | 1.24 | 2.08 | 1.12 |
| 7 | 577 | 230528 | T.O. | T.O. | 10 | 76 | 83 | 10.88 | 5.66 | 4.99 |
| 8 | 1281 | 1182208 | T.O. | T.O. | 11 | 94 | 102 | 107.89 | 8.40 | 6.67 |
| 9 | 2817 | 5903872 | T.O. | T.O. | 12 | 114 | 123 | 1087.14 | 29.37 | 26.81 |

Figure 1: CGS compression in the scheduler case study.

## Acknowledgements

W. Jamroga acknowledges the support of NCBR Poland, NCN Poland, and FNR Luxembourg under projects STV (POLLUX-VII/1/2019), SpaceVote (POLLUX-XI/14/SpaceVote/2023), and SAI (2020/02/Y/ST6/00064). A. Murano acknowledges the support of the PNRR FAIR project, the INdAM project "Strategic Reasoning in Mechanism Design", and the PRIN 2020 Project RIPER.
2308.04991
Rejuvenation and memory effects in active glasses induced by thermal and active cycling
It has recently been shown that thermal active glasses can display physical aging behavior comparable to that of passive glasses, although there are some notable distinctions due to the intrinsic non-equilibrium nature of active matter. The question whether active disordered materials can also exhibit rejuvenation and memory effects, akin to the phenomenology of e.g. spin glasses, has thus far remained unexplored. Here we address this question by numerical simulations of active glasses composed of active Brownian particles that are subjected to a thermal or active cycling protocol. We find that an active system undergoing thermal cycling indeed shows rejuvenation and memory effects, with the strength of rejuvenation depending on the persistence time. In contrast, however, a passive Brownian system subjected to the same thermal cycle lacks the rejuvenation effect. We attribute this to the enhanced motility of active particles, which enables them to escape from their cages and restart aging at the new temperature, thus rejuvenating the material. Finally, we also demonstrate that both rejuvenation and memory effects can be induced by an activity cycle which quenches the material from an active to passive glass and back, providing a unique means to rejuvenate active matter.
Giulia Janzen, Liesbeth M. C. Janssen
2023-08-09T14:45:08Z
http://arxiv.org/abs/2308.04991v1
# Rejuvenation and memory effects in active glasses induced by thermal and active cycling

###### Abstract

It has recently been shown that thermal active glasses can display physical aging behavior comparable to that of passive glasses, although there are some notable distinctions due to the intrinsic non-equilibrium nature of active matter. The question whether active disordered materials can also exhibit rejuvenation and memory effects, akin to the phenomenology of e.g. spin glasses, has thus far remained unexplored. Here we address this question by numerical simulations of active glasses composed of active Brownian particles that are subjected to a thermal or active cycling protocol. We find that an active system undergoing thermal cycling indeed shows rejuvenation and memory effects, with the strength of rejuvenation depending on the persistence time. In contrast, however, a passive Brownian system subjected to the same thermal cycle lacks the rejuvenation effect. We attribute this to the enhanced motility of active particles, which enables them to escape from their cages and restart aging at the new temperature, thus rejuvenating the material. Finally, we also demonstrate that both rejuvenation and memory effects can be induced by an activity cycle which quenches the material from an active to passive glass and back, providing a unique means to rejuvenate active matter.

## I Introduction

The dynamics of many densely disordered systems, i.e. glassy materials, is characterized by an extremely slow structural relaxation that is often explicitly dependent on the age of the material. This age (or waiting time) dependence is called physical aging and is particularly well studied for glasses following a sudden quench toward a lower temperature [1; 2; 3; 4]. After such a temperature quench, the structural relaxation time tends to increase with the age of the material, typically as a power law [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. Theoretically, this aging behavior can be understood as a gradual approach of the material toward lower-energy equilibrium states [3]. While a single thermal quench is one of the most popular protocols to study the aging behavior of glassy systems, other protocols, such as thermal cycling involving repeated temperature changes, may lead to even richer dynamical behaviors and offer more versatility to characterize a material's out-of-equilibrium dynamics. For spin glasses, the effect of thermal cycling has been studied theoretically, experimentally, and in numerical simulations, disclosing new insights into their dynamical behavior [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. Briefly, the protocol consists of three steps: firstly, a high-temperature liquid is cooled to \(T_{q_{1}}<T_{g}\) (where \(T_{g}\) is the glass transition temperature); after a certain interval, the temperature is quenched once more to a lower temperature \(T_{q_{2}}<T_{q_{1}}\); and finally, the temperature is reheated to \(T_{q_{1}}\). During the first step, the system undergoes aging. In the subsequent step, instead of the relaxation process slowing down further, the aging process restarts at the new temperature \(T_{q_{2}}\). This phenomenon is commonly referred to as 'rejuvenation'. The relaxation at \(T_{q_{2}}\) will be identical to that obtained from a direct quench at this temperature if the temperature jump is sufficiently large [21; 24; 29; 34; 35].
In the third step of the thermal cycling protocol, when the temperature is brought back to \(T_{q_{1}}\), the system exhibits _full memory_ if the relaxation restarts exactly from the point reached before the second step. However, achieving full memory requires a sufficiently large temperature difference, denoted as \(\Delta T\). If \(\Delta T\) is not large enough, the behavior observed in the third step will be influenced by the aging that occurred at \(T_{q_{2}}\) [36; 37]. Several studies have explored whether rejuvenation and memory, as observed in spin glasses, can also occur in structural glasses. Recent computer simulations of a continuously polydisperse model glassformer have shown that temperature cycling indeed leads to rejuvenation and memory effects, provided that both the duration of each step in the cycle and the temperature jumps between the steps are sufficiently large [38]. Similarly, in numerical simulations of binary Lennard-Jones mixtures, it was found that oscillatory temperature variations may induce rejuvenation, depending on the cooling rate and cycling amplitude [39; 40]. Numerous studies have also explored emergent non-equilibrium phenomena in metallic glasses exposed to oscillatory temperature variations [41; 42; 43]. Most notably, subjecting metallic glasses to cryogenic thermal cycling can induce rejuvenation and improve the plasticity of the material [44; 45; 46]. The study of glassy phenomena has recently seen a renewed surge of interest through the advent of active matter, i.e., non-equilibrium systems composed of self-propelling particles. Both theory and simulations [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59], as well as experiments [60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70], have shown that dense active matter shares many similarities with conventional glassy systems, including anomalously slow dynamics and aging. Thus far, the physical aging behavior of active systems has been explored in a small number of simulation studies for thermal [71] and athermal active glasses [72; 73] using a single temperature or activity quench, respectively. Notably, for active thermal glasses, the aging relaxation dynamics was found to be governed by a time-dependent competition between thermal and active effects, with the active particles' persistence time controlling both the time scale and magnitude of the activity-enhanced speedup in dynamics [71]. However, it remains unclear how a protocol such as a thermal cycle would impact the dynamics of an active glass, and whether activity itself could be used to design a new non-equilibrium protocol, such as an activity cycle. Here we investigate how cyclic protocols affect the dynamics of structural active glasses. We find that, in contrast to our passive reference sample, active glasses do exhibit rejuvenation under a simple temperature cycle. Importantly, the strength of this thermally induced rejuvenation effect depends on the active particles' persistence time. We also observe a clear memory effect that, unlike rejuvenation, is more easily observed and independent of the persistence time. Moreover, we introduce a cyclic protocol unique to active matter, in which an active sample is made passive and then active again. By applying this protocol, we find that the rich out-of-equilibrium dynamics observed in the temperature cycle protocol becomes even more pronounced in the activity cycle.

## II Methods

### Simulation model

We study a two-dimensional (2D) binary mixture of thermal active Brownian particles (ABPs).
The overdamped equations of motion for each particle \(i\) are given by

\[\gamma\,\dot{\mathbf{r}}_{i}=\sum_{i\neq j=1}^{N}\mathbf{f}_{ij}+f\,\mathbf{n}_{i}+\sqrt{2D_{T}}\,\mathbf{\eta} \tag{1}\]

\[\dot{\theta}_{i}=\sqrt{2D_{r}}\,\eta_{\theta} \tag{2}\]

where \(\mathbf{r}_{i}=(x_{i},y_{i})\) and \(\theta_{i}\) represent the particle's spatial and rotational coordinates, respectively. The dots denote a time derivative. Thermal noise is modeled as an independent Gaussian stochastic process, \(\mathbf{\eta}=(\eta_{x},\eta_{y})\), with zero mean and variance \(\delta(t-t^{\prime})\) (the noise strength is carried by the prefactor \(\sqrt{2D_{T}}\) in Eq. 1), where \(D_{T}=k_{B}T/\gamma\) with \(k_{B}\) being the Boltzmann constant, \(T\) the temperature, and \(\gamma\) the friction coefficient. The rotational noise \(\eta_{\theta}\) is a Gaussian stochastic process with zero mean and variance \(\delta(t-t^{\prime})\). The factors \(D_{T}\) and \(D_{r}\) represent the translational and rotational diffusion constants, respectively, and the ABP persistence time is defined as \(\tau_{r}=D_{r}^{-1}\). The constant self-propulsion speed \(f/\gamma\) is applied to each ABP along a direction \(\mathbf{n}_{i}=(\cos\theta_{i},\sin\theta_{i})\). Note that when the active force \(f\) is equal to zero, Eq. 1 reduces to the equation of motion for a passive Brownian particle. Finally, \(\mathbf{f}_{ij}=-\nabla_{i}V(r_{ij})\) is the interaction force between particles \(i\) and \(j\), where \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\) and \(V\) is the Lennard-Jones potential, truncated at the cutoff distance \(r_{ij}=2.5\sigma_{ij}\) and set to zero beyond it. In order to prevent crystallization, we use the parameters of the 2D binary Kob-Andersen mixture [74]: \(A=65\%\), \(B=35\%\), \(\epsilon_{AA}=1\), \(\epsilon_{BB}=0.5\epsilon_{AA}\), \(\epsilon_{AB}=1.5\epsilon_{AA}\), \(\sigma_{AA}=1\), \(\sigma_{BB}=0.88\sigma_{AA}\) and \(\sigma_{AB}=0.8\sigma_{AA}\). We set the density to \(\rho=1.2\), the number of particles to \(N=10\,000\) and \(D_{T}=\gamma=1\). Results are in reduced units, where \(\sigma_{AA}\), \(\epsilon_{AA}\), \(\frac{\sigma_{AA}^{2}}{\epsilon_{AA}}\), and \(\frac{\epsilon_{AA}}{k_{B}}\) are the units of length, energy, time, and temperature, respectively. Simulations were performed using LAMMPS [75] by solving Eqs. 1 and 2 via the Euler-Maruyama method with a step size \(\delta t=10^{-4}\).

### Protocols

To induce possible rejuvenation and memory effects, we expose our system to two different protocols: thermal and active cycling. The thermal cycling protocol is applied to both an active system (\(f=0.5\) and \(\tau_{r}=1,10\)) and a passive system (\(f=0\)) for comparison. In both cases we prepare 1000 independent configurations, which we first allow to equilibrate at a high temperature \(T_{i}=1\). We note that this relatively large number of independent trajectories is needed to ensure sufficient statistical quality of the results. Once the equilibration process is completed, we quench the temperature below the glass transition temperature to \(T_{q_{1}}=0.25<T_{g}\), where \(T_{g}\) is approximately 0.4 or 0.3 for a passive and active system, respectively [71; 76; 77]. The system is then allowed to evolve at \(T_{q_{1}}\) for a duration of \(t_{1}\). This initial aging step is followed by another quench to \(T_{q_{2}}=0.1<T_{q_{1}}\), after which the system is allowed to evolve for a time \(t_{2}\). Finally, we increase the temperature back to \(T_{q_{1}}\) and allow the system to evolve for a duration of \(t_{3}\).
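For illustration, here is a minimal sketch (in Python rather than the LAMMPS setup used in this work, with the pair forces \(\mathbf{f}_{ij}\) omitted for brevity and all names and defaults merely indicative) of an Euler-Maruyama update of Eqs. (1)-(2), driven by the three-step thermal-cycle schedule described above:

```python
import numpy as np

# Minimal sketch (not the LAMMPS setup used in this work): Euler-Maruyama
# integration of Eqs. (1)-(2) for N ABPs, with the pair forces f_ij omitted
# for brevity, driven by the three-step thermal-cycle schedule of Fig. 1.
# All names and defaults are illustrative; gamma = k_B = 1 in reduced units.

def temperature_schedule(t, T_q1=0.25, T_q2=0.1, t1=1000.0, t2=1000.0):
    """Three-step thermal cycle: age at T_q1, quench to T_q2, reheat to T_q1."""
    if t < t1:
        return T_q1              # step 1: aging after the initial quench
    if t < t1 + t2:
        return T_q2              # step 2: second quench (rejuvenation probe)
    return T_q1                  # step 3: reheating (memory probe)

def euler_maruyama_step(r, theta, t, dt=1e-4, f=0.5, tau_r=10.0, gamma=1.0):
    """One update of positions r (shape (N, 2)) and orientations theta (shape (N,))."""
    N = r.shape[0]
    D_T = temperature_schedule(t) / gamma   # D_T = k_B T / gamma
    D_r = 1.0 / tau_r                       # persistence time tau_r = 1 / D_r
    n = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # propulsion directions
    r = r + (f / gamma) * n * dt + np.sqrt(2.0 * D_T * dt) * np.random.randn(N, 2)
    theta = theta + np.sqrt(2.0 * D_r * dt) * np.random.randn(N)
    return r, theta
```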
Figure 1: Cyclic protocols used in this work to induce aging, rejuvenation, and memory in active systems. The left axis represents a thermal cycle and the right axis shows an activity cycle.

After some testing, we have found that \(t_{1}=t_{2}=t_{3}=500\) or \(1000\) provides a reasonable time frame to probe possible rejuvenation and memory effects. The left axis of Fig. 1 illustrates the temperature cycle protocol used in our simulations. For the activity cycling protocol, we control the active system's behavior by quenching the persistence time \(\tau_{r}\). To achieve this, we prepare \(1000\) independent configurations at a temperature of \(T=0.1\) or \(0.25\) and an active force of \(f=0.5\). We then equilibrate the system at a high persistence time of \(\tau_{r_{i}}=100\) before quenching the persistence time to \(\tau_{r_{1}}=10\). We subsequently allow the system to evolve for a time interval \(t_{1}\). After this initial aging step, we quench the persistence time once again to \(\tau_{r_{2}}=0\) to obtain a passive system. The system then evolves for a time \(t_{2}\). Finally, we change the persistence time back to \(\tau_{r_{1}}=10\) and we let the system evolve for a time interval of \(t_{3}\) (right axis of Fig. 1). In all cases we use \(t_{1}=t_{2}=t_{3}=500\). We note that, as an alternative active cycling protocol, one could also quench the self-propulsion force \(f\). We have verified that by using the self-propulsion force as the control parameter instead of the persistence time, similar results are found (see Supplementary Material). To probe possible rejuvenation and memory effects, we analyze the mean-squared displacement \(\langle\delta r^{2}(t_{w},t+t_{w})\rangle\) as a function of time \(t\). The time \(t\) ranges from \(0\) to the duration of the protocol's steps, which is either \(500\) or \(1000\). The waiting time \(t_{w}\) is defined as the time elapsed after the start of the respective protocol. Explicitly, in the first step or in a direct quench, \(t_{w}\) represents the time spent at the first quenching temperature \(T_{q_{1}}\) or the quenched persistence time \(\tau_{r_{1}}\). In the second and third steps of the protocol, \(t_{w}\) is defined as \(t_{1}+\hat{t}_{w}\) and \(t_{1}+t_{2}+\hat{t}_{w}\), respectively. Here, \(0\leq\hat{t}_{w}<1000\) when the duration of each step is \(1000\), or \(0\leq\hat{t}_{w}<500\) when the duration of each step is \(500\). A minimal sketch of this bookkeeping is given below.
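```python
import numpy as np

# Minimal sketch (assumed data layout, not our production analysis code):
# estimate the mean-squared displacement <dr^2(t_w, t + t_w)> from stored
# snapshots of a single trajectory, where positions[k] holds the (N, 2)
# particle positions at time k * dt_snap. In practice, this quantity is
# additionally averaged over the 1000 independent configurations.

def msd(positions, k_w, k):
    """MSD between snapshot k_w (waiting time) and snapshot k_w + k (lag)."""
    dr = positions[k_w + k] - positions[k_w]
    return np.mean(np.sum(dr ** 2, axis=1))
```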
## III Results and Discussion

### Rejuvenation by thermal cycling: active versus passive

We first discuss the results of thermal cycling applied to either an active or a passive thermal system. The initial step of our thermal cycling protocol (step 1 in Fig. 1) corresponds to a temperature quench from high (\(T_{i}=1\)) to low temperature (\(T_{q_{1}}=0.25\)), which induces simple aging. As expected, during this step, the relaxation dynamics slows down with increasing waiting time \(t_{w}\). Specifically, both the active and passive systems exhibit a power-law scaling of the structural relaxation time with respect to the waiting time, \(\tau_{\alpha}\sim t_{w}^{\delta}\), where the exponent \(\delta\) is different for active and passive systems. These aging results are in full agreement with the literature, both for the passive and active case [73; 5; 71]. The second step of the protocol (step 2 in Fig. 1) corresponds to a quench from \(T_{q_{1}}=0.25\) to \(T_{q_{2}}=0.1\), which may induce rejuvenation. Previous studies of passive systems [38; 26] have shown that the rejuvenation effect is affected by both the temperature difference between the first and the second step, \(\Delta T=|T_{q_{1}}-T_{q_{2}}|\), and the duration of each step of the process. If \(t_{1}\) is not long enough, an additional aging contribution may be observed. Testing the protocol with \(t_{1}=t_{2}=500\), we have found no evidence of rejuvenation in either the passive or active system (see Supplementary Material). Indeed, during the second step of the process, with \(T_{q_{2}}=0.1\), the mean-squared displacement does not evolve as the system ages. This behavior could be due to a too-short step duration or a too-small \(\Delta T\). To investigate which of these parameters is responsible for this behavior, we repeat the protocol with \(t_{1}=t_{2}=1000\). As shown in the Supplementary Material, in the passive case, even with a longer duration of each step (\(t_{1}=1000\)), the system remains frozen at the new temperature \(T_{q_{2}}=0.1\), and the rejuvenation effect is not observed. Different waiting times \(\hat{t}_{w}\) exhibit the same dynamical behavior, indicating that the aging dynamics is slowed down by this additional temperature quench [40]. These results are in agreement with existing literature showing that structural glasses display rejuvenation only when the duration of the first step is substantial (\(t_{1}=1.2\cdot 10^{6}\) or \(\infty\) in Ref. [38]) and there is a considerable temperature jump between the steps. For spin glasses, especially in three dimensions, rejuvenation is also notoriously difficult to observe in simulations, and typically requires an exceptionally large temperature jump [33]. Hence, in our case, we believe the absence of rejuvenation in the passive glass may be attributed to either a time duration \(t_{1}\) that is still too short or a \(\Delta T\) that is not sufficiently large. However, decreasing the temperature \(T_{q_{2}}\) or further increasing \(t_{1}\) is computationally expensive as the dynamics become noisier, and obtaining reliable results would require even more than \(1000\) independent configurations. Conversely, for the active system, we do observe a rejuvenation effect under the same thermal cycling protocol. To see this, let us first consider an active system with a relatively small persistence time of \(\tau_{r}=1\). In Fig. 2(a) we plot the mean-squared displacements for this active system during the second step of the cycle for waiting times \(t_{w}=t_{1}+\hat{t}_{w}\), with \(t_{1}=1000\) and \(\hat{t}_{w}=100,500\). For comparison we also show the dynamics following a direct quench to the same temperature, \(T_{q}=T_{q_{2}}=0.1\). Notably, we observe that the mean-squared displacement at \(t_{w}=t_{1}+500\) overlaps almost perfectly with that of a directly-quenched sample at \(t_{w}=800\). This is the hallmark of rejuvenation: even though the system during the second step of the cycle is older (age \(t_{w}=1500\)), it behaves effectively as a younger system (age \(t_{w}=800\)) that was quenched to the same temperature. Since the behavior obtained from a thermal cycle at waiting time \(t_{1}+\hat{t}_{w}\) is equivalent to a direct quench at waiting time \(t_{w}\), this implies that the effective age of the system during the second step is \(t_{w}^{eff}=t_{w}\).
Consequently, in this case, the effective waiting time can be expressed as \(t_{w}^{eff}=\hat{t}_{w}+t^{ag}\), where \(t^{ag}\) is nonzero but smaller than \(t_{1}\) (in this case \(t_{w}^{eff}=800\) and \(t^{ag}=300\)). Note that the limiting case of \(t^{ag}=0\) would indicate perfect rejuvenation, i.e., as if the first aging step never happened. To further investigate the rejuvenation effect in active systems, we consider a larger persistence time of \(\tau_{r}=10\). Figure 3(a) shows the corresponding mean-squared displacements at a waiting time \(t_{w}=t_{1}+10\) during the second step of the cycle, and at a waiting time \(t_{w}=100\) following a direct quench to the same temperature \(T_{q_{2}}\). When comparing the two curves, we find that the short-time behavior of the mean-squared displacements at \(t_{w}=t_{1}+10\) and \(t_{w}=100\) differs: in particular, the system in the second step is initially faster than after a direct quench. Consequently, the short-time dynamics is still influenced by the first step of the protocol. However, the long-time behaviors of the mean-squared displacements at \(t_{w}=t_{1}+10\) and \(t_{w}=100\) overlap. Therefore, we can conclude that increasing the persistence time in active systems leads to a 'stronger' rejuvenation effect. In this case, the extra aging contribution \(t^{ag}\) remains present, but is smaller compared to the one observed for the system with the smaller persistence time of \(\tau_{r}=1\). As a result, the system with \(\tau_{r}=10\) is effectively younger than the system with \(\tau_{r}=1\). Overall, our results for the second step show that an active glassy system exhibits rejuvenation under thermal cycling, while its passive counterpart exposed to the same protocol does not. However, it must be noted that the rejuvenation process is 'weak' in the sense that the material is still effectively older than a system subjected to a direct quench. In other words, the effect of the first aging step is still noticeable in the dynamics. We hypothesize that the active rejuvenation effect is due to the ability of self-propelling particles to escape their cages relatively easily, thus allowing them to effectively restart aging at the new temperature. To rationalize the importance of the persistence time for the rejuvenation dynamics, we note that the dynamics of a thermal active glass is governed by both thermal and active effects. On sufficiently long timescales, however, it is known that the activity becomes dominant [71], and hence systems with larger persistence times exhibit stronger rejuvenation effects.

### Memory by thermal cycling: active versus passive

Let us now look at the emergence of memory effects due to a temperature cycle. Such memory effects can occur in the final step of the thermal cycling protocol (step 3 in Fig. 1), in which the temperature is raised from \(T_{q_{2}}=0.1\) to \(T_{q_{1}}=0.25\). Figures 2(b) and 3(b) show the mean-squared displacements for two active systems with \(\tau_{r}=1\) and \(10\), respectively, at \(t_{w}=t_{1}+t_{2}+10\) (third step of the cycle) and \(t_{w}=1000\) (at the end of the first step). In both cases, the results reveal that the system recovers the same behavior observed at the end of the first step after a waiting time \(\hat{t}_{w}=10\). Thus, for the waiting time \(t_{w}^{mem}=t_{1}+t_{2}+t^{*}\), the system retains a memory of the time spent at \(T_{q_{1}}\).
However, due to the relatively small value of \(t^{*}=10\) (for both \(\tau_{r}=1\) and \(10\)), the memory is not perfect; this small delay is the only consequence of the time spent at \(T_{q_{2}}\). This behavior is consistent with previous results obtained in (passive) spin glasses [36; 37], where it was shown that the time needed to remember the time spent at \(T_{q_{1}}\) is much shorter compared to the time spent at \(T_{q_{2}}\) and hence, after a very brief transient, the dynamics continues as if the second step had never occurred. Moreover, our results indicate that, unlike the rejuvenation effect, the memory effect is independent of the persistence time. In order to achieve full memory, i.e. \(t^{*}=0\), we hypothesize that the temperature jump \(\Delta T\) should be increased, as it has been shown in spin glasses that increasing \(\Delta T\) leads to a decrease in \(t^{*}\) [36; 37].

Figure 2: Mean-squared displacements as a function of time for an active thermal system with a relatively small persistence time of \(\tau_{r}=1\), subjected to a thermal cycle. Panels (a) and (b) correspond to the second and third step of the thermal cycling protocol to probe rejuvenation and memory effects, respectively. In both panels, solid lines correspond to different waiting times \(t_{w}\) (with \(t_{1}=t_{2}=1000\)). The dashed curve in panel (a) corresponds to a direct quench to \(T_{q_{2}}\), i.e. without the first aging step, at \(t_{w}=800\); the dashed curve in panel (b) corresponds to the end of the first step, \(t_{w}=1000\).

Figure 3: Mean-squared displacements as a function of time for an active thermal system with a relatively large persistence time of \(\tau_{r}=10\), subjected to a thermal cycle. Panels (a) and (b) correspond to the second and third step of the thermal cycling protocol to probe rejuvenation and memory effects, respectively. In both panels, solid lines correspond to different waiting times \(t_{w}\) (with \(t_{1}=t_{2}=1000\)). The dashed curve in panel (a) corresponds to a direct quench to \(T_{q_{2}}\), i.e. without the first aging step, at \(t_{w}=100\); the dashed curve in panel (b) corresponds to the end of the first step, \(t_{w}=1000\).

As shown in the Supplementary Material, the memory effect can also be observed in passive systems, even when rejuvenation is absent. During the second step, the dynamics is frozen at \(T_{q_{2}}\), but shortly after the temperature is raised back to \(T_{q_{1}}\), the passive system remembers its behavior during the first step. In a similar manner to the active case, the memory effect in passive systems also exhibits a small \(t^{*}\), which represents the only consequence of the second step of the cycle. This suggests that, unlike rejuvenation, the memory effect is easier to observe in these systems. The presence of memory effects in these systems, even without rejuvenation, is consistent with the findings in the existing literature, where memory represents a common phenomenon observed in various systems. Such systems include materials experiencing cyclic deformation and biologically relevant systems like blood flow and growing tissue monolayers [78].

### Rejuvenation and memory by active cycling: from active to passive

We now consider the activity cycling protocol that, in contrast to thermal cycling, can only be applied to active systems.
Here it is instructive to note that the cyclic change of the ABP persistence time \(\tau_{r}\) also corresponds to an _effective_ temperature cycle [79]. Hence, this protocol offers not only a unique means to generate novel dynamics in active systems, but it also allows us to induce effectively large temperature jumps between steps via \(\tau_{r}\) (while keeping \(T\) fixed). The latter aspect is particularly useful to induce rejuvenation on reasonable simulation time scales, as rejuvenation demands a large temperature difference between steps or long step durations (see Sec. III.1). During the first step of the activity cycle, we again observe simple aging, similar to the thermal cycling scenario. Figure 4(a) illustrates the mean-squared displacement in the second step of the activity cycle (i.e. where the persistence time is quenched to \(\tau_{r_{2}}=0\)) at a fixed temperature of \(T=0.25\). Unlike thermal cycling, where the dynamics is frozen when \(t_{1}=500\), in activity cycling the dynamics evolves for different \(\hat{t}_{w}\). Notably, the aging restarts, and at a waiting time of approximately \(t_{w}=t_{1}+100\), the system recovers the behavior observed after a direct quench to \(\tau_{r_{2}}=0\). The behavior in the second step overlaps with the one found with a direct quench for \(t_{w}=600\), thus indicating a rejuvenation effect. Here, since \(t_{1}=500\) and \(\hat{t}_{w}=100\), we can conclude that applying a direct quench or an activity cycle is equivalent, but the first step of the cycle has an impact on the second step because \(t^{ag}=t_{1}\). We have verified that the same rejuvenation behavior can be found when using the self-propulsion force as the control parameter instead of the persistence time (see Supplementary Material). This indicates that this behavior is protocol-independent, and quenching the self-propulsion force or the persistence time leads to the same dynamical behavior. To understand if temperature plays a role in this weak rejuvenation effect, we repeat the activity cycle at a lower temperature of \(T=0.1\). Figure 5(a) shows the behavior of the system during the second step of the activity cycle at this lower temperature. We find that the behavior of the system in the second step at \(t_{w}=t_{1}+100\) overlaps with that observed from a direct quench at \(t_{w}=800\). This indicates that the consequence of the first step of the cycle makes the system slightly older compared to a direct quench. This phenomenon, known as _overaging_ in the literature [80; 81], can be attributed to the fact that \(t_{1}\) is not long enough. Comparing this activity cycle to thermal cycling, we observe that the short duration of the first step leads to overaging and freezing of the dynamics, respectively. In conclusion, the observation of non-equilibrium phenomena such as rejuvenation becomes more accessible when activity cycling is applied. This is attributed to the significantly larger effective temperature jump compared to a purely passive system undergoing thermal cycling. Finally, as shown in Fig. 4(b) and Fig. 5(b), in the third step, when the persistence time is raised again to \(\tau_{r_{1}}=10\), the system quickly recovers the same behavior found at the end of the first step. Consequently, the system has a memory of the first step, as if the second step did not happen. In agreement with the thermal cycle, the system needs a time \(t^{*}\) to 'remember' the time spent at \(\tau_{r_{1}}\).
The presence of this \(t^{*}\) can be due to the fact that the duration of each step of the cycle is not long enough. Therefore, we conclude that when applying an active cycle, we observe memory effects. We have also verified that this memory effect is present when the control parameter of the activity cycle is the self-propulsion force (see Supplementary Material). In contrast to the rejuvenation effect, the memory effect found within the activity cycle is consistent with the one observed when thermal cycling is applied.

Figure 4: Mean-squared displacements as a function of time for an active thermal system (\(T=0.25,f=0.5\)) subjected to an activity cycle. Panels (a) and (b) correspond to the second and third step of the active cycling protocol to probe rejuvenation and memory effects, respectively. In both panels, solid lines correspond to different waiting times \(t_{w}\) (with \(t_{1}=t_{2}=500\)). The dashed curve in panel (a) corresponds to a direct quench to a passive system at \(t_{w}=600\); the dashed curve in panel (b) represents the mean-squared displacement before the end of the first step, \(t_{w}=400\).

## IV Conclusions

In summary, our work reveals that the non-equilibrium dynamic behaviors caused by a temperature cycle in an active thermal system are significantly different from those in a passive system. Specifically, an active system subjected to a temperature cycle exhibits rejuvenation and memory effects, whereas in the passive case, the system gets frozen at the new temperature, and the rejuvenation effect is absent [40]. Nevertheless, even in the absence of rejuvenation in the passive case, a memory effect similar to that observed in the active system can still be found. Additionally, we find that the rejuvenation effect becomes stronger as the persistence time increases, whereas the memory effect is independent of this parameter. We can rationalize the enhanced rejuvenation of more persistent active particles by considering that the long-time behavior of active thermal glasses is dominated by activity rather than thermal motion [71]. Hence, a more active system with a larger persistence time will more easily restart the aging process after a sudden temperature change, leading to a 'stronger' rejuvenation effect. We find that rejuvenation and memory effects in active matter can also be induced by a non-equilibrium protocol unique to active systems, namely via an activity cycle. In particular, the application of an activity cycle from an active to a passive system enables access to higher temperature jumps without increasing the noise, leading to rejuvenation. Depending on the temperature of the system, we can observe either rejuvenation or overaging, with the latter occurring at lower temperatures. To mitigate the overaging effect at low temperatures, it is necessary to increase the duration of the first step of the protocol. Overall, our findings suggest that activity cycling offers richer non-equilibrium dynamics in the second step of the cycle compared to thermal cycling, when the duration of the first step is short. Moreover, even with this protocol, the memory of the first step of the cycle is quickly recovered when the system becomes active again. Our results also demonstrate that similar outcomes can be achieved by employing the self-propulsion force as the control parameter in the activity cycle. Moreover, these findings align with prior research indicating that different protocols yield comparable dynamics.
This similarity is reminiscent of quenching protocols employed to study simple aging, where an activity quench generates analogous outcomes to a temperature quench [71; 73]. Our study provides new insights into the rich non-equilibrium dynamics of active glasses and emphasizes the significance of exploring the interplay between temperature and activity. Furthermore, our results show that, although the nature of an active system differs from that of spin glasses, the out-of-equilibrium dynamics observed during a temperature cycle exhibit remarkable similarities. The rejuvenation and memory effects observed in spin glasses have been attributed to the slow increase of a characteristic length following a temperature quench [16; 17; 18], or explained using a hierarchical energy-landscape picture [19; 20; 82; 83; 84]. However, since active systems are non-Hamiltonian and are not governed by energy minimization, understanding the presence of rejuvenation and memory effects in these systems will require a different theoretical framework. Finally, given that dense active matter is becoming increasingly relevant in the context of biology, it will be interesting to explore in future work how physical aging, rejuvenation, and memory are manifested in biological glassy systems such as confluent cell layers, tissues, and solid tumors [85; 66; 69; 86; 87; 88].

## V Acknowledgments

We thank Kees Storm for his critical reading of the manuscript. This work has been financially supported by the Dutch Research Council (NWO) through a Physics Projectruimte grant.
2310.08339
TTK is Getting MPI-Ready
This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these were reporting experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e. a sequence of topological algorithms interacting together. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK's data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK's topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level, and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs and provide examples of hybrid MPI+thread parallelizations. Performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a cluster with 64 nodes (for a total of 1536 cores). Finally, we provide a roadmap for the completion of TTK's MPI extension, along with generic recommendations for each algorithm communication category.
Eve Le Guillou, Michael Will, Pierre Guillou, Jonas Lukasczyk, Pierre Fortin, Christoph Garth, Julien Tierny
2023-10-12T13:57:32Z
http://arxiv.org/abs/2310.08339v2
# A Generic Software Framework for Distributed Topological Analysis Pipelines

###### Abstract

This system paper presents a software framework for the support of topological analysis pipelines in a distributed-memory model. While several recent papers introduced topology-based approaches for distributed-memory environments, these were reporting experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a general-purpose, generic framework for topological analysis pipelines, i.e. a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. Specifically, we instantiated our framework with the _MPI_ model, within the Topology ToolKit (TTK). While developing this framework, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK's data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK's topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level, and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from \(20\%\) to \(80\%\) (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (\(120\) billion vertices) on a standard cluster with \(64\) nodes (for a total of \(1,536\) cores). Finally, we provide a roadmap for the completion of TTK's MPI extension, along with generic recommendations for each algorithm communication category.

Topological data analysis, high-performance computing, distributed-memory algorithms.

## 1 Introduction

In most applications, modern datasets are constantly growing in size, thanks to the continuous improvements of acquisition technologies or computational systems. This growth in size induces finer and finer levels of detail, which in turn induce increasingly intricate geometrical structures in the data. To apprehend and process this geometrical complexity, advanced computational techniques are required to extract concise feature representations of the core patterns present in the data, to facilitate further analysis, visualization and reasoning. Topological Data Analysis (TDA) [17] is a class of techniques that precisely serve that purpose. It is based on robust, combinatorial, and multi-scale algorithms [18], which can capture a large variety of structural features [31]. Examples of successful applications of topology-based data analysis include combustion [10, 27, 39], material sciences [20, 29, 56], nuclear energy [43], fluid dynamics [36, 48], bioimaging [9, 12], data science [14, 15], quantum chemistry [7, 22, 49, 50] or astrophysics [55, 57].
However, with the data size increase discussed above, it becomes more and more frequent in applications that the size of a single dataset exceeds the main memory capacity of a single commodity computer, hence requiring distributed-memory systems, whose combined memory provides much larger capacities. While a significant effort has been dedicated to the transition of TDA towards shared-memory parallelism [1, 6, 24, 25, 26, 35, 40, 42, 54], fewer efforts have been reported towards distributed-memory parallelism [11, 32, 46, 5, 47, 52]. Moreover, these efforts focused on tailored, mono-algorithm implementations, which neither needed to interact with other algorithms within a single analysis pipeline, nor needed to support compatibility with outputs computed sequentially. However, in real-life scenarios, data analysis pipelines can be composed of a significant number of algorithms, organized sequentially - i.e. the output of an algorithm \(A_{i}\) provides an input for the next algorithm \(A_{i+1}\) (see TTK's Online Example Database [62] for real-life instances of advanced analysis pipelines). Moreover, depending on the usage scenario or the computational environment, distinct algorithms within a single analysis pipeline can be run on a distinct number of processes (either to optimize resource allocation on a single node or to assign specific tasks of a pipeline to a dedicated node). Also, when the output of the overall pipeline is sufficiently small, it can be useful in practice to transfer this output to a single workstation (where the number of processes is typically equal to one) for further interactive inspection by a human user. Then, for the above usability reasons, it becomes crucial in a distributed environment that topological algorithms support a high level of interoperability, irrespective of the number of processes used at a given stage of the pipeline. However, such interoperability requirements trigger a number of technical challenges (see Sec. 3.1.3 for further details). In this system paper, we address this issue by introducing a generic software framework for the support of topological analysis pipelines in a distributed-memory model. In particular, we instantiate this framework with _MPI_ (the Message Passing Interface), within the Topology ToolKit (TTK) [59, 8]. Specifically, we re-visit TTK's internal triangulation data structure (Sec. 4) to support both _(i)_ local operations on the data block present on the local node and _(ii)_ global operations on the entire input dataset. Such an extension is necessary to guarantee a consistent interplay between multiple algorithms within a single distributed pipeline (see Sec. 3.1.3 for further details). This extension is also necessary to guarantee that outputs computed in distributed mode are compatible with outputs computed on a single computer (to enable reliable post-processing interactive sessions of the outputs on a workstation, if permitted by the output size). We also document an interface between TTK and MPI (Sec. 5) which acts at two levels: _(i)_ at a fine-grain level within the implementation of a given algorithm (Sec. 5.3) and _(ii)_ at a pipeline level (Sec. 5.2), to ensure a proper communication between the distinct algorithms composing the pipeline. We provide a taxonomy of TTK's topological algorithms (Sec. 6.1), depending on their communication needs and provide examples of hybrid MPI+thread parallelizations for each category (Sec. 6.3), with detailed performance analyses (Sec. 7.1).
We illustrate the new distributed capabilities of TTK with an example of an advanced analysis pipeline (Sec. 6.4), combining multiple algorithms, run on a dataset of 120 billion vertices distributed on 64 nodes (Sec. 7.2) of 24 cores each. Finally, we provide a roadmap for the completion of TTK's MPI extension, with generic recommendations for each algorithm communication category (Sec. 8). This work has been integrated in the main source code of TTK and is available in open-source.

### _Related work_

Concepts from computational topology [17] have been investigated and extended by the visualization community [31] over the last two decades. Popular topological representations include the persistence diagram, the Reeb graphs and its variants, or the Morse-Smale complex [17]. To improve the time efficiency of the algorithms computing the above representations, a significant effort has been carried out to re-visit TDA algorithms for shared-memory parallelism. Several authors focused on the shared-memory computation of the persistence diagram [26, 6], others focused on the merge and contour trees [1, 23, 24, 13, 42] or the Reeb graph [25], while several other approaches have been proposed for the Morse-Smale complex [53, 28, 54]. While the above parallel approaches succeed in improving computation times, they still require a shared-memory system, capable of storing the entire input dataset into memory. Thus, when the size of the input dataset exceeds the capacity of the main memory of a single computer, distributed-memory approaches need to be considered. Moreover, provided that the performance of these distributed approaches scales with the number of nodes, they also contribute to reducing computation times. Fewer approaches have been documented for the computation of topological data representations in a distributed-memory environment. First, distributed-memory computers are much less accessible in practice than parallel shared-memory architectures, which have become ubiquitous in recent years (workstations, laptops, etc.). Second, the algorithmic advances in terms of parallelism described in the shared-memory approaches do not translate directly to a distributed environment. Indeed, a key to the performance of the shared-memory approaches discussed above is the ability of a thread to access any arbitrary element in the input dataset. It also allows for easily implementable and efficient dynamic load balancing across threads. In contrast, in a distributed setup, the initial per-process decomposition of the input dataset is often a _given_, which the topological algorithm cannot modify easily and which is likely to be unfavorable to its performance. Then, existing efforts for distributing TDA approaches typically consist of first computing a _local_ topological representation (i.e. persistence diagram, contour tree, etc.) given the local block of the input dataset accessible to the process and then, in a second stage, aggregating the local representations into a common _global_ representation while attempting to minimize communications between processes (which are much more costly than synchronizations in shared-memory parallelism). Note that in several approaches [46, 32, 47], the final _global_ representation may not be strictly equivalent to the output obtained by a traditional sequential algorithm, but rather corresponds to a distributed representation, capable of supporting access queries by post-processing algorithms in a distributed fashion.
Following the above general strategy, approaches have been documented for the distributed computation of the persistence diagram [5] as well as the merge and contour trees [11, 46, 47, 52]. In this work, we do not focus on the distributed computation of a specific algorithm, but rather on the generic building blocks which are necessary for the distributed computation of complete topological analysis pipelines, consisting of multiple algorithms interacting together. To show the utility of our work, we illustrate with our framework the distributed computation of _simple_ topological algorithms (Sec. 6) and leave the case of advanced algorithms for future work. A necessary building block for distributing TDA algorithms is an infrastructure supporting a distributed access to the input dataset. While several general-purpose libraries have been documented [45] to support complex communication patterns, we focus instead in this work on the necessary, high-level pre-processing routines enabling pipelines consisting of multiple topological algorithms (Sec. 5.2) and rely for that on simple, ad-hoc, low-level communication routines between processes (Sec. 5.3). To support topological algorithms, a data structure must be available to efficiently traverse the input dataset, with possibly advanced traversal queries. TTK [59, 8] implements such a triangulation data structure, providing advanced, constant-time traversal queries, supporting both explicit meshes as well as the implicit triangulation of regular grids (with no memory overhead). While several data structures have been proposed for the distributed support of meshes [19, 34] (with a focus on simulation-driven remeshing), we consider in this work the distribution of TTK's triangulation data structure (Sec. 4), with a strong focus on traversal time efficiency and compatibility with a non-distributed usage, to support post-processing interactive sessions on a workstation (cf. Sec. 3).

### _Contributions_

This system paper makes the following new contributions.

1. _An efficient, distributed triangulation data structure_ (Sec. 4): We introduce an extension of TTK's triangulation data structure for the support of distributed datasets.
2. _A software infrastructure for distributed topological pipelines_ (Sec. 5): We document a software infrastructure supporting advanced, distributed topological pipelines, consisting of multiple algorithms, possibly run on a distinct number of processes.
3. _Examples of distributed topological algorithms_ (Sec. 6): We provide a taxonomy of the algorithms supported by TTK, depending on their communication needs, and document examples of distributed parallelizations, with detailed performance analyses, following an MPI+thread strategy. This includes an advanced pipeline consisting of multiple algorithms, run on a dataset of 120 billion vertices on a compute cluster with 64 nodes (1,536 cores in total).
4. _An open-source implementation_: Our implementation is integrated in TTK 1.2.0, to enable others to reproduce our results or extend TTK's distributed capabilities.
5. _A reproducible example_: We provide a reference Python script of one of our advanced pipelines for replicating our results, with a dataset size that can be adjusted to fit the capacities of any system (publicly available at: [https://github.com/eve-le-guillou/TTK-MPI-at-example](https://github.com/eve-le-guillou/TTK-MPI-at-example)).
## 2 Background

This section describes our formal setting and formalizes a few topological data representations, used later in the paper when discussing examples (Sec. 6). All these descriptions are given in a _non-distributed_ context. The formalization of our distributed model is documented in Sec. 3. We refer the reader to reference textbooks [17] for a comprehensive introduction to computational topology.

### _Input data_

The input is a piecewise linear (PL) scalar field \(f:\mathcal{M}\rightarrow\mathbb{R}\) defined on a \(d\)-dimensional simplicial complex, with \(d\leqslant 3\) in our applications (Fig. 1 _(a)_). The set of \(i\)-simplices of \(\mathcal{M}\) is noted \(\mathcal{M}^{i}\). The _star_ \(St(\sigma)\) of a simplex \(\sigma\) is the set of simplices of \(\mathcal{M}\) which contain \(\sigma\) as a face. The _link_ \(Lk(\sigma)\) is the set of faces of the simplices of \(St(\sigma)\) which do not intersect \(\sigma\). The input field \(f\) is provided on the vertices of \(\mathcal{M}\) and is interpolated on the simplices of higher dimension. \(f\) is assumed to be injective on the vertices, which is achieved in practice by substituting the \(f\) value of a vertex by its position in the vertex order (by increasing \(f\) values).

### _Critical points_

The sub-level set \(f_{-\infty}^{-1}(w)\) of an isovalue \(w\in\mathbb{R}\) is defined as \(f_{-\infty}^{-1}(w)=\{p\in\mathcal{M}\mid f(p)<w\}\). It can be interpreted as a subset of the data, below the isovalue \(w\). As \(w\) continuously increases, the topology of \(f_{-\infty}^{-1}(w)\) changes at specific vertices of \(\mathcal{M}\), called the _critical points_ of \(f\). Let \(Lk^{-}(v)\) be the _lower link_ of the vertex \(v\): \(Lk^{-}(v)=\{\sigma\in Lk(v)\mid\forall u\in\sigma:f(u)<f(v)\}\) (blue edges and vertices in Fig. 1 _(b-e)_, top). The _upper link_ of \(v\) is defined symmetrically: \(Lk^{+}(v)=\{\sigma\in Lk(v)\mid\forall u\in\sigma:f(u)>f(v)\}\) (orange edges and vertices in Fig. 1 _(b-e)_, top). A vertex \(v\) is _regular_ if and only if both \(Lk^{-}(v)\) and \(Lk^{+}(v)\) are simply connected. Otherwise, \(v\) is a _critical vertex_ of \(f\) [4]. A critical vertex \(v\) can be classified by its _index_ \(\mathcal{I}(v)\), which is \(0\) for minima (Fig. 1 _(c)_), \(1\) for \(1\)-saddles (Fig. 1 _(d)_), \((d-1)\) for \((d-1)\)-saddles and \(d\) for maxima (Fig. 1 _(e)_). Vertices for which the number of connected components of \(Lk^{-}(v)\) or \(Lk^{+}(v)\) is greater than \(2\) are called _degenerate saddles_.

### _Integral lines_

Integral lines are curves on \(\mathcal{M}\) which locally describe the gradient of \(f\) (orange curves in Fig. 1 _(f)_). They can be used to capture and visualize adjacency relations between critical points. The starting vertex of an integral line is called a _seed_. Given a seed \(v\), its _forward_ integral line, noted \(\mathcal{L}^{+}(v)\), is a path along the edges of \(\mathcal{M}\), initiated in \(v\), such that each edge of \(\mathcal{L}^{+}(v)\) connects a vertex \(v^{\prime}\) to its highest neighbor \(v^{\prime\prime}\). When encountering a saddle \(s\), we say that an integral line _forks_: it yields one new integral line per connected component of \(Lk^{+}(s)\). Integral lines can _merge_ (and possibly fork later). A _backward_ integral line, noted \(\mathcal{L}^{-}(v)\), is defined symmetrically (i.e. integrating downwards).
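To make the critical point classification of Sec. 2.2 concrete, the following minimal sketch (with an assumed link representation, not TTK's actual API) classifies an interior vertex of a 2D triangulation, where simple connectivity of a sub-link reduces to it having a single connected component:

```python
# Minimal sketch (assumed data layout, not TTK's actual API): classification
# of an interior vertex v of a 2D triangulation from the connectivity of its
# lower and upper links (Sec. 2.2). `link_vertices` are the vertices of Lk(v),
# `link_edges` its edges (pairs of link vertices), and `order` maps each
# vertex to its (injective) position in the global vertex order.

def component_count(vertices, edges):
    """Number of connected components of a sub-link (simple union-find)."""
    parent = {u: u for u in vertices}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(u) for u in vertices})

def classify_vertex(v, link_vertices, link_edges, order):
    lower = [u for u in link_vertices if order[u] < order[v]]
    upper = [u for u in link_vertices if order[u] > order[v]]
    lower_edges = [(a, b) for a, b in link_edges
                   if order[a] < order[v] and order[b] < order[v]]
    upper_edges = [(a, b) for a, b in link_edges
                   if order[a] > order[v] and order[b] > order[v]]
    if not lower:
        return "minimum"
    if not upper:
        return "maximum"
    if component_count(lower, lower_edges) == 1 and \
       component_count(upper, upper_edges) == 1:
        return "regular"
    return "saddle"  # degenerate if a sub-link has more than 2 components
```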
### _Discrete gradient_

In recent years, an alternative to the PL formalism of critical points described above (Sec. 2.2) has emerged, namely Discrete Morse Theory (DMT) [21]. This formalism implicitly resolves several challenging configurations (such as degenerate saddles on manifold domains), which has been particularly useful for the development of robust algorithms in the context of Morse-Smale complex computation [28, 53]. We also consider in this work this alternative representation of critical points, as it nicely exemplifies a large set of the traversal features supported by TTK's triangulation (Sec. 4).

A _discrete vector_ (small orange arrows, Fig. 1_(b-e)_, bottom) is a pair formed by a simplex \(\sigma_{i}\in\mathcal{M}\) (of dimension \(i\)) and one of its co-facets \(\sigma_{i+1}\) (i.e. one of its co-faces of dimension \(i+1\)), noted \(\{\sigma_{i}<\sigma_{i+1}\}\). \(\sigma_{i+1}\) is usually referred to as the _head_ of the vector (represented with a small orange cylinder in Fig. 1_(b-e)_, bottom), while \(\sigma_{i}\) is its _tail_ (represented with a small orange sphere in Fig. 1_(b-e)_, bottom). Examples of discrete vectors include a pair between a vertex and one of its incident edges, or a pair between an edge and a triangle containing it. A _discrete vector field_ on \(\mathcal{M}\) is then defined as a collection \(\mathcal{V}\) of pairs \(\{\sigma_{i}<\sigma_{i+1}\}\), such that each simplex of \(\mathcal{M}\) is involved in at most one pair. A simplex \(\sigma_{i}\) which is involved in no discrete vector of \(\mathcal{V}\) is called a _critical simplex_. A _\(v\)-path_ is a sequence of discrete vectors \(\{\{\sigma_{i}^{0}<\sigma_{i+1}^{0}\},\ldots,\{\sigma_{i}^{k}<\sigma_{i+1}^{k}\}\}\), such that _(i)_ \(\sigma_{i}^{j}\neq\sigma_{i}^{j+1}\) (i.e. the tails of two consecutive vectors are distinct) and _(ii)_ \(\sigma_{i}^{j+1}<\sigma_{i+1}^{j}\) (i.e. the tail of a vector in the sequence is a face of the head of the previous vector), for any \(0\leqslant j<k\). A _discrete gradient field_ is a discrete vector field such that all its possible _\(v\)-paths_ are loop-free. Several algorithms have been proposed to compute such a discrete gradient field from an input PL scalar field. We consider in this work the algorithm by Robins et al. [53], given its proximity to the PL setting: each critical cell identified by this algorithm is guaranteed to be located in the star of a PL critical vertex (Sec. 2.2).

## 3 Distributed Model

We now formalize our distributed model, which will eventually be used as a blueprint to port the algorithms described above (Sec. 2) to distributed computations (Sec. 6).

### _Input distribution formalization_

#### 3.1.1 Decomposition

In our distributed-memory model, \(f\) is assumed to be loaded in the memory of \(n_{p}\) processes in the form of \(n_{p}\) disjoint _blocks_ of data (Fig. 2_(a-b)_).
Specifically, each process \(i\in\{0,\ldots,n_{p}-1\}\) is associated with a local block \(f_{i}:\mathcal{M}_{i}\rightarrow\mathbb{R}\), such that:

* \(\mathcal{M}_{i}\subset\mathcal{M}\): each block is a \(d\)-dimensional simplicial complex, being a subset of the global input;
* \(\cup_{i}\,\mathcal{M}_{i}=\mathcal{M}\): the union of the blocks is equal to the input;
* \(\mathcal{M}_{i}^{d}\cap\mathcal{M}_{j}^{d}=\emptyset\): the intersection of the set of \(d\)-simplices of \(\mathcal{M}_{i}\) (noted \(\mathcal{M}_{i}^{d}\)) with the set of \(d\)-simplices of any other block \(\mathcal{M}_{j}\) is empty (i.e. we say that each \(d\)-simplex is _exclusively owned_ by a single process \(i\)).

#### 3.1.2 Ghost layer

In such a distributed setting, a _ghost_ layer is typically considered, in order to save communications between processes for local tasks. Specifically, each block \(\mathcal{M}_{i}\) is _ghosted_ into a block \(\mathcal{M}_{i}^{\prime}\) with one level of ghost simplices. In particular, let \(\mathcal{G}(\mathcal{M}_{i}^{d})\) be the set of \(d\)-simplices of \(\mathcal{M}\) which share a face with \(d\)-simplices of \(\mathcal{M}_{i}\), but which do not belong to \(\mathcal{M}_{i}^{d}\). We note \(\mathcal{M}_{i}^{\prime}\) the \(d\)-dimensional simplicial complex obtained by considering a layer of ghost simplices, i.e. by adding to \(\mathcal{M}_{i}\) the set of \(d\)-simplices \(\mathcal{G}(\mathcal{M}_{i}^{d})\), along with all their \(d^{\prime}\)-dimensional faces (with \(d^{\prime}\in\{0,\ldots,d-1\}\)). Overall, all the simplices added in this way to the block \(\mathcal{M}_{i}\) to form the _ghosted block_ \(\mathcal{M}_{i}^{\prime}\) are called _ghost simplices_ (Fig. 2_(c-d)_).

The usage of such a ghost layer is typically motivated in practice by algorithms which perform local traversals (e.g. PL critical point extraction, Sec. 2.2). When such algorithms reach the boundary of a block, they can still perform their task without any communication, thanks to the ghost layer. Also, the usage of a ghost layer facilitates the identification of boundary simplices (i.e. located on the boundary of the _global_ domain \(\mathcal{M}\), see Sec. 4.2.1). The blocks are also positioned in relation to one another. Processes \(i\) and \(j\) are considered adjacent (Fig. 2_(e)_) if \(\mathcal{M}_{i}^{\prime}\) contains \(d\)-simplices that are exclusively owned by \(j\) and if \(\mathcal{M}_{j}^{\prime}\) contains \(d\)-simplices that are exclusively owned by \(i\).

Fig. 2: The input data _(a)_ is assumed to be loaded in the memory of \(n_{p}\) independent processes in the form of \(n_{p}\) disjoint _blocks_ of data (_(b)_, one color per block, \(n_{p}=4\) in this example). A layer of _ghost_ simplices (_(c)_, coming from adjacent blocks, matching colors) is added to each block. This local data duplication (_(d)_, transparent) eases subsequent processing on block boundaries. A local adjacency graph is constructed to encode local neighbor relations between blocks _(e)_.

#### 3.1.3 Global simplex identifiers

For any \(d^{\prime}\in\{0,\ldots,d\}\), each \(d^{\prime}\)-simplex \(\sigma_{j}\) of each block \(\mathcal{M}^{\prime}_{i}\) is associated with a _local_ identifier \(j\in[0,|\mathcal{M}^{\prime d^{\prime}}_{i}|-1]\). This integer uniquely identifies \(\sigma_{j}\) within the local block \(\mathcal{M}^{\prime}_{i}\).
The simplex \(\sigma_{j}\) is also associated with a _global_ identifier \(\phi_{d^{\prime}}(j)\in[0,|\mathcal{M}^{d^{\prime}}|-1]\), which uniquely identifies \(\sigma_{j}\) within the _global_ dataset \(\mathcal{M}\). Such a global identification is motivated by the need to support varying numbers of processes. In particular, assume that a first analysis pipeline \(P_{1}\) (for instance extracting critical vertices, Sec. 2.2) uses \(n_{p}(P_{1})\) processes to generate an output (e.g. the list of critical vertices). Let us consider now a second analysis pipeline \(P_{2}\) using \(n_{p}(P_{2})\) processes (possibly on a different machine) to post-process the output of \(P_{1}\) (for instance, seeding integral lines, Sec. 6.3.5, at the previously extracted critical vertices). Since \(n_{p}(P_{1})\) and \(n_{p}(P_{2})\) differ between the two sub-pipelines, their input decompositions into local blocks will also differ. Then the local identifiers of the critical vertices employed in \(P_{1}\) may no longer be usable in \(P_{2}\). For instance, if \(n_{p}(P_{1})<n_{p}(P_{2})\), the local blocks of \(P_{2}\) may be much smaller than those of \(P_{1}\) and the local identifiers of \(P_{1}\) can become out of range in \(P_{2}\). Thus, a common ground between the two pipelines needs to be found to reliably exchange information, hence the global, unique identifiers.

Note that the support for a varying number of processes is a necessary feature for practical distributed topological algorithms. While it is a challenging constraint (c.f. Sec. 4), it is beneficial to various application use cases. For instance, \(P_{2}\) can be a post-processing pipeline run on a workstation. \(P_{2}\) can also be executed on a different (possibly larger) distributed-memory system than \(P_{1}\). Last, \(P_{1}\) and \(P_{2}\) can be part of a single, large pipeline, which would include an aggregation step of the outputs of \(P_{1}\) to a different number of processes (\(n_{p}(P_{2})\)).

#### 3.1.4 Process identifiers

Each block \(\mathcal{M}^{\prime}_{i}\) is associated with \(d{+}1\) _process identifiers_ \(p^{d^{\prime}}:\mathcal{M}^{\prime d^{\prime}}_{i}\to\mathbb{N}\) (with \(d^{\prime}\in\{0,\ldots,d\}\)), which map each \(d^{\prime}\)-simplex to the identifier of the process which owns it. Specifically, we say that a vertex \(v\) of \(\mathcal{M}^{\prime}_{i}\) is _exclusively owned_ by the process \(i\) if \(v\) is not a ghost vertex in \(\mathcal{M}^{\prime}_{i}\). We say that a \(d^{\prime}\)-simplex \(\sigma^{\prime}\) (\(d^{\prime}\in\{1,\ldots,d-1\}\)) is _exclusively owned_ by the process \(i\) if \(i\) is the exclusive owner of the \(d\)-simplex \(\sigma\in St(\sigma^{\prime})\) with the lowest global identifier \(\phi_{d}(\sigma)\).
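For illustration, a minimal sketch (in C++) of this ownership rule follows; here, the (global identifier, process identifier) pairs of the \(d\)-simplices of the star are assumed to be given as input, which is not TTK's actual API.

```cpp
#include <utility>
#include <vector>

// Illustrative sketch of the ownership rule of Sec. 3.1.4: a d'-simplex is
// owned by the process which exclusively owns the d-simplex of its star
// with the lowest global identifier.
// star: one (global identifier phi_d, process identifier p^d) pair per
// d-simplex of St(sigma').
int ownerOfSimplex(const std::vector<std::pair<long long, int>> &star) {
  long long lowestGid = -1;
  int owner = -1; // -1: empty star (should not occur on valid input)
  for(const auto &[gid, pid] : star)
    if(owner == -1 || gid < lowestGid) {
      lowestGid = gid;
      owner = pid;
    }
  return owner;
}
```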
### _Output distribution formalization_

Topological algorithms typically consume a (possibly complex) input to produce a (usually) simpler output (such as the topological representations described in Sec. 2). Moreover, multiple topological algorithms can be combined sequentially to form an analysis pipeline. For instance, a first algorithm \(A_{1}\) may compute integral lines (Sec. 6.3.5) for a first field \(f\), while a second algorithm \(A_{2}\) may extract the critical vertices (Sec. 2.2) for a second field \(g\), defined on the integral lines generated by the first algorithm \(A_{1}\). Thus, the output produced by a distributed topological algorithm \(A_{1}\) _must_ be readily usable by another distributed algorithm \(A_{2}\). This implies that the output computed by a topological algorithm must also strictly comply with the input specification (Sec. 3.1) and should contain: _(i)_ a ghost layer, _(ii)_ global simplex identifiers, and _(iii)_ process identifiers. Note that, according to this formalism, the output of a topological algorithm _is_ distributed among several processes. Depending on the complexity of this output, specialized manipulation algorithms (handling communication between processes) may need to be developed later to exploit it appropriately in a post-process.

### _Implementation specification_

We now review the building blocks which are necessary to support the distributed model specified in Secs. 3.1 and 3.2. The pipeline combining the different topological algorithms can be encoded in the form of a Python script (c.f. contribution 5, Sec. 1.2). The initial decomposition of the global domain \(\mathcal{M}\) and the ghost layer (specifically, the ghost vertices and the ghost \(d\)-simplices) are computed by ParaView [2]. Then, the TTK algorithms present in the pipeline are instantiated by ParaView on each process; from this point on, they are able to access their own local block of _ghosted_ data and to communicate with other processes.

While ParaView offers in principle the possibility to compute process identifiers, we have observed several inconsistencies (in particular when using ghost layers), which prevented us from using it reliably. This required us to develop our own process identification strategy (Sec. 5.2). Moreover, while ParaView also offers in principle the possibility to generate global identifiers for vertices and cells (i.e. \(d\)-simplices), we have experienced technical difficulties with it (such as a dependence of the resulting identifiers on the number of processes), as well as issues which made it unusable for large-scale datasets (such as an excessively large memory footprint). This required us to develop our own strategy for the global identification of vertices and cells (i.e. \(d\)-simplices), documented in Secs. 4.2 and 4.3.

The input PL scalar field \(f\) is required to be injective on the vertices (c.f. Sec. 2.1). This can be obtained easily on a per-block basis, by locally sorting (within each process) the vertices by increasing \(f\) values. Then, when a topological algorithm needs to compare two vertices \(v\) and \(v^{\prime}\) from \(\mathcal{M}^{\prime}_{i}\), it just needs to retrieve their positions in the local order of \(f\) values. In the case where a process \(i\) needs to compare a vertex \(v_{j}\in\mathcal{M}^{\prime}_{j}\) to a vertex \(v_{k}\in\mathcal{M}^{\prime}_{k}\), their respective orders are disambiguated by considering their actual data values (i.e. \(f(v_{j})\) and \(f(v_{k})\)). If these are equal, the order is disambiguated based on their global vertex identifiers.

In addition, Sec. 5.3 documents low-level convenience procedures built on top of MPI, which facilitate the communication between MPI processes of information frequently manipulated by the core of the topological algorithms. Finally, Sec. 4 documents the extension of TTK's triangulation data structure to support our model of distributed input and output (Secs. 3.1 and 3.2).
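The vertex comparison scheme above can be summarized with the following predicate (an illustrative sketch; the field names are placeholders, not TTK's actual data layout):

```cpp
// Illustrative sketch of the vertex comparison of Sec. 3.3: within a block,
// the precomputed local order is authoritative; across blocks, data values
// are compared first, and global identifiers break the remaining ties.
struct Vertex {
  double f;           // data value f(v)
  long long globalId; // global vertex identifier (phi_0)
  long long order;    // position in the local order (same block only)
  int block;          // block (process) the vertex belongs to
};

bool isLower(const Vertex &a, const Vertex &b) {
  if(a.block == b.block)            // same block: use the local order
    return a.order < b.order;
  if(a.f != b.f)                    // distinct blocks: compare data values
    return a.f < b.f;
  return a.globalId < b.globalId;   // tie-break on global identifiers
}
```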
## 4 Distributed Triangulation

This section describes the distributed extension of TTK's triangulation data structure, later used by each topological algorithm. In the following, we assume that the input block is loaded in the memory of the local process \(i\) and ghosted (i.e. we consider the ghosted block \(\mathcal{M}^{\prime}_{i}\), Sec. 3.1.2). Moreover, we consider that, for each process \(i\), a list of _neighbor processes_ is available (Fig. 2_(e)_).

### _Initial design_

For completeness, we briefly summarize the initial implementation of TTK's triangulation data structure (see [59]). In the explicit case (the input is a simplicial mesh), this data structure takes as an input a pointer to an array of 3D points (modeling the vertices of \(\mathcal{M}\)), as well as a pointer to an array of indices (modeling the \(d\)-simplices of \(\mathcal{M}\)). In the implicit case (the input is a regular grid), it takes as an input the origin of the grid as well as its resolution and spacing across each dimension. These can be provided by any IO library (in our experiments, they are provided by VTK). Based on this input, the triangulation supports a variety of traversal routines, to address the needs of the algorithms.

1. **Simplex enumeration:** for any \(d^{\prime}\in\{0,\ldots,d\}\), the data structure can enumerate all the \(d^{\prime}\)-simplices of \(\mathcal{M}\).
2. **Stars and links:** for any \(d^{\prime}\in\{0,\ldots,d\}\), the data structure can enumerate all the simplices of the star and the link of any \(d^{\prime}\)-simplex \(\sigma\).
3. **Face / co-face:** for any \(d^{\prime}\in\{0,\ldots,d\}\), the data structure can enumerate all the \(d^{\prime\prime}\)-simplices \(\tau\) which are faces or co-faces of a \(d^{\prime}\)-simplex \(\sigma\), for any dimension \(d^{\prime\prime}\) (i.e. \(d^{\prime\prime}\neq d^{\prime}\) and \(d^{\prime\prime}\in\{0,\ldots,d\}\)).
4. **Boundary tests:** for any \(d^{\prime}\in\{0,\ldots,d-1\}\), the data structure can be queried to determine whether a \(d^{\prime}\)-simplex \(\sigma\) is on the boundary of \(\mathcal{M}\) or not.

As discussed in the original paper [59], such traversals are rather typical of topological algorithms, which may need to inspect extensively the local neighborhoods of simplices. All traversal queries (e.g. getting the \(i^{th}\) \(d^{\prime\prime}\)-dimensional co-face of a given \(d^{\prime}\)-simplex \(\sigma\)) are addressed by the data structure in constant time, which is of paramount importance to guarantee the runtime performance of the calling topological algorithms. This is supported by the data structure via a preconditioning mechanism. Specifically, in a preprocessing phase, each calling topological algorithm needs to explicitly declare the list of the types of traversal queries it is going to use during its main routine. This declaration triggers a preconditioning of the triangulation, which pre-computes and caches all the specified queries, whose results are later returned in constant time at query time. This design philosophy is particularly relevant in the context of analysis pipelines, where multiple algorithms are typically combined together. There, the preconditioning phase only pre-computes the information once (i.e. if it is not already available in cache). Thus, multiple algorithms can benefit from a common preconditioning of the data structure. Moreover, another benefit of this strategy is that it adapts the memory footprint of the data structure, based on the types of traversals required by the calling algorithm. Finally, note that in the specific case of regular grids, adjacency relations can be easily inferred, given the regular pattern of the grid sampling (considering the Freudenthal triangulation [30, 33] of the grid).
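The precondition-then-query idiom loosely follows the pattern below; the calls mirror TTK's documented triangulation interface, but exact signatures may vary across versions, so this should be read as an illustrative sketch.

```cpp
#include <vector>

// Illustrative sketch: a calling algorithm declares its traversal needs in
// a preprocess, then issues constant-time queries in its main routine.
template <class triangulationType>
int countNeighbors(const triangulationType &triangulation,
                   std::vector<int> &neighborCount) {
  const auto nVertices = triangulation.getNumberOfVertices();
  neighborCount.resize(nVertices);
  // Each query below is answered in constant time, because the caller
  // declared (in a preprocess, typically in its preconditioning step):
  //   triangulation.preconditionVertexNeighbors();
  for(int v = 0; v < nVertices; v++)
    neighborCount[v] = triangulation.getVertexNeighborNumber(v);
  return 0;
}
```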
For regular grids, TTK's triangulation thus supports an _implicit_ mode: for such inputs, the preconditioning does not store any information and the results of all the queries are emulated at runtime [59]. An extension to periodic grids (i.e. with periodic boundary conditions) is also implemented. The switch from one implementation to the other (explicit mode for meshes or implicit mode for grids) is automatically handled by TTK, and developers of topological algorithms only need to produce one implementation, interacting with TTK's generic triangulation data structure.

### _Distributed explicit triangulation_

This section describes our distributed implementation of the TTK triangulation in explicit mode, i.e. when an explicit simplicial complex is provided as a global input.

Fig. 3: Preconditioning of our distributed explicit triangulation. _(a)_ Each process \(i\) enumerates its number \(n_{v_{i}}\) of exclusively owned vertices and \(d\)-simplices. Next, an MPI prefix sum provides a local offset for each process to generate global identifiers. _(b)_ For each process \(i\), simplices of intermediate dimensions (edges (\(n_{e_{i}}\)), triangles) are locally enumerated for contiguous intervals of global identifiers of \(d\)-simplices (white numbers). Next, all the intervals are sent to the process \(0\), which sorts them first by process identifier, then by interval start, yielding a per-interval offset that each process can use to generate its global identifiers (black numbers). _(c)_ Within a given block, the vertices at the boundary of the domain \(\mathcal{M}\) are identified as non-ghost boundary vertices (large spheres). Next, a simplex which only contains boundary vertices is considered to be a boundary simplex (larger cylinders). _(d)_ The global identifiers and boundary information of the ghost simplices are retrieved through MPI communications with the neighbor processes. The ghost simplices on the global boundary are flagged as boundary simplices (larger spheres and cylinders).

#### 4.2.1 Distributed explicit preconditioning

The preconditioning of explicit triangulations in the distributed setting involves the computation of four main pieces of information: **(1)** global identifiers, **(2)** ghost global identifiers, **(3)** boundary, and **(4)** ghost boundary.

**(1) Global identifiers:** The first step consists in determining global identifiers for the vertices (i.e., the map \(\phi_{0}\), Sec. 3.1, as well as its inverse, \(\phi_{0}^{-1}\)). This step is not optional and is triggered automatically. Specifically, for each ghosted block \(\mathcal{M}^{\prime}_{i}\), the number \(n_{v_{i}}\) of _non-ghost_ vertices that the block exclusively owns is computed (Fig. 3_(a)_). Next, an MPI prefix sum is performed to determine the offset \(o_{0}(i)\) that each block \(i\) should add to its local vertex identifiers to obtain its global vertex identifiers. This map (\(\phi_{0}\)) is typically stored with a std::vector, while its inverse (\(\phi_{0}^{-1}\)) is stored with a std::unordered_map. The map \(\phi_{d}\) and its inverse \(\phi_{d}^{-1}\) are computed similarly. Note that the global maps \(\phi_{0}\) and \(\phi_{d}\) are explicitly stored as VTK data arrays attached to the vtkDataSet data structure. This is useful for example when the number of processes changes along the pipeline (e.g. by using the RedistributeDataSet ParaView filter).
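This offset computation amounts to an exclusive prefix sum over the per-process counts of exclusively owned vertices; a minimal sketch using MPI_Exscan could read as follows (TTK's actual implementation may differ):

```cpp
#include <mpi.h>

// Sketch of the offset computation for global vertex identifiers (step 1):
// process i receives the sum of the counts of processes 0 .. i-1.
long long computeVertexOffset(long long nOwnedVertices, MPI_Comm comm) {
  long long offset = 0;
  MPI_Exscan(&nOwnedVertices, &offset, 1, MPI_LONG_LONG, MPI_SUM, comm);
  int rank = 0;
  MPI_Comm_rank(comm, &rank);
  if(rank == 0)
    offset = 0; // MPI_Exscan leaves the receive buffer undefined on rank 0
  // the global identifier of the k-th owned vertex is then offset + k
  return offset;
}
```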
Next, global identifiers need to be computed for the simplices of intermediate dimension (Fig. 3_(b)_), i.e. the map \(\phi_{d^{\prime}}\) and its inverse \(\phi_{d^{\prime}}^{-1}\) for any \(d^{\prime}\in\{1,\ldots,d-1\}\). This step is optional and is only triggered if the calling algorithm pre-declared the usage of these simplices in the preconditioning phase. A different algorithm, detailed below, is required for the computation of the global identifiers for the simplices of intermediate dimension, because VTK data structures only model the vertices and the \(d\)-simplices. Generating global identifiers for intermediate simplices before the distribution of the data (through RedistributeDataSet) would result in the loss of these global identifiers, as only VTK data structures are redistributed.

First, each process \(i\) identifies, among its list of exclusively owned \(d\)-simplices, intervals of contiguous global identifiers. These are typically interleaved with global identifiers of ghost \(d\)-simplices. For each interval \(x\), the \(d^{\prime}\)-simplices are provided with a local identifier (with the same procedure as used in the non-distributed setting). Given a \(d^{\prime}\)-simplex \(\sigma\) at the interface between two blocks (i.e. \(\sigma\) is a face of a ghost \(d\)-simplex), a tie-break strategy needs to be established, to guarantee that only one process tries to generate an identifier for \(\sigma\). Specifically, following Sec. 3.1, the process \(i\) generates an identifier for \(\sigma\) only if \(i\) is the lowest process identifier among the exclusive owners of the \(d\)-simplices in \(St(\sigma)\). Next, all the intervals (along with their process identifier and number of \(d^{\prime}\)-simplices) are sent to the process \(0\) which, after ordering the intervals of \(d\)-simplices first by process identifier and then by local identifier, determines the offset \(o_{d^{\prime}}(x)\) that each interval \(x\) should add to its local \(d^{\prime}\)-simplex identifiers to obtain its global identifiers.

**(2) Ghost global identifiers:** The second step of the preconditioning consists in retrieving, for a given block \(\mathcal{M}^{\prime}_{i}\), the global identifiers of its ghost simplices. This step is not optional and is always triggered (on a per simplex dimension basis). This feature can be particularly useful when performing local computations on the boundary of the block (e.g. discrete gradient, Sec. 6). Once all the processes have established their vertex global identifiers, each process \(i\) queries each of its neighbor processes \(j\) to obtain the global identifiers of its ghost vertices. Specifically, to determine the correspondence between the vertices exclusively owned by \(j\) and the ghost vertices in \(\mathcal{M}^{\prime}_{i}\), a KD-tree data structure is employed (to find the closest ghost vertex in \(\mathcal{M}^{\prime}_{i}\) corresponding to a given vertex from \(\mathcal{M}_{j}\)). Once global vertex identifiers are available for the ghost vertices of \(\mathcal{M}^{\prime}_{i}\), a simpler exchange procedure can be established to collect the global identifiers of each ghost \(d^{\prime}\)-simplex of \(\mathcal{M}^{\prime}_{i}\) (with \(d^{\prime}\in\{1,\ldots,d\}\)). Specifically, each process \(i\) queries each of its neighbor processes \(j\) to collect the global identifiers of its ghost \(d\)-simplices. Next, the correspondence between the ghost \(d\)-simplices of \(\mathcal{M}^{\prime}_{i}\) and the non-ghost \(d\)-simplices of \(\mathcal{M}^{\prime}_{j}\) is established based on their global vertex identifiers.
Finally, the correspondence between the ghost \(d^{\prime}\)-simplices (for \(d^{\prime}\in\{1,\ldots,d-1\}\)) of \(\mathcal{M}^{\prime}_{i}\) and the non-ghost \(d^{\prime}\)-simplices of \(\mathcal{M}^{\prime}_{j}\) is established based on the global identifier of the \(d\)-simplex of which each \(d^{\prime}\)-simplex is a face.

**(3) Boundary:** The third step consists in determining the simplices which are on the boundary of the global domain \(\mathcal{M}\). This step is optional and is only triggered (on a per simplex dimension basis) if the calling algorithm pre-declared the usage of boundary simplices in the preconditioning phase. This feature is particularly useful for algorithms which process as special cases the simplices which are on the boundary of \(\mathcal{M}\) (e.g. critical point extraction, Sec. 6). First, each process \(i\) identifies the boundary vertices of its _ghosted block_ \(\mathcal{M}^{\prime}_{i}\), noted \(\partial\mathcal{M}^{\prime 0}_{i}\) (see Fig. 3_(c)_), with the exact procedure used in the non-distributed setting [59]. Then, thanks to the ghost layer, it is guaranteed that, among the set of vertices identified above, \(\partial\mathcal{M}^{\prime 0}_{i}\), the non-ghost vertices are indeed on the boundary of the global domain \(\mathcal{M}\). Finally, any \(d^{\prime}\)-simplex \(\sigma\in\mathcal{M}^{\prime}_{i}\) is marked as a boundary simplex if all its vertices are on the boundary of \(\mathcal{M}\).

**(4) Ghost boundary:** Similarly to step _(2)_, a final step of data exchange between the process \(i\) and its neighbors enables the retrieval of the ghost simplices of \(\mathcal{M}^{\prime}_{i}\) which are also on the boundary (Fig. 3_(d)_). This step is optional and is only triggered if the calling algorithm pre-declared the usage of boundary simplices in the preconditioning phase.

Finally, the preconditioning of any other traversal routine is identical to the non-distributed setting.

#### 4.2.2 Distributed explicit queries

In this section, we describe the implementation of the traversals of the triangulation, as queried by a calling algorithm. This assumes that the calling algorithm first called the appropriate preconditioning functions in a pre-process. The traversal of a local ghosted block \(\mathcal{M}^{\prime}_{i}\) by an algorithm instantiated on the process \(i\) is performed identically to the non-distributed setting, with local simplex identifiers. The only difference concerns the \(d^{\prime}\)-simplices provided on the algorithm input by their identifiers (see Sec. 6.3.5 for an example). These simplices have to be expressed with _global identifiers_ (i.e. with the map \(\phi_{d^{\prime}}\)); a translation into local identifiers is thus necessary (with the inverse map \(\phi_{d^{\prime}}^{-1}\)) prior to any processing. Symmetrically, identifiers of \(d^{\prime}\)-simplices stored on the algorithm output (see Sec. 6.3.1 for an example) need to be expressed as _global identifiers_ (i.e. with the map \(\phi_{d^{\prime}}\)). These input and output identifier translations must be implemented by the calling algorithm, as they are required to ensure the interoperability with other calling algorithms further down the analysis pipeline.

### _Distributed implicit triangulation_

This section describes our distributed implementation of the TTK triangulation in implicit mode, i.e. when a regular grid is provided as a global input.
Then, as described below, most traversal information can be emulated at runtime, given the regular sampling pattern of regular grids (by considering the Freudenthal triangulation [30, 33] of the input grid).

#### 4.3.1 Distributed implicit preconditioning

In implicit mode, the preconditioning of the triangulation involves only one step, which consists in identifying the position of the local ghosted grid \(\mathcal{M}_{i}^{\prime}\) within the global grid \(\mathcal{M}\). This step is not optional and is triggered automatically. The preconditioning of any traversal routine returns immediately without any processing (all queries are emulated).

First, each process \(i\) computes the local bounding box \(\mathcal{B}_{i}\) of its ghosted block \(\mathcal{M}_{i}^{\prime}\) by recording the minimum and maximum \(x\), \(y\) and \(z\) coordinates among all its vertices. Second, an MPI parallel reduction is performed on the resulting bounding boxes to determine the global bounding box \(\mathcal{B}\) of the global domain \(\mathcal{M}\). The vertex \(o\) is defined as the origin of \(\mathcal{M}_{i}^{\prime}\), with \((X_{o}^{\prime},Y_{o}^{\prime},Z_{o}^{\prime})\) its floating-point coordinates (Fig. 4_(a)_). The vertex \(O\) is defined as the origin of \(\mathcal{M}\), with \((X_{O}^{\prime},Y_{O}^{\prime},Z_{O}^{\prime})\) its floating-point coordinates (Fig. 4_(a)_). Given the floating-point spacing along each dimension \((s_{x},s_{y},s_{z})\) observed on the block \(\mathcal{M}_{i}^{\prime}\) (Fig. 4_(b)_), each process \(i\) infers the resolution of the global grid, i.e. the global number of vertices along each dimension, \(n_{X}\), \(n_{Y}\) and \(n_{Z}\). Then the local grid offset \((X_{o},Y_{o},Z_{o})\) (Fig. 4_(b)_), which corresponds to the global discrete coordinates of \(o\), is computed from \((X_{O}^{\prime},Y_{O}^{\prime},Z_{O}^{\prime})\), \((X_{o}^{\prime},Y_{o}^{\prime},Z_{o}^{\prime})\) and \((s_{x},s_{y},s_{z})\). Each process can now locally instantiate its own global implicit triangulation object, modeling the entire domain \(\mathcal{M}\). Note that, since the memory footprint of the implicit triangulation is extremely modest (only the bounding box and the grid resolution are stored), this data duplication implies no significant memory overhead.

#### 4.3.2 Distributed implicit queries

In this section, we describe the implementation of the traversals of the triangulation, as queried by a calling algorithm. The traversal of a local ghosted block \(\mathcal{M}_{i}^{\prime}\) by an algorithm instantiated on the process \(i\) is performed identically to the non-distributed setting, with local simplex identifiers. Similarly to the explicit case (see Sec. 4.2.2), input and output identifiers are expressed globally and need to be translated to local identifiers for the local computation on the local block of data. The important difference with the explicit mode is that all the information computed in the explicit preconditioning (i.e. _(1)_ global identifiers, _(2)_ ghost global identifiers, _(3)_ boundary, and _(4)_ ghost boundary, see Sec. 4.2.1) now needs to be emulated at run time (i.e. upon the query of this information by the calling algorithm).
_(1)_ **Global identifiers:** Given a local vertex identifier, its global discrete coordinates \((X,Y,Z)\) (with \(X\in[0,n_{X}-1]\), \(Y\in[0,n_{Y}-1]\), and \(Z\in[0,n_{Z}-1]\)) in the global grid \(\mathcal{M}\) are inferred from its local discrete point coordinates \((x,y,z)\) (with \(x\in[0,n_{x}-1]\), \(y\in[0,n_{y}-1]\), and \(z\in[0,n_{z}-1]\), where \(n_{x}\), \(n_{y}\) and \(n_{z}\) denote the number of vertices of the grid \(\mathcal{M}_{i}^{\prime}\) in each direction, Fig. 4_(c)_) and the local grid offset \((X_{o},Y_{o},Z_{o})\). Using these coordinates \((X,Y,Z)\), the global identifier of \(v\) is computed on-the-fly with the same procedure as used in the non-distributed setting [59] (i.e. by global row-major indexing). The same is done for \(d\)-simplices, determining the bounding box of \(\mathcal{M}_{i}^{\prime}\) using \(d\)-simplices instead of vertices. The global identifier of any \(d^{\prime}\)-simplex \(\sigma\in\mathcal{M}_{i}^{\prime}\) with \(d^{\prime}\in\{1,\ldots,d-1\}\) is computed based on the global identifiers of its vertices. Let \(v\) be the vertex with the lowest global identifier within \(\sigma\). The global identifier of \(\sigma\) is then obtained by querying the local copy of the global grid \(\mathcal{M}\) (see Sec. 4.3.1). Specifically, the star of \(v\) is traversed in search of a \(d^{\prime}\)-simplex \(\sigma^{\prime}\in\mathcal{M}\) which has exactly the same set of global vertex identifiers as \(\sigma\). Finally, the simplex identifier of \(\sigma^{\prime}\) returned by \(\mathcal{M}\) is used as global simplex identifier.

_(2)_ **Ghost global identifiers:** The global identifier of a ghost \(d^{\prime}\)-simplex of \(\mathcal{M}_{i}^{\prime}\) is computed with the same procedure as for non-ghost simplices (above).

_(3)_ **Boundary:** To decide if a given \(d\)-simplex \(\sigma\in\mathcal{M}_{i}^{\prime}\) is on the boundary of \(\mathcal{M}\), its global identifier is first retrieved (cf. above) and the local copy of the global grid \(\mathcal{M}\) is queried for a boundary check based on this global identifier (with the exact procedure used in the non-distributed setting [59]).

Fig. 4: Preconditioning of our distributed implicit triangulation. _(a)_ Each process \(i\) computes the bounding box \(\mathcal{B}_{i}\) of its ghosted block \(\mathcal{M}_{i}^{\prime}\). The vertex \(o\), respectively \(O\), is the origin of \(\mathcal{M}_{i}^{\prime}\), respectively \(\mathcal{M}\), with \((X_{o}^{\prime},Y_{o}^{\prime},Z_{o}^{\prime})\), respectively \((X_{O}^{\prime},Y_{O}^{\prime},Z_{O}^{\prime})\), its floating-point coordinates. The bounding box \(\mathcal{B}\) of \(\mathcal{M}\) is computed from all the local \(\mathcal{B}_{i}\). _(b)_ Two key pieces of information are computed at this step: the dimensions of the global grid \((n_{X},n_{Y},n_{Z})\) (the number of vertices of \(\mathcal{M}\) in each direction) and the local grid offset \((X_{o},Y_{o},Z_{o})\) (the global discrete coordinates of \(o\)). The latter is computed from \((X_{O}^{\prime},Y_{O}^{\prime},Z_{O}^{\prime})\), \((X_{o}^{\prime},Y_{o}^{\prime},Z_{o}^{\prime})\) and the floating-point spacing of the grid \((s_{x},s_{y},s_{z})\). Following that, each process locally instantiates a global implicit triangulation modeling \(\mathcal{M}\).
_(c)_ Given a local vertex identifier, its global discrete coordinates \((X,Y,Z)\) in \(\mathcal{M}\) are inferred from its local discrete point coordinates \((x,y,z)\) (with \(x\in[0,n_{x}-1]\), \(y\in[0,n_{y}-1]\), and \(z\in[0,n_{z}-1]\), \(n_{x}\), \(n_{y}\) and \(n_{z}\) being the number of vertices of the grid \(\mathcal{M}_{i}^{\prime}\) in each direction) and the local grid offset. Next, its global identifier, \(\phi_{0}(v)\), is determined on-the-fly by global row-major indexing.

**(4) Ghost boundary:** The boundary check for ghost simplices is computed with the exact same procedure as for non-ghost simplices (previous paragraph).

### _Distributed implicit periodic triangulation_

Since ParaView does not natively support periodic grids, the periodic grid implementation is specific to TTK and is implemented as a triangulation similar to the implicit triangulation. It is now also supported in distributed mode and can be used with the same algorithms as the explicit and implicit triangulations. The preconditioning, although similar to the implicit preconditioning, requires adjustments to account for the periodicity.

First, since ParaView's ghost cell generator only produces ghosts at the interface between the domains of processes, an extra layer of ghost simplices needs to be computed, as illustrated in Fig. 5. Specifically, each process \(i\) checks if its block \(\mathcal{M}^{\prime}_{i}\) is located on the boundary of the global grid \(\mathcal{M}\). This is achieved by comparing the corresponding bounding boxes. If so, the list of _periodic faces_ of the bounding box \(\mathcal{B}\) of \(\mathcal{M}\) along which \(\mathcal{M}^{\prime}_{i}\) is located is identified (i.e. left, right, bottom, top, front, back), as well as the center of the bounding box \(\mathcal{B}_{i}\). Processes exchange these lists with each other and use them to identify their periodic neighbors as well as the chunks of data to be sent. In 3D, the chunks can take three forms: a face, an edge or a corner. In Fig. 5, as the dataset is 2D, chunks take two forms: edges (yellow and grey) and corners (blue). The number of periodic faces necessary to identify a chunk changes depending on its form. In Fig. 5, the grey process identifies its edge to be sent to the orange process because one of the faces of the local domain \(\mathcal{M}^{\prime}_{i}\) is a periodic face: upper (for grey) and lower (for orange). However, for the blue process, two periodic faces are required: upper and left (for blue), and lower and right (for orange). Next, processes exchange their data and each received chunk is added to the periodic ghost layer, based on the periodic faces along which the sender and receiver are located. Additionally, the local adjacency graph of \(i\) is updated to include an edge connecting \(i\) to each such periodic neighbor \(j\).

Second, modifications are necessary in the core periodic triangulation data structure. Most of the preconditioning for distributed computation is identical to the implicit triangulation; however, the formulas used to compute the global (from a local identifier) and local (from a global identifier) coordinates of a vertex need to be adapted to account for the additional vertices added as periodic ghosts. Note that, since periodic grids have no boundary, the boundary check always returns false for this triangulation.

Fig. 5: Preconditioning of our distributed periodic implicit triangulation. This triangulation type is handled similarly to the implicit case, but additional ghost simplices need to be computed. Given a data block \(\mathcal{M}_{i}\) (_(a)_, orange), ParaView generates a first layer of ghost \(d\)-simplices (_(b)_, blue, grey, yellow). If \(\mathcal{M}_{i}\) is located on the boundary of the global grid \(\mathcal{M}\), periodic boundary conditions must be considered by adding an extra layer of ghost \(d\)-simplices (arrows) for each periodic face of \(\mathcal{M}\) _(c)_.
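Before moving to the pipeline-level infrastructure, the on-the-fly identifier computation of Sec. 4.3.2 (step 1) can be illustrated as follows; this sketch assumes VTK's point ordering for regular grids, where the \(x\) coordinate varies fastest, and the names are illustrative.

```cpp
// Illustrative sketch: on-the-fly global vertex identifier in implicit
// mode, from local discrete coordinates (x, y, z), the local grid offset
// (Xo, Yo, Zo) and the global grid resolution (nX, nY, nZ).
long long globalVertexId(int x, int y, int z,                // local coords
                         long long Xo, long long Yo, long long Zo,
                         long long nX, long long nY) {
  const long long X = Xo + x; // global discrete coordinates
  const long long Y = Yo + y;
  const long long Z = Zo + z;
  return X + Y * nX + Z * nX * nY; // global row-major indexing
}
```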
## 5 Distributed Pipeline

This section provides an overview of the overall processing by TTK of a distributed dataset. Specifically, it documents all the preconditioning steps that are automatically handled by the core infrastructure of TTK, in addition to the preconditioning of the triangulation (Sec. 4), in order to complete the implementation of the distributed model described in Sec. 3. Moreover, it documents all the low-level features interfacing TTK with MPI, which ease inter-process communication at the lowest levels of TTK.

### _Overview_

The input data is typically provided in the form of a distributed dataset loaded from a filesystem (e.g. the _PVTI_ file format) or provided in-situ (e.g. with _Catalyst_). Specifically, the data is given in the form of disjoint blocks and each process \(i\) is initially given a data block \(\mathcal{M}_{i}\) (Sec. 3.1). Each topological algorithm implemented in TTK inherits from the generic class named ttkAlgorithm, itself inheriting from the generic VTK data processing class named vtkAlgorithm. Then, when reaching a TTK algorithm within a distributed pipeline, ParaView will call the function ProcessRequest (from the vtkAlgorithm interface). The re-implementation of this function in the ttkAlgorithm class will trigger all the necessary preconditioning (described below) before calling the actual topological algorithm (see Sec. 6 for examples), implemented in the generic function RequestData (defined in the vtkAlgorithm interface).

Fig. 6: Overview of the overall pipeline upon the delivery of a data block \(\mathcal{M}_{i}\) by ParaView (top). A step of pipeline preconditioning specialized for the distributed setting (top yellow frame) is automatically triggered before calling the actual implementation of the topological algorithm. Note that each preconditioning phase is only triggered if the corresponding information has not been cached yet. Then, for practical pipelines, the preconditioning typically only occurs before the first algorithm of the pipeline.

Specifically, this distributed preconditioning includes the following phases (Fig. 6).

_(1) Ghost layer generation:_ if the local data block does not include any ghost cells, the ghost layer generation algorithm (implemented by ParaView) is automatically triggered. This step is omitted if a valid ghost layer is already present.

_(2) Local adjacency graph (LAG) initialization:_ A first estimation of the adjacency graph local to the data block (i.e. connecting it to its neighbor blocks) is constructed. This step is described in Sec. 5.2. This step is omitted if a valid LAG is already present.

_(3) Triangulation instantiation:_ this step instantiates a new TTK triangulation data structure (Sec. 4). This step is omitted if a valid triangulation is already present.

_(4) Process identifier generation:_ this step computes the process identifier \(p^{d^{\prime}}\) for each \(d^{\prime}\)-simplex (as specified in Sec. 3.1). This is described in Sec. 5.2. This step is omitted if valid process identifiers are already present.
_(5) Ghost data exchange:_ this step computes, for each neighbor process \(j\), the list of vertices or cells owned by the process \(i\) which are ghosts in the process \(j\). This is described in Sec. 5.3. This step is optional and is only triggered if the calling algorithm pre-declared its usage in the preconditioning phase.

After these steps, the traditional TTK preconditioning is executed. This includes the pre-sorting of the local data values (if sorted values are not already present) as well as the preconditioning of the triangulation (if it has not been preconditioned already).

### _High-level infrastructure_

This section describes the implementation of the pipeline preconditioning mentioned in the above overview (Sec. 5.1), specifically, the routines which are not directly related to the distributed triangulation (which has been covered in Sec. 4).

_(1) Local adjacency graph (LAG) initialization:_ Given a ghosted block \(\mathcal{M}^{\prime}_{i}\), the goal of this step is to store a list of the processes which are responsible for the blocks adjacent to \(\mathcal{M}^{\prime}_{i}\) (Fig. 2_(e)_). First, each process \(i\) computes the bounding box \(\mathcal{B}_{i}\) of its ghosted block \(\mathcal{M}^{\prime}_{i}\). Next, all processes exchange their bounding boxes. After that, each process can initialize a list of neighbor processes by collecting the processes whose bounding box intersects \(\mathcal{B}_{i}\). Note that, in the case of an explicit input, this algorithm can lead to an over-estimation: two processes \(i\) and \(j\) can have intersecting bounding boxes without sharing ghost simplices (e.g. concave blocks). This first estimation of the LAG will be updated after the generation of the process identifiers (next paragraph).

_(2) Process identifier generation:_ As discussed in Sec. 3.1, we associate to each \(d^{\prime}\)-simplex \(\sigma\) the value \(p^{d^{\prime}}(\sigma)\), which encodes the identifier of the process which exclusively owns \(\sigma\). This convenience feature can be particularly useful to quickly identify where to continue a local processing when reaching the boundary of a block (e.g. integral lines, Sec. 6). Each vertex \(v\in\mathcal{M}^{\prime}_{i}\) is already marked by ParaView as being a ghost vertex or not. For each vertex \(v\) which is marked as non-ghost, we set \(p^{0}(v)=i\). Then, we construct the list \(L_{g}(i)\) which includes the global identifiers of all ghost vertices. Next, \(L_{g}(i)\) is sent to each process \(j\) marked as being adjacent in the LAG (previous paragraph). Then the process \(j\) returns its process identifier (\(j\)) as well as the subset of vertices of \(L_{g}(i)\) which are marked as non-ghost in \(\mathcal{M}^{\prime}_{j}\). Finally, the process \(i\) sets the value \(p^{0}(v)\) to \(j\) for each of the vertices returned by \(j\). The procedure for the \(d\)-simplices is identical. Finally, the process identifiers for the simplices of intermediate dimensions are inferred from those of the \(d\)-simplices, as described at the end of Sec. 3.1.

As discussed in the previous paragraph, the first computation of the LAG, as it uses bounding boxes, may lead to an over-estimation for explicit inputs (e.g. concave blocks). Therefore, following the generation of the process identifiers, the LAG is revised using these identifiers. Each process makes a new list of neighboring processes using the process identifiers of its ghost vertices. All lists are then sent to process 0, which ensures the reciprocity of the new neighbor relation.
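The per-neighbor exchange just described can be sketched as follows (a hypothetical sketch; in particular, the symmetric answer of the neighbor, returning the subset of identifiers it exclusively owns, is omitted here):

```cpp
#include <mpi.h>
#include <vector>

// Illustrative sketch: process i sends the global identifiers of its ghost
// vertices to a neighbor process, and receives the neighbor's ghost
// identifiers in return.
std::vector<long long> exchangeGhostIds(
  const std::vector<long long> &myGhosts, int neighbor, MPI_Comm comm) {
  int sendCount = static_cast<int>(myGhosts.size()), recvCount = 0;
  // exchange the message sizes first
  MPI_Sendrecv(&sendCount, 1, MPI_INT, neighbor, 0,
               &recvCount, 1, MPI_INT, neighbor, 0,
               comm, MPI_STATUS_IGNORE);
  // then exchange the ghost global identifiers themselves
  std::vector<long long> theirGhosts(recvCount);
  MPI_Sendrecv(myGhosts.data(), sendCount, MPI_LONG_LONG, neighbor, 1,
               theirGhosts.data(), recvCount, MPI_LONG_LONG, neighbor, 1,
               comm, MPI_STATUS_IGNORE);
  return theirGhosts;
}
```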
Note that the above description covers the procedure when the input is an explicit triangulation. In implicit mode (regular grids), the preconditioning step of process identifier generation is limited to the computation of the discrete bounding boxes of the non-ghosted blocks \(\mathcal{M}_{i}\). A discrete bounding box corresponds to the minimum and maximum discrete coordinates of a block in all directions. The bounding boxes are then exchanged between neighboring processes. The process identifiers themselves are inferred on-the-fly, at query time, from the discrete bounding boxes (i.e. with very little memory overhead).

### _Low-level infrastructure_

This section describes the implementation of a few low-level convenience features for interfacing TTK with MPI.

_(1) Ghost data exchange:_ In many scenarios, it may be desirable to update the data attached to the ghost simplices of a given block \(\mathcal{M}^{\prime}_{i}\). This is the case for instance when considering a smoothing operation (Sec. 6.3.4). At each iteration, the process \(i\) needs to retrieve the new, smoothed \(f\) data values for its ghost vertices, prior to running a new smoothing iteration. We implement this task in TTK as a simple convenience function. Using the list of neighbors (collected from the LAG), the process \(i\) will, for each neighbor process \(j\), send the global identifiers of the simplices which are ghosts for \(i\) and owned by \(j\) (using their process identifiers). These computation steps are done once, in an optional preconditioning step (step 5, Sec. 5.1). The list of ghost vertex identifiers is cached and used at runtime by \(j\) to send the updated data values when necessary. A similar procedure is available for \(d\)-simplices.

_(2) Fine-scale time performance measurements:_ When timing the execution of a specific distributed algorithm, simply measuring the execution time on one process may not represent the execution time of the whole algorithm, as the local execution time may greatly vary from one process to the next. An established way to measure time in a distributed-memory environment consists in adding an MPI barrier before starting and before stopping the timer. The call before starting the timer forces all processes to start simultaneously, and the call before stopping the timer ensures that the time measurement includes the slowest process. In this way, the execution time from (e.g.) process 0 corresponds to the overall MPI execution time. In TTK, this can be done using the two functions startMPITimer and stopMPITimer. However, the two MPI barriers add synchronization points, slowing down the execution. Hence, the execution time is not measured by default, but only when the compilation variable TTK_ENABLE_MPI_TIME is set to ON for TTK. This fine-scale time performance procedure has been used to evaluate the individual performance of each algorithm in Sec. 7.1, whereas the aggregated overall time measurements for integrated pipelines (including multiple algorithms, Sec. 7.2) have been obtained with ParaView's timer.
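This barrier-based timing pattern corresponds to the following sketch (standard MPI; in TTK, startMPITimer and stopMPITimer implement it, and runDistributedAlgorithm is a placeholder for the computation being timed):

```cpp
#include <mpi.h>

// Illustrative sketch of the barrier-based timing pattern of Sec. 5.3.
double timeDistributed(void (*runDistributedAlgorithm)(), MPI_Comm comm) {
  MPI_Barrier(comm);            // force all processes to start together
  const double t0 = MPI_Wtime();
  runDistributedAlgorithm();
  MPI_Barrier(comm);            // wait for the slowest process
  return MPI_Wtime() - t0;      // same overall time on every process
}
```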
## 6 Examples

Secs. 4 and 5 documented the implementation of the distributed model specified in Sec. 3. In this section, we now describe how to make use of this model to extend topological algorithms to the distributed setting. Specifically, we will mostly focus on the algorithms described in Sec. 2.

### _Algorithm taxonomy_

In this section, we present a taxonomy of the topological algorithms implemented in TTK, based on their communication needs on distributed-memory architectures.

_(1)_ **No Communication (NC):** This category includes algorithms for which processes do not need to communicate with each other to complete their computation. This is the simplest form of algorithms and the easiest to extend to a distributed setting. Such algorithms are often referred to as _embarrassingly parallel_. In TTK, this includes algorithms performing local operations and generating a local output, e.g.: critical point classification (Sec. 2.2), discrete gradient computation (Sec. 2.4), Jacobi set extraction [16], fiber surface computation [38] and marching tetrahedra.

_(2)_ **Data-Independent Communications (DIC):** This category includes algorithms for which processes do need to communicate with each other, but at predictable stages of the algorithm, with a predictable set of processes and communication volume, independently of the data values. This typically corresponds to algorithms performing a local operation on their block which need intermediate results from adjacent blocks to finalize their computation. In TTK, this includes for instance: data normalization, data or geometry smoothing (Sec. 6.3), or continuous scatter plots [3].

_(3)_ **Data-Dependent Communications (DDC):** This category includes the algorithms which do not fall within the previous categories, i.e. for which communications can occur at unpredictable stages of the algorithm, with an unpredictable set of processes or communication volume, depending on the data values. This is the most difficult category of algorithms to extend to the distributed setting, since an efficient port would require a complete re-design of the algorithm. Unfortunately, we conjecture that most topological algorithms fall into that category. In TTK, this includes for instance: integral lines (Sec. 2.3), persistence diagrams [26], merge and contour trees [24], path compression [41], Reeb graphs [25], Morse-Smale complexes [59], Rips complexes, topological simplification [40, 60], Reeb spaces [58], etc.

### _Hybrid MPI+thread strategy_

In this section, we present general aspects regarding the combination of distributed and shared-memory parallelism, which will be used within the port examples (Sec. 6.3). Current compute clusters are based on multiple nodes, each node including multi-core processors. For performance reasons, one then runs one execution flow per core, following either a _pure MPI_ strategy, i.e. with one MPI process per core, or an _MPI+thread_ strategy, i.e. with one MPI process per node (or per processor) and multiple threads within each MPI process. The latter can improve performance thanks to fewer MPI communications (due to fewer MPI processes), to a (better) dynamic load balancing among threads within each MPI process, and to a multi-core speedup for computations specifically performed by the MPI process 0 (e.g. Sec. 4.2.1). The overall memory footprint is also lower with the latter, since using fewer MPI processes implies fewer ghost simplices and less data duplication.

Regarding the MPI+thread strategy and the port examples described in Sec. 6.3, we rely in TTK on the MPI_THREAD_FUNNELED thread support level in MPI [44]. According to this level of thread support, only the master (i.e. original) thread can issue calls to MPI routines.
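For reference, this support level is requested at initialization as sketched below (standard MPI, independent of TTK):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
  int provided = 0;
  // request MPI_THREAD_FUNNELED: threads may exist, but only the thread
  // that called MPI_Init_thread issues MPI calls.
  MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
  if(provided < MPI_THREAD_FUNNELED)
    std::fprintf(stderr, "Warning: MPI_THREAD_FUNNELED not supported.\n");
  // ... OpenMP-parallel computation here, MPI calls from the master thread.
  MPI_Finalize();
  return 0;
}
```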
In each port example, within each MPI process, the communication steps (if any) are thus performed in serial, whereas the computation steps are multi-threaded, using the OpenMP implementations already available in TTK [59]. Besides, when using the MPI+thread implementations, one can choose to run one MPI process per node or one MPI process per processor (hence, e.g., two MPI processes on a node with two processors). The former leads to fewer MPI processes in total, and enables one to balance the compute load among all the cores of the node. The latter can avoid performance issues due to NUMA (non-uniform memory access) effects, which occur when a thread running on a given processor accesses data on the memory local to the other processor. Options specific to the MPI implementation enable the user to choose one of the two possibilities. The threads are also bound to the CPU cores using the OpenMP thread affinity features [51].

### _Distributed algorithm examples_

We now illustrate the taxonomy of Sec. 6.1 by describing the distributed-memory parallelization of algorithms belonging to each of the categories, while exploiting the distributed model we introduced (Sec. 3). At the code level, we try as much as possible to keep only one implementation for both settings (i.e. distributed vs. non-distributed), where the specific instructions dealing with the distributed setting are protected by #ifdef TTK_ENABLE_MPI preprocessor directives.

#### 6.3.1 NC: Scalar Field Critical Points

This algorithm processes each vertex \(v\) of the domain independently and classifies its critical type based on the classification presented in Sec. 2.2. Since it processes a local piece of data (the lower and upper links \(Lk^{-}(v)\) and \(Lk^{+}(v)\)) and generates a localized output (a list of critical points for the local block), it does not require any communication (Fig. 7_(a)_). Thus, it is classified in the category NC of the above taxonomy. To port this embarrassingly parallel algorithm to the distributed setting, two modifications are required. First, the algorithm does not classify ghost vertices (which will be classified by other processes). Second, to fulfill the distributed output specification (Sec. 3.2), each output critical point is associated with its _global_ vertex identifier (instead of its local identifier).

#### 6.3.2 NC: Discrete Gradient

Similarly to the previous case, this algorithm processes each vertex \(v\) of the domain independently. Specifically, it generates discrete vectors for the lower star \(St^{-}(v)\), and the simplices which are assigned to no discrete vector are stored as critical simplices (Sec. 2.4). Similarly to the previous case, this algorithm only requires local data and only produces local outputs, without needing communications (hence its NC classification, Fig. 7_(b)_). The port of this embarrassingly parallel algorithm requires two modifications. First, only the vertices which are exclusively owned by the current process (Sec. 3.1) are processed. The gradient for ghost vertices, and for the simplices in their lower stars, is not computed. Second, similarly to the previous case, the simplex identifiers associated with the discrete vectors and critical simplices are expressed with _global_ identifiers (instead of local ones).

#### 6.3.3 DIC: Scalar Field Normalizer

This convenience procedure simply normalizes an input scalar field \(f\) to the range \([0,1]\). This algorithm can be divided into two steps. First, each process computes its local extreme values, and all processes exchange their extreme values to determine the values \(f_{min}\) and \(f_{max}\) for the entire domain \(\mathcal{M}\), using MPI collective communications (namely Allreduce, with the minimum and maximum operations). Second, all data values are normalized independently, based on \(f_{min}\) and \(f_{max}\). The first step of the algorithm requires inter-process communications in a way which is predictable and independent of the actual data values (hence its DIC classification in the taxonomy).
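Assuming the field is stored as one std::vector<double> per block, this two-step scheme could be sketched as follows (an illustrative sketch, not TTK's actual implementation):

```cpp
#include <mpi.h>
#include <vector>

// Illustrative sketch of the distributed normalization (Sec. 6.3.3):
// a global reduction for the extrema, then a purely local normalization.
// Assumes every block is non-empty (all processes enter the collectives)
// and that fMax > fMin over the global domain.
void normalize(std::vector<double> &f, MPI_Comm comm) {
  double localMin = f[0], localMax = f[0];
  for(const double v : f) {
    localMin = v < localMin ? v : localMin;
    localMax = v > localMax ? v : localMax;
  }
  double fMin = 0, fMax = 0; // extrema over the entire domain M
  MPI_Allreduce(&localMin, &fMin, 1, MPI_DOUBLE, MPI_MIN, comm);
  MPI_Allreduce(&localMax, &fMax, 1, MPI_DOUBLE, MPI_MAX, comm);
  for(double &v : f) // second step: independent, no communication
    v = (v - fMin) / (fMax - fMin);
}
```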
#### 6.3.4 DIC: Scalar Field Smoother

This convenience procedure simply smooths a scalar field \(f\) by local averaging (i.e. by replacing \(f(v)\) with the average data value on the vertices of \(St(v)\)). This averaging procedure is typically iterated for a user-defined number of iterations. However, at a given iteration \(t\), in order to guarantee a correct result for each vertex \(v\) located on the boundary of the local block (i.e. \(v\) is a non-ghost vertex adjacent to ghost vertices), the updated \(f\) values from the previous iteration \(t-1\) need to be retrieved for each of its ghost neighbors (Fig. 7_(c)_). Thus, at the end of each iteration, each process \(i\) needs to communicate with its neighbors to retrieve the smoothed values for its ghost vertices, which is achieved by using the generic ghost data exchange procedure described in Sec. 5.3 (hence the DIC classification for this algorithm).

#### 6.3.5 DDC: Integral lines

Unlike the previous cases, the port of this algorithm requires quite extensive modifications. The first step is similar to its sequential version (Sec. 2.3): each process computes the integral lines whose seeds lie within its block \(\mathcal{M}_{i}\). From there, two possibilities arise: either the integral line reaches its final vertex within \(\mathcal{M}_{i}\), _completing_ the computation, or the integral line reaches a ghost vertex owned by another process \(j\) and is _incomplete_. In the latter case, some of the integral line data (such as its global identifier, the distance from the seed or the global identifier of the seed) is stored in a vector to be sent to process \(j\) later (Fig. 7_(d)_ and _(e)_). Once all integral lines on all processes are marked as either complete or incomplete, all processes exchange the data of their incomplete integral lines (aggregated in one single MPI message per neighbor process) and use that data to resume the computation of the integral lines on their block. These computation and communication steps are run until all integral lines on all processes are completed.

Fig. 7: Examples of topological algorithm modifications for the support of distributed memory computation. _(a)_ Scalar Field Critical Points (NC): Critical points are generated similarly to the sequential mode. Upper and lower links (\(+\) and \(-\) signs in the figure) of non-ghost vertices on the boundary of \(\mathcal{M}_{i}\) are computed using ghost vertices (here in yellow). _(b)_ Discrete Gradient (NC): Similarly to (a), this algorithm processes each vertex of the domain independently. For each non-ghost vertex on the boundary of \(\mathcal{M}_{i}\), the lower link computation can rely on ghost vertices. Critical simplices are represented by bigger spheres. _(c)_ Scalar Field Smoother (DIC): This procedure smooths a scalar field \(f\) by local averaging for a user-defined number of iterations. The values of ghost vertices (in yellow) need to be updated after each iteration. _(d)_ and _(e)_ Integral Lines (DDC): _(e)_ each process computes the integral lines whose seeds lie within its block \(\mathcal{M}_{i}\). Then either the integral line reaches its final vertex within \(\mathcal{M}_{i}\), _completing_ the computation, or the integral line reaches a vertex outside of \(\mathcal{M}_{i}\) (here in yellow in _(d)_). In the latter case, the integral line data is stored to be sent later to the yellow process. Once all the work is done on all processes, they exchange the data of incomplete integral lines and resume the computation of the integral lines on their block. The computation stops when all integral lines have completed.
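This iterative computation/communication loop can be sketched as follows (a hypothetical sketch: traceLocalIntegralLines, exchangeIncompleteLines and numberOfIncompleteLines are illustrative placeholders, not TTK's actual API; only the MPI_Allreduce-based global termination test is spelled out):

```cpp
#include <mpi.h>

// Placeholders for the local tracing step and the per-neighbor exchange of
// incomplete integral lines (declarations only, for illustration).
void traceLocalIntegralLines();     // multi-threaded, within the local block
void exchangeIncompleteLines();     // one MPI message per neighbor process
long long numberOfIncompleteLines();

// Illustrative sketch of the DDC termination loop of Sec. 6.3.5.
void computeIntegralLines(MPI_Comm comm) {
  bool done = false;
  while(!done) {
    traceLocalIntegralLines();
    exchangeIncompleteLines();
    long long localRemaining = numberOfIncompleteLines();
    long long globalRemaining = 0;
    MPI_Allreduce(&localRemaining, &globalRemaining, 1,
                  MPI_LONG_LONG, MPI_SUM, comm);
    done = (globalRemaining == 0); // stop when every line has completed
  }
}
```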
Then either the integral line reaches its final vertex within \(\mathcal{M}_{i}\), _completing_ the computation, or the integral line reaches a vertex outside of \(\mathcal{M}_{i}\) (here in yellow in _(d)_). In the latter case, the integral line data is stored to be sent later to the yellow process. Once all the work is done on all processes, they exchange the data of incomplete integral lines and resume the computation of the integral lines on their block. The computation stops when all integral lines have completed. message per neighbor process) and use that data to resume computation of the integral lines on their block. These computation and communication steps are run until all integral lines on all processes are completed. Consequently, depending on the dataset, and the process, there may be very little communication, e.g. if all the integral lines lie within the bounds of a block, or a lot of communications, e.g. if some integral lines are defined across the blocks of multiple processes (hence its DDC classification). Regarding this DDC algorithm, we also tried to transfer the incomplete integral lines earlier among neighbor processes, thanks to a thread dedicated to communications, but to no avail regarding performance (Sec. 7.1). ### _Integrated pipeline_ In this section, we describe an integrated pipeline that produces a real-life use case combining all the the port examples presented in Sec.6.3. All of the algorithms, their order as well as their input are described in Table I. The input dataset is a three-dimensional regular grid with two scalar fields \(f\), the electronic density in the _Acatine Thymine_ complex (AT) and its gradient magnitude \(|\nabla f|\). First, \(f\) and \(|\nabla f|\) are smoothed and \(f\) is normalized. Critical points of \(f\) are computed and used as seeds to compute integral lines of \(f\). The extracted integral lines capture the covalent and hydrogen bonds within the molecule complex (Fig. 8). Then critical points are computed for \(|\nabla f|\) on the integral lines. The extracted critical points indicate locations of covalent bonds where the electronic density experiences rapid changes, indicating transition points occurring within the bond (Fig. 8). The local order of \(f\) is required by two algorithms: the first critical points (SFCP1) and the integral lines (IL). Since these two algorithms are separate leaves of the pipeline, each of them would trigger the automatic local order computation. Instead, to avoid this duplicated computation, we manually call the local order computation in a preprocess (i.e. by calling the _ArrayPreconditioning_ algorithm). The chosen AT dataset is intentionally quite small (\(177\times 95\times 48\)) to ensure reproducibility. It is resampled before the pipeline in order to create a more sizable example, using _ResampleTolmage_, a Paraview algorithm. Anyone can execute this pipeline to the best of their resources, by choosing the appropriate resampling dimensions. In our case, the new dataset is of dimensions (\(2,048\times 2,048\times 2,048\)), encompassing roughly 8.5 billion vertices. Computations with higher dimensions were attempted, but the memory footprint of Paraview's resampling algorithm exceeded the available memory on the nodes of our supercomputer (Sec. 7). The pipeline was also run on a second, larger, dataset (_Turbulent Channel Flow_), to show TTK's capability to handle massive datasets (specifically, the largest publicly available dataset we have found). 
This dataset represents a three-dimensional pressure field of a direct numerical simulation of a fully developed flow at different Reynolds numbers in a plane channel (obtained from the Open Scientific Visualization Datasets [37]). Its dimensions are \(10,240\times 7,680\times 1,536\), which is approximately 120 billion vertices. Before applying the pipeline, the gradient magnitude is computed and added to the dataset, and the result is converted to the _PVTI_ file format (a ParaView format for distributed regular grids) using single-precision floating-point numbers (thereby reducing memory consumption at runtime). \begin{table} \begin{tabular}{|l l l|} \hline Abbreviation & Algorithm & Input \\ \hline 1. SFS1 & ScalarFieldSmoother & \(f\) \\ \hline 2. SFS2 & ScalarFieldSmoother & \(|\nabla f|\) \\ \hline 3. SFN1 & ScalarFieldNormalizer & \(f_{SFS1}\) \\ \hline 4. AP & ArrayPreconditioning & \(f_{SFN1}\) \\ \hline 5. SFCP1 & ScalarFieldCriticalPoints & \(f_{AP}\) \\ \hline 6. IL & IntegralLines & \(f_{SFCP1}\) (seeds) \\ \hline 7. GS & GeometrySmoother & \(f_{IL}\) \\ \hline 8. SFCP2 & ScalarFieldCriticalPoints & \(|\nabla f|_{SFS2}\) on \(\mathcal{M}_{GS}\) \\ \hline \end{tabular} \end{table} TABLE I: Composition of the integrated pipeline. Each line denotes an algorithm in the pipeline, by order of appearance (top to bottom), as well as its input. \(f\) is the input scalar field. Each algorithm modifies the scalar field: \(f_{A}\) is the modified scalar field \(f\), output of algorithm \(A\). \(\mathcal{M}_{GS}\) is the output domain of GeometrySmoother. Fig. 8: Output of the integrated pipeline on the AT dataset, a three-dimensional regular grid of the electronic density (and its gradient magnitude) in the Adenine Thymine complex (AT). The extracted integral lines capture the covalent and hydrogen bonds within the molecule complex. The transparent spheres are the critical points used as seeds of the integral lines while the full spheres are the critical points of \(|\nabla f|\) and show where the electronic density experiences rapid changes, indicating transition points occurring within the bond. This image was obtained by resampling the original dataset to \(2,048^{3}\) and executing the integrated pipeline on 64 nodes of 24 cores each (1,536 cores) on MeSU-beta. ## 7 Results For the following results, we rely on Sorbonne Université's supercomputer, MeSU-beta. MeSU-beta is a compute cluster with 144 nodes of 24 cores each (totaling 3,456 cores). Its nodes are composed of 2 Intel Xeon E5-2670v3 (2.7 GHz, 12 cores), with SMT (simultaneous multithreading) disabled (i.e. running 1 thread per core), and with 128GB of memory each. The nodes are interconnected with Mellanox InfiniBand. When measuring the performance of an algorithm, only the execution itself of the algorithm is timed. None of the preconditioning or input and output formatting is timed unless explicitly stated. The preconditioning steps are an investment in time: they can be used again by other algorithms later on in the pipeline; thus, including the cost of these steps in the execution time of a single algorithm would not provide an accurate representation of performance in a more complicated pipeline. They are therefore excluded from the individual benchmarks (Sec. 7.1) but included in the study of the global, integrated pipeline (Sec. 7.2). Time measurement is done as explained in Sec. 5.3. The speedup \(s_{p}\) for \(p\) cores is defined as \(s_{p}=\frac{t_{1}}{t_{p}}\), with \(t_{p}\) being the execution time for \(p\) cores.
The efficiency for \(p\) cores is defined as \(\frac{s_{p}}{p}\times 100\). The benchmark is performed on five different datasets: _Wavelet_ (3D wavelets on a cube), _Elevation_ (synthetic dataset of the altitude within a cube, with a unique maximum at one corner of the cube and a unique minimum at the opposite corner), _Isabel_ (magnitude of the wind velocity in a simulation of the hurricane Isabel that hit the east coast of the USA in 2003), _Random_ (random field on a cube) and _Backpack_ (density in the CT-scan of a backpack filled with items). The datasets all originate from publicly available repositories [37, 61], and have been resampled to \(512^{3}\) to increase the workload. ### _Distributed algorithms performance_ We first compare the pure MPI and the MPI+thread strategies (Sec. 6.2). Regarding the MPI+thread strategy, we rely on one MPI process per node (and 24 threads within) instead of one MPI process per processor (and 12 threads each). According to performance tests (not shown here), both options indeed lead to similar performance results, except when using one single node: in this case, having one single MPI process (no communication and no ghost simplices required) is more efficient than two. Having one MPI process per node also leads to a lower memory consumption. As shown in Fig. 9, using MPI+thread (with one MPI process and 24 threads per node) is then substantially more efficient than using a pure MPI design for the integral line algorithm, for all datasets except the _Random_ dataset. More precisely, even for MPI+thread, the efficiency decreases with the number of cores and depends significantly on the dataset. This is due to a strong workload imbalance between the processes: the integral lines are not evenly distributed on the MPI processes, which can lead to long idle periods for some processes (waiting for the others to process their integral lines). This applies to the _Backpack_ dataset for example. Regarding the _Elevation_ dataset (very smooth, with only one maximum and one minimum) or the _Isabel_ one (very smooth too), the generated integral lines are especially lengthy and span several (but not all) processes, leading to low efficiencies. On the contrary, the _Random_ dataset is very balanced, but is also very noisy, leading to very short integral lines: for the same number of integral lines, the computation times are much shorter than for the other datasets, which makes the communication cost more detrimental to performance. Finally, the _Wavelet_ dataset is the most balanced one, with long enough integral lines, and thus shows the best performance results. Compared to the pure MPI strategy, the MPI+thread one benefits from fewer MPI processes and therefore from a lower load imbalance. As briefly mentioned in Sec. 6.3.5, we tried to improve the parallel efficiencies of the integral line algorithm by dedicating a thread to MPI communications. Thanks to this thread, an incomplete integral line is sent right away, without waiting for all integral lines on the process to be computed. Each process also continuously receives integral lines and adds them immediately to the pool of integral lines to be computed. However, this design based on a communication thread adds a significant amount of complexity to the implementation (due to the required thread synchronizations), and did not improve the parallel efficiencies, since the main performance bottleneck is the load imbalance among processes.
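For concreteness, this discarded design can be sketched as follows. The code below is hypothetical and heavily simplified: the `Outbox` type and the single-integer payload are placeholders, message reception and termination detection — where most of the synchronization complexity lies — are omitted, and an MPI initialization with at least MPI_THREAD_FUNNELED support is assumed (since only the dedicated thread communicates here).

```cpp
#include <mpi.h>
#include <omp.h>
#include <atomic>
#include <mutex>
#include <utility>
#include <vector>

// Hypothetical sketch of the dedicated-communication-thread design:
// thread 0 ships incomplete integral lines (reduced here to one id per
// line) as soon as they are enqueued, while the other threads keep
// tracing lines and enqueueing the incomplete ones.
struct Outbox {
  std::mutex lock;
  std::vector<std::pair<int, long long>> items; // (destination rank, line id)
};

void traceWithCommThread(Outbox &outbox, std::atomic<bool> &done,
                         MPI_Comm comm) {
#pragma omp parallel
  {
    if(omp_get_thread_num() == 0) { // dedicated communication thread
      while(!done.load()) {
        std::pair<int, long long> item;
        bool hasItem = false;
        {
          std::lock_guard<std::mutex> guard(outbox.lock);
          if(!outbox.items.empty()) {
            item = outbox.items.back();
            outbox.items.pop_back();
            hasItem = true;
          }
        }
        if(hasItem) // ship the incomplete line to its owner right away
          MPI_Send(&item.second, 1, MPI_LONG_LONG, item.first, 0, comm);
      }
    } else {
      // worker threads: trace integral lines, push the incomplete ones
      // to the outbox, and eventually set done to true (omitted)
    }
  }
}
```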
As a result, we do not rely on this communication thread design in our distributed integral lines implementation. The performance results for the other distributed algorithms can be found in Fig. 10. For the _Scalar Field Critical Points_, a very good efficiency (\(80\%\)) is achieved (which is comparable to its shared-memory parallel implementation on one node, \(90\%\)), with little dependence on the dataset. The _Discrete Gradient_ likewise performs very well in terms of efficiency, albeit slightly less, due to the parallelization method of the algorithm, for which adding ghost simplices adds a small amount of extra work in parallel. These two algorithms strongly benefit from parallel computing, even when using hundreds of cores. The _Scalar Field Smoother_ exhibits lower efficiency. This can be explained by the need for communications at each iteration, as well as by the low cost of the smoothing process (which is a simple averaging operation). Indeed, the faster a computation, the stronger the impact of communications on the overall performance. ### _Integrated pipeline performance_ We now present experimental results for the integrated pipeline (Sec. 6.4), which exemplifies a real-life use case combining all of the port examples described in Sec. 6.3. The results for the integrated pipeline are twofold: an output image (Fig. 8 and Fig. 12) and the time profiling of the pipeline (Fig. 11). The image is produced using offscreen rendering with OSMesa on our supercomputer. Profiling is done using both ParaView's timer (average, minimum and maximum computation times across processes, for an overall algorithm, preconditioning included) and the TTK timer defined in Sec. 5.3 (for a fine-grained account of the execution time within an algorithm and its preconditioning). Fig. 9: Parallel efficiencies for the Integral line computation algorithm with \(500,000\) seeds, randomly distributed on all processes, using the pure MPI strategy (left) and the MPI+thread one (right) with 1 MPI process and 24 threads per node. The MPI+thread strategy is significantly more efficient than the pure MPI one. #### 7.2.1 The Adenine Thymine complex (AT) dataset For the experiments of Figs. 8 and 11 (left), the selected resampling dimensions for the input regular grid are \(2,048^{3}\), a choice explained in Sec. 6.4. The overall computation takes \(241.2\) seconds. Preconditioning is triggered once, before executing the first TTK algorithm. The longest preconditioning step is ParaView's ghost cells generation (\(24.2\%\) of the total pipeline time), a step commonly used in a distributed-memory setting, regardless of TTK. The preconditioning specific to TTK's use of MPI (i.e. _Local Adjacency Graph_, _Process Identifiers_, _Ghost Data Exchange_) is significantly faster and takes only \(1.2\%\) of the overall pipeline computation time, which can be considered negligible next to the rest of the pipeline. TTK computations (preconditioning included) make up \(70.1\%\) of the total pipeline computation, which can be considered a satisfactory efficiency. #### 7.2.2 The Turbulent Channel Flow dataset The computation shown in Fig. 11 (right) was performed on the complete dataset (120 billion vertices, single-precision, Sec. 6.4). The overall computation takes \(5,257.5\) seconds. The execution time of this pipeline includes the algorithms listed in Tab. I. Note that the rendering time is not included in the time profiling reported in Fig. 11 (for both datasets).
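As a reference for how such per-process timings can be aggregated across MPI processes, the following is a minimal sketch (illustrative only; `timeAlgorithm` is a hypothetical helper, and this is neither ParaView's nor TTK's actual timer code):

```cpp
#include <mpi.h>
#include <cstdio>
#include <functional>

// Sketch of distributed time measurement: each process times its own
// execution, then the average, minimum and maximum across all processes
// are reduced onto rank 0.
void timeAlgorithm(const std::function<void()> &algorithm, MPI_Comm comm) {
  MPI_Barrier(comm); // common starting point for all processes
  const double start = MPI_Wtime();
  algorithm();
  const double local = MPI_Wtime() - start;
  double tMin, tMax, tSum;
  MPI_Reduce(&local, &tMin, 1, MPI_DOUBLE, MPI_MIN, 0, comm);
  MPI_Reduce(&local, &tMax, 1, MPI_DOUBLE, MPI_MAX, 0, comm);
  MPI_Reduce(&local, &tSum, 1, MPI_DOUBLE, MPI_SUM, 0, comm);
  int rank, size;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);
  if(rank == 0)
    std::printf("avg: %.3f s, min: %.3f s, max: %.3f s\n", tSum / size,
                tMin, tMax);
}
```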
For the turbulent flow dataset, explicit glyphs were used for the rendering of the critical points (spheres) and integral lines (cylinders), as the screen-space glyph rendering features of ParaView did not produce satisfactory results in a distributed setting. However, the generation of glyph geometry required a lot of memory, therefore the rendering in Fig. 12 was performed on only a quarter of the dataset. The pipeline profiled in Fig. 11, however, was indeed executed on the whole dataset. Similarly to the AT dataset, the longest preconditioning step is ParaView's ghost cells generation (\(30.7\%\) of the total pipeline time). Again, TTK's specific MPI preconditioning is marginal and takes up only \(0.7\%\) of the overall pipeline computation time. Computations of TTK algorithms (preconditioning included) make up \(59.2\%\) of the total execution time. When compared to the AT dataset, the execution time of SFCP1 is multiplied by a factor of roughly 15, which is comparable to the increase in data size between the datasets, indicating good scalability. Fig. 11: Time profiling for the integrated pipeline for the AT dataset resampled to roughly \(8.5\) billion vertices (left) and the _Turbulent Channel Flow_ dataset (right) of 120 billion vertices. The execution was conducted using 64 nodes of 24 cores each (1,536 cores in total) on MeSU-beta. Each bar corresponds to the execution time of one algorithm. SFS1 is computed for 1 iteration for the AT dataset and 10 iterations for the turbulent flow dataset (which is more irregular). The _Other_ step consists of steps that are not part of an algorithm, such as loading the TTK plugin in ParaView, ParaView overhead and I/O operations. Only algorithms that take up a significant amount of time are shown in the profiling (see Tab. I for a description of the abbreviations). In both cases, the MPI preconditioning computed by our framework (_Local Adjacency Graph_, _Process Identifiers_, _Ghost Data Exchange_) represents a negligible part of the overall pipeline execution time (at most \(1.2\%\)). Fig. 10: Parallel efficiencies for various algorithms, using MPI+thread (with 1 MPI process and 24 threads per node). Overall, this experiment shows that, thanks to our MPI-based framework, TTK can now run advanced analysis pipelines on massive datasets (up to \(120\) billion vertices on our supercomputer) in an acceptable amount of time, while requiring a TTK-MPI specific preconditioning of negligible computation time overhead (\(0.6\%\) of the computation). ### _Limitations_ Sec. 4 presented our strategy to provide consistent global simplex identifiers, irrespective of the number of processes. This guarantees a per-bit compatibility of the input data representation with the sequential mode of TTK, and consequently a per-bit compatibility of the pipeline outputs. However, the usage of threads can challenge the determinism of certain algorithms, given the non-deterministic nature of the thread scheduler. Consequently, an additional effort may need to be made by the developers to address this non-determinism within their implementation of a topological algorithm (to ensure per-bit compatibility). In our experiments, we opted not to enforce determinism for integral lines, given the lack of control over the thread scheduler. A significant difficulty occurring when processing massive datasets with ParaView is the substantial memory footprint induced by ParaView's interactive pipeline management. Data flows through the pipeline, being transformed at each step by algorithms.
Rather than modifying data in-place, algorithms generate copies before implementing changes. This methodology offers several advantages, such as preventing redundant computation of inputs when multiple branches share the same input, resulting in better efficiency, especially when interactively adjusting algorithm parameters. However, this copy-before-computation approach leads to a rapid increase in memory usage during computations, which can become problematic in practice for pipelines comprising a large number of algorithms. Another limitation deals with the current support of the global order of scalar values. As described in Sec. 3.3, when two vertices \(v_{i}\) and \(v_{j}\) need to be compared, two different scenarios occur. If \(v_{i}\) and \(v_{j}\) belong to the same block, the ordering is efficiently performed by a single comparison of their local scalar order (i.e. within the block). When \(v_{i}\) and \(v_{j}\) belong to different blocks, the ordering may require two comparisons (one for the actual scalar values \(f\) and one for the global identifiers for tie-breaking). Moreover, the distinction between the two scenarios involves one comparison of process identifiers (to test if \(v_{i}\) and \(v_{j}\) belong to the same block). Overall, these additional comparisons may induce a non-negligible workload overhead in comparison to a non-MPI execution. A workaround would consist in pre-sorting the data globally. However, from our experience, the global sorting of data values in distributed mode may induce a significant time overhead, which is likely to be compensated only for very computation-intensive algorithms. ## 8 Conclusion and Road Map In this paper, we presented a software framework for the support of topological analysis pipelines in a distributed-memory model. Specifically, we instantiated our framework with the MPI model, within the Topology ToolKit (TTK). An extension of TTK's efficient triangulation data structure to a distributed-memory context was presented, as well as a software infrastructure supporting advanced and distributed topological pipelines. A taxonomy of algorithms supported by TTK was provided, depending on their communication requirements. The ports of several algorithms were described, with detailed performance analyses, following an MPI+thread strategy. We also provided a real-life use case consisting of an advanced pipeline of multiple algorithms, run on a dataset of 120 billion vertices on a compute cluster with 64 nodes (1,536 cores), showing that the cost of TTK's MPI preconditioning is marginal next to the execution time of the pipeline. Fig. 12: Output of the integrated pipeline on the _Turbulent Channel Flow_ dataset (120 billion vertices), a three-dimensional regular grid with two scalar fields, the pressure of the fluid and its gradient magnitude. The pipeline was executed up to the Geometry Smoother algorithm. The spheres correspond to the pressure critical points and the tubes are the integral lines starting on saddle points. Figure (a) shows all of the produced geometry, while (b) and (c) show parts of the output zoomed in. These images were produced on a quarter of the total dataset due to rendering related issues (see Sec. 7.2.2), while Fig. 11 was produced on the full dataset. TTK is now able to compute complex pipelines involving several algorithms on datasets too large to be processed on a commodity computer. Our framework is available in TTK 1.2.0, enabling others to reproduce our results or extend TTK's distributed capabilities.
The next step consists in adding distributed-memory support to all of TTK's topological algorithms. The challenge here depends on the algorithm class (see Sec. 6.1). The port of NC and DIC algorithms (such as ContinuousScatterPlot, ManifoldCheck, DistanceField, JacobiSet or FiberSurface) is relatively straightforward. For DIC algorithms, the initial step entails identifying the data to be exchanged, the processes involved in the exchange, and the appropriate timing for these communications. For NC algorithms, no exchange between processes takes place. Then, the implementation can be done in TTK, using TTK's MPI-API as well as low-level MPI directives (for specific communications). This could for example be done during a hackathon. For DDC algorithms (such as Discrete Morse Sandwich, Topological Simplification or Contour Trees), the port may be much more complicated. For each of these DDC algorithms, their distributed-memory parallelization may be a substantial research problem, on which we will focus in future work. ## Acknowledgments This work is partially supported by the European Commission grant ERC-2019-COG "_TORI_" (ref. 863464, https://erc-tori.github.io/).
2310.10576
Implicative models of set theory
In this paper we show that using implicative algebras one can produce models of set theory generalizing Heyting/Boolean-valued models and realizability models of (I)ZF, both in intuitionistic and classical logic. As a consequence, any topos which is obtained from a Set-based tripos as the result of the tripos-to-topos construction hosts a model of intuitionistic or classical set theory, provided a large enough strongly inaccessible cardinal exists.
Samuele Maschio, Alexandre Miquel
2023-10-16T16:53:27Z
http://arxiv.org/abs/2310.10576v2
# Implicative Models of Set Theory ###### Abstract In this paper we show that using implicative algebras one can produce models of set theory generalizing Heyting/Boolean-valued models and realizability models of (**I**)**ZF**, both in intuitionistic and classical logic. This has as a consequence that any topos which is obtained from a **Set**-based tripos as the result of the tripos-to-topos construction hosts a model of intuitionistic or classical set theory, provided a large enough strongly inaccessible cardinal exists. ## 1 Introduction Implicative algebras were introduced by the second author [12] in order to provide a common foundation for the model-theoretic constructions underlying forcing and realizability. Forcing was first introduced by Cohen [3] in the 60's to prove the independence of the Continuum Hypothesis with respect to **ZFC**, and it is the main technique used in set theory to obtain relative consistency results. From an algebraic point of view, the technique of forcing amounts to the construction of a Boolean-valued model of the considered theory, and this construction can be further generalized to intuitionistic theories by considering Heyting-valued models. On the other hand, realizability was introduced by Kleene [6] in the 40's to interpret the computational content of intuitionistic proofs. For a long time, this technique was limited to intuitionistic theories, but in the mid-90's Krivine [8] showed how to reformulate its very principles to make them compatible with classical logic. In order to compare forcing and realizability, Hyland, Johnstone and Pitts introduced the notion of tripos [5], which can be seen as a categorical model of higher-order logic. Such triposes may be constructed from complete Heyting algebras, thus yielding forcing triposes, or from partial combinatory algebras, thus yielding intuitionistic realizability triposes. More recently, Streicher [15] showed how to turn an abstract Krivine structure into a tripos, thus describing classical realizability in categorical terms. All these triposes \(\mathsf{P}\) can then be turned into toposes \(\mathbf{Set}[\mathsf{P}]\) by applying the tripos-to-topos construction [5]. In [12] the second author showed that forcing and realizability triposes are instances of a more general notion of tripos induced by an implicative algebra (called an _implicative tripos_). Later, in [13], he proved that every \(\mathbf{Set}\)-based tripos is in fact (isomorphic to) an implicative tripos. These triposes include those arising from many different variants of realizability, such as modified realizability triposes, relative realizability triposes, classical realizability triposes and so on. However, it is worth recalling that, from a proof-theoretic perspective, topos theory is much weaker than set theory. Indeed, the internal theory of a topos with a natural numbers object is strictly weaker than (intuitionistic or classical) Zermelo set theory (IZ/Z), which is itself much weaker than (intuitionistic or classical) Zermelo-Fraenkel set theory (IZF/ZF). Intuitively, the main difference is that in topos theory, one can only quantify over the elements of a given set or object (i.e. bounded quantification), whereas in set theory, one can also quantify over all sets/objects (i.e. unbounded quantification).
In this paper we shall see how implicative algebras can be used to construct _implicative_ models of intuitionistic and classical set theory (depending on whether the underlying implicative algebra is intuitionistic or classical). We will then see that our implicative models of set theory encompass Heyting/Boolean-valued models for \(\mathbf{(I)ZF}\) [1, 2] and, up to logical equivalence, Friedman/Rosolini/McCarty realizability models for \(\mathbf{IZF}\) [4, 14, 11] as well as the classical realizability models of \(\mathbf{ZF}\) introduced by Krivine [7, 10]. Finally, in the last section, we will use the relationship between the logic of a tripos \(\mathsf{P}\) and the internal logic of the corresponding topos \(\mathbf{Set}[\mathsf{P}]\) to show that our implicative models of \(\mathbf{(I)ZF}\) can be seen as internal models of set theory in the toposes constructed from implicative triposes. Since the second author proved [13] that every \(\mathbf{Set}\)-based tripos is an implicative tripos (up to isomorphism), we can conclude that every topos induced by a \(\mathbf{Set}\)-based tripos hosts an internal model of set theory (provided a large enough cardinal exists). ## 2 Intuitionistic and classical set theory In the language of Zermelo-Fraenkel set theory the only terms are variables, and there are two binary predicate symbols: equality \(=\) and membership \(\in\). As usual in the language of set theory, \(\forall x\in y\,\varphi\) is a shorthand for \(\forall x(x\in y\to\varphi)\) and \(\exists x\in y\,\varphi\) is a shorthand for \(\exists x(x\in y\wedge\varphi)\), while \(x\subseteq y\) is a shorthand for \(\forall z\in x\,(z\in y)\). The theories \(\mathbf{ZF}\) and \(\mathbf{IZF}\) share the following axioms, but the logic underlying the former is classical, while it is intuitionistic for the latter: * (extensionality, \(\mathbf{Ext}\)) \(\forall x\forall y\,(x\subseteq y\wedge y\subseteq x\to x=y)\) * (pairing) \(\forall x\forall y\exists z\,(x\in z\wedge y\in z)\) * (union) \(\forall x\exists u\forall y\in x\forall z\in y\,(z\in u)\) * (power set, \(\mathbf{Pow}\)) \(\forall x\exists z\forall y\,(y\subseteq x\to y\in z)\) * (infinity, \(\mathbf{Inf}\)) \(\exists u\,\mathbf{Inf}(u)\) where \(\mathbf{Inf}(u)\) is the conjunction of the formulas \(\mathbf{Inf}_{1}(u):\equiv\exists x\in u\forall y\in x\,\bot\) and \(\mathbf{Inf}_{2}(u):\equiv\forall x\in u\exists y\in u(x\subseteq y\wedge x\in y\wedge\forall z\in y(z\in x\lor z=x))\) * (separation, \(\mathbf{Sep}_{\varphi}\)) \(\forall w_{1}\ldots\forall w_{n}\forall x\exists y\,(\forall z\in y\,(z\in x\wedge\varphi)\wedge\forall z\in x\,(\varphi\to z\in y))\) for all formulas \(\varphi[w_{1},...,w_{n},x,z]\) in context. * (collection) \(\forall w_{1}\ldots\forall w_{n}\forall y\,(\forall x\in y\exists z\,\varphi\to\exists u\forall x\in y\exists z\in u\,\varphi)\) for all formulas in context \(\varphi[w_{1},...,w_{n},x,y,z]\). * (\(\in\)-induction) \(\forall w_{1}\ldots\forall w_{n}(\forall x(\forall y\in x\,\varphi[y/x]\to\varphi)\to\forall x\,\varphi)\) for all formulas in context \(\varphi[w_{1},...,w_{n},x]\). ## 3 Implicative algebras and implicative triposes An _implicative algebra_ is a 4-tuple \(\mathbb{A}=(A,\leq,\to,\Sigma)\) where 1. \((A,\leq)\) is a complete lattice; 2. \(\to:A\times A\to A\) is a function which is monotone in the second component and anti-monotone in the first component, and which satisfies the following condition \[a\to\bigwedge_{i\in I}b_{i}=\bigwedge_{i\in I}\left(a\to b_{i}\right)\] for every indexed family \((b_{i})_{i\in I}\) of elements of \(A\) and every \(a\in A\); 3.
\(\Sigma\subseteq A\) is upward closed, it contains \(b\) as soon as it contains \(a\to b\) and \(a\), and it contains \(\mathbf{K}:=\bigwedge_{a,b\in A}(a\to(b\to a))\) and \(\mathbf{S}:=\bigwedge_{a,b,c\in A}((a\to(b\to c))\to((a\to b)\to(a\to c)))\). Every complete Heyting algebra \((H,\leq)\) with Heyting implication \(\to\) gives rise to an implicative algebra \((H,\leq,\to,\{\top\})\). Moreover, every total combinatory algebra \((R,\cdot)\) gives rise to an implicative algebra \((\mathcal{P}(R),\subseteq,\Rightarrow,\mathcal{P}(R)\setminus\{\emptyset\})\), where \(A\Rightarrow B:=\{r\in R\,|\,r\cdot a\in B\text{ for every }a\in A\}\) for every \(A,B\subseteq R\). Other examples can be found in [12]. In the case of a partial combinatory algebra \((R,\cdot)\), the 4-tuple \((\mathcal{P}(R),\subseteq,\Rightarrow,\mathcal{P}(R)\setminus\{\emptyset\})\) is not in general an implicative algebra, but a quasi-implicative algebra (see [12]). However, there is a standard way to transform it into an implicative algebra in such a way that the tripos one obtains from it is equivalent to the realizability tripos built from \((R,\cdot)\) (for details, see [12]). An implicative algebra \(\mathbb{A}=(A,\leq,\to,\Sigma)\) is _classical_ if \(\bigwedge_{a,b\in A}(((a\to b)\to a)\to a)\in\Sigma\). Complete Boolean algebras give rise to classical implicative algebras following the recipe used for complete Heyting algebras. Last but not least, classical implicative algebras can also be constructed from Abstract Krivine Structures [15], the algebraic structure underlying classical realizability [9]. Closed \(\lambda\)-terms with constant parameters in an implicative algebra \(\mathbb{A}\) can be encoded as elements of \(\mathbb{A}\) itself as follows: \(a^{\mathbb{A}}:=a\) for every \(a\in A\), \((ts)^{\mathbb{A}}:=t^{\mathbb{A}}\cdot s^{\mathbb{A}}\) and \((\lambda x.t)^{\mathbb{A}}:=\bigwedge_{a\in A}\left(a\to(t[a/x])^{\mathbb{A}}\right)\), where the application \(\cdot\) is defined as follows for every \(a,b\in A\): \[a\cdot b:=\bigwedge\{x\in A\,|\,a\leq b\to x\}\] If we define the combinators \(\mathbf{k}\) as \(\lambda x.\lambda y.x\) and \(\mathbf{s}\) as \(\lambda x.\lambda y.\lambda z.xz(yz)\) as usual, one can show (see [12]) that \(\mathbf{K}=\mathbf{k}^{\mathbb{A}}\) and \(\mathbf{S}=\mathbf{s}^{\mathbb{A}}\). Useful properties of the encoding of \(\lambda\)-terms in \(\mathbb{A}\) are the following: 1. if \(t\) \(\beta\)-reduces to \(s\), then \(t^{\mathbb{A}}\leq s^{\mathbb{A}}\); 2. if \(t\) is a pure \(\lambda\)-term with free variables \(x_{1},...,x_{n}\) and \(a_{1},...,a_{n}\in\Sigma\), then \((t[x_{1}:a_{1},...,x_{n}:a_{n}])^{\mathbb{A}}\in\Sigma\)3; in particular the encodings of closed pure \(\lambda\)-terms are elements of \(\Sigma\). Footnote 3: We denote with \(t[x_{1}:a_{1},...,x_{n}:a_{n}]\) the \(\lambda\)-term obtained from \(t\) by substituting the variables \(x_{1},...,x_{n}\) with \(a_{1},...,a_{n}\), respectively. In what follows we will remove the superscript \(\mathbb{A}\) from the encoding of \(\lambda\)-terms in order to lighten the notation.
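As a quick sanity check of this encoding, one can unfold the definition of \((\lambda x.\lambda y.x)^{\mathbb{A}}\) and recover \(\mathbf{K}\) (this is the first of the two equalities \(\mathbf{K}=\mathbf{k}^{\mathbb{A}}\) and \(\mathbf{S}=\mathbf{s}^{\mathbb{A}}\) recalled above): \[(\lambda x.\lambda y.x)^{\mathbb{A}}=\bigwedge_{a\in A}\left(a\to(\lambda y.a)^{\mathbb{A}}\right)=\bigwedge_{a\in A}\left(a\to\bigwedge_{b\in A}(b\to a)\right)=\bigwedge_{a,b\in A}\left(a\to(b\to a)\right)=\mathbf{K}\] where the last step uses the distributivity condition \(a\to\bigwedge_{i\in I}b_{i}=\bigwedge_{i\in I}(a\to b_{i})\) from the definition of an implicative algebra.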
For \(a,b\in A\), we write \(a\vdash_{\Sigma}b\) if \(a\to b\in\Sigma\), while we write \(a\equiv_{\Sigma}b\) if \(a\vdash_{\Sigma}b\) and \(b\vdash_{\Sigma}a\). Moreover, for every \(a,b\in A\), following [12] we define: \[a\times b:=\bigwedge_{x\in A}\left((a\to(b\to x))\to x\right)\] \[a+b:=\bigwedge_{x\in A}\left((a\to x)\to((b\to x)\to x)\right)\] and for every set-indexed family \((a_{i})_{i\in I}\) we define \[\bigvee_{i\in I}a_{i}:=\bigwedge_{i\in I}a_{i}\qquad\qquad\overline{\exists}_{i\in I}a_{i}:=\bigwedge_{x\in A}\left(\bigwedge_{i\in I}(a_{i}\to x)\to x\right)\] (the first operator, written here with the symbol \(\bigvee\), denotes the uniform universal quantification, which is simply interpreted as a meet). Note that the operations \(a\times b\) (implicative conjunction) and \(a+b\) (implicative disjunction) are in general not associative, commutative or idempotent on the nose (think of intuitionistic or classical realizability), but they clearly are up to the equivalence \(\equiv_{\Sigma}\) (logical equivalence modulo the separator \(\Sigma\)). Also note that unlike universal quantifications (that are simply interpreted as meets), existential quantifications are not interpreted here as joins (as one would expect in forcing or in intuitionistic realizability), but are rather interpreted using the standard second-order encoding of \(\exists\) in minimal second-order logic (thus using the same trick as in classical realizability). The reason is that in the framework of implicative algebras (which contains classical realizability as a particular case), joins do not satisfy the elimination rule of existential quantification, except in particular cases that will be discussed in Section 5.2. However, the price to pay for this encoding is that the corresponding realizers are in general more complex. We also introduce shorthands for some \(\lambda\)-terms: \(\overline{\mathbf{k}}:=\lambda x.\lambda y.y\), \(\mathbf{p}:=\lambda x.\lambda y.\lambda z.zxy\), \(\mathbf{p}_{1}:=\lambda u.u\mathbf{k}\), \(\mathbf{p}_{2}:=\lambda v.v\overline{\mathbf{k}}\), \(\mathbf{j}_{1}:=\lambda x.\lambda z.\lambda w.zx\), \(\mathbf{j}_{2}:=\lambda x.\lambda z.\lambda w.wx\), \(\mathbf{e}:=\lambda x.\lambda z.zx\). Notice that (the encodings of) all of them belong to the separator \(\Sigma\).
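For instance, the pairing shorthands compute as expected under \(\beta\)-reduction: since \(\mathbf{p}\,t\,s\twoheadrightarrow_{\beta}\lambda z.zts\), we have \[\mathbf{p}_{1}(\mathbf{p}\,t\,s)\twoheadrightarrow_{\beta}\mathbf{k}\,t\,s\twoheadrightarrow_{\beta}t\qquad\text{and}\qquad\mathbf{p}_{2}(\mathbf{p}\,t\,s)\twoheadrightarrow_{\beta}\overline{\mathbf{k}}\,t\,s\twoheadrightarrow_{\beta}s\] so that, by property 1 of the encoding, \((\mathbf{p}_{1}(\mathbf{p}\,t\,s))^{\mathbb{A}}\leq t^{\mathbb{A}}\) and \((\mathbf{p}_{2}(\mathbf{p}\,t\,s))^{\mathbb{A}}\leq s^{\mathbb{A}}\); this is what makes the introduction and elimination rules for \(\times\) given below sound.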
If \(\Gamma\) is a finite list of variable assignments \(x_{1}:a_{1},...,x_{n}:a_{n}\) with \(a_{1},...,a_{n}\in A\) and \(x_{1},...,x_{n}\) distinct variables, and \(t\) is a \(\lambda\)-term with parameters in \(A\) and free variables among \(x_{1},...,x_{n}\), we write \(\Gamma\vdash t:a\) as a shorthand for \(t[\Gamma]^{\mathbb{A}}\leq a\) (where \(t[\Gamma]\) is the result of the substitution corresponding to \(\Gamma\) applied to \(t\)), and the following rules are sound (this is a slight variation on the system of rules presented in [12]): \[\frac{x:a\in\Gamma}{\Gamma\vdash x:a}\qquad\frac{\Gamma\vdash t:a\qquad a\leq b}{\Gamma\vdash t:b}\qquad\frac{\Gamma\vdash t:a\qquad\Gamma^{\prime}\leq\Gamma}{\Gamma^{\prime}\vdash t:a}\] \[\frac{\Gamma\vdash t:\bot}{\Gamma\vdash t:a}\qquad\frac{\Gamma\vdash t:a}{\Gamma\vdash t:\top}\qquad\frac{\Gamma\vdash t:a\to b\qquad\Gamma\vdash s:a}{\Gamma\vdash ts:b}\qquad\frac{\Gamma,x:a\vdash t:b}{\Gamma\vdash\lambda x.t:a\to b}\] \[\frac{\Gamma\vdash t:a\qquad\Gamma\vdash s:b}{\Gamma\vdash\mathbf{p}ts:a\times b}\qquad\frac{\Gamma\vdash t:a\times b}{\Gamma\vdash\mathbf{p}_{1}t:a}\qquad\frac{\Gamma\vdash t:a\times b}{\Gamma\vdash\mathbf{p}_{2}t:b}\] \[\frac{\Gamma\vdash t:a}{\Gamma\vdash\mathbf{j}_{1}t:a+b}\qquad\frac{\Gamma\vdash t:b}{\Gamma\vdash\mathbf{j}_{2}t:a+b}\qquad\frac{\Gamma\vdash t:a+b\qquad\Gamma,x:a\vdash u:c\qquad\Gamma,y:b\vdash v:c}{\Gamma\vdash t(\lambda x.u)(\lambda y.v):c}\] \[\frac{\Gamma\vdash t:a_{i}\,(\text{for all }i\in I)}{\Gamma\vdash t:\bigvee_{i\in I}a_{i}}\qquad\frac{\Gamma\vdash t:\bigvee_{i\in I}a_{i}}{\Gamma\vdash t:a_{\overline{i}}}\,\overline{i}\in I\] \[\frac{\Gamma\vdash t:a_{\overline{i}}}{\Gamma\vdash\mathbf{e}t:\overline{\exists}_{i\in I}a_{i}}\,\overline{i}\in I\qquad\frac{\Gamma\vdash t:\overline{\exists}_{i\in I}a_{i}\qquad\Gamma,x:a_{i}\vdash u:b\,(\text{for all }i\in I)}{\Gamma\vdash t(\lambda x.u):b}\] where \(\Gamma^{\prime}\leq\Gamma\) means that for every declaration \(x:a\) in \(\Gamma\) we have \(x:b\) in \(\Gamma^{\prime}\) for some \(b\leq a\). As shown in [12], to every implicative algebra \(\mathbb{A}\) can be associated a tripos (see [5] or [16]) \[\mathsf{P}_{\mathbb{A}}:\mathbf{Set}^{op}\rightarrow\mathbf{Heyt}\] by sending every set \(I\) to the posetal reflection of the preordered set \((A^{I},\vdash_{\Sigma[I]})\), where \(\varphi\vdash_{\Sigma[I]}\psi\) if and only if \(\bigwedge_{i\in I}(\varphi(i)\rightarrow\psi(i))\in\Sigma\) (we will write \(\varphi\equiv_{\Sigma[I]}\psi\) if \(\varphi\vdash_{\Sigma[I]}\psi\) and \(\psi\vdash_{\Sigma[I]}\varphi\)), and every function \(f:I\to J\) to the function induced by the pre-composition function \((-)\circ f:A^{J}\to A^{I}\). Componentwise use of \(\times\), \(+\) and \(\rightarrow\) defines a Heyting prealgebra structure (which need not be complete) on every preorder \((A^{I},\vdash_{\Sigma[I]})\), which is preserved by pre-composition. \(\overline{\exists}\) and \(\bigvee\) are used to produce left and right adjoints to reindexing maps satisfying the Beck-Chevalley condition, while a generic predicate is given by (the equivalence class of) the identity function on \(A\). A remarkable result in [13] is the following: **Theorem 3.1**: _Let \(\mathsf{P}:\mathbf{Set}^{op}\rightarrow\mathbf{Heyt}\) be a tripos. Then, there exists an implicative algebra \(\mathbb{A}\) such that \(\mathsf{P}\) is isomorphic to \(\mathsf{P}_{\mathbb{A}}\)._ Recall also (see e.g.
[17]) that to every tripos \(\mathsf{P}\) over \(\mathbf{Set}\) is associated an elementary topos \(\mathbf{Set}[\mathsf{P}]\) obtained by means of the so-called "tripos-to-topos" construction (see [5]), whose internal logic can be reduced to that of the corresponding tripos as shown e.g. in [17]. ## 4 Implicative models of (I)ZF To define our implicative models we work in \(\mathbf{ZFC}\) as metatheory and, for our convenience (see Remark 4.1), we further assume a strongly inaccessible cardinal \(\kappa\) to exist. Let now \(\mathbb{A}\) be a fixed implicative algebra with \(|A|<\kappa\). We define the following hierarchy of sets indexed by ordinals: \[W_{\alpha}^{\mathbb{A}}:=\begin{cases}\emptyset\text{ if }\alpha=0\\ \mathsf{Part}(W_{\beta}^{\mathbb{A}},A)\text{ if }\alpha=\beta+1\\ \bigcup_{\beta<\alpha}W_{\beta}^{\mathbb{A}}\text{ if }\alpha\text{ is a limit ordinal}\end{cases}\] where \(\mathsf{Part}(X,Y)\) denotes the set of partial functions from \(X\) to \(Y\). We take \(\mathbf{W}\) to be \(W_{\kappa}^{\mathbb{A}}\). Since \(W_{\alpha}^{\mathbb{A}}\subseteq W_{\beta}^{\mathbb{A}}\) if \(\alpha<\beta\), one can assign a rank in the hierarchy to every element of \(\mathbf{W}\) in the obvious way. In particular, we can define simultaneously, by recursion on rank, two functions \(\in_{\mathbf{W}},=_{\mathbf{W}}\colon\mathbf{W}\times\mathbf{W}\to A\): 1. \(\alpha\in_{\mathbf{W}}\beta:=\overline{\exists}_{t\in\partial_{0}(\beta)}\left(\beta(t)\times(t=_{\mathbf{W}}\alpha)\right)\)4 Footnote 4: We denote with \(\partial_{0}(f)\) the domain of a partial function \(f\), that is the set of those \(x\) for which \(f(x)\) is defined. 2. \(\alpha=_{\mathbf{W}}\beta:=(\alpha\subseteq_{\mathbf{W}}\beta)\times(\beta\subseteq_{\mathbf{W}}\alpha)\) where \(\alpha\subseteq_{\mathbf{W}}\beta:=\bigvee_{t\in\partial_{0}(\alpha)}\left(\alpha(t)\to t\in_{\mathbf{W}}\beta\right)\). We interpret the language of set theory in such a way that to every formula in context \(\varphi[x_{1},...,x_{n}]\) we associate a function \[\|\varphi[x_{1},...,x_{n}]\|:\mathbf{W}^{n}\to A\,{}^{5}\] Footnote 5: If \(n=0\), then \(\|\varphi\,[\,]\|\) is identified with an element of \(A\). by recursion on complexity of formulas as follows: (i) \(\|x_{i}\in x_{j}\left[x_{1},...,x_{n}\right]\|\left(\alpha_{1},...,\alpha_{n}\right):\equiv\alpha_{i}\in_{\mathbf{W}}\alpha_{j}\) (ii) \(\|x_{i}=x_{j}\left[x_{1},...,x_{n}\right]\|\left(\alpha_{1},...,\alpha_{n}\right):\equiv\alpha_{i}=_{\mathbf{W}}\alpha_{j}\) (iii) \(\|\bot\,[\underline{x}]\|\left(\underline{\alpha}\right):\equiv\bot\) (iv) \(\|\varphi\land\psi[\underline{x}]\|\left(\underline{\alpha}\right):\equiv\|\varphi[\underline{x}]\|\left(\underline{\alpha}\right)\times\|\psi[\underline{x}]\|\left(\underline{\alpha}\right)\) (v) \(\|\varphi\lor\psi[\underline{x}]\|\left(\underline{\alpha}\right):\equiv\|\varphi[\underline{x}]\|\left(\underline{\alpha}\right)+\|\psi[\underline{x}]\|\left(\underline{\alpha}\right)\) (vi) \(\|\varphi\to\psi[\underline{x}]\|\left(\underline{\alpha}\right):\equiv\|\varphi[\underline{x}]\|\left(\underline{\alpha}\right)\to\|\psi[\underline{x}]\|\left(\underline{\alpha}\right)\) (vii) \(\|\exists y\,\varphi\left[\underline{x}\right]\|\left(\underline{\alpha}\right):\equiv\overline{\exists}_{\beta\in\mathbf{W}}\left(\|\varphi\left[\underline{x},y\right]\|\left(\underline{\alpha},\beta\right)\right)\) (viii)
\(\|\forall y\,\varphi\left[\underline{x}\right]\|\left(\underline{\alpha}\right):\equiv\bigvee_{\beta\in\mathbf{W}}\left(\|\varphi\left[\underline{x},y\right]\|\left(\underline{\alpha},\beta\right)\right)\)6 Footnote 6: In clauses (vii) and (viii), we assume, without loss of generality, that \(y\) is not a variable in the context \([\underline{x}]\). We write \(\mathbf{W}\vDash\varphi\left[\underline{x}\right]\) for \(\bigwedge_{\underline{\alpha}\in\mathbf{W}^{n}}\left(\|\varphi\left[\underline{x}\right]\|\left(\underline{\alpha}\right)\right)\in\Sigma\) when \([\underline{x}]\) has length \(n>0\), and for every closed formula \(\varphi\) we write \(\mathbf{W}\vDash\varphi\) for \(\|\varphi\left[\cdot\right]\|\in\Sigma\). Thus, \(\mathbf{W}\vDash\varphi\left[\underline{x}\right]\) just means that \(\|\varphi[\underline{x}]\|\) is in the maximal class of \(\mathsf{P}_{\mathbb{A}}(\mathbf{W}^{n})\), where \(n\) is the length of the context of variables \([\underline{x}]\). We will often write \(\|\varphi\|\) instead of \(\|\varphi[\cdot]\|\). **Remark 4.1**: Note that here, we chose to construct the model \(\mathbf{W}\) as a set, so that we can use it later (together with the suitable \(\mathbb{A}\)-equivalence) as an object of the topos \(\mathbf{Set}[\mathsf{P}_{\mathbb{A}}]\) induced by the implicative algebra \(\mathbb{A}\) (cf. Section 6). However, if one is only interested in the set-theoretic part of the work, it is actually simpler to construct the model \(\mathbf{W}\) as a proper class (as it is traditionally done in forcing or in intuitionistic or classical realizability), thus removing the need of assuming the existence of an inaccessible cardinal \(\kappa\) (whose only purpose is to make the model \(\mathbf{W}\) fit into a set). ### Useful lemmas **Lemma 4.2**: _There exist \(\rho,\mathbf{j},\sigma,\mathbf{s}_{1},\mathbf{s}_{2},\mathbf{s}_{3}\in\Sigma\) such that_ 1. \(\rho\leq\bigwedge_{\alpha\in\mathbf{W}}\left(\alpha=_{\mathbf{W}}\alpha\right)\) 2. \(\mathbf{j}\leq\bigwedge_{\alpha\in\mathbf{W}}\bigwedge_{u\in\partial_{0}(\alpha)}\left(\alpha(u)\to u\in_{\mathbf{W}}\alpha\right)\) 3. \(\sigma\leq\bigwedge_{\alpha,\beta\in\mathbf{W}}\left(\alpha=_{\mathbf{W}}\beta\rightarrow\beta=_{\mathbf{W}}\alpha\right)\) 4. \(\mathbf{s}_{1}\leq\bigwedge_{\alpha,\beta,\gamma\in\mathbf{W}}\left(\alpha=_{\mathbf{W}}\beta\times\gamma\in_{\mathbf{W}}\alpha\rightarrow\gamma\in_{\mathbf{W}}\beta\right)\) 5. \(\mathbf{s}_{2}\leq\bigwedge_{\alpha,\beta,\gamma\in\mathbf{W}}\left(\alpha=_{\mathbf{W}}\beta\times\alpha\in_{\mathbf{W}}\gamma\rightarrow\beta\in_{\mathbf{W}}\gamma\right)\) 6. \(\mathbf{s}_{3}\leq\bigwedge_{\alpha,\beta,\gamma\in\mathbf{W}}\left(\alpha=_{\mathbf{W}}\beta\times\gamma=_{\mathbf{W}}\alpha\rightarrow\gamma=_{\mathbf{W}}\beta\right)\) **Proof.** 1. Let \(\rho\) be \(\mathbf{y}f\in\Sigma\) where \(f:=\lambda r.\mathbf{p}(\lambda x.\mathbf{e}(\mathbf{p}xr))(\lambda x.\mathbf{e}(\mathbf{p}xr))\) and \(\mathbf{y}\) is a pure closed \(\lambda\)-term which is a fixed point operator, i.e. such that \(\mathbf{y}f\) \(\beta\)-reduces to \(f(\mathbf{y}f)\) for every \(f\) (see e.g. [16]). We claim that \(\rho\leq\alpha=_{\mathbf{W}}\alpha\) for every \(\alpha\in\mathbf{W}\). Let \(\alpha\) be an arbitrary element of \(\mathbf{W}\) and let us assume that \(\rho\leq\beta=_{\mathbf{W}}\beta\) for every \(\beta\in\mathbf{W}\) with rank in the hierarchy strictly less than that of \(\alpha\) (and thus in particular for every element of the domain of \(\alpha\)).
Then we can consider the following derivation, in which we use only rules from the previous section, for every \(u\in\partial_{0}(\alpha)\): from \(x:\alpha(u)\vdash x:\alpha(u)\) and \(x:\alpha(u)\vdash\rho:u=_{\mathbf{W}}u\) (by the induction hypothesis), we get \(x:\alpha(u)\vdash\mathbf{p}x\rho:\alpha(u)\times u=_{\mathbf{W}}u\), hence \(x:\alpha(u)\vdash\mathbf{e}(\mathbf{p}x\rho):u\in_{\mathbf{W}}\alpha\), and thus \(\vdash\lambda x.\mathbf{e}(\mathbf{p}x\rho):\alpha(u)\to u\in_{\mathbf{W}}\alpha\) for all \(u\in\partial_{0}(\alpha)\), that is, \(\vdash\lambda x.\mathbf{e}(\mathbf{p}x\rho):\alpha\subseteq_{\mathbf{W}}\alpha\). Pairing two copies of this judgment yields \(\vdash\mathbf{p}(\lambda x.\mathbf{e}(\mathbf{p}x\rho))(\lambda x.\mathbf{e}(\mathbf{p}x\rho)):\alpha=_{\mathbf{W}}\alpha\). Since \(\rho=\mathbf{y}f\) \(\beta\)-reduces to \(f(\mathbf{y}f)\), which in turn \(\beta\)-reduces to \(\mathbf{p}(\lambda x.\mathbf{e}(\mathbf{p}x\rho))(\lambda x.\mathbf{e}(\mathbf{p}x\rho))\), we conclude \(\rho\leq\alpha=_{\mathbf{W}}\alpha\). For item 2, one can take \(\mathbf{j}:=\lambda x.\mathbf{e}(\mathbf{p}x\rho)\), as the derivation above shows, and for item 3 one can take \(\sigma:=\lambda x.\mathbf{p}(\mathbf{p}_{2}x)(\mathbf{p}_{1}x)\), which swaps the two components of a pair witnessing \(\alpha=_{\mathbf{W}}\beta\). For items 4-6, assume first that \(\mathbf{s}_{3}\) exists, and let \(\Gamma(u)\) be a shorthand for \[x:\alpha=_{\mathbf{W}}\beta\times\alpha\in_{\mathbf{W}}\gamma,y:\gamma(u)\times u=_{\mathbf{W}}\alpha\] where \(u\) is an arbitrary element of the domain of \(\gamma\); then we obtain \(\Gamma(u)\vdash{\bf p}({\bf p}_{1}y)({\bf s}_{3}({\bf p}({\bf p}_{1}x)({\bf p}_{2}y))):\gamma(u)\times u=_{\bf W}\beta\) from which it follows that \[\Gamma(u)\vdash{\bf e}({\bf p}({\bf p}_{1}y)({\bf s}_{3}({\bf p}({\bf p}_{1}x)({\bf p}_{2}y)))):\beta\in_{\bf W}\gamma\] Since \(x:\alpha=_{\bf W}\beta\times\alpha\in_{\bf W}\gamma\vdash{\bf p}_{2}x:\alpha\in_{\bf W}\gamma\), we get \[x:\alpha=_{\bf W}\beta\times\alpha\in_{\bf W}\gamma\vdash({\bf p}_{2}x)(\lambda y.{\bf e}({\bf p}({\bf p}_{1}y)({\bf s}_{3}({\bf p}({\bf p}_{1}x)({\bf p}_{2}y))))):\beta\in_{\bf W}\gamma\] from which it follows that \[\vdash\lambda x.({\bf p}_{2}x)(\lambda y.{\bf e}({\bf p}({\bf p}_{1}y)({\bf s}_{3}({\bf p}({\bf p}_{1}x)({\bf p}_{2}y))))):\alpha=_{\bf W}\beta\times\alpha\in_{\bf W}\gamma\to\beta\in_{\bf W}\gamma\] From this it follows that \({\bf s}_{2}\) can be defined as \(\lambda x.({\bf p}_{2}x)(\lambda y.{\bf e}({\bf p}({\bf p}_{1}y)({\bf s}_{3}({\bf p}({\bf p}_{1}x)({\bf p}_{2}y)))))\). Assume now \({\bf s}_{2}\) to exist and consider \(\Gamma^{\prime}(u)\) a shorthand for \[x:\alpha=_{\bf W}\beta\times\gamma\in_{\bf W}\alpha,y:\alpha(u)\times u=_{\bf W}\gamma\] where \(u\) is an arbitrary element of the domain of \(\alpha\). We easily see that \[\Gamma^{\prime}(u)\vdash{\bf p}({\bf p}_{2}y)(({\bf p}_{1}({\bf p}_{1}x))({\bf p}_{1}y)):u=_{\bf W}\gamma\times u\in_{\bf W}\beta\] Thus \[\Gamma^{\prime}(u)\vdash{\bf s}_{2}({\bf p}({\bf p}_{2}y)(({\bf p}_{1}({\bf p}_{1}x))({\bf p}_{1}y))):\gamma\in_{\bf W}\beta\] Since \(x:\alpha=_{\bf W}\beta\times\gamma\in_{\bf W}\alpha\vdash{\bf p}_{2}x:\gamma\in_{\bf W}\alpha\), we have that \[x:\alpha=_{\bf W}\beta\times\gamma\in_{\bf W}\alpha\vdash({\bf p}_{2}x)(\lambda y.{\bf s}_{2}({\bf p}({\bf p}_{2}y)(({\bf p}_{1}({\bf p}_{1}x))({\bf p}_{1}y)))):\gamma\in_{\bf W}\beta\] from which it follows that \[\vdash\lambda x.({\bf p}_{2}x)(\lambda y.{\bf s}_{2}({\bf p}({\bf p}_{2}y)(({\bf p}_{1}({\bf p}_{1}x))({\bf p}_{1}y)))):\alpha=_{\bf W}\beta\times\gamma\in_{\bf W}\alpha\to\gamma\in_{\bf W}\beta\] Thus \({\bf s}_{1}\) can be defined as \(\lambda x.({\bf p}_{2}x)(\lambda y.{\bf s}_{2}({\bf p}({\bf p}_{2}y)(({\bf p}_{1}({\bf p}_{1}x))({\bf p}_{1}y))))\). Similarly, one can prove that if \({\bf s}_{1}\) is assumed to exist, then one can define \({\bf s}_{3}\) as a \(\lambda\)-term containing \({\bf s}_{1}\) as the unique parameter.
The idea is now to compose these mutual dependencies to define, by a fixed point \(\mathbf{y}g\), one among \(\mathbf{s}_{1}\), \(\mathbf{s}_{2}\) and \(\mathbf{s}_{3}\), and then define the other two using that one. So for example, if we define \(\mathbf{s}_{3}\) as a fixed point, we can then define \(\mathbf{s}_{2}\) using \(\mathbf{s}_{3}\) and then \(\mathbf{s}_{1}\) using \(\mathbf{s}_{2}\). This works well since, composing the proofs above, one can see that \(\mathbf{s}_{3}\leq\bigwedge_{\alpha,\beta,\gamma\in\mathbf{W}}(\alpha=_{\mathbf{W}}\beta\times\gamma=_{\mathbf{W}}\alpha\to\gamma=_{\mathbf{W}}\beta)\) whenever \(\mathbf{s}_{3}\leq u=_{\mathbf{W}}v\times w=_{\mathbf{W}}u\to w=_{\mathbf{W}}v\) for every \(u,v,w\) with rank strictly less than the maximum of the ranks of \(\alpha\), \(\beta\) and \(\gamma\). \(\Box\) Given two lists of parameters \(\underline{\alpha}=\alpha_{1},\ldots,\alpha_{n}\in\mathbf{W}^{n}\) and \(\underline{\beta}=\beta_{1},\ldots,\beta_{n}\in\mathbf{W}^{n}\) (for \(n\geq 0\)), we write \[\underline{\alpha}=_{\mathbf{W}}\underline{\beta}\ :=\ (\cdots((\alpha_{1}=_{\mathbf{W}}\beta_{1}\times\alpha_{2}=_{\mathbf{W}}\beta_{2})\times\alpha_{3}=_{\mathbf{W}}\beta_{3})\cdots)\times\alpha_{n}=_{\mathbf{W}}\beta_{n}\] and \(\underline{\alpha}=_{\mathbf{W}}\underline{\beta}:=\top\) in the particular case where \(n=0\). Note that the element \(\underline{\alpha}=_{\mathbf{W}}\underline{\beta}\ (\in A)\) depends on the order of the parameters \(\underline{\alpha}=\alpha_{1},\ldots,\alpha_{n}\) and \(\underline{\beta}=\beta_{1},\ldots,\beta_{n}\) (and on the choice to associate \(\times\)'s to the left) when considered on the nose, but up to the equivalence \(\equiv_{\Sigma}\), it is of course invariant under any (common) permutation of the parameters \(\underline{\alpha}\) and \(\underline{\beta}\). **Lemma 4.3**: _For every formula in context \(\varphi\,[\underline{x}]\) where \(\underline{x}\) has length \(n>0\), there exists \(\mathbf{r}^{\varphi[\underline{x}]}\in\Sigma\) such that_ \[\mathbf{r}^{\varphi[\underline{x}]}\leq\bigwedge_{\underline{\alpha}\in\mathbf{W}^{n}}\bigwedge_{\underline{\beta}\in\mathbf{W}^{n}}\left(\underline{\alpha}=_{\mathbf{W}}\underline{\beta}\times\|\varphi\,[\underline{x}]\|\,(\underline{\alpha})\to\|\varphi\,[\underline{x}]\|\,(\underline{\beta})\right).\] **Proof.** By induction on the complexity of formulas, using the previous lemma for the atomic cases. \(\Box\) Also the following lemma can be easily proved as a consequence of the previous results and of the rules in the previous section. **Lemma 4.4**: _Let \(\varphi[\underline{x}]\) and \(\psi[\underline{x}]\) be formulas in context in the language of set theory and let \(n\) be the length of \([\underline{x}]\). If \(\varphi\vdash_{\mathbf{IL}^{=}}^{\underline{x}}\psi\), then \(\left\|\varphi\left[\underline{x}\right]\right\|\vdash_{\Sigma[\mathbf{W}^{n}]}\left\|\psi\left[\underline{x}\right]\right\|\) (where with \(\mathbf{IL}^{=}\) we denote first-order intuitionistic logic with equality on the language of \(\mathbf{(I)ZF}\)).
In the case in which \(\mathbb{A}\) is a classical implicative algebra, if \(\varphi\vdash_{\mathbf{CL}^{=}}^{\underline{x}}\psi\), then \(\left\|\varphi\left[\underline{x}\right]\right\|\vdash_{\Sigma[\mathbf{W}^{n}]}\left\|\psi\left[\underline{x}\right]\right\|\) (where with \(\mathbf{CL}^{=}\) we denote first-order classical logic with equality on the language of \(\mathbf{(I)ZF}\))._ **Lemma 4.5**: _If \([\underline{x}]\) has length \(n\), then_ \[\left\|\exists z\in y\,\varphi\left[\underline{x},y\right]\right\|\equiv_{\Sigma[\mathbf{W}^{n+1}]}\Lambda\underline{\alpha}.\Lambda\beta.\overline{\exists}_{u\in\partial_{0}(\beta)}\left(\beta(u)\times\left\|\varphi\left[\underline{x},y,z\right]\right\|\left(\underline{\alpha},\beta,u\right)\right)\,{}^{7}\] Footnote 7: We use the notation \(\Lambda\alpha.f(\alpha)\) to denote the function sending each \(\alpha\) in the domain to \(f(\alpha)\). \[\left\|\forall z\in y\,\varphi\left[\underline{x},y\right]\right\|\equiv_{\Sigma[\mathbf{W}^{n+1}]}\Lambda\underline{\alpha}.\Lambda\beta.\bigvee_{u\in\partial_{0}(\beta)}\left(\beta(u)\rightarrow\left\|\varphi\left[\underline{x},y,z\right]\right\|\left(\underline{\alpha},\beta,u\right)\right)\] **Proof.** We consider the case of the existential quantifier and we leave the analogous proof of the universal case to the reader. We also restrict to the case in which \(\underline{x}\) is empty. The general case is analogous, but only heavier in notation. By definition of the interpretation we have that \(\|\exists z\in y\,\varphi\,[y]\|(\beta)\) is \[\eta:=\overline{\exists}_{\gamma\in\mathbf{W}}\left(\overline{\exists}_{u\in\partial_{0}(\beta)}(\beta(u)\times u=_{\mathbf{W}}\gamma)\times\|\varphi\,[y,z]\|(\beta,\gamma)\right)\] We denote with \(\eta_{1}(\gamma)\) the scope of the quantifier \(\overline{\exists}_{\gamma\in\mathbf{W}}\), while we denote with \(\eta_{2}(u,\gamma)\) the scope of the quantifier \(\overline{\exists}_{u\in\partial_{0}(\beta)}\). It is immediate to check that the following sequent holds for every \(\beta,\gamma\in\mathbf{W}\) and every \(u\in\partial_{0}(\beta)\): \[x:\eta,y:\eta_{1}(\gamma),z:\eta_{2}(u,\gamma)\vdash\mathbf{1}:(\beta=_{\mathbf{W}}\beta\times\gamma=_{\mathbf{W}}u)\times\|\varphi\,[y,z]\|(\beta,\gamma)\] where \(\mathbf{1}:=\mathbf{p}(\mathbf{p}\rho(\sigma(\mathbf{p}_{2}z)))(\mathbf{p}_{2}y)\). With the notation from Lemma 4.3, we can conclude that \[x:\eta,y:\eta_{1}(\gamma),z:\eta_{2}(u,\gamma)\vdash\mathbf{r}^{\varphi[y,z]}\mathbf{1}:\|\varphi\,[y,z]\|(\beta,u)\] Thus \(x:\eta,y:\eta_{1}(\gamma),z:\eta_{2}(u,\gamma)\vdash\mathbf{p}(\mathbf{p}_{1}z)(\mathbf{r}^{\varphi[y,z]}\mathbf{1}):\beta(u)\times\|\varphi\,[y,z]\|(\beta,u)\). Hence \[x:\eta,y:\eta_{1}(\gamma),z:\eta_{2}(u,\gamma)\vdash\mathbf{e}(\mathbf{p}(\mathbf{p}_{1}z)(\mathbf{r}^{\varphi[y,z]}\mathbf{1})):\overline{\exists}_{w\in\partial_{0}(\beta)}(\beta(w)\times\|\varphi\,[y,z]\|(\beta,w))\] Using the rules of elimination of existential quantification, one can conclude that \[\eta\rightarrow\overline{\exists}_{w\in\partial_{0}(\beta)}(\beta(w)\times\|\varphi\,[y,z]\|(\beta,w))\geq\mathbf{1}^{\prime}\in\Sigma\] where \(\mathbf{1}^{\prime}:=\lambda x.x(\lambda y.(\mathbf{p}_{1}y)(\lambda z.\mathbf{e}(\mathbf{p}(\mathbf{p}_{1}z)(\mathbf{r}^{\varphi[y,z]}\mathbf{1}))))\).
It is easier to show that for every \(\beta\in\mathbf{W}\) \[\overline{\exists}_{w\in\partial_{0}(\beta)}(\beta(w)\times\|\varphi\,[y,z]\|(\beta,w))\rightarrow\eta\geq\lambda x.x(\lambda y.\mathbf{e}(\mathbf{p}(\mathbf{e}(\mathbf{p}(\mathbf{p}_{1}y)\rho))(\mathbf{p}_{2}y)))\in\Sigma\] One can in fact write a deduction tree in which the existential quantifiers of the consequent are both witnessed by a \(w\in\partial_{0}(\beta)\) for which \(\beta(w)\times\|\varphi\,[y,z]\|(\beta,w)\) is assumed to hold. \(\Box\) **Corollary 4.6**: \(\left\|x\subseteq y\left[x,y\right]\right\|\equiv_{\Sigma[\mathbf{W}^{2}]}\Lambda\alpha.\Lambda\beta.\left(\alpha\subseteq_{\mathbf{W}}\beta\right)\)_._ ### Validity of axioms In this subsection we show that the interpretation we gave is in fact a model of \(\mathbf{IZF}\) (when \(\mathbb{A}\) is not classical) or a model of \(\mathbf{ZF}\) (when \(\mathbb{A}\) is classical), that is, if \(\mathbf{(I)ZF}\vdash\varphi\), then \(\mathbf{W}\vDash\varphi\). In order to show this we prove that every axiom of \(\mathbf{(I)ZF}\) is valid in the interpretation. #### 4.2.1 Extensionality Thanks to Corollary 4.6 we know that \[\|\mathbf{Ext}\|\equiv_{\Sigma}\bigvee_{\alpha\in\mathbf{W}}\bigvee_{\beta\in\mathbf{W}}\left((\alpha\subseteq_{\mathbf{W}}\beta\times\beta\subseteq_{\mathbf{W}}\alpha)\rightarrow\alpha=_{\mathbf{W}}\beta\right)\] Since \(\alpha=_{\mathbf{W}}\beta\) is by definition \((\alpha\subseteq_{\mathbf{W}}\beta)\times(\beta\subseteq_{\mathbf{W}}\alpha)\), the right-hand side is bounded below by the identity \(\lambda x.x\in\Sigma\), hence \(\|\mathbf{Ext}\|\in\Sigma\), i.e. \(\mathbf{W}\vDash\mathbf{Ext}\). #### 4.2.4 Powerset Using Corollary 4.6, \(\|\mathbf{Pow}\|\equiv_{\Sigma}\bigvee_{\alpha\in\mathbf{W}}\overline{\exists}_{\beta\in\mathbf{W}}\bigvee_{\gamma\in\mathbf{W}}(\gamma\subseteq_{\mathbf{W}}\alpha\rightarrow\gamma\in_{\mathbf{W}}\beta)\). Let us consider an arbitrary \(\alpha\in\mathbf{W}\) and define \(\pi_{\alpha}\in\mathbf{W}\) as the partial function having domain \(A^{\partial_{0}(\alpha)}\), and for which \(\pi_{\alpha}(u)=\top\) for every \(u\) in the domain. For every \(\gamma\in\mathbf{W}\) we also define \(\gamma_{\alpha}\in\mathbf{W}\) as follows. The domain of \(\gamma_{\alpha}\) is \(\partial_{0}(\alpha)\) and \(\gamma_{\alpha}(u):=u\in_{\mathbf{W}}\alpha\times u\in_{\mathbf{W}}\gamma\) for every \(u\) in the domain. We now use Lemma 4.2 and its notation. Let \(u\in\partial_{0}(\gamma)\) and \(t\in\partial_{0}(\alpha)\). Then: 1. \(x:\gamma\subseteq_{\mathbf{W}}\alpha,y:\gamma(u),z:\alpha(t)\times t=_{\mathbf{W}}u\vdash\mathbf{j}(\mathbf{p}_{1}z):t\in_{\mathbf{W}}\alpha\) 2. \(x:\gamma\subseteq_{\mathbf{W}}\alpha,y:\gamma(u),z:\alpha(t)\times t=_{\mathbf{W}}u\vdash\mathbf{p}_{2}z:t=_{\mathbf{W}}u\) 3.
\(x:\gamma\subseteq_{\mathbf{W}}\alpha,y:\gamma(u),z:\alpha(t)\times t=_{\mathbf{W}}u\vdash\mathbf{s}_{2}(\mathbf{p}(\sigma(\mathbf{p}_{2}z))(\mathbf{j}y)):t\in_{\mathbf{W}}\gamma\) From this it follows that \[x:\gamma\subseteq_{\mathbf{W}}\alpha,y:\gamma(u),z:\alpha(t)\times t=_{\mathbf{W}}u\vdash\widetilde{\mathbf{r}}:=\mathbf{e}(\mathbf{p}(\mathbf{p}(\mathbf{j}(\mathbf{p}_{1}z))(\mathbf{s}_{2}(\mathbf{p}(\sigma(\mathbf{p}_{2}z))(\mathbf{j}y))))(\mathbf{p}_{2}z)):u\in_{\mathbf{W}}\gamma_{\alpha}\] Since \[x:\gamma\subseteq_{\mathbf{W}}\alpha,y:\gamma(u)\vdash xy:u\in_{\mathbf{W}}\alpha\] we then get \[x:\gamma\subseteq_{\mathbf{W}}\alpha,y:\gamma(u)\vdash xy(\lambda z.\widetilde{\mathbf{r}}):u\in_{\mathbf{W}}\gamma_{\alpha}\] From this it follows that \[x:\gamma\subseteq_{\mathbf{W}}\alpha\vdash\lambda y.(xy(\lambda z.\widetilde{\mathbf{r}})):\gamma\subseteq_{\mathbf{W}}\gamma_{\alpha}\] One can also easily show that \(\vdash\lambda z.(\mathbf{p}_{2}z):\gamma_{\alpha}\subseteq_{\mathbf{W}}\gamma\). Thus \[x:\gamma\subseteq_{\mathbf{W}}\alpha\vdash\overline{\mathbf{r}}:=\mathbf{p}\top(\mathbf{p}(\lambda z.(\mathbf{p}_{2}z))(\lambda y.(xy(\lambda z.\widetilde{\mathbf{r}})))):\top\times\gamma_{\alpha}=_{\mathbf{W}}\gamma\] Since \(\gamma_{\alpha}\) is in the domain of \(\pi_{\alpha}\) we hence have that \[x:\gamma\subseteq_{\mathbf{W}}\alpha\vdash\mathbf{e}\overline{\mathbf{r}}:\gamma\in_{\mathbf{W}}\pi_{\alpha}\] We can thus conclude that \(\vdash\lambda x.\mathbf{e}\overline{\mathbf{r}}:\gamma\subseteq_{\mathbf{W}}\alpha\rightarrow\gamma\in_{\mathbf{W}}\pi_{\alpha}\). Since \(\lambda x.\mathbf{e}\overline{\mathbf{r}}\) and \(\mathbf{e}(\lambda x.\mathbf{e}\overline{\mathbf{r}})\) do not depend on \(\gamma\) and \(\alpha\) we get \[\vdash\mathbf{e}(\lambda x.\mathbf{e}\overline{\mathbf{r}}):\bigwedge_{\alpha\in\mathbf{W}}\overrightarrow{\exists}_{\beta\in\mathbf{W}}\bigwedge_{\gamma\in\mathbf{W}}(\gamma\subseteq_{\mathbf{W}}\alpha\rightarrow\gamma\in_{\mathbf{W}}\beta)\] Since \(\mathbf{e}(\lambda x.\mathbf{e}\overline{\mathbf{r}})\in\Sigma\), we can conclude that \(\mathbf{W}\vDash\mathbf{Pow}\). #### 4.2.5 Infinity For every \(n\in\omega\), we define \(\widehat{n}\in\mathbf{W}\) as follows: \(\partial_{0}(\widehat{n})=\{\widehat{m}|\,m<n\}\) and \(\widehat{n}(\widehat{m}):=\overline{m}\) where \(\overline{m}\in\Sigma\) is Church's encoding of the natural number \(m\) (see Footnote 8). We define \(\widehat{\omega}\) as the element of \(\mathbf{W}\) with domain \(\{\widehat{n}|\,n\in\omega\}\) defined by \(\widehat{\omega}(\widehat{n}):=\overline{n}\).
Footnote 8: \(\overline{0}:=\lambda x.\lambda y.x\) and \(\overline{n+1}:=\overline{s}\,\overline{n}\) where \(\overline{s}:=\lambda z.\lambda x.\lambda y.y(zxy)\). First, if we consider \(\widehat{0}=\emptyset\) and we use Lemma 4.5, we can easily see that \[\vdash\mathbf{e}(\mathbf{p}\overline{0}\top):\overrightarrow{\exists}_{n\in\omega}(\widehat{\omega}(\widehat{n})\times\bigwedge_{m<n}(\widehat{n}(\widehat{m})\rightarrow\bot))\equiv_{\Sigma}\|\mathbf{Inf}_{1}(u)[u]\|\,(\widehat{\omega})\] Moreover, one can construct a closed \(\lambda\)-term \(f\) whose interpretation is in \(\Sigma\) such that for every \(n,m\in\omega\) \[\begin{cases}f\overline{n}\,\overline{m}\twoheadrightarrow_{\beta}\mathbf{j}_{1}(\mathbf{e}(\mathbf{p}\overline{m}\rho))\text{ if }\overline{m}\neq\overline{n}\\ f\overline{n}\,\overline{m}\twoheadrightarrow_{\beta}\mathbf{j}_{2}\rho\text{ if }\overline{m}=\overline{n}\end{cases}\] Then, for every \(n\in\omega\) \[\vdash\lambda u.f\overline{n}u:\bigwedge_{i<n}(\widehat{n+1}(\widehat{i})\to((\widehat{i}\in_{\mathbf{W}}\widehat{n})+(\widehat{i}=_{\mathbf{W}}\widehat{n})))\] Moreover \[\vdash\lambda x.\mathbf{e}(\mathbf{p}x\rho):\widehat{n}\subseteq_{\mathbf{W}}\widehat{n+1}\] \[\vdash\mathbf{e}(\mathbf{p}\overline{n}\rho):\widehat{n}\in_{\mathbf{W}}\widehat{n+1}\] Using these facts, Lemma 4.5 and Corollary 4.6, one can easily show that \(\left\|\mathbf{Inf}_{2}(u)[u]\right\|(\widehat{\omega})\in\Sigma\). Thus we can conclude that \(\mathbf{W}\vDash\mathbf{Inf}\). #### 4.2.6 Separation Let \(\varphi\left[\underline{w},x,z\right]\) be a formula in context with \(\underline{w}\) a list of variables of length \(n\). \[\left\|\mathbf{Sep}_{\varphi}\right\|\equiv_{\Sigma}\bigwedge_{\underline{\omega}\in\mathbf{W}^{n}}\bigwedge_{\alpha\in\mathbf{W}}\overrightarrow{\exists}_{\beta\in\mathbf{W}}\Big(\bigwedge_{u\in\partial_{0}(\beta)}(\beta(u)\to u\in_{\mathbf{W}}\alpha\times\left\|\varphi\left[\underline{w},x,z\right]\right\|(\underline{\omega},\alpha,u))\times\] \[\bigwedge_{u^{\prime}\in\partial_{0}(\alpha)}(\alpha(u^{\prime})\to(\left\|\varphi\left[\underline{w},x,z\right]\right\|(\underline{\omega},\alpha,u^{\prime})\to u^{\prime}\in_{\mathbf{W}}\beta))\Big)\] For an arbitrary \(\alpha\in\mathbf{W}\) and \(\underline{\omega}\in\mathbf{W}^{n}\) we define \(\alpha_{\varphi}^{\underline{\omega}}\in\mathbf{W}\) as follows: its domain is equal to the domain of \(\alpha\), while \(\alpha_{\varphi}^{\underline{\omega}}(u):=\alpha(u)\times\left\|\varphi\left[\underline{w},x,z\right]\right\|(\underline{\omega},\alpha,u)\).
In order to show that \(\mathbf{W}\vDash\mathbf{Sep}_{\varphi}\), it is sufficient to find a \(t\in\Sigma\) not depending on \(\underline{\omega}\) and \(\alpha\) such that \[\vdash t:\bigwedge_{u\in\partial_{0}(\alpha)}(\alpha_{\varphi}^{\underline{\omega}}(u)\to u\in_{\mathbf{W}}\alpha\times\left\|\varphi\left[\underline{w},x,z\right]\right\|(\underline{\omega},\alpha,u))\times\] \[\bigwedge_{u^{\prime}\in\partial_{0}(\alpha)}(\alpha(u^{\prime})\to(\left\|\varphi\left[\underline{w},x,z\right]\right\|(\underline{\omega},\alpha,u^{\prime})\to u^{\prime}\in_{\mathbf{W}}\alpha_{\varphi}^{\underline{\omega}}))\] But this is immediate to prove, since using Lemma 4.2 \[\vdash\lambda x.\mathbf{p}(\mathbf{j}(\mathbf{p}_{1}x))(\mathbf{p}_{2}x):\bigwedge_{u\in\partial_{0}(\alpha)}(\alpha_{\varphi}^{\underline{\omega}}(u)\to u\in_{\mathbf{W}}\alpha\times\left\|\varphi\left[\underline{w},x,z\right]\right\|(\underline{\omega},\alpha,u))\] \[\vdash\lambda x.\lambda y.\mathbf{e}(\mathbf{p}(\mathbf{p}xy)\rho):\bigwedge_{u^{\prime}\in\partial_{0}(\alpha)}(\alpha(u^{\prime})\to(\left\|\varphi\left[\underline{w},x,z\right]\right\|(\underline{\omega},\alpha,u^{\prime})\to u^{\prime}\in_{\mathbf{W}}\alpha_{\varphi}^{\underline{\omega}}))\] #### 4.2.7 \(\in\)-Induction We now consider the axiom schema of \(\in\)-induction and we restrict to the case of a formula in context \(\varphi[x]\) since the general case is analogous, but just heavier in notation. Let \(\mathbf{y}\) be the fixed-point operator we have already used in the proof of Lemma 4.2 such that \(\mathbf{y}f\) \(\beta\)-reduces to \(f(\mathbf{y}f)\) for every \(f\) and consider \[\mathbf{h}:=\mathbf{y}(\lambda h.\lambda x.x(\lambda y.hx))\in\Sigma\] in such a way that \(\mathbf{h}\leq(\lambda h.\lambda x.x(\lambda y.hx))\mathbf{h}\leq\lambda x.x(\lambda y.\mathbf{h}x)\). Fix an arbitrary \(\overline{\alpha}\) and assume that \[\mathbf{h}\leq\bigwedge_{\alpha\in\mathbf{W}}\Big(\bigwedge_{u\in\partial_{0}(\alpha)}(\alpha(u)\to\left\|\varphi[x]\right\|(u))\to\left\|\varphi[x]\right\|(\alpha)\Big)\to\left\|\varphi[x]\right\|(\beta)\] for every \(\beta\) with rank strictly less than that of \(\overline{\alpha}\). Let us use \(\varepsilon^{\alpha}\) as a shorthand for \[\bigwedge_{u\in\partial_{0}(\alpha)}(\alpha(u)\to\left\|\varphi[x]\right\|(u))\to\left\|\varphi[x]\right\|(\alpha)\] and \(\varepsilon\) as a shorthand for \(\bigwedge_{\alpha\in\mathbf{W}}\varepsilon^{\alpha}\).
If we consider the following derivation, presented here as a sequence of steps rather than as a tree: by the inductive hypothesis \(\vdash\mathbf{h}:\varepsilon\rightarrow\left\|\varphi\left[x\right]\right\|(u)\) for every \(u\in\partial_{0}(\overline{\alpha})\), and trivially \(x:\varepsilon\vdash x:\varepsilon\); hence \(x:\varepsilon\vdash\mathbf{h}x:\left\|\varphi\left[x\right]\right\|(u)\) for every \(u\in\partial_{0}(\overline{\alpha})\), and a fortiori \(x:\varepsilon,y:\overline{\alpha}(u)\vdash\mathbf{h}x:\left\|\varphi\left[x\right]\right\|(u)\); so \(x:\varepsilon\vdash\lambda y.\mathbf{h}x:\overline{\alpha}(u)\rightarrow\left\|\varphi\left[x\right]\right\|(u)\) for every \(u\in\partial_{0}(\overline{\alpha})\), and thus \(x:\varepsilon\vdash\lambda y.\mathbf{h}x:\bigwedge_{u\in\partial_{0}(\overline{\alpha})}(\overline{\alpha}(u)\rightarrow\left\|\varphi\left[x\right]\right\|(u))\); since moreover \(x:\varepsilon\vdash x:\varepsilon^{\overline{\alpha}}\), we obtain \(x:\varepsilon\vdash x(\lambda y.\mathbf{h}x):\left\|\varphi\right\|(\overline{\alpha})\) and finally \(\vdash\lambda x.x(\lambda y.\mathbf{h}x):\varepsilon\rightarrow\left\|\varphi\right\|(\overline{\alpha})\); then we can conclude that \(\mathbf{h}\leq\varepsilon\rightarrow\left\|\varphi[x]\right\|\left(\overline{\alpha}\right)\). By transfinite induction we can hence conclude that \[\mathbf{h}\leq\bigwedge_{\alpha\in\mathbf{W}}\left(\varepsilon\rightarrow\left\|\varphi[x]\right\|\left(\alpha\right)\right)\] Since \(\mathbf{h}\in\Sigma\) and, by using Lemmas 4.4 and 4.5, \(\bigwedge_{\alpha\in\mathbf{W}}\left(\varepsilon\rightarrow\left\|\varphi[x]\right\|\left(\alpha\right)\right)\equiv_{\Sigma}\left\|\in\text{-}\mathbf{Ind}_{\varphi}\right\|\), we can conclude that \(\mathbf{W}\vDash\in\text{-}\mathbf{Ind}_{\varphi}\). #### 4.2.8 Collection In order to lighten the notation we will consider \(\mathbf{Col}_{\varphi}\) for a formula \(\varphi\) in context \([x,y]\) (so without any additional parameter). Moreover we will write \(\varphi(a,b)\) instead of \(\left\|\varphi\left[x,y\right]\right\|\left(a,b\right)\). Assume \(\alpha\in\mathbf{W}\) and \(u\in\partial_{0}(\alpha)\). Since \(\kappa\) is inaccessible, \(\left|A\right|<\kappa\) and \(\left\{\varphi(u,\gamma)|\,\gamma\in\mathbf{W}\right\}\subseteq A\), there exists \(\eta<\kappa\) such that \(\overrightarrow{\exists}_{\gamma\in\mathbf{W}}\left(\top\times\varphi(u,\gamma)\right)=\overrightarrow{\exists}_{\gamma\in W_{\eta}^{\mathbb{A}}}\left(\top\times\varphi(u,\gamma)\right)\). We define \(\eta_{u}\) to be the minimum such \(\eta\) and we define \(\overline{\eta}_{\alpha}:=\bigvee\{\eta_{u}|\,u\in\partial_{0}(\alpha)\}\), which is strictly less than \(\kappa\), since the cardinality of \(\partial_{0}(\alpha)\) is strictly less than \(\kappa\) (because \(\kappa\) is strongly inaccessible). We define \(\beta_{\alpha}\in\mathbf{W}\) as the constant function with value \(\top\) and domain \(W_{\overline{\eta}_{\alpha}}^{\mathbb{A}}\).
Using the calculus we can show that there is an element \(r\in\Sigma\) not depending on \(\alpha\) such that \[\vdash r:\bigwedge_{u\in\partial_{0}(\alpha)}\left(\alpha(u)\rightarrow\overrightarrow{\exists}_{\gamma\in\mathbf{W}}\varphi(u,\gamma)\right)\rightarrow\] \[\bigwedge_{u\in\partial_{0}(\alpha)}\left(\alpha(u)\rightarrow\overrightarrow{\exists}_{w\in\partial_{0}(\beta_{\alpha})}(\beta_{\alpha}(w)\times\varphi(u,w))\right)\] and using this fact one can easily show that \(\mathbf{Col}_{\varphi}\) is validated in the model. ## 5 Relationship with forcing and realizability models of set theory In this section, we show that the implicative models of \(\mathbf{(I)ZF}\) constructed in the previous section encompass Heyting/Boolean-valued models for \(\mathbf{(I)ZF}\) [1, 2] and, up to logical equivalence, Friedman/Rosolini/McCarty realizability models for \(\mathbf{IZF}\) [4, 14, 11] as well as Krivine's realizability models of \(\mathbf{ZF}\) [7, 10]. ### The case of forcing When the parameterizing implicative algebra \(\mathbb{A}\) of our model is a complete Heyting/Boolean algebra (with a separator reduced to \(\{\top\}\)), existential quantifications \(\overrightarrow{\exists}_{i\in I}a_{i}\) coincide with suprema \(\bigvee_{i\in I}a_{i}\) whereas implicative conjunctions \(a\times b\) coincide with binary meets \(a\wedge b\), as shown in [12]. So that in this case, our implicative model of set theory boils down to the Heyting/Boolean-valued model of \(\mathbf{(I)ZF}\) induced by \(\mathbb{A}\), such as described e.g. in [1, 2]. Therefore forcing models of set theory (both in intuitionistic and classical logic) appear to be instances of our construction. ### The case of intuitionistic realizability The case of intuitionistic realizability corresponds to the implicative algebras \(\mathbb{A}\) that are _compatible with joins_, namely: the implicative algebras satisfying the additional requirement that \[\bigwedge_{i\in I}(a_{i}\to b)=\Bigl(\bigvee_{i\in I}a_{i}\Bigr)\to b\] for every family \((a_{i})_{i\in I}\) of elements of \(\mathbb{A}\) and for every \(b\) in \(\mathbb{A}\). Typical examples of implicative algebras that are compatible with joins are the ones coming from forcing (i.e. complete Heyting/Boolean algebras with a separator reduced to \(\{\top\}\)) as well as the implicative algebras induced by _combinatory algebras_ (CAs) or by _ordered combinatory algebras_ (OCAs). On the other hand, the implicative algebras coming from classical realizability are in general not compatible with joins. Note that unlike (possibly ordered) combinatory algebras, _partial combinatory algebras_ (PCAs) do not induce (full) implicative algebras, but _quasi-implicative algebras_ [12], in which one may have \((\top\to\top)\neq\top\). However, as shown in [12], it is always possible to complete a quasi-implicative algebra into an implicative algebra, simply by adding an extra top element, and without changing the underlying logic. (Indeed, the triposes associated to a quasi-implicative algebra and to its completion are isomorphic.) Moreover, when applying this completion mechanism to a quasi-implicative algebra that comes from a PCA, the resulting implicative algebra is always compatible with joins.
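For the comparison that follows, it may help to recall how \(\overrightarrow{\exists}\) is computed in an arbitrary implicative algebra; the display below is a sketch of the standard second-order encoding (cf. [12]), and the introduction term shown after it is our illustration rather than a notation used elsewhere in this paper. \[\overrightarrow{\exists}_{i\in I}a_{i}\;:=\;\bigwedge_{c\in A}\Bigl(\bigwedge_{i\in I}(a_{i}\to c)\;\to\;c\Bigr)\] For each \(i_{0}\in I\), the term \(\lambda x.\lambda y.yx\) realizes \(a_{i_{0}}\to\overrightarrow{\exists}_{i\in I}a_{i}\), while instantiating \(c:=\bigvee_{i\in I}a_{i}\) in the encoding explains why the first of the two entailments displayed below holds in any implicative algebra; it is only the converse entailment that requires compatibility with joins.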
In an implicative algebra \(\mathbb{A}\) that is compatible with joins, existential quantification \(\overrightarrow{\exists}\) may not coincide with supremum \(\bigvee\), but both constructions are logically equivalent in the sense that \[\bigwedge_{(b_{i})_{i\in I}}\Bigl(\overrightarrow{\exists}_{i\in I}b_{i}\to\bigvee_{i\in I}b_{i}\Bigr)\ \in\ \Sigma\qquad\text{and}\qquad\bigwedge_{(b_{i})_{i\in I}}\Bigl(\bigvee_{i\in I}b_{i}\to\overrightarrow{\exists}_{i\in I}b_{i}\Bigr)\ \in\ \Sigma\] (note that the first of these two properties actually holds in any implicative algebra). Now, if we define on \(\mathbf{W}\) a new interpretation \(\|-\|^{\mathsf{J}}\) of the language of set theory replacing \(\overrightarrow{\exists}\) by \(\bigvee\) in the definition of the interpretation \(\|-\|\), we easily show (by a straightforward induction) that for all formulas in context \(\varphi\,[\underline{x}]\), both denotations \(\|\varphi\,[\underline{x}]\|\) and \(\|\varphi\,[\underline{x}]\|^{\mathsf{J}}\) are equivalent, in the sense that \[\|\varphi\,[\underline{x}]\|\ \vdash_{\Sigma[\mathbf{W}^{n}]}\ \|\varphi\,[\underline{x}]\|^{\mathsf{J}}\qquad\text{and}\qquad\|\varphi\,[\underline{x}]\|^{\mathsf{J}}\ \vdash_{\Sigma[\mathbf{W}^{n}]}\ \|\varphi\,[\underline{x}]\|\,\] where \(n\) is the length of \(\underline{x}\). The main interest of the new interpretation is that when the implicative algebra \(\mathbb{A}\) comes from a CA, an OCA, or even a PCA through the completion mechanism mentioned above, the alternative interpretation \(\|-\|^{\mathsf{J}}\) coincides exactly with the Friedman/Rosolini/McCarty realizability interpretation [4, 14, 11]. Therefore, intuitionistic realizability models appear to be equivalent to some instances of our construction. ### The case of classical realizability As shown by Krivine [7, 10], classical realizability models of \(\mathbf{ZF}\) can be constructed from _classical realizability algebras_ [9], or from the (slightly simpler) _abstract Krivine structures_ (AKSs) introduced by Streicher in [15]. Again, both structures are easily reformulated as implicative algebras [12], which makes it possible to compare Krivine's model construction with ours. Moreover, it has been shown in [12] that every classical implicative algebra is equivalent (from the point of view of the induced triposes) to one of Streicher's AKSs, which shows that, at least from a conceptual point of view, the models arising from classical implicative algebras are essentially the same as the ones arising from classical realizability. However, the relationship between Krivine's classical realizability models of ZF and our implicative models of ZF is much more intricate than in the intuitionistic case, due to reasons of _polarity_ we now need to explain. For that, let us first recall that in our construction, a name in \(\mathbf{W}\) is a partial function \(\alpha\in\mathsf{Part}(\mathbf{W},A)\) associating to each name \(\beta\in\partial_{0}(\alpha)\) (in the domain of \(\alpha\)) a truth value \(\alpha(\beta)\in A\) that intuitively expresses 'how much \(\beta\) belongs to \(\alpha\)'. So that when \(\beta\notin\partial_{0}(\alpha)\), it is convenient to think that such a truth value implicitly defaults to \(\bot\) (that is: '\(\beta\) does not belong to \(\alpha\)').
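To illustrate this convention concretely (and only the logic, not the realizers), one can run the recursive clauses of the interpretation in the degenerate case where \(\mathbb{A}\) is the two-element Boolean algebra: there \(\times\) collapses to conjunction, \(\overrightarrow{\exists}\) to a finite disjunction and universal quantification to a finite conjunction, and the default truth value \(\bot\) outside the domain of a name becomes explicit. The following Python sketch is ours (the helper names `mem`, `sub`, `eq` are not the paper's), with equality taken as mutual inclusion, mirroring the recursive clauses recalled below for Krivine's variant.

```python
# Toy model of the hereditary names W when the implicative algebra A is the
# two-element Boolean algebra {False, True}. Names are hashable partial maps
# {sub-name: truth value}; a value implicitly defaults to False ('bottom')
# outside the domain. Illustrative sketch only.
from functools import lru_cache

def name(*pairs):
    """Build a hashable name: a partial map from names to truth values."""
    return frozenset(pairs)

def dom(alpha):
    return [t for (t, _) in alpha]

def val(alpha, t):
    # alpha(t), defaulting to False when t is outside the domain of alpha
    return next((v for (u, v) in alpha if u == t), False)

def implies(x, y):
    return (not x) or y

@lru_cache(maxsize=None)
def mem(a, b):
    # ||a in b|| = EXISTS t in dom(b). b(t) AND (t = a)
    return any(val(b, t) and eq(t, a) for t in dom(b))

@lru_cache(maxsize=None)
def sub(a, b):
    # ||a subseteq b|| = FORALL t in dom(a). a(t) -> (t in b)
    return all(implies(val(a, t), mem(t, b)) for t in dom(a))

@lru_cache(maxsize=None)
def eq(a, b):
    # ||a = b|| = (a subseteq b) AND (b subseteq a)
    return sub(a, b) and sub(b, a)

# Two names for the empty set: the empty map, and a map sending a sub-name
# to False. The interpretation identifies them, as extensionality demands.
empty1 = name()
empty2 = name((name(), False))
assert eq(empty1, empty2)

# The name {emptyset} contains both representations of the empty set.
singleton = name((empty1, True))
assert mem(empty2, singleton)
```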
As a matter of fact, Krivine's classical realizability interpretation of ZF can be carried out entirely within the same universe \(\mathbf{W}\) as our interpretation--under the hypothesis that the parameterizing implicative algebra \(\mathbb{A}\) is classical, of course. However, the crucial point is that in Krivine's framework, the elements of \(\mathbf{W}\) definitely do not have the same meaning as in ours, since for any two names \(\alpha\in\mathbf{W}\) and \(\beta\in\partial_{0}(\alpha)\), the truth value \(\alpha(\beta)\in A\) expresses 'how much \(\beta\) _does not_ belong to \(\alpha\)' (according to Krivine). So that when \(\beta\notin\partial_{0}(\alpha)\), such a truth value now implicitly defaults to \(\top\). Formally, Krivine's classical realizability interpretation, written \(\|-\|^{\mathsf{K}}\), takes place in a variant of the language of set theory where the membership predicate \(\in\) has been replaced by a _negated membership predicate_ \(\notin\), from which the usual membership predicate is defined by \(x\in y:\equiv\neg(x\notin y)\). To each pair of names \(\alpha,\beta\in\mathbf{W}\), Krivine associates two truth values \(\alpha\notin^{\mathsf{K}}_{\mathbf{W}}\beta\) and \(\alpha=^{\mathsf{K}}_{\mathbf{W}}\beta\), that are defined (again) by induction on the ranks of \(\alpha\) and \(\beta\), letting: \[\alpha\notin^{\mathsf{K}}_{\mathbf{W}}\beta\ :=\ \bigwedge_{t\in\partial_{0}(\beta)}(t=^{\mathsf{K}}_{\mathbf{W}}\alpha\to\beta(t))\] \[\alpha=^{\mathsf{K}}_{\mathbf{W}}\beta\ :=\ (\alpha\subseteq^{\mathsf{K}}_{\mathbf{W}}\beta)\times(\beta\subseteq^{\mathsf{K}}_{\mathbf{W}}\alpha),\quad\text{where:}\quad(\alpha\subseteq^{\mathsf{K}}_{\mathbf{W}}\beta)\ :=\ \bigwedge_{t\in\partial_{0}(\alpha)}(t\notin^{\mathsf{K}}_{\mathbf{W}}\beta\to\alpha(t))\] Notice that Krivine's definition of \(\alpha\notin^{\mathsf{K}}_{\mathbf{W}}\beta\) corresponds to the negation of our definition of \(\alpha\in_{\mathbf{W}}\beta\), whereas his definition of \(\alpha\subseteq^{\mathsf{K}}_{\mathbf{W}}\beta\) is exactly the contraposition of our definition of \(\alpha\subseteq_{\mathbf{W}}\beta\)--keeping in mind that in Krivine's setting, \(\beta(t)\) and \(\alpha(t)\) have the same meaning as \(\neg\beta(t)\) and \(\neg\alpha(t)\) in ours (see Footnote 10). Once the three primitive relations \(\alpha\notin^{\mathsf{K}}_{\mathbf{W}}\beta\), \(\alpha\subseteq^{\mathsf{K}}_{\mathbf{W}}\beta\) and \(\alpha=^{\mathsf{K}}_{\mathbf{W}}\beta\) have been recursively defined, the usual notion of membership is then recovered letting \(\alpha\in^{\mathsf{K}}_{\mathbf{W}}\beta:=\neg(\alpha\notin^{\mathsf{K}}_{\mathbf{W}}\beta)\), and the rest of the interpretation (written \(\|-\|^{\mathsf{K}}\)) is defined the same way as in our framework (cf. Section 4). Footnote 10: The main benefit of focusing on \(\notin\) rather than on \(\in\) is that the recursive interpretations of \(\notin\) and \(\subseteq\) only rely on universal quantification, whose interpretation is much simpler than that of existential quantification. The cost of such a design is that it relies on many contrapositions, which require classical reasoning. Now if we want to relate Krivine's interpretation with ours, we need to formalize the fact that the same name \(\alpha\in\mathbf{W}\) has different meanings according to Krivine and according to us.
For that, we introduce a _set-negation operator_ \((\alpha\mapsto\tilde{\alpha}):\mathbf{W}\to\mathbf{W}\) that is defined by induction on the rank of \(\alpha\in\mathbf{W}\), letting: \[\tilde{\alpha}\ :=\ \big\{\big(\tilde{\beta},\neg\alpha(\beta)\big)\ :\ \beta\in\partial_{0}(\alpha)\big\}\] Intuitively, this operator associates to each name \(\alpha\in\mathbf{W}\) another name \(\tilde{\alpha}\in\mathbf{W}\) that has the same meaning in our framework (resp. in Krivine's framework) as \(\alpha\) in Krivine's (resp. in ours). In what follows, it is convenient to treat set-negation as a unary function symbol as well, written \(\tilde{x}\) and interpreted as the set-negation operator \((\alpha\mapsto\tilde{\alpha}):\mathbf{W}\to\mathbf{W}\). Using the fact that the parameterizing implicative algebra \(\mathbb{A}\) is classical, we easily check that set-negation is involutive (w.r.t. both interpretations), in the sense that: \[\|\forall x\,(x=\tilde{\tilde{x}})\|\ \in\ \Sigma\qquad\text{and}\qquad\|\forall x\,(x=\tilde{\tilde{x}})\|^{\mathsf{K}}\ \in\ \Sigma\,.\] We can now prove that both interpretations \(\|-\|\) and \(\|-\|^{\mathsf{K}}\) are equivalent _up to set-negation of parameters_, in the sense that for all formulas in context \(\varphi\,[\underline{x}]\), we have: \[\|\varphi\,[\underline{\tilde{x}}]\|\ \vdash_{\Sigma[\mathbf{W}^{n}]}\ \|\varphi\,[\underline{x}]\|^{\mathsf{K}}\qquad\text{and}\qquad\|\varphi\,[\underline{x}]\|^{\mathsf{K}}\ \vdash_{\Sigma[\mathbf{W}^{n}]}\ \|\varphi\,[\underline{\tilde{x}}]\|\,\] where \(n\) is the length of \(\underline{x}\), and writing \(\varphi[\tilde{\underline{x}}]\) for \((\varphi[\tilde{\underline{x}}/\underline{x}])[\underline{x}]\). In particular, for each closed formula \(\varphi\) of ZF, we have: \[\|\varphi\|\ \vdash_{\Sigma}\ \ \|\varphi\|^{\mathsf{K}}\qquad\text{and}\qquad\|\varphi\|^{\mathsf{K}}\ \vdash_{\Sigma}\ \ \|\varphi\|\.\] So that classical realizability models of ZF are also equivalent to some instances of our construction. ## 6 Models of (I)ZF in a class of toposes In any elementary topos \(\mathcal{E}\) one can interpret first-order languages using the doctrine of subobjects. In particular, an interpretation of the language of \((\mathbf{I})\mathbf{ZF}\) is given by an object \(V\) of \(\mathcal{E}\) which interprets the universe of sets and by a subobject \(\varepsilon\) of \(V\times V\) which interprets the membership relation. Equality is always interpreted as the diagonal subobject of \(V\times V\). If all axioms of \((\mathbf{I})\mathbf{ZF}\) are validated by the interpretation (that is, if every axiom is interpreted as the maximum subobject) then we get a model of \(\mathbf{IZF}\), which is in fact a model of \(\mathbf{ZF}\) when \(\mathcal{E}\) is boolean. When the topos \(\mathcal{E}\) is obtained as the result of the tripos-to-topos construction from a tripos \(\mathsf{P}\), the internal logic of \(\mathcal{E}\) can be reduced to the logic of the tripos \(\mathsf{P}\) as explained in detail in [16]. Indeed, in this case the objects of \(\mathcal{E}\) are pairs \((A,\rho)\) where \(A\) is an object of the domain of \(\mathsf{P}\) and \(\rho\in\mathsf{P}(A\times A)\) is a partial equivalence relation on \(A\) with respect to the logic of \(\mathsf{P}\), and the subobjects of \((A,\rho)\) correspond to the predicates \(\psi\in\mathsf{P}(A)\) which respect the relation \(\rho\).
This correspondence extends to a correspondence of connectives and quantifiers between the logics of \(\mathsf{P}\) and \(\mathcal{E}\). The implicative model we produced in the previous section using an implicative algebra \(\mathbb{A}\) can be seen as a model of \((\mathbf{I})\mathbf{ZF}\) in the corresponding implicative tripos \(\mathsf{P}_{\mathbb{A}}\). Moreover, since \(\mathbf{W}\) is a set and \([=_{\mathbf{W}}]\) is an equivalence relation on \(\mathbf{W}\) with respect to the logic of \(\mathsf{P}_{\mathbb{A}}\) by Lemma 4.2 (i),(iii),(vi), we have that the pair \((\mathbf{W},[=_{\mathbf{W}}])\) is an object of the topos \(\mathbf{Set}[\mathsf{P}_{\mathbb{A}}]\). Finally, as a consequence of Lemma 4.2 (iv),(v), the relation \([\in_{\mathbf{W}}]\) gives rise to a subobject \(\varepsilon_{\mathbf{W}}\) of \((\mathbf{W},[=_{\mathbf{W}}])\times(\mathbf{W},[=_{\mathbf{W}}])\) in \(\mathbf{Set}[\mathsf{P}_{\mathbb{A}}]\). By reducing the internal logic of \(\mathbf{Set}[\mathsf{P}_{\mathbb{A}}]\) to that of \(\mathsf{P}_{\mathbb{A}}\) one obtains that the object \((\mathbf{W},[=_{\mathbf{W}}])\) and the subobject \(\varepsilon_{\mathbf{W}}\) define an interpretation of the language of \((\mathbf{I})\mathbf{ZF}\) in \(\mathbf{Set}[\mathsf{P}_{\mathbb{A}}]\) which is a model of set theory in there. We thus have proved the following: **Theorem 6.1**: _Every topos \(\mathcal{E}\) obtained by means of the tripos-to-topos construction from the implicative tripos of an implicative algebra \(\mathbb{A}=(A,\leq,\rightarrow,\Sigma)\) such that \(|A|<\kappa\) for some strongly inaccessible cardinal \(\kappa\) hosts a model of \(\mathbf{IZF}\). If \(\mathbb{A}\) is classical, then \(\mathcal{E}\) hosts a model of \(\mathbf{ZF}\)._ Now, as a consequence of Theorem 3.1, we obtain the following: **Corollary 6.2**: _If for every cardinal \(\kappa^{\prime}\) there exists a strongly inaccessible cardinal \(\kappa\) such that \(\kappa^{\prime}<\kappa\), then every topos obtained from a \(\mathbf{Set}\)-based tripos by means of the tripos-to-topos construction hosts a model of \(\mathbf{IZF}\) (which is a model of \(\mathbf{ZF}\) when the topos is boolean)._ _Acknowledgements_ The authors would like to acknowledge T. Streicher and F. Ciraulo for useful discussions.
2308.05457
Alpha-Clustering in Nuclei and Its Impact on Nuclear Symmetry Energy
Nuclear symmetry energy is a fundamental quantity currently under intense investigation in both nuclear physics and astrophysics. The {\it softness} or {\it stiffness} of symmetry energy is still under debate and the extraction of symmetry energy from neutron skin thickness $R_{\rm skin}$ remains a challenge. Parity-violating measurements PREX and CREX provide important opportunities for constraining $R_{\rm skin}$ in $^{208}$Pb and $^{48}$Ca. We investigate the occurrence of $\alpha$-cluster at the surface of nuclei and its impact on the extraction of symmetry energy from $R_{\rm skin}$. Our result indicates that the $\alpha$-clustering probability in $^{208}$Pb is small and the extracted density slope of symmetry energy $L$ is almost unaffected. In contrast, the $\alpha$-clustering probability in $^{48}$Ca is sizeable and the corresponding correction to $L$ should be taken into account. This correction progressively increases with the $\alpha$-clustering probability, leading to a modification of the $L$-$R_{\rm skin}$ correlation, a fact that may have important implications in constraining nuclear symmetry energy.
Shuo Yang, Ruijia Li, Chang Xu
2023-08-10T09:31:08Z
http://arxiv.org/abs/2308.05457v1
# \(\alpha\)-Clustering in Nuclei and Its Impact on Nuclear Symmetry Energy ###### Abstract Nuclear symmetry energy is a fundamental quantity currently under intense investigation in both nuclear physics and astrophysics. The _softness_ or _stiffness_ of symmetry energy is still under debate and the extraction of symmetry energy from neutron skin thickness \(R_{\rm skin}\) remains a challenge. Parity-violating measurements PREX and CREX provide important opportunities for constraining \(R_{\rm skin}\) in \({}^{208}\)Pb and \({}^{48}\)Ca. We investigate the occurrence of \(\alpha\)-cluster at the surface of nuclei and its impact on the extraction of symmetry energy from \(R_{\rm skin}\). Our result indicates that the \(\alpha\)-clustering probability in \({}^{208}\)Pb is small and the extracted density slope of symmetry energy \(L\) is almost unaffected. In contrast, the \(\alpha\)-clustering probability in \({}^{48}\)Ca is sizeable and the corresponding correction to \(L\) should be taken into account. This correction progressively increases with the \(\alpha\)-clustering probability, leading to a modification of the \(L\)-\(R_{\rm skin}\) correlation, a fact that may have important implications in constraining nuclear symmetry energy. pacs: 21.65.Ef, 21.10.Gv, 21.30.Fe, 21.60.Gx _Introduction.-_ The formation of compact clusters (_e.g._ \(\alpha\)-clusters) is an interesting feature of nuclear quantum many-body systems and plays an essential role in many important problems of astrophysics. The phenomena of \(\alpha\)-clustering are abundant in excited states of light nuclei close to the decay threshold [1]. One of the famous instances is the \(3\alpha\)-structure Hoyle state in \({}^{12}\)C, which unlocks the puzzle of the production of heavy elements inside stars [2; 3]. In contrast to light nuclei, the \(\alpha\)-clustering problem in heavy nuclei is still not fully solved and the microscopic treatment of cluster dynamics beyond mean-field theory is a great challenge [4; 5; 6; 7; 8; 9; 10]. Recently, the PREX collaboration reported the measurement of parity-violating asymmetry \(A_{PV}\) and deduced a rather large neutron skin thickness \(R_{\rm skin}\) in \({}^{208}\)Pb (PREX-2) [11]. A thick skin in \({}^{208}\)Pb suggests a very stiff symmetry energy, in contrast to the previous constraints obtained from many other observations [12; 13; 14; 15; 16; 17]. Very recently, the CREX collaboration has successfully conducted the parity-violating experiment in \({}^{48}\)Ca and deduced a thin \(R_{\rm skin}\) [18], suggesting a soft symmetry energy. Much effort has been expended on attempting to reconcile these seemingly contradictory results. Special attention has been devoted to the problem of \(\alpha\)-clustering at the surface of nuclei, which is expected to affect the density slope of symmetry energy \(L\) [19]. \(L\) is critical for understanding not only the structure of rare isotopes and the reaction mechanism of heavy-ion collisions, but also the structure and the composition of neutron stars [20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. By using the Hugenholtz-Van Hove (HVH) theorem, \(L\) can be decomposed in a unique way into kinetic energy, isoscalar potential and isovector potential contributions [30; 31]. As a fundamental relation for an interacting self-bound infinite Fermi system, this theorem does not depend upon the precise nature of the interaction.
While the kinetic energy and isoscalar potential contributions are relatively well constrained, the isovector potential contribution still has significant uncertainties. Owing to the existence of the isovector potential, more neutrons are pushed from the inner region of finite nuclei outwards to the surface region, and thus contribute to \(R_{\rm skin}\). In this sense, \(L\) is related intrinsically to \(R_{\rm skin}\). The correlation between \(R_{\rm skin}\) and \(L\) could be modified by the occurrence of \(\alpha\)-clusters [32]. This is because the \(\alpha\)-clusters may appear in the low density region, _i.e._ the surface of the finite nucleus, and their impact progressively increases with the \(\alpha\)-clustering probability [33; 34; 35; 36; 37; 38; 39]. Inside the core, \(\alpha\)-clustering is suppressed and its four nucleons are considered to move almost independently in a shell-model mean-field potential. The single-particle states are populated up to the Fermi energies of the neutrons or protons and pairing correlation exists among the single-particle orbits. Pairing remains at high densities but the \(\alpha\)-cluster dissolves and its four nucleons (2n+2p) turn into single-particle motions forming the continuum of scattering states. In this Letter, we address the question of whether \(\alpha\)-clustering at the surface of \({}^{208}\)Pb and \({}^{48}\)Ca has a certain impact on \(R_{\rm skin}\) and \(L\). We use the quartetting wave function approach (QWFA) to do so because it treats correctly both the intrinsic motion among the four nucleons in the \(\alpha\)-cluster and the relative motion of the \(\alpha\)-cluster versus the core [34; 35; 36; 37]. Strong closed shell structure effects and complex derivative terms of the intrinsic wave function are properly taken into account in QWFA. The key quantity for the clustering modification of the \(R_{\rm skin}\)-\(L\) correlation is the \(\alpha\)-cluster formation probability, which is quantitatively obtained by solving the coupled equations of a first-principle approach to nuclear many-body systems without adjusting any parameter. _Intrinsic energy of \(\alpha\)-cluster embedded in nuclear medium.-_ Firstly, we simulate the \(\alpha\)-cluster formation at the surface of the core nucleus by considering a low-density nuclear medium, in which the \(\alpha\)-like four-nucleon correlations are described by the in-medium Schrodinger equation. The corresponding wave function of four nucleons is decomposed into a center of mass (c.o.m.) motion part \(\Psi^{\rm com}\) and an intrinsic motion part \(\varphi^{\rm intr}\), which are coupled together with complex gradient terms. Such gradient terms are difficult to handle, but vanish in the case of a homogeneous nuclear medium. With the Jacobi momenta \({\bf p}_{1}={\bf P}/4+{\bf k}/2+{\bf k}_{12},{\bf p}_{2}={\bf P}/4+{\bf k}/2-{\bf k}_{12},{\bf p}_{3}={\bf P}/4-{\bf k}/2+{\bf k}_{34},{\bf p}_{4}={\bf P}/4-{\bf k}/2-{\bf k}_{34}\), the in-medium wave equation for the intrinsic motion is reduced to [33] \[\frac{\hbar^{2}}{2m}[k^{2}+2k_{12}^{2}+2k_{34}^{2}]\varphi^{\rm intr}({\bf k},{\bf k}_{12},{\bf k}_{34})+\int\frac{d^{3}k^{\prime}}{(2\pi)^{3}}\frac{d^{3}k^{\prime}_{12}}{(2\pi)^{3}}\frac{d^{3}k^{\prime}_{34}}{(2\pi)^{3}}V_{4}\varphi^{\rm intr}({\bf k^{\prime}},{\bf k^{\prime}}_{12},{\bf k^{\prime}}_{34})=(W^{\rm ext}+W^{\rm intr})\varphi^{\rm intr}({\bf k},{\bf k}_{12},{\bf k}_{34}), \tag{1}\] where the centroid of the \(\alpha\)-cluster is considered to be at rest (\({\bf P}\)=0).
\(V_{4}\) is the effective in-medium interaction that contains the external mean field \(V_{4}^{\rm ext}\) as well as the intrinsic NN interaction modified by the Pauli blocking \(V_{4}^{\rm intr}=\Theta(p_{1}-k_{F})\Theta(p_{2}-k_{F})V_{NN}({\bf p}_{1},{\bf p}_{2};{\bf p^{\prime}_{1}},{\bf p^{\prime}_{2}})\delta({\bf p}_{3}-{\bf p^{\prime}_{3}})\delta({\bf p}_{4}-{\bf p^{\prime}_{4}})+5\) permutations. The NN interaction is defined as a Gaussian form factor \(V_{NN}({\bf p}_{1},{\bf p}_{2};{\bf p^{\prime}_{1}},{\bf p^{\prime}_{2}})=\lambda{\rm e}^{-\frac{({\bf p}_{1}-{\bf p}_{2})^{2}}{4\gamma^{2}}}{\rm e}^{-\frac{({\bf p^{\prime}_{1}-{\bf p^{\prime}_{2}}})^{2}}{4\gamma^{2}}}\,\delta({\bf p}_{1}+{\bf p}_{2}-{\bf p^{\prime}_{1}}-{\bf p^{\prime}_{2}})\) where the potential parameters \(\lambda=1449.6\) MeV fm\({}^{3}\) and \(\gamma=1.152\) fm\({}^{-1}\) [33]. The minimum of the intrinsic energy \(W^{\rm intr}\) has to be found for each density \(\rho\) with the Fermi-blocked Gaussian ansatz \(\varphi^{\rm intr}({\bf p}_{1},{\bf p}_{2},{\bf p}_{3},{\bf p}_{4})=\frac{1}{\sqrt{N}}\varphi_{\tau_{1}}({\bf p}_{1})\varphi_{\tau_{1}}({\bf p}_{2})\varphi_{\tau_{1}}({\bf p}_{3})\varphi_{\tau_{1}}({\bf p}_{4})\delta({\bf p}_{1}+{\bf p}_{2}+{\bf p}_{3}+{\bf p}_{4})\) where \(N\) is the normalization factor. The single nucleon wave function \(\varphi_{\tau}({\bf p})\) is given by \({\rm e}^{-\frac{{\bf p}^{2}}{2a}}\Theta\left[p-k_{F}\right]\) with the single variational parameter \(a\). The minimum energy of a free \(\alpha\)-cluster is \(W^{\rm intr}=-28.3\) MeV at \(a=0.535\) fm\({}^{-2}\) (see Fig.1(a)). The intrinsic energy \(W^{\rm intr}\) is shifted at finite density of the surrounding nuclear matter owing to the Pauli blocking. The bound state disappears and the four nucleons become uncorrelated at the critical density \(\rho_{c}=0.03\) fm\({}^{-3}\) (see Fig.1(d)). Note that the matter density distribution of the \(\alpha\)-cluster at the surface of the finite nucleus depends also on the surrounding density, \(\rho_{\alpha}(r,\rho)=4(\frac{4a(\rho)}{3\pi})^{3/2}{\rm e}^{-\frac{4a(\rho)}{3}r^{2}}\), by treating correctly both the energy shift and the Pauli blocking effect. _Density evolution of \(\alpha\)-cluster and formation probability in finite nuclei.-_ The density evolution of an \(\alpha\)-cluster approaching the core nucleus is depicted in Fig.2. The strong binding of the \(\alpha\)-cluster is gradually reduced by the energy shift due to Pauli blocking after it feels the tail of the core density. As shown in Fig.2, the variational parameter \(a\) reflecting the size of the \(\alpha\)-cluster is decreased from 0.534 to 0.355 when it merges with the continuum of single-particle states. Eventually the \(\alpha\)-cluster dissolves and its four nucleons go over into single-particle states with pair correlations in the open shells on top of the core. Before that, the \(\alpha\)-cluster remains a relatively compact entity with small extension even up to the critical density \(\rho_{c}\) (see Fig.2). The c.o.m. motion of the \(\alpha\)-cluster is introduced as a dynamical collective degree of freedom, which simplifies the treatment of correlated nuclear systems beyond the mean-field approximation.
Figure 1: The variation of intrinsic energies of an \(\alpha\)-cluster in free space (a) and in homogeneous nuclear matter (b)-(d). A critical transition occurs at \(\rho_{c}=0.03\) fm\({}^{-3}\) where the \(\alpha\)-cluster dissolves and its four nucleons become uncorrelated (d).
By separating the intrinsic
motion from the c.o.m. motion, the c.o.m. wave function of the \(\alpha\)-cluster follows the equation [36; 37], \[\begin{split}&-\frac{\hbar^{2}}{2Am}\nabla_{R}^{2}\Psi^{\rm com}(\mathbf{R})-\frac{\hbar^{2}}{Am}\int ds_{j}\varphi^{\rm intr,*}(\mathbf{R},\mathbf{s}_{j})[\nabla_{R}\varphi^{\rm intr}(\mathbf{R},\mathbf{s}_{j})][\nabla_{R}\Psi^{\rm com}(\mathbf{R})]\\ &-\frac{\hbar^{2}}{2Am}\int ds_{j}\varphi^{\rm intr,*}(\mathbf{R},\mathbf{s}_{j})[\nabla_{R}^{2}\varphi^{\rm intr}(\mathbf{R},\mathbf{s}_{j})]\Psi^{\rm com}(\mathbf{R})+\int dR^{\prime}W(\mathbf{R},\mathbf{R}^{\prime})\Psi^{\rm com}(\mathbf{R}^{\prime})=E\Psi^{\rm com}(\mathbf{R}),\end{split} \tag{2}\] where the second and third terms are complex derivative terms and no investigations of such terms have been performed in previous research. It can be strictly proved that the second term vanishes if the number of nucleons (\(A=4\)) embedded in the medium is conserved. In contrast, the third term is nontrivial and is rather difficult to solve (9-fold integral). For the first time, we take this derivative term into account in QWFA and find that this term does affect the final \(\alpha\)-cluster formation probability. The fourth term is the effective potential describing the c.o.m. motion of the \(\alpha\)-cluster under the influence of Pauli blocking with the surrounding medium. The inner c.o.m. effective potential \(W(R<R_{c})\) (\(R_{c}\) is the critical radius corresponding to \(\rho_{c}\)) is constructed from the shell model wave functions of four nucleons forming the \(\alpha\)-cluster. Note that only states near the Fermi energy can form an \(\alpha\)-like cluster because these shell model states extend to the low-density regions. The inner effective potential \(W(R<R_{c})\) joins with the outer one \(W(R>R_{c})=W(R)^{\rm ext}+W(R)^{\rm intr}\) at \(R=R_{c}\). An important feature of \(W(R)\) is that a pocket is formed in the surface region (see the small panel in Fig.3), resulting from the competition between strong nuclear force attraction and repulsive Pauli blocking. The pocket plays an essential role in the formation of the \(\alpha\)-cluster at the surface of the core nucleus. As seen in Fig.3, the normalized c.o.m. wave function shows a small peak around the pocket region. This is in agreement with the microscopic calculations on \(\alpha\)-clustering in Refs.[40; 41; 42]. By integrating the c.o.m. wave function from the critical radius \(R_{c}\) to infinity, the \(\alpha\)-cluster formation probability can be microscopically obtained, \(P_{\alpha}=\int_{R>R_{c}}d^{3}\mathbf{R}|\Psi^{\rm com}(\mathbf{R})|^{2}\) [36; 37]. We go beyond the Thomas-Fermi approximation by taking the closed shell structure effects into account. The \(\alpha\)-cluster formation probability is expected to vary dramatically across the major shell closures. Indeed, it is found that the \(\alpha\)-clustering in doubly magic nuclei like \({}^{40}\)Ca, \({}^{132}\)Sn, and \({}^{208}\)Pb is significantly hindered by shell effects (see Fig.4). The _realistic_ \(\alpha\)-cluster formation probability in \({}^{208}\)Pb is rather small, \(P_{\alpha}=9.3\times 10^{-3}\). This is quite different from its neighbors \({}^{44}\)Ti, \({}^{136}\)Te, and \({}^{212}\)Po, where enhanced \(\alpha\)-cluster formation probabilities are found by using exactly the same QWFA formalism.
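Since \(P_{\alpha}\) is obtained by integrating the c.o.m. wave function beyond the critical radius, the computation itself reduces to a radial quadrature once \(\Psi^{\rm com}\) is known. The following is a hedged numerical sketch in Python; the toy wave function (a broad interior component plus a small surface bump near the pocket) and the value of \(R_{c}\) are illustrative stand-ins, not the actual QWFA solution.

```python
# Hedged sketch of P_alpha = integral over R > R_c of |Psi_com(R)|^2 d^3R,
# assuming a spherically symmetric c.o.m. wave function on a radial grid.
import numpy as np

R = np.linspace(0.0, 15.0, 3000)     # radial grid [fm]
R_c = 6.5                            # critical radius [fm] (illustrative)

# Toy |Psi|: broad interior part plus a small peak in the pocket region
psi = np.exp(-0.5 * (R / 3.0) ** 2) + 0.15 * np.exp(-0.5 * ((R - 7.5) / 0.8) ** 2)

w = 4.0 * np.pi * R**2 * psi**2      # radial probability density
w /= np.trapz(w, R)                  # normalize so that the total is 1

P_alpha = np.trapz(w[R > R_c], R[R > R_c])
print(f"P_alpha = {P_alpha:.3f}")    # fraction of |Psi|^2 located beyond R_c
```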
Figure 3: The normalized c.o.m. wave function of four nucleons forming the \(\alpha\)-cluster. The wave function in the range of \(0<R<R_{C}\) represents the c.o.m. wave function of four uncorrelated nucleons after dissolution. Only at the surface region with \(R>R_{C}\), the \(\alpha\)-cluster appears and the c.o.m. wave function with \(R>R_{C}\) corresponds to the formed \(\alpha\)-cluster, as marked by green color. The pocket region in the c.o.m. effective potential is denoted in the small panel.
_Impact of \(\alpha\)-clustering on \(R_{\rm skin}\) and \(L\)._- A direct relationship between \(L\) and the underlying single-nucleon potential \(V_{n/p}(\rho,\delta,k)=V_{0}(\rho,k)\pm V_{\rm sym}(\rho,k)\delta\) is revealed by the HVH theorem [30; 31]. The advantage of this strict relationship is that it can be used to determine \(L\) in a fully transparent way. At the saturation density \(\rho_{0}\), \(L\) can be reformulated by using the effective mass \(m^{*}\), \[L=\frac{2}{3}t(k_{F}^{0})+\frac{3}{2}V_{\rm sym}(\rho_{0},k_{F}^{0})+\frac{\partial V_{\rm sym}(\rho_{0},k)}{\partial k}|_{k_{F}^{0}}k_{F}^{0}, \tag{3}\] where the first term \(L(1)=\frac{\hbar^{2}k_{F}^{0\,2}}{3m^{*}}\) denotes the contributions from kinetic energy and isoscalar potential [30]. For the nucleon effective mass, we adopt the value of \(m^{*}/m=0.70\pm 0.05\) widely used in the literature (see, _e.g._, Ref. [43]). The isovector potential \(V_{\rm sym}(\rho_{0},k)\) can be deduced from the real part of global optical potentials, which is basically parameterized in the Woods-Saxon form, _i.e._ \(V(r)=-V_{0}[1\pm\kappa(\frac{N-Z}{A})]/[1+\exp(\frac{r-R_{0}A^{1/3}}{a})]\) ("+" for protons and "\(-\)" for neutrons). The second term \(L(2)\) in Eq.(3) is determined by the product of the strength of the WS potential \(V_{0}\) and the isovector parameter \(\kappa\), _i.e._ \(L(2)=\frac{3}{2}V_{\rm sym}(\rho_{0},k_{F}^{0})=\frac{3}{2}\kappa\cdot V_{0}\). The WS potential does not have explicit energy (or momentum)-dependence. From the global optical potential (GOP) constrained by nuclear reaction data [30], the energy-dependence of the isovector potential is found to have a linear form \(V_{\rm sym}(\rho_{0},k)=22.75-0.21E(k)\) [30]. So the third term \(L(3)\) in Eq.(3) is negative because of the decreasing isovector potential with increasing energy. We use the same "QWFA" WS global optical potential to determine: 1) shell model states and densities in \({}^{208}\)Pb and \({}^{48}\)Ca, together with the Coulomb potential and \(ls\) coupling; 2) the density slope of symmetry energy \(L\) by using the HVH theorem. The "QWFA" parameterization is found to reproduce well the \(\alpha\)-cluster decay half-lives around the doubly magic nuclei \({}^{208}\)Pb and \({}^{100}\)Sn [36; 37]. The neutron skin thickness \(R_{\rm skin}=r_{n}^{\rm rms}-r_{p}^{\rm rms}\) is calculated directly from shell model density distributions. With an explicit \(\alpha\)-cluster degree of freedom, the r.m.s. radius is given by \(r^{\rm rms}=[\int r^{2}(\rho^{\rm cluster}(r)+\rho^{\rm core}(r))d^{3}r]^{1/2}\) where \(\rho^{\rm core}\) is the density distribution of protons or neutrons in the core.
The density distribution of two neutrons or two protons forming the \(\alpha\)-cluster is \[\rho^{\rm cluster}(r) = 2\int_{R<R_{c}}d^{3}{\rm R}[|\Psi^{\rm com}({\bf R})|^{2}\,\rho({\bf r})]\] \[+ \frac{1}{2}\int_{R>R_{c}}d^{3}{\rm R}[|\Psi^{\rm com}({\bf R})|^{2}\,\rho_{\alpha}({\bf r}-{\bf R};{\bf R})],\] where the \(\alpha\)-cluster formation at the surface of the core nucleus (\(R\geq R_{c}\)) is taken into account in the second integral and the spatial extension of the formed \(\alpha\)-cluster is well described by \(\rho_{\alpha}({\bf r}-{\bf R};{\bf R})\). We assume that the neutron skin thicknesses given by PREX-2 and CREX are all _measured_ quantities. Fig.5 shows the correlation between \(L\) and \(R_{\rm skin}\) with and without \(\alpha\)-clustering for \({}^{208}\)Pb. \(L\) increases with the increasing \(R_{\rm skin}\) and the \(L\)-\(R_{\rm skin}\) correlation is almost linear. We have checked the \(L\)-\(R_{\rm skin}\) correlation by using different WS parameterizations [44] and found that this behavior is general. As shown in Fig.5, the \(L\)-\(R_{\rm skin}\) correlation is modified significantly by assuming a large amount of formation probability (\(P_{\alpha}\)=1). However, the _realistic_ \(\alpha\)-cluster formation probability in \({}^{208}\)Pb is quite small, and thus its influence on \(L\) is negligible. By considering the _realistic_ \(\alpha\)-cluster formation probability in \({}^{208}\)Pb, the \(L\) value extracted from the PREX-2 data is 75.3 MeV (see Table 1). The uncertainties of all terms contributing to this \(L\) value are considered. The uncertainty in \(L(1)\) is due to the effective mass \(m^{*}\). With the \(m^{*}/m=0.70\pm 0.05\) we adopted, an error bar of \(+2.8/-2.4\) MeV is obtained. Since the \(R_{\rm skin}\) data of PREX-2 has a large error bar, it is no surprise that there is a large error bar associated with the \(L(2)\) term. The error bar associated with the \(L(3)\) term is obtained from the world data on nucleon-nucleus scatterings, (p, n) charge exchange reactions and single-particle energies of bound states [30]. Put all together, the error bar of \(L\) is approximately 24 MeV. In contrast to \({}^{208}\)Pb, the _realistic_ \(\alpha\)-cluster formation probability \(P_{\alpha}\) in \({}^{48}\)Ca is found to be 7.3\(\times 10^{-2}\), which is much larger than that in \({}^{208}\)Pb. Thus the impact of \(\alpha\)-clustering in \({}^{48}\)Ca on \(L\) cannot be ignored. The extracted \(L\) value with error bar from the CREX data is \(15.0^{+25.6}_{-25.0}\) MeV and the correction due to \(\alpha\)-clustering is of the order of 14%. This correction progressively increases with the \(\alpha\)-cluster formation probability \(P_{\alpha}\), which could be close to unity if the contributing shell model orbits are rather similar, especially for self-conjugate nuclei such as \({}^{44}\)Ti in Fig.4.
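To make the error budget above easier to follow, here is a hedged numerical sketch of how the three HVH terms in Eq. (3) combine. The Fermi momentum \(k_{F}^{0}\), the free-spectrum chain rule \(E(k)=(\hbar k)^{2}/2m^{*}\) used for \(L(3)\), and the placeholder value of \(\kappa V_{0}\) are assumptions of the sketch, not the paper's fitted "QWFA" inputs.

```python
# Back-of-the-envelope sketch of L = L(1) + L(2) + L(3) from Eq. (3).
# kF0 and the single-particle spectrum used in L(3) are simplifying
# assumptions; kappa_V0 is a placeholder value.
HBARC = 197.327          # hbar*c [MeV fm]
M_N   = 939.0            # nucleon mass [MeV]
kF0   = 1.33             # Fermi momentum at saturation [1/fm] (assumed)

def L_terms(m_star_ratio=0.70, kappa_V0=40.0):
    m_star = m_star_ratio * M_N
    L1 = (HBARC * kF0) ** 2 / (3.0 * m_star)   # (2/3) t(kF0)
    L2 = 1.5 * kappa_V0                        # (3/2) kappa * V0 (placeholder)
    # V_sym(rho0, k) = 22.75 - 0.21 E(k)  =>  dV_sym/dk = -0.21 dE/dk
    dEdk_at_kF = HBARC**2 * kF0 / m_star       # assuming E = (hbar k)^2 / 2m*
    L3 = -0.21 * dEdk_at_kF * kF0
    return L1, L2, L3

L1, L2, L3 = L_terms()
print(f"L(1) = {L1:.1f} MeV, L(2) = {L2:.1f} MeV, L(3) = {L3:.1f} MeV")
print(f"L    = {L1 + L2 + L3:.1f} MeV")
```

With these assumed inputs the kinetic-plus-isoscalar term comes out near 35 MeV and the energy-dependence term near \(-22\) MeV, so the isovector strength \(\kappa V_{0}\) effectively sets the overall scale of \(L\).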
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Nuclei & \(R_{\rm skin}\) [fm] & \(L\) [MeV] & \(P_{\alpha}\) & \(L\) [MeV] \\ & & no \(\alpha\)-cluster & & with \(\alpha\)-cluster \\ \hline \({}^{208}\)Pb & 0.283\(\pm\)0.071 & \(75.2^{+24.3}_{-24.5}\) & 9.3\(\times 10^{-3}\) & \(75.3^{+24.3}_{-24.6}\) \\ \hline \({}^{48}\)Ca & 0.121\(\pm\)0.050 & \(13.2^{+25.4}_{-24.9}\) & 7.3\(\times 10^{-2}\) & \(15.0^{+25.6}_{-25.0}\) \\ \({}^{*}\) & 0.071 (lower) & 1.7 & & 3.4 \\ \({}^{*}\) & 0.171 (upper) & 24.8 & & 26.8 \\ \hline \end{tabular} \({}^{*}\)The correction of \(L\) due to \(\alpha\)-clustering for the lower and upper limits of \(R_{\rm skin}\) is 100% and 8%, respectively. \end{table} Table 1: The extracted density slope parameter \(L\) by considering \(\alpha\)-clustering at the surface of \({}^{208}\)Pb (PREX-2) and \({}^{48}\)Ca (CREX).
Figure 5: Correlation between \(L\) and \(R_{\rm skin}\) with and without \(\alpha\)-clustering in \({}^{208}\)Pb. In the case of \(\alpha\)-clustering, a large amount of \(\alpha\)-cluster formation probability is assumed (\(P_{\alpha}\)=1).
_Conclusion.-_ \(\alpha\)-clustering survives at the surface of heavy nuclei, which is relevant to the neutron skin thickness of heavy nuclei. The latter is a precise tool in constraining the density slope of nuclear symmetry energy. The impact of \(\alpha\)-clustering on the neutron skin thickness depends closely on the amount of the formation probability. We emphasize that the approach presented here to calculate the formation probability is based on a first-principle approach to nuclear many-body systems. A proper treatment of the derivative terms of the intrinsic wave function has been performed and the spatial extension of the \(\alpha\)-cluster has been considered to better account for the correlation between \(L\) and \(R_{\rm skin}\). The present analysis shows that the \(L\) values deduced from the PREX-2 and CREX experiments are not consistent with each other, even with the \(\alpha\)-clustering effect included. We expect that a better account of model-dependence in extracting \(R_{\rm skin}\) from the parity-violating asymmetry \(A_{PV}\) will further improve the estimation of \(L\). Moreover, state-of-the-art approaches can be applied to describe shell model states and nuclear densities, and the Gaussian ansatz used in the variational calculations can be improved. Exact solution of the four-nucleon correlation near the critical density by including both self-energy corrections and Pauli blocking should be tackled in the future. _Acknowledgments.-_ Discussions with G. Ropke, Z. Ren, Y. Funaki, H. Horiuchi, A. Tohsaki, T. Yamada, B. Zhou, L. W. Chen and C. D. Roberts are gratefully acknowledged. This work is supported by the National Natural Science Foundation of China (Grant No. 11822503).
2301.09766
Constrained Reinforcement Learning for Dexterous Manipulation
Existing learning approaches to dexterous manipulation use demonstrations or interactions with the environment to train black-box neural networks that provide little control over how the robot learns the skills or how it would perform post training. These approaches pose significant challenges when implemented on physical platforms given that, during initial stages of training, the robot's behavior could be erratic and potentially harmful to its own hardware, the environment, or any humans in the vicinity. A potential way to address these limitations is to add constraints during learning that restrict and guide the robot's behavior during training as well as roll outs. Inspired by the success of constrained approaches in other domains, we investigate the effects of adding position-based constraints to a 24-DOF robot hand learning to perform object relocation using Constrained Policy Optimization. We find that a simple geometric constraint can ensure the robot learns to move towards the object sooner than without constraints. Further, training with this constraint requires a similar number of samples as its unconstrained counterpart to master the skill. These findings shed light on how simple constraints can help robots achieve sensible and safe behavior quickly and ease concerns surrounding hardware deployment. We also investigate the effects of the strictness of these constraints and report findings that provide insights into how different degrees of strictness affect learning outcomes. Our code is available at https://github.com/GT-STAR-Lab/constrained-rl-dexterous-manipulation.
Abhineet Jain, Jack Kolb, Harish Ravichandar
2023-01-24T00:31:28Z
http://arxiv.org/abs/2301.09766v1
# Constrained Reinforcement Learning for Dexterous Manipulation ###### Abstract Existing learning approaches to dexterous manipulation use demonstrations or interactions with the environment to train black-box neural networks that provide little control over how the robot learns the skills or how it would perform post training. These approaches pose significant challenges when implemented on physical platforms given that, during initial stages of training, the robot's behavior could be erratic and potentially harmful to its own hardware, the environment, or any humans in the vicinity. A potential way to address these limitations is to add constraints during learning that restrict and guide the robot's behavior during training as well as roll outs. Inspired by the success of constrained approaches in other domains, we investigate the effects of adding position-based constraints to a 24-DOF robot hand learning to perform object relocation using Constrained Policy Optimization. We find that a simple geometric constraint can ensure the robot learns to move towards the object sooner than without constraints. Further, training with this constraint requires a similar number of samples as its unconstrained counterpart to master the skill. These findings shed light on how simple constraints can help robots achieve sensible and safe behavior quickly and ease concerns surrounding hardware deployment. We also investigate the effects of the strictness of these constraints and report findings that provide insights into how different degrees of strictness affect learning outcomes. Our code is available at _https://github.com/GT-STAR-Lab/constrained-rl-dexterous-manipulation_. ## 1 Introduction Dexterous manipulation often involves the use of high degree-of-freedom robots to manipulate objects. Representative dexterous manipulation tasks include relocating objects, picking up arbitrarily shaped objects, and sequential interactions with articulated objects (e.g. unlatching and opening a door). Indeed, factors such as high-dimensional state spaces and complex interaction dynamics make these tasks challenging to automate. Classical control methods are difficult to apply to dexterous manipulation due to the manual effort required to design controllers in high-dimensional spaces. Prior work in dexterous manipulation has succeeded by using self-supervised methods in simulation and transferring learned policies to real robots [1]. Others have utilized demonstrations to improve reinforcement learning [17]. However, these approaches are hard to train on real robots, as initial robot behavior can be erratic and unsafe. In this work, we explore adding instance-specific constraints to an object relocation task (Fig. 1) that restrict and guide the robot's behavior during training as well as roll outs. Constrained Policy Optimization (CPO) is an effective method to solve constrained MDPs [1], built upon trust-region policy optimization (TRPO) [14]. We formulate a cylindrical boundary constraint for the initial motion of the robot hand towards the object (Fig. 2). The robot incurs a penalty when it moves outside the boundary. We find that using CPO with this simple geometric constraint can ensure the robot learns to move towards the object sooner than without constraints. Further, training with this constraint (CPO) requires a similar number of samples as its unconstrained counterpart (TRPO) to master the skill.
These findings shed light on how simple constraints can help robots achieve sensible and safe behavior quickly and ease concerns surrounding hardware deployment. We also investigate the effects of the strictness of these constraints and report findings that provide insights into how different degrees of strictness affect learning outcomes.
Figure 1: The relocation task in MuJoCo. This task requires the robot hand to pick up the blue ball from the tabletop and carry it to the green goal region.
## 2 Related Work Previous works explore self-supervised methods to manipulate objects by adding different types of constraints. To gently lift objects, tactile sensors have been used to constrain contact forces in a 24-DOF robot hand [19]. However, this approach does not consider task performance. Another work trains dynamic movement primitives (DMPs) for a 10-DOF robot hand considering virtual joint constraints or friction [1]. This approach does not provide any safety guarantees beyond DMPs being deterministic. In low-dimensional environments, one work looks into stable robot path trajectories using graph optimization for motion planning, but does not focus on performance [1]. Another work focuses on in-hand object manipulation in a low-dimensional environment by adding constraints between the robot and object [11]. In multi-agent settings, boundaries based on robot geometry have been used to enable safe collaboration in close quarters and provide safety guarantees [10]. ## 3 Background ### Trust Region Policy Optimization (TRPO) TRPO is a policy gradient method for solving Markov Decision Processes that avoids parameter updates which change the policy too much, using a KL divergence constraint on the size of the policy update at each iteration [13]. We train TRPO on-policy, i.e., the policy used for collecting data is the same as the policy that we want to optimize. The objective function \(J(\theta)\) measures the total advantage \(\hat{A}_{\theta_{old}}\) over the state visitation distribution \(p^{\pi_{old}}\) and actions from \(\pi_{\theta_{old}}\), while the mismatch between the training data distribution and the true policy state distribution is compensated with an importance sampling estimator. TRPO aims to maximize the objective function subject to a trust region constraint which enforces the distance between old and new policies, measured by KL-divergence, to be small enough, within a parameter \(\delta\). The same can be summarized in Eq. 1. \[\begin{split}\text{maximize }J(\theta)&=\mathbb{E}_{s\sim p^{\pi_{old}},a\sim\pi_{\theta_{old}}}\left(\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{old}}(a|s)}\hat{A}_{\theta_{old}}(s,a)\right)\\ &\text{subject to }\mathbb{E}_{s\sim p^{\pi_{old}}}\left[D_{KL}(\pi_{\theta_{old}}(.|s)\parallel\pi_{\theta}(.|s))\right]\leq\delta\end{split} \tag{1}\] ### Constrained Policy Optimization (CPO) CPO [1] is built on top of the TRPO algorithm to solve Constrained Markov Decision Processes (CMDPs), which include a cost function \(C\) and a cost discount factor \(\gamma_{c}\) along with the standard MDP learning problem \((S,A,T,R,p_{0},\gamma)\). In a local policy search for CMDPs, on top of the TRPO optimization, we additionally require policy updates to be feasible for the CMDP. Our objective function thus adds another condition to keep the expected discounted cost below a cost limit \(cl\) for each constraint. The same can be summarized in Eq. 2.
\[\begin{split}\text{maximize }J(\theta)&=\mathbb{E}_{s\sim p^{\pi_{old}},a\sim\pi_{\theta_{old}}}\left(\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{old}}(a|s)}\hat{A}_{\theta_{old}}(s,a)\right)\\ &\text{subject to }\mathbb{E}_{s\sim p^{\pi_{old}}}\left[D_{KL}(\pi_{\theta_{old}}(\cdot|s)\parallel\pi_{\theta}(\cdot|s))\right]\leq\delta\\ &\text{and }\mathbb{E}_{\tau\sim\pi_{\theta}}\left[\sum_{t=0}^{\infty}\gamma_{c}^{t}C_{i}(s_{t},a_{t},s_{t+1})\right]\leq cl_{i}\ \forall i\end{split} \tag{2}\]

### Problem Formulation

We consider an object relocation task where the agent, a 24-DOF Adroit hand, learns a policy to grasp and relocate a blue ball from a tabletop to a green region (Fig. 1). We formulate this task as a Constrained MDP, \((S,A,T,R,p_{0},\gamma,C,\gamma_{c})\), where \(S\) is the state space, \(A\) is the action space, \(T\) is the transition function, \(R\) is the reward function, \(p_{0}\) is the initial state distribution, \(\gamma\) is a discount factor, \(C\) is the cost function and \(\gamma_{c}\) is a cost discount factor. In a typical episode, at each time \(t\), the agent receives an observation \(o_{t}\) based on the current state. After the agent takes an action \(a_{t}\sim\pi_{\theta}(o_{t})\) based on the observation, it receives a reward \(R(s_{t})\) from the environment, incurs a penalty cost \(C(s_{t})\), and arrives at a new state \(s_{t+1}=T(s_{t},a_{t})\). Based on the RL algorithm, CPO or TRPO, the agent optimizes the corresponding objective function.

**Observation space**: The observation space is 39-dimensional, including 24 hand joint angles, hand translation (3-D), hand orientation (3-D), relative position of the hand with respect to the object (3-D), relative position of the hand with respect to the goal (3-D), and relative position of the object with respect to the goal (3-D).

**Action space**: Each action for the relocation task is 30-dimensional, including 24 hand joint angles, 3-D hand translation, and 3-D hand rotation.

**Reward**: The agent is rewarded for getting closer to the object, lifting the object up, taking the object closer to the goal, taking the hand closer to the goal, and reaching very close to the goal.

### Constraint Formulation

We define boundary constraints to restrict the initial motion of the robot arm in the direction of the object for relocation. Considering \(\mathbf{x}_{h}\) as the initial hand position, \(\mathbf{x}_{b}\) as the initial object position, and \(\mathbf{x}\) as the current hand position, the constraints are defined in Eq. 3, where \(r\) is the boundary radius. The same can be visualized in Fig. 2. The derivation of the constraints can be found in Appendix A.

\[\begin{split}\frac{|(\mathbf{x}_{b}-\mathbf{x}_{h})\times(\mathbf{x}_{h}-\mathbf{x})|}{|\mathbf{x}_{b}-\mathbf{x}_{h}|}\leq r\\ 0\leq\frac{(\mathbf{x}-\mathbf{x}_{h})\cdot(\mathbf{x}_{b}-\mathbf{x}_{h})}{|\mathbf{x}_{b}-\mathbf{x}_{h}|^{2}}\leq 1\end{split} \tag{3}\]

Figure 2: The boundary constraint defined in Eq. 3 for the initial motion of the robot hand towards the ball. The robot incurs a penalty when it moves outside the boundary.

We penalize the agent with a fixed penalty cost whenever it violates any of the formulated constraints. If it violates both, it receives twice the penalty cost (set to 0.01). For practical purposes, we relax the second constraint to range between -0.1 and 1.1. A tight second constraint does not allow the robot to learn to grasp the ball.
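For concreteness, the following is a minimal Python/NumPy sketch of the per-step penalty implied by Eq. 3 and the description above. The function name `boundary_cost` and the implementation details are our own illustration under the stated relaxation and penalty values, not the authors' released code.

```python
import numpy as np

def boundary_cost(x_h, x_b, x, r, penalty=0.01):
    """Per-step cost for the cylindrical boundary constraint in Eq. 3.

    x_h: initial hand position, x_b: initial object position,
    x: current hand position (3-D vectors); r: boundary radius.
    """
    axis = x_b - x_h  # cylinder axis from the initial hand position to the object
    # First constraint: radial distance of the hand from the axis must be <= r.
    radial = np.linalg.norm(np.cross(axis, x_h - x)) / np.linalg.norm(axis)
    # Second constraint: normalized projection onto the axis, relaxed to [-0.1, 1.1].
    t = np.dot(x - x_h, axis) / np.dot(axis, axis)
    cost = 0.0
    if radial > r:
        cost += penalty
    if not (-0.1 <= t <= 1.1):
        cost += penalty  # violating both constraints incurs twice the penalty
    return cost
```

For example, `boundary_cost(np.zeros(3), np.array([0.3, 0.0, 0.0]), np.array([0.15, 0.2, 0.0]), r=0.1)` returns 0.01: the hand has strayed radially outside the cylinder but is still within the relaxed axial range.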
## 4 Experiments

We design different experiments to evaluate the following research questions:

1. Does the policy learned via CPO allow _safe training_ and _safe roll outs_? How does it compare to a TRPO policy?
2. What is the effect of changing the constraint _boundary radius_ on training CPO?
3. What is the effect of changing the allowed _cost limit_ on training CPO?
4. What is the effect of changing the _cost of each penalty_ on training CPO?

### Setup

We use the MuJoCo physics simulator as our testing platform. All our RL policies are pre-trained using behavior cloning on 25 demonstrations collected via CyberGlove III [17]. Our policy network is a Gaussian Multi-Layer Perceptron (MLP) with two hidden layers of 32 neurons each. We also train a value network and a cost value network (only for CPO), both MLPs with two hidden layers of 128 neurons each. The learning rate for behavior cloning on our policy network, value network, and cost value network is 0.001. For training via the CPO and TRPO algorithms, our reward and cost discount factors are both 0.995 and the GAE parameter is 0.97. The constraint configurations for our different experiments are detailed in Appendix B.

Figure 3: **Comparing CPO with TRPO and TRPO-RP. Bottom**: CPO has a _reduced average number of violations_ compared to both TRPO policies for all three intensities of the boundary constraint. CPO learns to satisfy the constraints better throughout the training process. **Top**: CPO has _lower sample efficiency_ for all three constraint cases, which is a small trade-off to ensure safe learning.

### Experiment 1: Evaluating CPO and TRPO

We evaluate both the CPO and TRPO algorithms on the relocation task to verify the effect of optimizing for constraints on sample efficiency and average cost during training. We evaluate two variants of the TRPO algorithm - one where the reward is penalized with the incurred cost (TRPO-RP), and one without the penalty (TRPO). We see that CPO has a **lower average number of violations** than both TRPO policies for three different intensities of the boundary constraint. From Fig. 3 (bottom), we see that CPO learns to satisfy the constraints better throughout the training process, making it potentially safer to train on real robots. Qualitative results showing the behavior of all three algorithms during early, mid, and late training can be found in the video at _this link_. CPO also has **lower sample efficiency** for the three constraint cases, which is a small trade-off to ensure safer learning (Fig. 3 (top)). We also evaluate the average number of violations for the CPO and TRPO policies after training. We find that the CPO policy continues to maintain fewer violations and is thus safer to roll out for real-world applications than the TRPO policies, even though all the policies can perform the task successfully \(\geq 95\%\) of the time (Fig. 5).

### Experiment 2: Effect of Boundary Radius

We evaluate the effect of changing the boundary radius in our constraint formulation on training a CPO policy. We find that a tighter radius takes significantly more samples to train, whereas a relaxed radius is more sample efficient (Fig. 4 (left top)). This behavior is also reflected in the average number of violations reducing more quickly for a relaxed constraint than a tighter one, although it is obviously lower to begin with (Fig. 4 (left bottom)).

### Experiment 3: Effect of Cost Limits

We also evaluate how changing the overall cost limit affects the way CPO policies are trained.
We see that as the cost limit decreases, the sample efficiency also decreases (Fig. 4 (center top)). From the average number of violations plot (Fig. 4 (center bottom)), we see that CPO optimizes perfectly for the respective limits and maintains the allowed cost throughout most of the training.

Figure 4: **Experiment with constraint parameters. Left**: (Top) A smaller radius takes significantly more samples to train, whereas relaxed radii are more sample efficient. (Bottom) The average number of violations also reduces more quickly for a relaxed constraint than a tighter one, although it is obviously lower to begin with. **Center**: (Top) As the cost limit decreases, the sample efficiency also decreases. (Bottom) CPO optimizes perfectly for the respective limits and maintains the allowed cost throughout most of the training. **Right**: (Top) The higher the penalty cost for each violation, the longer it takes for the policy to train. (Bottom) Scaling the penalty costs does not noticeably affect how the average number of violations decreases during training.

### Experiment 4: Effect of Penalty Costs

We evaluate how changing the scale of penalties impacts the way CPO trains. We linearly scale the cost limits in this case to maintain the same number of allowed constraint violations. We observe that the higher the penalty cost per violation, the longer it takes for the policy to train (Fig. 4 (right top)). However, scaling the penalty costs does not noticeably affect how the average number of violations decreases during training (Fig. 4 (right bottom)).

## 5 Conclusion

We explore adding constraints to an object relocation task to potentially enable safe training on real robots. We formulate a constraint that restricts a robot hand's motion to within a boundary when approaching the object. We then learn a policy that uses CPO to optimize the constraint cost. We find that learning to follow the constraints via CPO reduces the average cost during training and roll outs, especially when compared to TRPO. We observe consistency in this result across different constraint boundaries and throughout the training process. We also evaluate the effect of changing the boundary radius, cost limits, and penalty costs on training CPO. We find that tighter constraint boundaries and larger penalty costs reduce training efficiency. We conclude that the cylindrical boundary constraint we formulate for the relocation task can help to quickly learn safe motion in training and roll out, and can thus be used for training dexterous manipulation tasks safely on real-world robots.

## 6 Future Work

To further investigate the robustness of our boundary constraint and approach, we plan to evaluate our methods on additional dexterous manipulation tasks, such as using a hammer and opening a door. We also plan to formulate a constraint that restricts the motion of the robot after the object has been grasped to further ensure safety throughout the trajectory. Finally, we plan to implement the CPO algorithm on a real robot, such as the Shadow Hand, and evaluate the effectiveness of our algorithm for real-world applications.
2306.02694
Social Robots As Companions for Lonely Hearts: The Role of Anthropomorphism and Robot Appearance
Loneliness is a distressing personal experience and a growing social issue. Social robots could alleviate the pain of loneliness, particularly for those who lack in-person interaction. This paper investigated how the effect of loneliness on the anthropomorphism of social robots differs by robot appearance, and how it influences purchase intention. Participants viewed a video of one of the three robots (machine-like, animal-like, and human-like) moving and interacting with a human counterpart. Bootstrapped multiple regression results revealed that although the unique effect of animal-likeness on anthropomorphism compared to human-likeness was higher, lonely individuals' tendency to anthropomorphize the animal-like robot was lower than that of the human-like robot. This moderating effect remained significant after covariates were included. Bootstrapped mediation analysis showed that anthropomorphism had both a positive direct effect on purchase intent and a positive indirect effect mediated by likability. Our results suggest that lonely individuals' tendency of anthropomorphizing social robots should not be summarized into one unified inclination. Moreover, by extending the effect of loneliness on anthropomorphism to likability and purchase intent, this current study explored the potential of social robots to be adopted as companions of lonely individuals in their real life. Lastly, we discuss the practical implications of the current study for designing social robots.
Yoonwon Jung, Sowon Hahn
2023-06-05T08:36:30Z
http://arxiv.org/abs/2306.02694v2
# Social Robots As Companions for Lonely Hearts: The Role of Anthropomorphism and Robot Appearance

###### Abstract

Loneliness is a distressing personal experience and a growing social issue. Social robots could alleviate the pain of loneliness, particularly for those who lack in-person interaction. This paper investigated how the effect of loneliness on the anthropomorphism of social robots differs by robot appearance, and how it influences purchase intention. Participants viewed a video of one of the three robots (machine-like, animal-like, and human-like) moving and interacting with a human counterpart. Bootstrapped multiple regression results revealed that although the unique effect of animal-likeness on anthropomorphism compared to human-likeness was higher, lonely individuals' tendency to anthropomorphize the animal-like robot was lower than that of the human-like robot. This moderating effect remained significant after covariates were included. Bootstrapped mediation analysis showed that anthropomorphism had both a positive direct effect on purchase intent and a positive indirect effect mediated by likability. Our results suggest that lonely individuals' tendency of anthropomorphizing social robots should not be summarized into one unified inclination. Moreover, by extending the effect of loneliness on anthropomorphism to likability and purchase intent, this current study explored the potential of social robots to be adopted as companions of lonely individuals in their real life. Lastly, we discuss the practical implications of the current study for designing social robots.

## I Introduction

The need to belong is fundamental to humans [1]. When this need for social connection is unmet at the desired magnitude, individuals experience loneliness [2]. Loneliness is linked to increased depressive symptoms and cognitive decline, and is also associated with increased risks of cardiovascular diseases and compromised immune systems [3]. These grave mental and physical declines lead to higher medical and general practitioner costs [4, 5]. In situations where in-person contact is limited, such as in solitary living arrangements, social robots could be a potential source of replenishing the sense of connection [6]. Indeed, empirical evidence has demonstrated that social robots can effectively alleviate feelings of loneliness [7, 8]. Therefore, this paper explored the potential of social robots as real-life companions of lonely individuals by investigating the factors that contribute to positive human-robot interaction and enhance adoption intention. Among the factors that shape lonely individuals' interaction with social robots, anthropomorphism plays an important role. Anthropomorphism enhances the human counterparts' likability and trust in social robots [9, 10]. Previous research has identified a positive relationship between anthropomorphism and customers' likelihood of purchasing technology-driven products (e.g., web design, chatbots), with likability or enjoyment acting as a mediating factor [11, 12, 13]. Lonely individuals generally show a heightened tendency to anthropomorphize non-human objects and entities [14, 15, 16, 17, 18]. However, empirical evidence on the anthropomorphic inclination of lonely individuals toward social robots is inconsistent.
One study suggested that loneliness increases the inclination to anthropomorphize social robots [18], while another study reported that loneliness decreases this tendency [19]. Given that lonely individuals are prone to perceiving social cues as potential threats [3, 20], robot appearance could be an important factor that shapes lonely individuals' anthropomorphic tendency toward social robots. Yet, no research has examined the differential effects of robot appearance on this anthropomorphic tendency. In light of the aforementioned reasons, this paper aims to shed light on the intricate interplay between loneliness and robot appearance on the perception of social robots as human-like entities. Furthermore, we seek to uncover how individuals' inclination to anthropomorphize social robots influences their intention to purchase social robots.

## II Related Work

### _Anthropomorphism_

Humans anthropomorphize non-human objects or agents by attributing human-like characteristics (e.g., motivations, intentions, emotions) to them [21]. A group of scholars defined two dimensions of humanness traits as human nature (HN) and uniquely human nature (UHN) [22, 23]. HN encompasses traits shared by humans and animals, while UHN represents characteristics exclusive to humans. Similarly, another group of scholars suggested two dimensions of mind perception as agency and experience [24]. Experience represents bodily sensations and feelings, whereas agency refers to the capacity for intentional action. There is a notable overlap between the two concepts. Agency and experience align with UHN and HN, respectively [24]. Empirical studies have shown that individuals perceive typical humans as possessing high levels of both agency and experience, mammals as low in agency but high in experience, and robots as high in agency but low in experience [24, 25]. This suggests that when people anthropomorphize non-human objects or agents, they attribute the dimensions that they perceive to be lacking in the nature of these objects or agents. Indeed, it has been argued that attributing perceived experience to robots would help reduce their perceived machine-likeness and imbue them with a more human-like bearing [19]. Therefore, this paper adopts perceived experience as a dimension of human likeness that people attribute to social robots when anthropomorphizing them.

### _Relationship between loneliness and anthropomorphism_

Previous research proposed a three-factor theory of anthropomorphism [14, 21]. One of the factors, sociality motivation, represents the desire for social connection and attachment, which suggests that lonely individuals compensate for unmet social needs by anthropomorphizing non-human entities [15]. Indeed, empirical evidence demonstrated that lonely individuals showed heightened inclination to perceive human-like attributes in various entities (e.g., animals, technical gadgets, robotic heads) [14, 15, 16, 17, 18]. On the other hand, a recent study reported that trait loneliness lowered the tendency to anthropomorphize and accept a human-like social robot [19]. The authors suggested that dispositionally lonely individuals, who are more prone to interpret their social surroundings as threatening [3, 20], may have perceived the robot as unsettling. Categorizing the studies based on the types of non-human entities used, a discernible pattern emerges in these inconsistencies. Lonely individuals tend to exhibit higher levels of anthropomorphism towards simple technical devices and animals [14, 15, 16, 17].
However, research involving robots reveals mixed results, reporting both positive and negative associations between loneliness and anthropomorphism [18, 19]. These disparities indicate potential variations in the anthropomorphic tendencies of lonely individuals specifically towards social robots, distinguishing them from other non-human entities. However, further investigation is needed to validate this, as the existing body of research on this topic that used sophisticated robots remains limited.

### _The effect of robot appearance on lonely individuals' anthropomorphic tendency toward social robots_

The research on anthropomorphizing social robots has mostly used either machine-like robots or human-like robots. Considering that human-like robots have been tested as a kind of gold standard for robot anthropomorphism, this may sound trivial. However, previous research on more general anthropomorphic tendencies of humans suggests that humans perceive humanness from not only human-looking entities but also from animals (e.g., pets) and machine-like objects (e.g., clocks, cars) [14, 15, 16, 17, 18]. Thus, when integrating general research on anthropomorphism into robotics research, exploring the varying degree of anthropomorphism across different robot appearances is needed. Therefore, we used three robots with different appearances (human-like, machine-like, and animal-like) to investigate whether the tendency of lonely individuals to anthropomorphize social robots differs by robot appearance. Although the robot used in [19] had a head with two eyes and a torso, the robot was generally more machine-like compared to that in [18]. Thus, we hypothesized that machine-likeness would decrease lonely individuals' tendency to anthropomorphize. Moreover, since human-like characteristics are more likely than animal-like characteristics to evoke anthropomorphism, we predicted that lonely individuals would anthropomorphize animal-like robots less than human-like ones.

**H1**: Lonely individuals' anthropomorphic tendencies will differ by social robots' appearance.

**H1-1**: Lonely individuals will show a lower tendency to anthropomorphize a machine-like robot than a human-like robot.

**H1-2**: Lonely individuals will show a lower tendency to anthropomorphize an animal-like robot than a human-like robot.

### _The effect of anthropomorphism on the likability and purchase intent of social robots_

To facilitate the real-world adoption of social robots by lonely individuals, increasing the acceptance motivation is pivotal. Purchase intention, in particular, signifies the willingness to invest financially to utilize the robots in the users' everyday life. Thus, investigating how anthropomorphizing social robots affects purchase intention will foreground the practical aspect of investigating the relationship between loneliness and anthropomorphism. Attributing human traits to social robots increases likability and trust among human counterparts [9, 10]. Previous research has demonstrated that anthropomorphism increases purchase intent, and perceived enjoyment or likability acts as a mediating factor [11, 12, 13]. However, the association was tested on non-robotic products, web design, and disembodied chatbots. Therefore, additional investigation is needed to examine whether the relationship between anthropomorphism and purchase intent, along with the mediation effect of likability, extends to embodied social robots.
Moreover, whether lonely individuals' heightened anthropomorphism leads to higher purchase intent remains unexplored. Although one study investigated the moderating role of loneliness in the association between anthropomorphism and consumer attitudes [13], the anthropomorphized objects were brands and advertisements. Therefore, exploring the behavioral consequences of lonely individuals' anthropomorphic inclination in relation to social robots is crucial. We predicted that if the effect of loneliness on anthropomorphizing social robots varies with robot appearance, such differences would also be reflected in purchase intent.

**H2**: Higher anthropomorphism will lead to increased purchase intent via increased likability of social robots.

**H3**: Differences in lonely individuals' anthropomorphic tendencies by robot appearance will predict differences in the purchase intent of social robots.

## III Study Design

### _Participants_

Participants for this study were recruited from multiple sources, including the participant recruitment system of Seoul National University and online student communities of both Seoul National University and Korea University. This study was approved by the Seoul National University Institutional Review Board. From January to May of 2022, participants responded online through a survey built using Qualtrics. Since the study was conducted fully online, we conducted extensive quality control to strictly judge the quality of responses. Participants who failed one of the two attention-check items (e.g., "For this question, please check 'strongly agree'") or completed the survey too quickly were not considered to be fully engaged in the study [16]. Those who failed the attention check were unable to proceed with the survey, as it terminated upon an incorrect response. Moreover, we measured the average time needed to fully watch the video and attentively respond to the questionnaires in a pilot study, and concluded that a proper response needs at least 320 seconds. Consequently, we excluded the responses that took less than 320 seconds to complete.

### _Materials_

#### III-B1 Robots

We used DJI's Robomaster S1, Softbank Robotics' NAO, and SONY's AIBO for machine-like, human-like, and animal-like robots, respectively (Fig. 1).

#### III-B2 Human-robot interaction videos

The interactions between robots and human counterparts were recorded as videos. The robots engaged with their counterparts following the same interaction scenario across all three conditions (Fig. 2). Robots responded 1.8 seconds after the human voice, and the lengths of the videos were around 90 seconds. The three robots reflected different levels of lifelikeness during the interaction. To the human counterparts' speech, the machine-like robot responded using LED light or a basic beep sound. It also rotated its upper body or moved upfront to respond. The animal-like robot produced dog-like sounds and imitated dog behaviors using ear flops, head movement, mouth opening, tail wagging, sitting, and front paw lowering. The human-like robot answered using spoken words and conveyed nonverbal cues by moving its head or limbs. Moreover, robots showed distinct positive responses to their human counterparts' praise. The machine-like robot blinked its light, produced a beeping sound twice, and quickly rotated left and right. The animal-like robot barked and wagged its tail, and then lowered its head and front paw. The human-like robot showed a proud gesture and said "Thank you!".
We used Choregraphe and the Robomaster desktop application to program NAO and Robomaster S1, respectively. The basic functions of AIBO were used without programming. For the videos, we first filmed NAO and Robomaster executing the pre-programmed behaviors without human commands and added Korean human speech afterward. For AIBO, we filmed the interaction using Japanese commands and replaced the commands with Korean human speech.

#### III-B3 Self-reported measures

To assess loneliness, we used the Korean-validated version of the UCLA loneliness scale version 3 [26]. We used 4-point Likert scales, ranging from 'never' to 'often'. The internal reliability was 0.94. To measure anthropomorphism, we used scales based on the 2-dimensional model of mind perception [24]. Following studies suggesting a reduced version of the scales consisting of items with more distinguishable factor loadings [27, 28], we used 6 items for each dimension. Seven-point scales ranging from 1 to 7 were used. The internal reliability was 0.85 for perceived experience and 0.75 for perceived agency. For likability, we used 3-item differential scales [29]. Seven-point scales ranging from -3 to 3 were used, with -3 and 3 representing antipole adjectives (e.g., 'unlikable - likable'). The internal reliability was 0.9. For purchase intent, we used a single-item measure. Participants used 7-point scales ranging from 1 to 7. We also measured variables that could be covariates: age, gender, household income, and prior knowledge of robots [30, 31, 32].

### _Procedure_

Participants completed questionnaires measuring loneliness. Then, participants were randomly assigned to one of three robot appearance conditions, and watched a video of a robot interacting with a human counterpart. Lastly, participants filled out questionnaires regarding their perception of the robot, along with demographic variables and their prior knowledge of and familiarity with the robot. Participants who completed the study received the promised reward.

Fig. 1: Robots of the three robot appearance conditions

Fig. 2: Human-robot interaction scenario and examples

## IV Results

Of 185 total responses, 137 (\(N_{female}\)=84, \(M_{age}\)=22.803, \(SD_{age}\)=3.788) were selected as final data. It consisted of 46 responses from the animal-like condition, 45 from the human-like condition, and 46 from the machine-like condition. See Tab. I for other descriptive statistics. We conducted bootstrapped multiple regression to verify the impact of loneliness on robot anthropomorphism and the moderation effect of robot appearances (Tab. II-III and Fig. 3). The R packages car and boot were used for bootstrapping regression coefficients, R-squared values, and their BCa 95% confidence intervals [33, 34]. Robot appearances were coded as two dummy variables: robot1 (machine-like robot compared to human-like robot) and robot2 (animal-like robot compared to human-like robot). We used 2000 iterations with the seed set to 123 for reproducibility.
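As an illustration of this analysis style, the sketch below implements a case-resampling bootstrap for the regression coefficients in Python. It is our own simplified analogue, not the study's analysis script: the study used R's car and boot packages with BCa intervals, whereas this sketch uses plain percentile intervals, and the function name `bootstrap_ols` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(123)

def bootstrap_ols(X, y, n_boot=2000, alpha=0.05):
    """Case-resampling bootstrap for OLS regression coefficients.

    X: (n, p) design matrix with columns such as intercept, robot1, robot2,
    standardized loneliness, and the dummy-by-loneliness interaction terms.
    Returns mean coefficients and percentile confidence bounds.
    """
    n = len(y)
    coefs = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample participants with replacement
        coefs[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    lo, hi = np.percentile(coefs, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return coefs.mean(axis=0), lo, hi

# Example design matrix for the moderation model:
# X = np.column_stack([np.ones(n), robot1, robot2, lone,
#                      robot1 * lone, robot2 * lone])
```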
The unique effect of loneliness on perceived agency was not significant (\(\beta\)=.115, _SE_=.180, _CI_=[-.250,.476]), nor were the moderation effects of robot appearance (robot1-loneliness interaction: \(\beta\)=-.170, _SE_=.227, _CI_=[-.599,.291]; robot2-loneliness interaction: \(\beta\)=-.303, _SE_=.231, _CI_=[-.758,.168]). The unique effect of loneliness on perceived experience was not significant (\(\beta\)=.197, _SE_=.161, _CI_=[-.053,.587]). The interaction effect between robot1 and loneliness was not significant (\(\beta\)=-.288, _SE_=.192, _CI_=[-.699,.060]). However, the interaction term between robot2 and loneliness was significant and negative (\(\beta\)=-.500, _SE_=.260, _CI_=[-1.080, -.056]). This interaction effect persisted when covariates were introduced (\(\beta\)=-.496, _SE_=.249, _CI_=[-1.070, -.060]). This interaction effect is in contrast with the positive unique effect of robot2 (\(\beta\)=.470, _SE_=.221, _CI_=[.045,.896]).

Also, we tested the indirect and direct effect of anthropomorphism on purchase intent using the PROCESS macro in R [35]. The mediator was likability. We included prior knowledge of robots, gender, and income as covariates, which were the significant factors in Tab. III. We used 1000 iterations with the seed set to 100. Perceived agency had a significant positive indirect effect on purchase intent (\(\beta\)=.225, _SE_=.545, _CI_=[.119,.332]) but the direct effect was not significant (\(\beta\)=.069, _SE_=.075, _CI_=[-.081,.218]). Perceived experience had both a significant positive indirect effect (\(\beta\)=.135, _SE_=.046, _CI_=[.043,.233]) and direct effect (\(\beta\)=.250, _SE_=.069, _CI_=[.115,.386]) on purchase intent.

TABLE I: Descriptive statistics

| Condition | | Loneliness | Perceived Experience | Perceived Agency | Likability | Purchase Intent |
|---|---|---|---|---|---|---|
| Animal-like (N=46) | SD | 0.586 | 1.163 | 1.240 | 1.140 | 1.398 |
| Human-like (N=45) | M | 2.284 | 2.063 | 4.074 | 0.644 | 2.733 |
| | SD | 0.064 | 1.475 | 1.296 | 1.293 | 1.572 |
| Machine-like (N=46) | SD | 0.549 | 0.650 | 1.403 | 1.254 | 1.666 |
| Total (N=137) | M | 2.293 | 1.881 | 3.410 | 0.309 | 2.847 |
| | SD | 0.576 | 1.001 | 1.399 | 1.443 | 1.581 |

TABLE II: Bootstrapped regression results on robot appearance, loneliness, and their interactions as predictors for perceived experience

| Predictor | \(\beta\) | _SE_ | 95% _BCa CI_ |
|---|---|---|---|
| Intercept | -0.101 | 0.147 | [-0.341, 0.235] |
| Robot1 (machine-like=1) | -0.147 | 0.185 | [-0.537, 0.200] |
| Robot2 (animal-like=1) | **0.470*** | 0.221 | [0.045, 0.896] |
| Loneliness | 0.197 | 0.161 | [-0.053, 0.587] |
| Robot1-loneliness interaction | -0.288 | 0.192 | [-0.699, 0.060] |
| Robot2-loneliness interaction | **-0.500*** | 0.260 | [-1.080, -0.056] |

Note: Robot1 and Robot2 are contrasts of robot appearance levels (Robot1: machine-like robot compared to human-like robot; Robot2: animal-like robot compared to human-like robot). All variables were standardized. Bootstrapped \(R^{2}\) value was 0.122 (95% _BCa CI_ = [0.025, 0.219]). * = statistically significant at the 95% confidence level.

TABLE III: Bootstrapped regression results on robot appearance, loneliness, and their interactions as predictors for perceived experience with covariates

Fig. 3: The effect of loneliness on perceived experience by robot appearance conditions

Lastly, we tested the final model using the PROCESS macro in R [35].
We used 1000 iterations with the seed set to 100. Prior knowledge of robots, age, gender, and income were the covariates. Since the direct effect of anthropomorphism on purchase intent was significant, likability was excluded from the model for simplicity. We included perceived experience, the indicator of social robot anthropomorphism, as a mediator of the model. The moderated mediation was significant when the animal-like robot was compared to the human-like robot. See Tab. IV and Fig. 4 for the detailed results.

## V Discussion

This study examined the effect of loneliness on anthropomorphizing social robots and the moderating effect of robot appearance. We also investigated how lonely individuals' anthropomorphic tendency predicts purchase intent. Results revealed that lonely individuals' anthropomorphic tendency was lower for the animal-like robot than for the human-like one. This effect remained significant after covariates were introduced. Thus, H1 and H1-2 were supported. The rejection of H1-1 implies that the machine-like robot did not significantly differ from the human-like robot in inducing lonely individuals to anthropomorphize. However, although not statistically significant, the regression coefficients show that while loneliness and anthropomorphism were positively associated in the human-like robot condition, the association was negative in the machine-like robot condition. This implies that the degree of machine-likeness could partly explain the inconsistency in lonely individuals' anthropomorphizing tendency toward social robots [18, 19]. Our findings supporting H1-2 suggest that lonely individuals are less inclined to anthropomorphize animal-like robots (e.g., AIBO) compared to human-like robots. This is in contrast with our other finding that participants generally anthropomorphized the animal-like robot more compared to the human-like robot. One possible explanation for this is that AIBO displaying a dog-like resemblance in an adorable way increased the general anthropomorphic tendency, but not for lonely individuals. AIBO was designed as a 'cute companion dog' by 'showing moves and gestures in adorable patterns' [36]. It mimics not only the visual appearance but also the movements and sounds of a real dog seeking affection. For lonely people, who are hypervigilant to social threats [3, 20] and thus more likely to perceive non-harmful social cues as threatening, such features could have been perceived as unsettling or potentially threatening. Furthermore, anthropomorphism had a positive direct effect and an indirect effect via likability on purchase intent. Moreover, the mediated moderation index was significant when the animal-like robot was compared to the human-like robot. Specifically, the mediating effect significantly decreased for animal-like robots compared to human-like robots, reflecting H1-2 being supported. Thus, our findings support H2 and H3. This indicates that if a social robot induces higher anthropomorphism from lonely individuals, this leads lonely individuals to like the robot more, and to see more value in the acquisition of the robot [13]. This study makes several theoretical contributions. Above all, this paper is the first to use animal-like and machine-like robots, along with human-like robots, to compare lonely individuals' anthropomorphic tendencies across different robot appearances.
The results indicate that the inclination of lonely individuals to anthropomorphize social robots should be evaluated in relation to the specific characteristics of robots, rather than unifying it as one general inclination. Furthermore, we extend lonely individuals' anthropomorphic tendencies toward social robots to their adoption intent as consumers. By examining the possibilities of social robots as companions for lonely individuals in their real life, our results strengthen the importance of studying lonely individuals' anthropomorphic tendencies toward social robots. Lastly, this paper carries implications for future loneliness intervention research. Loneliness intervention research using robots has mainly focused on animal-like robots (e.g., [7, 8]). Future studies testing the loneliness-mitigating effect of social robots should also test the effect of humanoid robots, which were shown to induce more anthropomorphism and likability from lonely individuals.

TABLE IV: Summary of the results of the PROCESS model 7

| Effect | \(\beta\) (_SE_) | 95% _BCa CI_ |
|---|---|---|
| Moderated mediation test: Robot1 | -0.099 (0.080) | [-0.277, 0.035] |
| Moderated mediation test: Robot2 | **-0.194*** (0.108) | [-0.445, -0.016] |
| Conditional indirect effect: Human-like | 0.065 (0.064) | [-0.045, 0.203] |
| Conditional indirect effect: Machine-like | -0.034 (0.045) | [-0.124, 0.057] |
| Conditional indirect effect: Animal-like | -0.129 (0.084) | [-0.308, 0.015] |
| Direct effect | 0.047 (0.083) | [-0.117, 0.210] |

Note: Prior knowledge of robots, age, gender, and income were the covariates of the models. All variables were standardized. * = statistically significant at the 95% confidence level.

Fig. 4: Overview of the results of the PROCESS model 7

This study also provides insights for designing social robots for lonely individuals. Robots should evoke anthropomorphism only to the degree that matches the realism of their appearance. If a robot fails to do so, lonely individuals could perceive it as threatening [3, 20]. We suggest social robots demonstrate suitable levels of movement and interaction skills to ensure they maintain congruity with their appearance and avoid excessive mimicry of lifeforms. Despite the valuable findings and implications of this study, they should be regarded with caution due to some limitations. Firstly, data collection was conducted online due to COVID-19 safety concerns. Although we conducted extensive quality control measures, this could have potentially affected participant engagement. Additionally, the absence of on-site viewing of the interaction raises the need for future studies to replicate our study using in-person experiments that involve participants physically interacting with the robot. Lastly, the relationships could be tested in an experimental field study, increasing the results' ecological validity.
2303.07132
On Higher Dimensional Milnor Frames
A classic result of Milnor shows that any 3-dimensional unimodular metric Lie algebra admits an orthonormal frame with at most three nontrivial structure constants. These frames are referred to as Milnor frames. We define extensions of Milnor frames into higher dimensions and refer to these higher dimensional analogues as Lie algebras with Milnor frames. We determine that $n$-dimensional, $n \geq 4$, Lie algebras with Milnor frames are isomorphic to the direct sum of 3-dimensional Heisenberg Lie algebras $\mathfrak{h}^3$, 4-dimensional 3-step nilpotent Lie algebras $\mathfrak{h}^4$, and an abelian Lie algebra $\mathfrak{a}$. Moreover, for any Lie algebra $\mathfrak{g}\not\cong \mathfrak{h}^3 \oplus \mathfrak{a}$ with a Milnor frame, there exists an inner product structure $g$ on $\mathfrak{g}$ such that $(\mathfrak{g}, g)$ does not admit an orthonormal Milnor frame.
Hayden Hunter
2023-03-13T14:02:28Z
http://arxiv.org/abs/2303.07132v3
# On Higher Dimensional Milnor Frames

###### Abstract

A classic result of Milnor shows that any 3-dimensional unimodular metric Lie algebra admits an orthonormal frame with at most three non-zero structure constants. We refer to these frames as Milnor frames. We define extensions of Milnor frames into higher dimensions. We determine that the higher dimensional analogue of Milnor's result can only hold on the 3-dimensional Heisenberg Lie algebra directly summed with an abelian Lie algebra.

###### Contents

* 1 Introduction
* 2 Algebraic Properties of Milnor Frames
  * 2.1 Extending Three Dimensional Milnor Frames
  * 2.2 Examples: \(\mathfrak{h}^{3}\) and \(\mathfrak{h}^{4}\)
  * 2.3 Algebraic Properties of Lie Algebras with Milnor Frames
  * 2.4 Milnor Graphs
* 3 Geometric Properties
  * 3.1 Metric Lie Algebras with an Orthonormal Milnor Frame
    * 3.1.1 Ricci Tensors
    * 3.1.2 Ricci-Soliton Equation and Derivations
  * 3.2 Non-orthogonal Milnor frames
    * 3.2.1 Metrics on \(\mathfrak{h}^{4}\)
    * 3.2.2 Metrics on \(\mathfrak{h}^{3}\oplus\mathfrak{h}^{3}\)

## 1 Introduction

Given a Lie Group \(G\) and a metric \(g\) on \(G\), we say that \(g\) is left-invariant if, for any \(p\in G\), the diffeomorphism
\[L_{p}:G\to G,\quad q\mapsto pq \tag{1}\]
is an isometry with respect to the metric \(g\). A vector field \(X\) on \(G\) is said to be left-invariant if \(dL_{p}(X)=X\) for any \(p\in G\). By placing an inner product structure on the tangent plane at the identity, \(T_{e}G\), we can extend this inner product to a left-invariant metric on \(G\) (see, for example, Chapter 1 Section 2 in [1]). The collection of left-invariant metrics on \(G\) is in one-to-one correspondence with inner product structures on \(T_{e}G\). Thus, given a left-invariant metric on a Lie Group \(G\), we only need to look at the inner product structure of its associated Lie algebra. With this in mind, whenever a Lie Group \(G\) is equipped with a left-invariant metric \(g\), we may call the Lie algebra \((\mathfrak{g},g)\) associated to the Lie Group a metric Lie algebra with respect to the metric \(g\). When it is obvious which left-invariant metric we are referring to, we say that \(\mathfrak{g}\) is a metric Lie algebra. When \(G\) is equipped with the Levi-Civita connection, the curvature is dependent on the structure constants induced by a Lie bracket \([\,,\,]\in\wedge^{2}\mathfrak{g}^{*}\otimes\mathfrak{g}\) (see sections 5 and 6 in [10]). Thus if a metric Lie algebra has few non-trivial structure constants, determining curvature requires fewer computations. For example, Milnor [10, Lemma 4.1] determined that any 3-dimensional unimodular metric Lie algebra admits an orthonormal frame with at most 3 non-zero structure constants. Moreover, these orthonormal frames diagonalize the Ricci tensor. For higher dimensional metric Lie algebras, Hashinaga, Tamaru, and Terada [3] introduced a procedure that allows us to find an orthonormal frame with relatively few structure constants. Finding such a frame will reduce the amount of computation it takes to determine curvature. Let \(\mathfrak{g}\) be a three-dimensional metric Lie algebra with metric \(g\) and an orientation. For two linearly independent vectors \(X,Y\in\mathfrak{g}\), denote the cross product \(X\wedge Y\) to be the unique vector orthogonal to \(X\) and \(Y\) such that \(\{X,Y,X\wedge Y\}\) is a positively oriented frame and
\[||X\wedge Y||^{2}:=g(X\wedge Y,X\wedge Y)=g(X,X)g(Y,Y)-g(X,Y)^{2}.\]
If \(X\) and \(Y\) are not linearly independent, then we say that \(X\wedge Y=0\).
By the universal property of wedge products, there exists a unique linear operator \(L:\mathfrak{g}\rightarrow\mathfrak{g}\) such that \([X,Y]=L(X\wedge Y)\) for all \(X,Y\in\mathfrak{g}\). Moreover, \(\mathfrak{g}\) is unimodular if and only if \(L\) is self-adjoint [10, Lemma 4.1]. It is well-known that a self-adjoint linear operator admits an orthonormal frame of eigenvectors. Thus there is an orthonormal frame \(\{X_{1},X_{2},X_{3}\}\) and real numbers \(\lambda_{1},\lambda_{2},\lambda_{3}\in\mathbb{R}\) such that
\[[X_{1},X_{2}]=\lambda_{3}X_{3},\quad[X_{2},X_{3}]=\lambda_{1}X_{1},\quad[X_{3},X_{1}]=\lambda_{2}X_{2}.\]
Let \(\sigma=(1\,2\,3)\in S_{3}\) where \(S_{3}\) is the permutation group acting on \(3\) elements. We can rewrite the bracket identity so that for any \(i\in\{1,2,3\}\), \([X_{i},X_{\sigma(i)}]=\lambda_{\sigma^{2}(i)}X_{\sigma^{2}(i)}\).

**Definition 1.0.1**.: _Let \(\mathfrak{g}\) be a Lie algebra with a frame \(\{X_{1},X_{2},X_{3}\}\). If there exist \(\lambda_{1},\lambda_{2},\lambda_{3}\in\mathbb{R}\) such that the bracket relation \([X_{i},X_{\sigma(i)}]=\lambda_{\sigma^{2}(i)}X_{\sigma^{2}(i)}\) holds for any \(1\leq i\leq 3\), then \(\{X_{1},X_{2},X_{3}\}\) is called a Milnor frame._

Using the permutation \(\sigma=(1\,\ldots\,n)\) we can extend this notion to higher dimensions.

**Definition 1.0.2**.: _Let \(\mathfrak{g}\) be a Lie algebra of dimension \(n\) and \(\sigma=(1\,\ldots\,n)\in S_{n}\) where \(S_{n}\) is the permutation group acting on \(n\) elements. A frame, \(\{X_{1},\ldots,X_{n}\}\), is a Milnor frame if for all \(i,j\in\{1,\ldots,n\}\), there exist \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{R}\) such that_
\[[X_{i},X_{j}]=\begin{cases}\lambda_{\sigma^{2}(i)}X_{\sigma^{2}(i)}&\sigma(i)=j\\ -\lambda_{\sigma^{2}(j)}X_{\sigma^{2}(j)}&\sigma(j)=i\\ 0&\text{otherwise}\end{cases} \tag{2}\]

An example of a Lie algebra with a Milnor frame of dimension \(n\geq 4\) is the Lie algebra, \(\mathfrak{h}^{3}\), associated to the \(3\)-dimensional Heisenberg Group directly summed with an abelian Lie algebra, \(\mathfrak{a}\), of dimension at least \(1\). For a less trivial example, let \(\mathfrak{h}^{4}\) be the \(4\)-dimensional Lie algebra with a frame \(\{X_{1},X_{2},X_{3},X_{4}\}\) whose bracket relations are generated by
\[[X_{1},X_{2}]=X_{3},\quad[X_{2},X_{3}]=X_{4}.\]
As we shall show in Section 2.2, \(\mathfrak{h}^{4}\) is a \(3\)-step nilpotent Lie algebra and is thus not isomorphic to the direct sum of the \(3\)-dimensional Heisenberg Lie algebra and an abelian Lie algebra. It turns out that any Lie algebra with a Milnor frame is completely determined by these two types of Lie algebras:

**Theorem 1.0.1**.: _For any Lie algebra \(\mathfrak{g}\) of dimension \(n\geq 4\) with a Milnor frame, \(\mathfrak{g}\cong(\oplus\mathfrak{h}^{3})\oplus(\oplus\mathfrak{h}^{4})\oplus\mathfrak{a}\) where \(\mathfrak{h}^{3}\) is the Lie algebra of the Heisenberg Group, \(\mathfrak{h}^{4}\) is a Lie algebra with a Milnor frame and two non-trivial structure constants, and \(\mathfrak{a}\) is an abelian Lie algebra. Moreover, these Lie algebras are at most 3-step nilpotent._

A corollary of Theorem 1.0.1 shows that any \(n\)-dimensional, \(n\geq 4\), Lie algebra with a Milnor frame admits a Milnor frame whose structure constants are either \(0\) or \(1\). By a result of Malcev [9], the simply connected Lie group associated to such a Lie algebra has a compact quotient. If a metric Lie algebra has a Milnor frame that is also orthonormal, then we obtain some "nice" geometric properties.
For example, given an \(n\)-dimensional nilpotent metric Lie algebra \((\mathfrak{g},g)\) with an orthonormal frame \(\{X_{1},\ldots,X_{n}\}\), Theorem 1.1 of [8] determines an algebraic constraint for when \(\{X_{1},\ldots,X_{n}\}\) is Ricci diagonalizable. If a metric Lie algebra \((\mathfrak{g},g)\) has an orthonormal Milnor frame \(\{X_{1},\ldots,X_{n}\}\), by Theorem 1.0.1 and Theorem 1.1 of [8], \(\{X_{1},\ldots,X_{n}\}\) diagonalizes the Ricci tensor. For example, consider the Lie algebra \(\mathfrak{h}^{3}\oplus\mathfrak{a}\) with \(\mathfrak{a}\) a finite-dimensional abelian Lie algebra. Corollary 5.3 of [7] shows that \(\mathfrak{h}^{3}\) has one metric up to scaling and automorphism. Thus any Lie algebra with a Milnor frame which is isomorphic to \(\mathfrak{h}^{3}\oplus\mathfrak{a}\), \(\mathfrak{a}\) abelian, must have an orthonormal Milnor frame. This raises the question:

**Question 1.0.1**.: _Given a Lie algebra \(\mathfrak{g}\) that admits a Milnor frame and a metric \(g\) on \(\mathfrak{g}\), is it always possible to find an orthonormal Milnor frame?_

Surprisingly, we find that this is not necessarily true.

**Theorem 1.0.2**.: _For any non-abelian Lie algebra \(\mathfrak{g}\) as in Theorem 1.0.1 which is not isomorphic to \(\mathfrak{h}^{3}\oplus\mathfrak{a}\) where \(\mathfrak{a}\) is an abelian Lie algebra, there exists a metric \(g\) on \(\mathfrak{g}\) such that \((\mathfrak{g},g)\) does not admit an orthonormal Milnor frame._

The paper will be divided into two sections. In Section 2, we discuss how the algebraic structure of Milnor frames can be interpreted as directed graphs. In Section 3, we discuss the geometric properties of metric Lie algebras with Milnor frames. We shall show that a metric Lie algebra with an orthonormal Milnor frame has a diagonalizable Ricci tensor. Additionally, we discuss which metrics on a metric Lie algebra with a Milnor frame are Ricci nilsolitons. Finally, we consider metric Lie algebras with a Milnor frame and determine necessary conditions for the Lie algebra to admit an orthonormal Milnor frame. We conclude with a proof of Theorem 1.0.2.

## Acknowledgments

This paper is supported by the National Science Foundation (NSF-2104662). I would like to thank Dr. Luca Di Cerbo, my committee, peers, and those who attended my first talk in the Topology and Dynamics seminar for giving valuable feedback.

## 2 Algebraic Properties of Milnor Frames

### Extending Three Dimensional Milnor Frames

Recall that a \(3\)-dimensional unimodular Lie algebra admits a frame \(\{X_{1},X_{2},X_{3}\}\) and structure constants \(\lambda_{1},\lambda_{2},\lambda_{3}\in\mathbb{R}\) where
\[[X_{1},X_{2}]=\lambda_{3}X_{3},\quad[X_{2},X_{3}]=\lambda_{1}X_{1},\quad[X_{3},X_{1}]=\lambda_{2}X_{2}.\]
If \(\sigma=(1\,2\,3)\in S_{3}\) where \(S_{3}\) is the group of permutations acting on \(3\) elements, then for any \(i\), \([X_{i},X_{\sigma(i)}]=\lambda_{\sigma^{2}(i)}X_{\sigma^{2}(i)}\). The goals of this section will be to construct an algebraic extension using the element \(\sigma_{n}=(1\,\ldots\,n)\) of the permutation group acting on \(n\) elements, \(S_{n}\), and to determine the algebraic properties of this extension.

**Definition 2.1.1**.: _Let \(\mathfrak{g}\) be a Lie algebra of dimension \(n\) and \(\sigma=(1\,\ldots\,n)\in S_{n}\) where \(S_{n}\) is the permutation group acting on \(n\) elements._
_A frame, \(\{X_{1},\ldots,X_{n}\}\), is a Milnor frame if for all \(i,j\in\{1,\ldots,n\}\), there exist \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{R}\) such that_
\[[X_{i},X_{j}]=\begin{cases}\lambda_{\sigma^{2}(i)}X_{\sigma^{2}(i)}&\sigma(i)=j\\ -\lambda_{\sigma^{2}(j)}X_{\sigma^{2}(j)}&\sigma(j)=i\\ 0&\text{otherwise}\end{cases}. \tag{3}\]

The terms \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{R}\) are referred to as the structure constants relative to the Milnor frame \(\{X_{1},\ldots,X_{n}\}\). We may represent the Lie bracket between two elements of a Milnor frame in the following way: let \(\delta_{ij}\) be \(1\) if \(i=j\) and \(0\) if \(i\neq j\). For a Milnor frame \(\{X_{1},\ldots,X_{n}\}\) with structure constants \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{R}\) and \(i,j\in\{1,\ldots,n\}\),
\[[X_{i},X_{j}]=\delta_{\sigma(i)j}\lambda_{\sigma^{2}(i)}X_{\sigma^{2}(i)}-\delta_{i\sigma(j)}\lambda_{\sigma(i)}X_{\sigma(i)}. \tag{4}\]
If we let \(E_{ij}\) denote the matrix whose only nonzero coefficient is \(1\) at the entry \((i,j)\), then for a Milnor frame \(\{X_{1},\ldots,X_{n}\}\) with structure constants \(\{\lambda_{1},\ldots,\lambda_{n}\}\),
\[\operatorname{ad}_{X_{i}}=\lambda_{\sigma^{2}(i)}E_{\sigma^{2}(i)\sigma(i)}-\lambda_{\sigma(i)}E_{\sigma(i)\sigma^{-1}(i)}. \tag{5}\]
Knowing these other representations can simplify future computations.

**Remark 2.1.1**.: For the remainder of the paper, \(E_{ij}\) will denote the \(n\times n\) matrix whose entry at \((i,j)\) is \(1\) with all other entries trivial.

### Examples: \(\mathfrak{h}^{3}\) and \(\mathfrak{h}^{4}\)

We start with the classic example of a nilpotent Lie algebra with a Milnor frame.

**Example 2.2.1**.: The \(3\)-dimensional Heisenberg group is a subgroup of \(M_{3}(\mathbb{R})\) whose elements are of the form
\[\mathcal{H}=\left\{\begin{pmatrix}1&a&b\\ 0&1&c\\ 0&0&1\end{pmatrix}\Big{|}\,a,b,c\in\mathbb{R}\right\}.\]
The Lie algebra of \(\mathcal{H}\) is
\[\mathfrak{h}^{3}=\left\{\begin{pmatrix}0&a&b\\ 0&0&c\\ 0&0&0\end{pmatrix}\Big{|}\,a,b,c\in\mathbb{R}\right\}.\]
Consider the frame \(X_{1}:=E_{12}\), \(X_{2}:=E_{23}\) and \(X_{3}:=E_{13}\). We obtain the bracket relation \([X_{1},X_{2}]=X_{3}\) with all other bracket relations trivial. If \(\mathfrak{a}\) is an abelian Lie algebra of dimension \(n\geq 1\), then \(\mathfrak{h}^{3}\oplus\mathfrak{a}\) forms an \((n+3)\)-dimensional Lie algebra with a Milnor frame. Note that \(\mathfrak{h}^{3}\) is \(2\)-step nilpotent, as shown below:
\[\operatorname{ad}_{X_{1}}=E_{32}\implies\operatorname{ad}_{X_{1}}^{2}=E_{32}^{2}=0\]
\[\operatorname{ad}_{X_{2}}=-E_{31}\implies\operatorname{ad}_{X_{2}}^{2}=E_{31}^{2}=0\]
\[\operatorname{ad}_{X_{3}}=0.\]
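The 2-step nilpotency computation can be double-checked numerically. The following is a small sketch (our own illustration, not part of the paper) that builds these \(\operatorname{ad}\) matrices with NumPy and verifies that every product of two of them vanishes.

```python
import numpy as np

def E(i, j, n=3):
    """n x n matrix with a single 1 in entry (i, j), 1-indexed (Remark 2.1.1)."""
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

# ad matrices of the h^3 Milnor frame, where [X1, X2] = X3 is the only
# nontrivial bracket: ad_{X1}(X2) = X3 and ad_{X2}(X1) = -X3.
ads = [E(3, 2), -E(3, 1), np.zeros((3, 3))]

# Every product of two ad's vanishes, so [X, [Y, Z]] = 0 for all frame
# elements, i.e., h^3 is 2-step nilpotent.
for A in ads:
    for B in ads:
        assert np.allclose(A @ B, 0)
```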
This observation will allow us to show that \(\mathfrak{h}^{4}\not\cong\mathfrak{h}^{3}\oplus\mathfrak{a}\).

**Example 2.2.2**.: Let \(\mathfrak{h}^{4}\) be the \(4\)-dimensional Lie algebra with a frame \(\{X_{1},X_{2},X_{3},X_{4}\}\) and bracket relations
\[[X_{1},X_{2}]=X_{3},\quad[X_{2},X_{3}]=X_{4}.\]
This Lie algebra is 3-step nilpotent, as shown below:
\[\operatorname{ad}_{X_{1}}=E_{32}\implies\operatorname{ad}_{X_{1}}^{2}=E_{32}^{2}=0\]
\[\operatorname{ad}_{X_{2}}=E_{43}-E_{31}\implies\operatorname{ad}_{X_{2}}^{2}=E_{43}^{2}-E_{43}E_{31}-E_{31}E_{43}+E_{31}^{2}=-E_{41}\implies\operatorname{ad}_{X_{2}}^{3}=-E_{41}(E_{43}-E_{31})=0\]
\[\operatorname{ad}_{X_{3}}=-E_{42}\implies\operatorname{ad}_{X_{3}}^{2}=E_{42}^{2}=0\]
\[\operatorname{ad}_{X_{4}}=0.\]
Because every 2-dimensional Lie algebra is either abelian or non-nilpotent, \(\mathfrak{h}^{4}\) cannot be split into two 2-dimensional Lie algebras. As shown in Example 2.2.1, the Lie algebra \(\mathfrak{h}^{3}\) is at most 2-step nilpotent, and so \(\mathfrak{h}^{4}\) is not isomorphic to \(\mathfrak{h}^{3}\oplus\mathfrak{a}\) with \(\mathfrak{a}\) an abelian Lie algebra of dimension 1. Set \(F_{1}=X_{4}\), \(F_{2}=X_{3}\), \(F_{3}=X_{1}\) and \(F_{4}=X_{2}\). The non-trivial structure constants with respect to the frame \(\{F_{1},\ldots,F_{4}\}\) are \(C_{2,4}^{1}=-1\), \(C_{3,4}^{2}=1\). By Lemma 3 of [5], \(\mathfrak{h}^{4}\) is the unique 3-step nilpotent Lie algebra of dimension 4.

### Algebraic Properties of Lie Algebras with Milnor Frames

Given a Lie algebra of dimension \(n\geq 4\) with a Milnor frame, the upper bound for the total number of non-zero structure constants is at most \(n\). The next proposition shows that this upper bound cannot be achieved.

**Proposition 2.3.1**.: _Let \(\mathfrak{g}\) be an \(n\)-dimensional Lie algebra, \(n\geq 4\), with a Milnor frame \(\{X_{1},\ldots,X_{n}\}\) and structure constants \(\lambda_{1},\ldots,\lambda_{n}\). Then \(\lambda_{i}\lambda_{\sigma^{2}(i)}=0\) for any \(i\in\{1,\ldots,n\}\)._

Proof.: For any \(i\in\{1,\ldots,n\}\),
\[[X_{\sigma^{3}(i)},[X_{i},X_{\sigma(i)}]]=\lambda_{\sigma^{2}(i)}[X_{\sigma^{3}(i)},X_{\sigma^{2}(i)}]=-\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{4}(i)}X_{\sigma^{4}(i)}\]
\[[X_{i},[X_{\sigma(i)},X_{\sigma^{3}(i)}]]=[X_{i},0]=0\]
\[[X_{\sigma(i)},[X_{\sigma^{3}(i)},X_{i}]]=\begin{cases}[X_{\sigma(i)},0]=0&n>4\\ \lambda_{\sigma(i)}[X_{\sigma(i)},X_{\sigma(i)}]=0&n=4\end{cases}\]
By the Jacobi Identity,
\[0=[X_{\sigma^{3}(i)},[X_{i},X_{\sigma(i)}]]+[X_{i},[X_{\sigma(i)},X_{\sigma^{3}(i)}]]+[X_{\sigma(i)},[X_{\sigma^{3}(i)},X_{i}]]=-\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{4}(i)}X_{\sigma^{4}(i)},\]
and hence \(\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{4}(i)}=0\). Considering that \(i\) was arbitrary, we obtain the desired identity.

The next subsection provides a combinatorial argument that gives us a strict upper bound.

### Milnor Graphs

Let \(\lambda:=(\lambda_{1},\ldots,\lambda_{n})\in\mathbb{R}^{n}\) such that \(\lambda_{i}\lambda_{\sigma^{2}(i)}=0\). Let \(V=\{v_{1},\ldots,v_{n}\}\) and \(E=\{\{v_{i},v_{\sigma^{2}(i)}\}\}_{i}\). Note that every edge \(e\in E\) contains a vertex \(v_{i}\) such that \(\lambda_{i}=0\). We call this graph a _Milnor graph_. We will show later that there exists a one-to-one correspondence between Milnor graphs and Lie algebras with Milnor frames. Let \(A=\{v_{i}\in V\,|\,\lambda_{i}=0\}\) and let \(B=V\setminus A\). For each \(v_{i}\in B\), define the function \(f:B\to A\) as \(v_{i}\mapsto v_{\sigma^{2}(i)}\). This map is well defined and injective, so that \(|B|\leq|A|\). Thus
\[2|B|\leq|B|+|A|=|V|=n\implies|B|\leq\frac{n}{2}.\]
Because \(|B|\) is an integer, \(|B|\leq\lfloor n/2\rfloor\).
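The bound \(|B|\leq\lfloor n/2\rfloor\) can be checked by brute force for small \(n\). The sketch below (our own illustration, not part of the paper) enumerates all 0/1 vectors \((\lambda_{1},\ldots,\lambda_{n})\) satisfying \(\lambda_{i}\lambda_{\sigma^{2}(i)}=0\) and reports the maximum number of nonzero entries.

```python
from itertools import product

def max_nonzero(n):
    """Maximum number of nonzero structure constants among 0/1 vectors
    (lambda_1, ..., lambda_n) satisfying lambda_i * lambda_{sigma^2(i)} = 0,
    found by brute-force enumeration (sigma^2 shifts indices by 2 mod n)."""
    return max(
        sum(lam)
        for lam in product([0, 1], repeat=n)
        if all(lam[i] * lam[(i + 2) % n] == 0 for i in range(n))
    )

for n in range(4, 9):
    print(n, max_nonzero(n), n // 2)  # compare with the bound floor(n/2)
```

Running this prints, for each \(n\), the attained maximum alongside \(\lfloor n/2\rfloor\), illustrating in particular that the bound is tight when \(4|n\).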
The bound is attained whenever \(4|n\). Let \(\mathfrak{g}\) be a Lie algebra with a Milnor frame and let \(\mathcal{G}=(V,E)\) be the associated Milnor graph. Observe that there are two disjoint cycles \((v_{1},v_{3},\ldots,v_{1})\) and \((v_{2},v_{4},\ldots,v_{2})\). Suppose that \(\lambda_{1},\lambda_{2}\neq 0\). If \(B\subset V\) is the set of all vertices \(v_{i}\) such that \(\lambda_{i}\neq 0\), then we may place \(v_{1},v_{2}\in B\), \(v_{3},v_{4}\in V\setminus B\), and so on. Because \(4|n\) we obtain \(|A|=|B|\), and so by the previous observation \(|B|=\frac{n}{2}=\frac{4k}{2}=2k\) for some \(k\in\mathbb{Z}\).

Figure 1: Milnor Graphs

As shown previously, Milnor graphs determine an upper bound on the total number of non-trivial structure constants. We obtain more applications of Milnor graphs if we direct the edges in a natural way.

**Definition 2.4.1**.: _Let \(\mathfrak{g}\) be a Lie algebra with a Milnor frame whose structure constants are \(\lambda_{1},\dots,\lambda_{n}\in\mathbb{R}\). Define the directed graph \(\mathcal{G}=(V,E)\) with vertices \(V=\{v_{1},\dots,v_{n}\}\) and edges \(E=\{(v_{i},v_{\sigma^{2}(i)})\,|\,1\leq i\leq n\}\). We call \(\mathcal{G}\) the directed Milnor graph with respect to the structure constants \(\lambda_{1},\dots,\lambda_{n}\)._

The definition above shows that we may construct a graph which represents a Lie algebra with a Milnor frame. A consequence of Theorem 1.0.1 is that each Lie algebra of dimension \(n\geq 4\) with a Milnor frame has a Milnor frame whose structure constants are either \(0\) or \(1\). This observation, along with the next proposition, will show that there is a one-to-one correspondence between Lie algebras with Milnor frames and their associated Milnor graphs.

**Proposition 2.4.1**.: _Let \(\mathcal{G}=(V,E)\) with vertices \(V=\{v_{1},\dots,v_{n}\}\) and edges \(E=\{(v_{i},v_{\sigma^{2}(i)})\,|\,1\leq i\leq n\}\). Suppose \(A\subset V\) is such that for every \(e\in E\), if \(e=(v_{i},v_{\sigma^{2}(i)})\) then \(v_{i}\) or \(v_{\sigma^{2}(i)}\) is in \(A\). Define_
\[\lambda_{i}=\begin{cases}1&v_{i}\not\in A\\ 0&v_{i}\in A\end{cases}\]
_so that \(\lambda_{i}\lambda_{\sigma^{2}(i)}=0\). Define an algebra on the span of linearly independent vectors \(X_{1},\dots,X_{n}\) by the bracket relations \([X_{i},X_{\sigma(i)}]=\lambda_{\sigma^{2}(i)}X_{\sigma^{2}(i)}\), with all other bracket relations trivial. Then this algebra is a Lie algebra._

Proof.: Bilinearity and anti-symmetry of the bracket are given by construction. To show that this algebra is a Lie algebra, we shall show that for any two elements \(X_{i},X_{j}\) of the frame \(\{X_{1},\dots,X_{n}\}\) the identity \(\mathrm{ad}_{[X_{i},X_{j}]}=\mathrm{ad}_{X_{i}}\mathrm{ad}_{X_{j}}-\mathrm{ad}_{X_{j}}\mathrm{ad}_{X_{i}}\) holds.
Observe
\[\mathrm{ad}_{[X_{i},X_{j}]}=\delta_{\sigma(i)j}\lambda_{\sigma^{2}(i)}\mathrm{ad}_{X_{\sigma^{2}(i)}}-\delta_{\sigma(j)i}\lambda_{\sigma^{2}(j)}\mathrm{ad}_{X_{\sigma^{2}(j)}}\]
\[=\delta_{\sigma(i)j}\lambda_{\sigma^{2}(i)}(\lambda_{\sigma^{4}(i)}E_{\sigma^{4}(i)\sigma^{2}(i)}-\lambda_{\sigma^{3}(i)}E_{\sigma^{3}(i)\sigma(i)})-\delta_{\sigma(j)i}\lambda_{\sigma^{2}(j)}(\lambda_{\sigma^{4}(j)}E_{\sigma^{4}(j)\sigma^{2}(j)}-\lambda_{\sigma^{3}(j)}E_{\sigma^{3}(j)\sigma(j)})\]
\[=\delta_{\sigma(i)j}\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{4}(i)}E_{\sigma^{4}(i)\sigma^{2}(i)}-\delta_{\sigma(i)j}\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{3}(i)}E_{\sigma^{3}(i)\sigma(i)}-\delta_{\sigma(j)i}\lambda_{\sigma^{2}(j)}\lambda_{\sigma^{4}(j)}E_{\sigma^{4}(j)\sigma^{2}(j)}+\delta_{\sigma(j)i}\lambda_{\sigma^{2}(j)}\lambda_{\sigma^{3}(j)}E_{\sigma^{3}(j)\sigma(j)}\]
\[=-\delta_{\sigma(i)j}\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{3}(i)}E_{\sigma^{3}(i)\sigma(i)}+\delta_{\sigma(j)i}\lambda_{\sigma^{2}(j)}\lambda_{\sigma^{3}(j)}E_{\sigma^{3}(j)\sigma(j)}\]
where the last equality follows from \(\lambda_{k}\lambda_{\sigma^{2}(k)}=0\) for any \(k\). Computing \(\mathrm{ad}_{X_{i}}\mathrm{ad}_{X_{j}}\) we find
\[\mathrm{ad}_{X_{i}}\mathrm{ad}_{X_{j}}=(\lambda_{\sigma^{2}(i)}E_{\sigma^{2}(i)\sigma(i)}-\lambda_{\sigma(i)}E_{\sigma(i)\sigma^{-1}(i)})(\lambda_{\sigma^{2}(j)}E_{\sigma^{2}(j)\sigma(j)}-\lambda_{\sigma(j)}E_{\sigma(j)\sigma^{-1}(j)})\]
\[=\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{2}(j)}\delta_{i\sigma(j)}E_{\sigma^{2}(i)\sigma(j)}-\lambda_{\sigma(i)}\lambda_{\sigma^{2}(j)}\delta_{i\sigma^{3}(j)}E_{\sigma(i)\sigma(j)}-\lambda_{\sigma^{2}(i)}\lambda_{\sigma(j)}\delta_{ij}E_{\sigma^{2}(i)\sigma^{-1}(j)}+\lambda_{\sigma(i)}\lambda_{\sigma(j)}\delta_{i\sigma^{2}(j)}E_{\sigma(i)\sigma^{-1}(j)}\]
\[=\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{2}(j)}\delta_{i\sigma(j)}E_{\sigma^{2}(i)\sigma(j)}-\lambda_{\sigma^{2}(i)}\lambda_{\sigma(j)}\delta_{ij}E_{\sigma^{2}(i)\sigma^{-1}(j)}\]
where the last equality holds by
\[\delta_{i\sigma^{3}(j)}\lambda_{\sigma(i)}\lambda_{\sigma^{2}(j)}=\begin{cases}0&i\neq\sigma^{3}(j)\\ \lambda_{\sigma^{4}(j)}\lambda_{\sigma^{2}(j)}=0&i=\sigma^{3}(j)\end{cases}\]
\[\delta_{i\sigma^{2}(j)}\lambda_{\sigma(i)}\lambda_{\sigma(j)}=\begin{cases}0&i\neq\sigma^{2}(j)\\ \lambda_{\sigma^{3}(j)}\lambda_{\sigma(j)}=0&i=\sigma^{2}(j).\end{cases}\]
Thus we have the following:
\[\mathrm{ad}_{X_{i}}\mathrm{ad}_{X_{j}}-\mathrm{ad}_{X_{j}}\mathrm{ad}_{X_{i}}=\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{2}(j)}\delta_{i\sigma(j)}E_{\sigma^{2}(i)\sigma(j)}-\lambda_{\sigma^{2}(i)}\lambda_{\sigma(j)}\delta_{ij}E_{\sigma^{2}(i)\sigma^{-1}(j)}-\lambda_{\sigma^{2}(j)}\lambda_{\sigma^{2}(i)}\delta_{\sigma(i)j}E_{\sigma^{2}(j)\sigma(i)}+\lambda_{\sigma^{2}(j)}\lambda_{\sigma(i)}\delta_{ji}E_{\sigma^{2}(j)\sigma^{-1}(i)}\]
\[=\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{2}(j)}\delta_{i\sigma(j)}E_{\sigma^{2}(i)\sigma(j)}-\lambda_{\sigma^{2}(j)}\lambda_{\sigma^{2}(i)}\delta_{\sigma(i)j}E_{\sigma^{2}(j)\sigma(i)}\]
\[=\lambda_{\sigma^{3}(j)}\lambda_{\sigma^{2}(j)}\delta_{i\sigma(j)}E_{\sigma^{3}(j)\sigma(j)}-\lambda_{\sigma^{3}(i)}\lambda_{\sigma^{2}(i)}\delta_{\sigma(i)j}E_{\sigma^{3}(i)\sigma(i)}\]
\[=-\delta_{\sigma(i)j}\lambda_{\sigma^{2}(i)}\lambda_{\sigma^{3}(i)}E_{\sigma^{3}(i)\sigma(i)}+\delta_{\sigma(j)i}\lambda_{\sigma^{2}(j)}\lambda_{\sigma^{3}(j)}E_{\sigma^{3}(j)\sigma(j)}\]
\[=\mathrm{ad}_{[X_{i},X_{j}]}.\]

Now that we have an association between directed Milnor graphs and Lie algebras with Milnor frames, we may more easily construct examples of Lie algebras with Milnor frames. Suppose we have two directed Milnor graphs \(\mathcal{G}=(\{v_{1},\ldots,v_{n}\}=V=A\cup B,E)\) and \(\mathcal{G}^{\prime}=(\{w_{1},\ldots,w_{m}\}=V^{\prime}=A^{\prime}\cup B^{\prime},E^{\prime})\), where every edge of \(E\) meets \(A\) and every edge of \(E^{\prime}\) meets \(A^{\prime}\). Additionally suppose that \(v_{1},v_{2}\in A\) and \(w_{1},w_{2}\in A^{\prime}\). Define \(\mathcal{G}\#\mathcal{G}^{\prime}=(V\cup V^{\prime},F)\) to be the graph obtained by deleting the edges \((v_{n-1},v_{1}),(v_{n},v_{2}),(w_{m-1},w_{1}),(w_{m},w_{2})\) and forming the edges \((v_{n-1},w_{1}),(v_{n},w_{2}),(w_{m-1},v_{1}),(w_{m},v_{2})\). Through relabeling we let \(\{u_{1}=v_{1},\ldots,u_{n}=v_{n},u_{n+1}=w_{1},\ldots,u_{n+m}=w_{m}\}=V\cup V^{\prime}\), where one can see \(F=\{(u_{i},u_{\sigma^{2}(i)})\,|\,1\leq i\leq n+m\}\). Furthermore, for any \(f\in F\) there exists \(u_{i}\in A\cup A^{\prime}\) such that \(f\) is \((u_{i},u_{\sigma^{2}(i)})\) or \((u_{\sigma^{-2}(i)},u_{i})\), so that \(\mathcal{G}\#\mathcal{G}^{\prime}\) is a directed Milnor graph.

It turns out that if \(\mathcal{G}\) is associated to the Lie algebra \(\mathfrak{g}\) and \(\mathcal{G}^{\prime}\) is associated to the Lie algebra \(\mathfrak{g}^{\prime}\), then \(\mathcal{G}\#\mathcal{G}^{\prime}\) is associated to the direct sum \(\mathfrak{g}\oplus\mathfrak{g}^{\prime}\). Let \(\{X_{1},\ldots,X_{n}\}\) be the Milnor frame constructed from the directed Milnor graph \(\mathcal{G}\) with structure constants \(\{\lambda_{1},\ldots,\lambda_{n}\}\) and \(\{Y_{1},\ldots,Y_{m}\}\) be the Milnor frame constructed from the directed Milnor graph \(\mathcal{G}^{\prime}\) with structure constants \(\{\eta_{1},\ldots,\eta_{m}\}\). Define \(W_{i}=(X_{i},0)\) for \(1\leq i\leq n\) and \(W_{n+j}=(0,Y_{j})\) for \(1\leq j\leq m\). We claim that \(\{W_{k}\,|\,1\leq k\leq n+m\}\) forms a Milnor frame. For \(1\leq k\leq n-2\) and \(1\leq\ell\leq m-2\),
\[\mathrm{ad}_{W_{k}}=\mathrm{ad}_{X_{k}}\oplus 0=(\lambda_{\sigma^{2}(k)}(E_{\sigma^{2}(k)\sigma(k)})_{n\times n}-\lambda_{\sigma(k)}(E_{\sigma(k)\sigma^{-1}(k)})_{n\times n})\oplus 0=\lambda_{\sigma^{2}(k)}(E_{\sigma^{2}(k)\sigma(k)})_{(n+m)\times(n+m)}-\lambda_{\sigma(k)}(E_{\sigma(k)\sigma^{-1}(k)})_{(n+m)\times(n+m)}\]
\[\mathrm{ad}_{W_{n+\ell}}=0\oplus\mathrm{ad}_{Y_{\ell}}=0\oplus(\eta_{\sigma^{2}(\ell)}(E_{\sigma^{2}(\ell)\sigma(\ell)})_{m\times m}-\eta_{\sigma(\ell)}(E_{\sigma(\ell)\sigma^{-1}(\ell)})_{m\times m})=\eta_{\sigma^{2}(\ell)}(E_{\sigma^{2}(n+\ell)\sigma(n+\ell)})_{(n+m)\times(n+m)}-\eta_{\sigma(\ell)}(E_{\sigma(n+\ell)\sigma^{-1}(n+\ell)})_{(n+m)\times(n+m)}.\]
Finally, \(\mathrm{ad}_{W_{n-1}}=\mathrm{ad}_{W_{n}}=\mathrm{ad}_{W_{n+m-1}}=\mathrm{ad}_{W_{n+m}}=0\). And so \(\mathcal{G}\#\mathcal{G}^{\prime}\) corresponds to \(\mathfrak{g}\oplus\mathfrak{g}^{\prime}\).

Given a Milnor graph \(\mathcal{G}\), one may consider reversing the above process to obtain two Milnor graphs \(\mathcal{G}_{1},\mathcal{G}_{2}\) such that \(\mathcal{G}_{1}\#\mathcal{G}_{2}=\mathcal{G}\). This suggests that any Lie algebra with a Milnor frame splits into lower-dimensional Lie algebras with Milnor frames, as stated in Theorem 1.0.1. A short numerical illustration of the \(\#\) construction is sketched below; we then prove Theorem 1.0.1 by way of the following lemmas.
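The promised illustration is a small self-contained sketch of our own (not the paper's notation verbatim): indices are 0-based, \(\sigma(i)=i+1\bmod n\), and we normalize each \(\lambda\)-vector so that its first two entries vanish, which keeps the glued blocks decoupled.

```python
# Numerical sketch of Proposition 2.4.1 and the # construction (own normalization).
import numpy as np

def bracket_table(lam):
    """Structure constants for [X_i, X_{sigma(i)}] = lam_{sigma^2(i)} X_{sigma^2(i)}."""
    n = len(lam)
    C = np.zeros((n, n, n))          # C[i, j, k]: coefficient of X_k in [X_i, X_j]
    for i in range(n):
        j, k = (i + 1) % n, (i + 2) % n
        C[i, j, k], C[j, i, k] = lam[k], -lam[k]
    return C

def jacobi_defect(C):
    """Cyclic sum of [[X_i, X_j], X_k]; it vanishes iff the Jacobi identity holds."""
    t = np.einsum('ijm,mkl->ijkl', C, C)
    return t + np.transpose(t, (1, 2, 0, 3)) + np.transpose(t, (2, 0, 1, 3))

lam_h4 = [0, 0, 1, 1]        # [X_0,X_1] = X_2, [X_1,X_2] = X_3: the algebra h^4
lam_h3 = [0, 0, 1, 0, 0]     # h^3 plus a 2-dimensional abelian factor
assert np.allclose(jacobi_defect(bracket_table(lam_h4)), 0)   # Proposition 2.4.1

# Gluing as in G # G': concatenation keeps the two blocks decoupled, and the
# result is again an admissible lambda-vector, so it defines the direct sum.
glued = lam_h4 + lam_h3
n = len(glued)
assert all(glued[i] * glued[(i + 2) % n] == 0 for i in range(n))
assert np.allclose(jacobi_defect(bracket_table(glued)), 0)
```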
**Lemma 2.4.1**.: _Let \(\mathfrak{g}\) be a 3-dimensional Lie algebra with a Milnor frame whose structure constants satisfy \(\lambda_{i}=\lambda_{j}=0\) and \(\lambda_{k}\neq 0\) for \(\{i,j,k\}=\{1,2,3\}\). Then \(\mathfrak{g}\cong\mathfrak{h}^{3}\)._

Proof.: If \(\mathfrak{g}\) is defined through the bracket relation \([X_{i},X_{j}]=\lambda_{k}X_{k}\), then by setting \(Y_{1}=X_{i}\), \(Y_{2}=X_{j}\) and \(Y_{3}=X_{k}\) we obtain \([Y_{1},Y_{2}]=\lambda_{3}Y_{3}\). By rescaling \(Y_{3}\) to \(\lambda_{3}Y_{3}\), the frame \(\{Y_{1},Y_{2},\lambda_{3}Y_{3}\}\) forms a Milnor frame. This shows that \(\mathfrak{g}\cong\mathfrak{h}^{3}\).

**Lemma 2.4.2**.: _Let \(\mathfrak{g}\) be a 4-dimensional Lie algebra with a Milnor frame whose structure constants satisfy \(\lambda_{i}=\lambda_{j}=0\) and \(\lambda_{k},\lambda_{\ell}\neq 0\) for \(\{i,j,k,\ell\}=\{1,2,3,4\}\). Then \(\mathfrak{g}\cong\mathfrak{h}^{4}\)._

Proof.: Because \(\{X_{1},X_{2},X_{3},X_{4}\}\) is a Milnor frame, \(\lambda_{k}\lambda_{\ell}\neq 0\) implies that \(\sigma(\ell)=k\) or \(\sigma(k)=\ell\). Without loss of generality, assume our bracket relations are
\[[X_{i},X_{j}]=\lambda_{k}X_{k}\quad[X_{j},X_{k}]=\lambda_{\ell}X_{\ell}.\]
By relabeling and rescaling we obtain that \(\{X_{i}=Y_{1},X_{j}=Y_{2},\lambda_{k}X_{k}=Y_{3},\lambda_{k}\lambda_{\ell}X_{\ell}=Y_{4}\}\) forms a Milnor frame. This shows \(\mathfrak{g}\cong\mathfrak{h}^{4}\).

Proof of Theorem 1.0.1.: If \(\mathfrak{g}\) is abelian then we are done. Suppose otherwise, and let \(\mathcal{I}\subset\{1,\ldots,n\}\) be the set of indices \(i\) such that \(\lambda_{i}\neq 0\) and \(\lambda_{\sigma^{-1}(i)}=0\). If \(i\in\{1,\ldots,n\}\) is such that \(\lambda_{i},\lambda_{\sigma^{-1}(i)}\neq 0\), then \(\lambda_{\sigma^{-2}(i)}=0\) by Proposition 2.3.1, and so \(\sigma^{-1}(i)\in\mathcal{I}\). Because \(\mathfrak{g}\) is nonabelian, some \(\lambda_{i}\neq 0\), and thus \(\mathcal{I}\neq\emptyset\). For \(i\in\mathcal{I}\), consider the adjoint operators of \(X_{\sigma^{-2}(i)},X_{\sigma^{-1}(i)},X_{i}\), and \(X_{\sigma(i)}\):
\[\operatorname{ad}_{X_{\sigma^{-2}(i)}}=\lambda_{i}E_{i\sigma^{-1}(i)}-\lambda_{\sigma^{-1}(i)}E_{\sigma^{-1}(i)\sigma^{-3}(i)}=\lambda_{i}E_{i\sigma^{-1}(i)}\]
\[\operatorname{ad}_{X_{\sigma^{-1}(i)}}=\lambda_{\sigma(i)}E_{\sigma(i)i}-\lambda_{i}E_{i\sigma^{-2}(i)}\]
\[\operatorname{ad}_{X_{i}}=\lambda_{\sigma^{2}(i)}E_{\sigma^{2}(i)\sigma(i)}-\lambda_{\sigma(i)}E_{\sigma(i)\sigma^{-1}(i)}=-\lambda_{\sigma(i)}E_{\sigma(i)\sigma^{-1}(i)}\]
\[\operatorname{ad}_{X_{\sigma(i)}}=\lambda_{\sigma^{3}(i)}E_{\sigma^{3}(i)\sigma^{2}(i)}-\lambda_{\sigma^{2}(i)}E_{\sigma^{2}(i)i}=\lambda_{\sigma^{3}(i)}E_{\sigma^{3}(i)\sigma^{2}(i)}\]
If \(\lambda_{\sigma(i)}=0\), then for all \(X\in\operatorname{Span}\{X_{\sigma^{-2}(i)},X_{\sigma^{-1}(i)},X_{i}\}=:\mathfrak{h}_{i}^{3}\) we have \(\operatorname{ad}_{X}(\mathfrak{h}_{i}^{3})\subset\mathfrak{h}_{i}^{3}\), so that \(\mathfrak{h}_{i}^{3}\) is a subalgebra. If \(\lambda_{\sigma(i)}\neq 0\), then \(\lambda_{\sigma^{3}(i)}=0\) and so \(\operatorname{ad}_{X_{\sigma(i)}}=0\). Thus for all \(X\in\operatorname{Span}\{X_{\sigma^{-2}(i)},X_{\sigma^{-1}(i)},X_{i},X_{\sigma(i)}\}=:\mathfrak{h}_{i}^{4}\) we have \(\operatorname{ad}_{X}(\mathfrak{h}_{i}^{4})\subset\mathfrak{h}_{i}^{4}\), making \(\mathfrak{h}_{i}^{4}\) a subalgebra.

Now we proceed by induction on the dimension of the Lie algebra. Assume that any Lie algebra of dimension \(k<n\) with a Milnor frame splits as stated in the theorem. Let \(\{X_{1},\ldots,X_{n}\}\), \(n\geq 4\), be a Milnor frame for the nonabelian Lie algebra \(\mathfrak{g}\).
Noting that the operator \(S_{k}:\mathfrak{g}\rightarrow\mathfrak{g}\), defined by \(c^{i}X_{i}\mapsto c^{i}X_{\sigma^{k}(i)}\), takes Milnor frames to Milnor frames, we may suppose without loss of generality that \(\lambda_{3}\neq 0\). We have two cases: either \(\lambda_{4}=0\) or \(\lambda_{4}\neq 0\).

Suppose \(\lambda_{4}=0\). We show that \(\operatorname{Span}\{X_{i}\}_{i\not\in\{1,2,3\}}\) forms a subalgebra. Because \(\operatorname{ad}_{X_{i}}=\lambda_{\sigma^{2}(i)}E_{\sigma^{2}(i)\sigma(i)}-\lambda_{\sigma(i)}E_{\sigma(i)\sigma^{-1}(i)}\), we only need to consider the collection of \(i\not\in\{1,2,3\}\) such that \(\sigma(i),\sigma^{2}(i)\in\{1,2,3\}\). Because \(\sigma(i)=3\implies i=2\) and \(\sigma^{2}(i)=3\implies i=1\), it must be the case that \(\sigma(i),\sigma^{2}(i)\in\{1,2\}\). Because \(\lambda_{1}=0=\lambda_{2}\), for any \(i\) such that \(\sigma(i),\sigma^{2}(i)\in\{1,2,3\}\),
\[\operatorname{ad}_{X_{i}}=\lambda_{\sigma^{2}(i)}E_{\sigma^{2}(i)\sigma(i)}-\lambda_{\sigma(i)}E_{\sigma(i)\sigma^{-1}(i)}=0.\]
This shows that \(\operatorname{Span}\{X_{i}\}_{i\not\in\{1,2,3\}}\) forms a subalgebra.

Suppose that \(\lambda_{4}\neq 0\). We only need to consider the collection of \(i\not\in\{1,2,3,4\}\) such that \(\sigma(i),\sigma^{2}(i)\in\{1,2,3,4\}\). If \(\sigma(i)=4\) or \(\sigma^{2}(i)=4\), then \(i\in\{2,3\}\). So for any \(i\not\in\{1,2,3,4\}\) with \(\sigma(i),\sigma^{2}(i)\in\{1,2,3,4\}\), in fact \(\sigma(i),\sigma^{2}(i)\in\{1,2\}\). Thus \(\operatorname{ad}_{X_{i}}=0\) for any such \(i\), and so \(\operatorname{Span}\{X_{i}\}_{i\not\in\{1,2,3,4\}}\) is a subalgebra.

By using the adjoint operator definition of a Milnor frame, one can see that the restricted frames give Milnor frames on these subalgebras. By induction and Lemmas 2.4.1 and 2.4.2, we obtain the desired result.

## Geometric Properties

### Metric Lie Algebras with an Orthonormal Milnor Frame

For this section we consider the collection of metric Lie algebras with an orthonormal Milnor frame.

#### 3.1.1 Ricci Tensors

Suppose we have a metric Lie algebra \((\mathfrak{g},g)\) with an orthonormal Milnor frame. By Theorem 1.0.1 the Lie algebra \(\mathfrak{g}\) is isomorphic to \((\oplus\mathfrak{h}^{3})\oplus(\oplus\mathfrak{h}^{4})\oplus\mathfrak{a}\), where \(\mathfrak{a}\) is an abelian Lie algebra. Because \(\mathfrak{g}\) admits an orthonormal Milnor frame, the metric \(g\) can be represented as
\[g=(\oplus g|_{\mathfrak{h}^{3}})\oplus(\oplus g|_{\mathfrak{h}^{4}})\oplus g|_{\mathfrak{a}}\]
and so
\[\mathrm{Ric}_{g}=(\oplus\mathrm{Ric}_{g}|_{\mathfrak{h}^{3}})\oplus(\oplus\mathrm{Ric}_{g}|_{\mathfrak{h}^{4}})\oplus\mathrm{Ric}_{g}|_{\mathfrak{a}}.\]
With this in mind, to determine information about the Ricci tensor for \((\mathfrak{g},g)\) it suffices to determine information about the Ricci tensors for \((\mathfrak{h}^{3},g|_{\mathfrak{h}^{3}})\) and \((\mathfrak{h}^{4},g|_{\mathfrak{h}^{4}})\).

Let \((\mathfrak{h}^{3},g)\) have an orthonormal Milnor frame whose non-trivial structure constant is \(\lambda_{3}\). A result of [10, pg. 305] shows that
\[\mathrm{Ric}_{g}|_{\mathfrak{h}^{3}}=\frac{\lambda_{3}^{2}}{2}\begin{bmatrix}-1&0&0\\ 0&-1&0\\ 0&0&1\end{bmatrix}. \tag{6}\]

Let \(\{X_{1},\ldots,X_{4}\}\) be an orthonormal Milnor frame for \(\mathfrak{h}^{4}\) with non-trivial structure constants \(\lambda_{3}\) and \(\lambda_{4}\). The sectional curvatures can be computed using the formula found in Lemma 1.1 of [10].
The sectional curvature between any two elements of the Milnor frame can be found in Figure 4. By Theorem 1.1 of Lauret and Will [8, pg. 3652], \(\mathrm{Ric}_{g}(X_{i},X_{j})=0\) for \(i\neq j\). By summing each row of Figure 4 we obtain the matrix representation of the Ricci tensor:
\[\mathrm{Ric}_{g}|_{\mathfrak{h}^{4}}=\frac{1}{2}\begin{bmatrix}-\lambda_{3}^{2}&0&0&0\\ 0&-\lambda_{3}^{2}-\lambda_{4}^{2}&0&0\\ 0&0&\lambda_{3}^{2}-\lambda_{4}^{2}&0\\ 0&0&0&\lambda_{4}^{2}\end{bmatrix}. \tag{7}\]
The Ricci tensors \(\mathrm{Ric}_{g}|_{\mathfrak{h}^{3}}\) and \(\mathrm{Ric}_{g}|_{\mathfrak{h}^{4}}\) have both positive and negative eigenvalues, which coincides with Theorem 2.4 of [10, pg. 301]. An immediate consequence of this theorem is that any non-commutative nilpotent metric Lie algebra does not admit an Einstein metric, that is, a metric \(g\) for which \(\mathrm{Ric}_{g}=\lambda g\) for some \(\lambda\in\mathbb{R}\). A well-known extension of Einstein metrics is given by metrics which satisfy the _Ricci soliton equation_. For a Riemannian manifold \((M,g)\), the metric \(g\) is a Ricci soliton if \(g\) satisfies the equation
\[-2\mathrm{Ric}_{g}=\lambda g+\mathcal{L}_{X}g,\quad\lambda\in\mathbb{R} \tag{8}\]
where \(X\) is a vector field and \(\mathcal{L}_{X}\) is the Lie derivative in the direction of \(X\).

#### 3.1.2 Ricci-Soliton Equation and Derivations

Given a metric Lie algebra \((\mathfrak{g},g)\) and an orthonormal frame \(\{X_{1},\ldots,X_{n}\}\), the Ricci tensor \(\mathrm{Ric}_{g}\) has a matrix representation with respect to the frame, \(\left[\mathrm{Ric}_{g}(X_{i},X_{j})\right]_{n\times n}\), as in (6) and (7). Let \((\mathfrak{g},g)\) be a nilpotent metric Lie algebra. Whenever a metric \(g\) admits a Ricci tensor of the form \(\mathrm{Ric}_{g}\in\mathbb{R}I+\mathrm{Der}(\mathfrak{g})\subset M_{n\times n}(\mathbb{R})\), where \(\mathrm{Der}(\mathfrak{g})\) is the set of derivations on \(\mathfrak{g}\), we say that \(g\) is a Ricci nilsoliton. By Corollary 2 of [4] and Theorem 1.1 of [6], \(g\) is a Ricci nilsoliton if and only if \(g\) is a Ricci soliton. For example, the Ricci tensor (6) can be represented as
\[\frac{\lambda_{3}^{2}}{2}\begin{bmatrix}-1&0&0\\ 0&-1&0\\ 0&0&1\end{bmatrix}=-\frac{3\lambda_{3}^{2}}{2}I+\frac{\lambda_{3}^{2}}{2}\begin{bmatrix}2&0&0\\ 0&2&0\\ 0&0&4\end{bmatrix}=-\frac{3\lambda_{3}^{2}}{2}I+D \tag{9}\]
where \(D\) is a derivation. The vector fields which satisfy (8) for \((\mathfrak{h}^{3},g)\) and \((\mathfrak{h}^{4},g^{\prime})\) cannot be left invariant [2, pg. 231]. By Theorem 1.0.1, if \((\mathfrak{g},g)\) is a metric Lie algebra with an orthonormal Milnor frame, then \((\mathfrak{g},g)\) is a Ricci nilsoliton where the vector field which satisfies equation (8) is not left-invariant.

**Example 3.1.1**.: Let \((\mathfrak{h}^{3},g)\) be a metric Lie algebra with an orthonormal Milnor frame whose non-trivial structure constant is \(\lambda_{3}\neq 0\). By Theorem 4.3 of [10] the matrix representation for the Ricci quadratic form is
\[\operatorname{Ric}_{g}=\frac{\lambda_{3}^{2}}{2}\begin{bmatrix}-1&0&0\\ 0&-1&0\\ 0&0&1\end{bmatrix}.\]
The above matrix may be written as
\[\frac{\lambda_{3}^{2}}{2}\begin{bmatrix}-1&0&0\\ 0&-1&0\\ 0&0&1\end{bmatrix}=-\frac{3\lambda_{3}^{2}}{2}I+\lambda_{3}^{2}\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&2\end{bmatrix}=-\frac{3\lambda_{3}^{2}}{2}I+D\]
where \(D\) is a derivation.

Suppose \((\mathfrak{h}^{4},g)\) has an orthonormal Milnor frame. We shall show precisely when \(g\) is a Ricci nilsoliton.
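Before turning to \(\mathfrak{h}^{4}\), the decomposition in Example 3.1.1 can be verified numerically. Below is a minimal self-contained sketch of our own (0-based indices; \(\lambda_{3}=2\) is an arbitrary sample value):

```python
# Numerical check (own sketch) of Example 3.1.1 on h^3.
import numpy as np

lam3 = 2.0                                   # arbitrary sample value of lambda_3
n = 3
C = np.zeros((n, n, n))                      # C[i, j, k]: coeff. of X_k in [X_i, X_j]
C[0, 1, 2], C[1, 0, 2] = lam3, -lam3         # [X_1, X_2] = lam3 * X_3 (0-indexed)

Ric = (lam3**2 / 2) * np.diag([-1.0, -1.0, 1.0])
D = Ric + (3 * lam3**2 / 2) * np.eye(n)      # candidate derivation
assert np.allclose(D, lam3**2 * np.diag([1.0, 1.0, 2.0]))

# D is a derivation: D[X_i, X_j] = [D X_i, X_j] + [X_i, D X_j] for all i, j.
lhs = np.einsum('ijm,km->ijk', C, D)
rhs = np.einsum('pi,pjk->ijk', D, C) + np.einsum('pj,ipk->ijk', D, C)
assert np.allclose(lhs, rhs)
```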
**Theorem 3.1.1**.: _Suppose \((\mathfrak{h}^{4},g)\) has an orthonormal Milnor frame \(\{X_{1},\ldots,X_{4}\}\) such that the structure constants are \(\lambda_{1},\lambda_{2}=0\) and \(\lambda_{3},\lambda_{4}\neq 0\). Then \(g\) is a Ricci nilsoliton if and only if \(|\lambda_{3}|=|\lambda_{4}|\)._

Proof.: If \(|\lambda_{3}|=|\lambda_{4}|\) then \(\lambda_{3}^{2}=\lambda_{4}^{2}\). The matrix representation of \(\operatorname{Ric}_{g}\) with respect to the frame \(\{X_{1},\ldots,X_{4}\}\) can be written as
\[2\mathrm{Ric}_{g}=\begin{bmatrix}-\lambda_{3}^{2}&0&0&0\\ 0&-\lambda_{3}^{2}-\lambda_{4}^{2}&0&0\\ 0&0&\lambda_{3}^{2}-\lambda_{4}^{2}&0\\ 0&0&0&\lambda_{4}^{2}\end{bmatrix}=\begin{bmatrix}-\lambda_{3}^{2}&0&0&0\\ 0&-2\lambda_{3}^{2}&0&0\\ 0&0&0&0\\ 0&0&0&\lambda_{3}^{2}\end{bmatrix}=\lambda_{3}^{2}\begin{bmatrix}-1&0&0&0\\ 0&-2&0&0\\ 0&0&0&0\\ 0&0&0&1\end{bmatrix} \tag{10}\]
where
\[\begin{bmatrix}-1&0&0&0\\ 0&-2&0&0\\ 0&0&0&0\\ 0&0&0&1\end{bmatrix}=-3I+\begin{bmatrix}2&0&0&0\\ 0&1&0&0\\ 0&0&3&0\\ 0&0&0&4\end{bmatrix}=-3I+D \tag{11}\]
and so by (10) and (11), \(2\mathrm{Ric}_{g}=-3\lambda_{3}^{2}I+\lambda_{3}^{2}D\) where \(D\) is a derivation. Thus \(g\) is a Ricci nilsoliton.

Now suppose that \(g\) is a Ricci nilsoliton. Let \(D\) be a derivation such that \(\mathrm{Ric}_{g}=cI+D\) for some constant \(c\in\mathbb{R}\). Because \(\mathrm{Ric}_{g}\) is diagonal with respect to the frame \(\{X_{1},\ldots,X_{4}\}\), \(D\) must be diagonal. For each \(i\), let \(D_{i}\in\mathbb{R}\) be such that \(DX_{i}=D_{i}X_{i}\). Then
\[\lambda_{3}D_{3}X_{3}=D(\lambda_{3}X_{3})=D[X_{1},X_{2}]=[DX_{1},X_{2}]+[X_{1},DX_{2}]=(D_{1}+D_{2})[X_{1},X_{2}]=\lambda_{3}(D_{1}+D_{2})X_{3} \tag{12}\]
and so \(D_{3}=D_{1}+D_{2}\). Similarly
\[\lambda_{4}D_{4}X_{4}=D[X_{2},X_{3}]=[DX_{2},X_{3}]+[X_{2},DX_{3}]=\lambda_{4}(D_{2}+D_{3})X_{4}\]
and so \(D_{4}=D_{2}+D_{3}=D_{1}+2D_{2}\). We obtain the system of equations
\[-\lambda_{3}^{2}=c+D_{1}\]
\[-\lambda_{3}^{2}-\lambda_{4}^{2}=c+D_{2}\]
\[\lambda_{3}^{2}-\lambda_{4}^{2}=c+D_{1}+D_{2}\]
\[\lambda_{4}^{2}=c+D_{1}+2D_{2} \tag{13}\]
which gives us the equation
\[\begin{bmatrix}-1&-1&0\\ -2&-1&-3\\ 2&2&3\end{bmatrix}\begin{bmatrix}c\\ D_{1}\\ D_{2}\end{bmatrix}=\lambda_{3}^{2}\begin{bmatrix}1\\ 1\\ 1\end{bmatrix}\]
and so
\[\begin{bmatrix}c\\ D_{1}\\ D_{2}\end{bmatrix}=\frac{\lambda_{3}^{2}}{3}\begin{bmatrix}-3&-3&-3\\ 0&3&3\\ 2&0&1\end{bmatrix}\begin{bmatrix}1\\ 1\\ 1\end{bmatrix}=\lambda_{3}^{2}\begin{bmatrix}-3\\ 2\\ 1\end{bmatrix}.\]
Thus \(\lambda_{4}^{2}=c+D_{1}+2D_{2}=\lambda_{3}^{2}(-3+2+2)=\lambda_{3}^{2}\), which implies \(|\lambda_{4}|=|\lambda_{3}|\).

Now we may generalize to any metric Lie algebra with an orthonormal Milnor frame.

**Corollary 3.1.1**.: _Let \((\mathfrak{g},g)\) be a metric Lie algebra with an orthonormal Milnor frame \(\{X_{1},\ldots,X_{n}\}\) whose structure constants are \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{R}\). The metric \(g\) is a Ricci nilsoliton if and only if for any \(i\) such that \(\text{Span}\{X_{i},X_{\sigma(i)},X_{\sigma^{2}(i)},X_{\sigma^{3}(i)}\}\cong\mathfrak{h}^{4}\), \(|\lambda_{\sigma^{2}(i)}|=|\lambda_{\sigma^{3}(i)}|\)._

### Non-orthogonal Milnor frames

Let \((\mathfrak{g},g)\) be a metric Lie algebra with a Milnor frame. By Theorem 1.0.1 we know that \(\mathfrak{g}\cong(\oplus\mathfrak{h}^{3})\oplus(\oplus\mathfrak{h}^{4})\oplus\mathfrak{a}\). Let us assume that such a decomposition of \(\mathfrak{g}\) is pair-wise orthogonal.
Each \(\mathfrak{h}^{3}\) is a 3-dimensional unimodular metric Lie algebra, and so by Lemma 4.1 of [10] there exists an orthonormal Milnor frame for \(\mathfrak{h}^{3}\). Thus, in order for \(\mathfrak{g}\) to have an orthonormal Milnor frame, the subalgebras in the decomposition of \(\mathfrak{g}\) isomorphic to \(\mathfrak{h}^{4}\) must have an orthonormal Milnor frame. Now we determine precisely when \(\mathfrak{h}^{4}\) admits an orthonormal Milnor frame.

#### 3.2.1 Metrics on \(\mathfrak{h}^{4}\)

Let \(\mathfrak{g}\) be a Lie algebra formed by the bracket relations
\[[X_{2},X_{4}]=X_{1},\quad[X_{3},X_{4}]=X_{2}.\]
By setting \(Y_{1}=X_{2}+X_{3}\), \(Y_{2}=X_{4}\), \(Y_{3}=X_{1}+X_{2}\), \(Y_{4}=-X_{1}\) we obtain \(\mathfrak{h}^{4}\), as shown in Figure 5.

Figure 5: Entry \(i,j\) as \([Y_{i},Y_{j}]\)

Applying the Gram-Schmidt algorithm to the frame \(\{X_{1},\ldots,X_{4}\}\) gives us an orthonormal frame \(\{F_{1},\ldots,F_{4}\}\) for which there are at most three non-trivial structure constants \(C^{1}_{2,4},C^{2}_{3,4}>0\), \(C^{1}_{3,4}\in\mathbb{R}\), as shown on page 254 of [5]. We found it easier to determine information about the structure constants with respect to the orthonormal frame \(\{F_{1},\ldots,F_{4}\}\) than with respect to the orthonormal frame obtained by applying the Gram-Schmidt algorithm to \(\{Y_{1},\ldots,Y_{4}\}\). We now show that \(\mathfrak{h}^{4}\) has an orthonormal Milnor frame if and only if \(C^{1}_{3,4}=0\).

**Theorem 3.2.1**.: _Let \(\{F_{1},F_{2},F_{3},F_{4}\}\) be an orthonormal frame for a Lie algebra \(\mathfrak{g}\) whose non-trivial structure constants are \(a=C^{1}_{2,4}>0\), \(b=C^{1}_{3,4}\in\mathbb{R}\), and \(c=C^{2}_{3,4}>0\). Let \(O(4)\) be the orthogonal group of \(4\times 4\) matrices with real coefficients. There exists \(T\in O(4)\) which makes \(\{TF_{1},TF_{2},TF_{3},TF_{4}\}\) into a Milnor frame with 2 non-zero structure constants if and only if \(b=0\)._

Proof.: We first show that the existence of such a \(T\) forces \(b=0\). Let \(T\in O(4)\) be a linear operator such that \(\{TF_{1},TF_{2},TF_{3},TF_{4}\}\) forms a Milnor frame. By Lemma 2.4.2 we may assume that the non-trivial structure constants of this Milnor frame are \(\lambda_{3},\lambda_{4}\neq 0\). For any \(i,j\) such that \([TF_{i},TF_{j}]=0\),
\[0=[TF_{i},TF_{j}]=[T_{i}^{p}F_{p},T_{j}^{q}F_{q}]=T_{i}^{p}T_{j}^{q}[F_{p},F_{q}]=\left[a(T_{i}^{2}T_{j}^{4}-T_{i}^{4}T_{j}^{2})+b(T_{i}^{3}T_{j}^{4}-T_{i}^{4}T_{j}^{3})\right]F_{1}+c(T_{i}^{3}T_{j}^{4}-T_{i}^{4}T_{j}^{3})F_{2}.\]
As a result, \(T_{i}^{3}T_{j}^{4}=T_{i}^{4}T_{j}^{3}\) and \(T_{i}^{2}T_{j}^{4}=T_{i}^{4}T_{j}^{2}\). By the fact that \([\mathfrak{g},\mathfrak{g}]\subseteq\mathrm{Span}\{F_{1},F_{2}\}\),
\[[TF_{1},TF_{2}]=\lambda_{3}TF_{3}\implies T_{3}^{3}=T_{3}^{4}=0\]
\[[TF_{2},TF_{3}]=\lambda_{4}TF_{4}\implies T_{4}^{3}=T_{4}^{4}=0.\]
We claim that \(T_{1}^{4}\) is necessarily trivial. Suppose otherwise. Because \(T_{3}^{4}=0\) and \([TF_{1},TF_{3}]=[TF_{1},TF_{4}]=0\), \(0=T_{1}^{2}T_{3}^{4}=T_{1}^{4}T_{3}^{2}\implies T_{3}^{2}=0\). Similarly \(0=T_{1}^{2}T_{4}^{4}=T_{1}^{4}T_{4}^{2}\implies T_{4}^{2}=0\), so that the vectors \([T_{1}^{4}\ldots T_{4}^{4}]^{t}\) and \([T_{1}^{3},\ldots,T_{4}^{3}]^{t}\) are linearly dependent, a contradiction to \(T\in O(4)\). Because \(T_{j}^{4}=0\) for \(j=1,3,4\) and \(T\in O(4)\), \(T_{2}^{4}T_{2}^{j}=\sum_{k=1}^{4}T_{k}^{4}T_{k}^{j}=\delta_{4j}\). We obtain that \(T_{2}^{4}=\pm 1\), which implies \(T_{2}^{1}=T_{2}^{2}=T_{2}^{3}=0\).
Similarly, \(T_{1}^{3}T_{1}^{j}=\sum_{k}T_{k}^{3}T_{k}^{j}=\delta_{3j}\) implies \(T_{1}^{3}=\pm 1\) and \(T_{1}^{1}=T_{1}^{2}=T_{1}^{4}=0\). The matrix representation of \(T\) under the basis \((F_{1},\ldots,F_{4})\) is given by
\[T=\begin{bmatrix}0&0&\gamma&\epsilon\\ 0&0&\delta&\iota\\ \alpha&0&0&0\\ 0&\beta&0&0\end{bmatrix}\]
where \(\alpha,\beta\in\{-1,1\}\) and \(\gamma,\delta,\epsilon,\iota\in\mathbb{R}\). Because \([TF_{2},TF_{3}]=\lambda_{4}TF_{4}\) and \(\mathrm{ad}_{F_{1}}=0\), \(\iota=0\). In order for \(0=g(TF_{3},TF_{4})=\gamma\epsilon\) to hold with \(\epsilon\neq 0\) (which is forced by the invertibility of \(T\)), we must have \(\gamma=0\). Finally,
\[[TF_{1},TF_{2}]=\alpha\beta[F_{3},F_{4}]=\alpha\beta(bF_{1}+cF_{2})=\lambda_{3}TF_{3}=\lambda_{3}\delta F_{2}.\]
Considering that \(\alpha\beta\in\{-1,1\}\), it must be the case that the linear operator \(T\) turns the frame \(\{F_{1},F_{2},F_{3},F_{4}\}\) into an orthonormal Milnor frame if and only if \(b=0\).

As a result of Theorem 3.2.1 we can construct a metric \(g\) so that the metric Lie algebra \((\mathfrak{h}^{4},g)\) does not admit an orthonormal Milnor frame. If \((\mathfrak{h}^{4},g)\) has an orthonormal Milnor frame \(\{X_{1},\ldots,X_{4}\}\), then there exists \(T\in O(4)\) such that \(\{TX_{1},\ldots,TX_{4}\}=\{F_{1},\ldots,F_{4}\}\), where \(\{F_{1},\ldots,F_{4}\}\) is as in Theorem 3.2.1. Thus \(\{T^{-1}F_{1},\ldots,T^{-1}F_{4}\}\) is an orthonormal Milnor frame, which implies that \(g([F_{3},F_{4}],F_{1})=0\). An example of a metric \(g\) such that \(g([F_{3},F_{4}],F_{1})\neq 0\) is found in Lemma 3 of [5].

In Section 3.1.2 we determined that a metric admitting an orthonormal Milnor frame \(\{X_{1},\ldots,X_{4}\}\) with non-trivial structure constants \(\lambda_{3},\lambda_{4}\neq 0\) is a Ricci nilsoliton precisely when \(|\lambda_{3}|=|\lambda_{4}|\). Combining Theorems 3.2.1 and 3.1.1 classifies all metrics on \(\mathfrak{h}^{4}\) which are Ricci nilsolitons.

**Corollary 3.2.1**.: _Let \(g\) be a metric on \(\mathfrak{h}^{4}\). Then \(g\) is a Ricci nilsoliton if and only if there exists an orthonormal Milnor frame \(\{X_{1},\ldots,X_{4}\}\) with non-trivial structure constants \(\lambda_{3},\lambda_{4}\neq 0\) such that \(|\lambda_{3}|=|\lambda_{4}|\), i.e., if and only if \((\mathfrak{h}^{4},g)\) has an orthonormal Milnor frame where the Ricci signature is \((-,-,0,+)\)._

Proof.: Let \(\{F_{1},\ldots,F_{4}\}\) be an orthonormal frame for the metric Lie algebra \((\mathfrak{h}^{4},g)\) whose structure constants are \(C_{2,4}^{1}=a>0\), \(C_{3,4}^{1}=b\in\mathbb{R}\) and \(C_{3,4}^{2}=c>0\). The matrix representation of \(\mathrm{Ric}_{g}\) with respect to the frame \(\{F_{1},\ldots,F_{4}\}\) is of the form
\[2\mathrm{Ric}_{g}=\begin{bmatrix}a^{2}+b^{2}&bc&0&0\\ bc&c^{2}-a^{2}&-ab&0\\ 0&-ab&-b^{2}-c^{2}&0\\ 0&0&0&-a^{2}-b^{2}-c^{2}\end{bmatrix}. \tag{14}\]
Suppose that \(g\) is a Ricci nilsoliton. Then there exist \(k\in\mathbb{R}\) and \(D\in\mathrm{Der}(\mathfrak{g})\) such that \(D=\mathrm{Ric}_{g}-kI\). Thus \(DF_{1}=(a^{2}+b^{2}-k)F_{1}+bcF_{2}\). Because \(\mathrm{ad}_{F_{1}}=0\),
\[0=D[F_{1},F_{4}]=[DF_{1},F_{4}]+[F_{1},DF_{4}]=(a^{2}+b^{2}-k)[F_{1},F_{4}]+bc[F_{2},F_{4}]=abcF_{1}\]
where \(a,c>0\), so that \(b=0\). By Theorem 3.2.1, \((\mathfrak{h}^{4},g)\) must have an orthonormal Milnor frame \(\{X_{1},\ldots,X_{4}\}\) with non-trivial structure constants \(\lambda_{3},\lambda_{4}\neq 0\). By Theorem 3.1.1, \(|\lambda_{3}|=|\lambda_{4}|\), and so the Ricci signature is of the form \((-,-,0,+)\).
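The obstruction \(abc\,F_{1}\) computed in the proof can be reproduced symbolically. The following is a sketch of our own using sympy (not part of the original argument):

```python
# Symbolic reproduction (own sketch) of the obstruction in Corollary 3.2.1:
# with [F_2,F_4] = a*F_1 and [F_3,F_4] = b*F_1 + c*F_2, applying a would-be
# derivation D = Ric_g - k*I to the trivial bracket [F_1,F_4] = 0 yields a*b*c*F_1.
import sympy as sp

a, b, c, k = sp.symbols('a b c k', real=True)
Ric2 = sp.Matrix([
    [a**2 + b**2,  b*c,           0,            0],
    [b*c,          c**2 - a**2,  -a*b,          0],
    [0,           -a*b,          -b**2 - c**2,  0],
    [0,            0,             0,           -a**2 - b**2 - c**2],
])
D = Ric2 - k * sp.eye(4)
# D F_1 = D[0,0] F_1 + D[1,0] F_2, and [F_1,F_4] = 0, so only [F_2,F_4] survives:
coeff_F1 = sp.expand(a * D[1, 0])
print(coeff_F1)   # a*b*c: since a, c > 0, a nilsoliton forces b = 0
```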
It is not the case that every metric whose Ricci signature is of the form \((-,-,0,+)\) admits an orthonormal Milnor frame. By Lemma 3 of [5], we can find a metric which admits an orthonormal frame \(\{f_{1},\ldots,f_{4}\}\) such that \(\mathrm{span}\{f_{1},\ldots,f_{4}\}=\mathfrak{h}^{4}\) and has non-trivial structure constants \(C^{1}_{2,4}=C^{2}_{3,4}>0\) and \(C^{1}_{3,4}\neq 0\). Because \(C^{1}_{3,4}\neq 0\), \((\mathfrak{h}^{4},g)\) does not admit an orthonormal Milnor frame.

**Example 3.2.1**.: Let \(\mathfrak{h}^{4}\) be a Lie algebra with a frame \(\{f_{1},\ldots,f_{4}\}\) with non-trivial structure constants \(C^{1}_{2,4}=C^{1}_{3,4}=C^{2}_{3,4}=1\). Existence of such a Lie algebra is given by Lemma 3 of [5]. Let \(g\) be a metric which makes \(\{f_{1},\ldots,f_{4}\}\) an orthonormal frame. The Ricci tensor is of the form
\[2\mathrm{Ric}_{g}=\begin{bmatrix}2&1&0&0\\ 1&0&-1&0\\ 0&-1&-2&0\\ 0&0&0&-3\end{bmatrix}. \tag{15}\]
The characteristic polynomial of the above matrix is \(x(x+3)(x^{2}-6)\), and so the signature of \(\mathrm{Ric}_{g}\) in (15) is \((-,-,0,+)\). If \(g\) is a Ricci nilsoliton then there exists \(k\in\mathbb{R}\) such that \(D:=\mathrm{Ric}_{g}-kI\) is a derivation. Because \([f_{1},f_{4}]=0\), \(D[f_{1},f_{4}]=0\) and so
\[0=D[f_{1},f_{4}]=[Df_{1},f_{4}]+[f_{1},Df_{4}]=[(2-k)f_{1}+f_{2},f_{4}]+[f_{1},(-3-k)f_{4}]=f_{1},\]
which is a contradiction. Thus \(g\) is not a Ricci nilsoliton.

#### 3.2.2 Metrics on \(\mathfrak{h}^{3}\oplus\mathfrak{h}^{3}\)

Now we consider the collection of metric Lie algebras \(\mathfrak{g}\) which contain an isomorphic copy of \(\mathfrak{h}^{3}\oplus\mathfrak{h}^{3}\). If we are able to provide a metric that does not allow \(\mathfrak{h}^{3}\oplus\mathfrak{h}^{3}\) to have an orthonormal Milnor frame, then we obtain Theorem 1.0.2 as a result. First we prove the following result from linear algebra.

**Lemma 3.2.1**.: _Let \(A,B\in M_{2}(\mathbb{R})\). Denote the \(i^{th}\) column of an arbitrary matrix \(C\in M_{2}(\mathbb{R})\) as \(C_{i}\). If \(A\) and \(B\) have the property that \(A_{i}\) and \(B_{j}\) are linearly dependent for any \(i,j\in\{1,2\}\), then either \(A\) or \(B\) has determinant \(0\)._

Proof.: Suppose that \(A\) is non-singular. By our hypothesis, \(B_{i}=c_{ij}A_{j}\) for some \(c_{ij}\in\mathbb{R}\) and \(i,j\in\{1,2\}\). Thus
\[\begin{bmatrix}A_{1}&A_{2}\end{bmatrix}\begin{bmatrix}0&c_{21}\\ c_{12}&0\end{bmatrix}=\begin{bmatrix}B_{1}&B_{2}\end{bmatrix}=\begin{bmatrix}A_{1}&A_{2}\end{bmatrix}\begin{bmatrix}c_{11}&0\\ 0&c_{22}\end{bmatrix}.\]
Because \(A\) is invertible, \(c_{ij}=0\) for all \(i,j\), which implies \(B_{1}=B_{2}=0\).

**Proposition 3.2.1**.: _Let \((\mathfrak{h}^{3}\oplus\mathfrak{h}^{3},g)\) be a metric Lie algebra. Let \(\{U_{1},U_{2},U_{3},V_{1},V_{2},V_{3}\}\) be a Milnor frame for \(\mathfrak{h}^{3}\oplus\mathfrak{h}^{3}\), where \(\mathfrak{h}^{3}\oplus\{0\}=\text{Span}\{U_{1},U_{2},U_{3}\}\) and \(\{0\}\oplus\mathfrak{h}^{3}=\text{Span}\{V_{1},V_{2},V_{3}\}\). If there exists a Lie isomorphism \(T\) such that \(g(T(\mathfrak{h}^{3}\oplus\{0\}),T(\{0\}\oplus\mathfrak{h}^{3}))=0\), then \(g(U_{3},V_{3})=0\)._

Proof.: We may always scale the frames \(\{U_{1},U_{2},U_{3}\}\) and \(\{V_{1},V_{2},V_{3}\}\) so that the structure constants are contained in \(\{0,1\}\). Without loss of generality we shall assume that the structure constants for the frame \(\{U_{1},\ldots,V_{3}\}\) are contained in \(\{0,1\}\).
Because \(T\) is a Lie isomorphism, \(T_{1}^{i}T_{2}^{j}[U_{i},U_{j}]=[TU_{1},TU_{2}]=TU_{3}=T_{3}^{k}U_{k}\). This shows \(T_{3}^{3}=T_{1}^{1}T_{2}^{2}-T_{1}^{2}T_{2}^{1}\), \(T_{3}^{6}=T_{1}^{4}T_{2}^{5}-T_{1}^{5}T_{2}^{4}\), and \(T_{3}^{k}=0\) for \(k=1,2,4,5\). Similarly, \(T_{6}^{3}=T_{4}^{1}T_{5}^{2}-T_{4}^{2}T_{5}^{1}\), \(T_{6}^{6}=T_{4}^{4}T_{5}^{5}-T_{4}^{5}T_{5}^{4}\), and \(T_{6}^{k}=0\) for \(k=1,2,4,5\). For \(i,j\) such that \([P_{i},P_{j}]=0\), \(P_{i},P_{j}\in\{U_{1},\ldots,V_{3}\}\), we have \(T_{i}^{1}T_{j}^{2}-T_{i}^{2}T_{j}^{1}=T_{i}^{4}T_{j}^{5}-T_{i}^{5}T_{j}^{4}=0\). The matrix representation of \(T\) can be given by the block matrix
\[T:=\begin{bmatrix}W&0&X&0\\ w^{t}&\det(W)&x^{t}&\det(X)\\ Y&0&Z&0\\ y^{t}&\det(Y)&z^{t}&\det(Z)\end{bmatrix}\]
where \(W,X,Y\) and \(Z\) are \(2\times 2\) matrices, \(w,x,y,z\) are \(2\times 1\) matrices, and each \(0\) is a \(2\times 1\) matrix. Letting \(C_{i}\) represent the \(i^{th}\) column of a matrix \(C\in M_{2}(\mathbb{R})\), we have the additional property that \(W_{i}\) and \(X_{j}\) are linearly dependent for any \(i,j\), and similarly \(Y_{i}\) and \(Z_{j}\) are linearly dependent.

Suppose that \(X\) is singular. If \(W\) or \(Z\) is singular then \(T\) will be singular, leading us to a contradiction. Let \(W\) and \(Z\) be nonsingular. By Lemma 3.2.1, \(Y\) is singular. \(T\) can be represented as
\[T=\begin{bmatrix}\omega&\chi\\ \gamma&\zeta\end{bmatrix}\]
where
\[\omega=\begin{bmatrix}W&0\\ w^{t}&\det(W)\end{bmatrix}\qquad\chi=\begin{bmatrix}X&0\\ x^{t}&0\end{bmatrix}\]
\[\gamma=\begin{bmatrix}Y&0\\ y^{t}&0\end{bmatrix}\qquad\zeta=\begin{bmatrix}Z&0\\ z^{t}&\det(Z)\end{bmatrix}.\]
Letting \(g\) be represented as
\[g=\begin{bmatrix}A&C\\ C^{t}&B\end{bmatrix},\]
we compute
\[\begin{bmatrix}\omega^{t}&\gamma^{t}\\ \chi^{t}&\zeta^{t}\end{bmatrix}\begin{bmatrix}A&C\\ C^{t}&B\end{bmatrix}\begin{bmatrix}\omega&\chi\\ \gamma&\zeta\end{bmatrix}=\begin{bmatrix}\omega^{t}A+\gamma^{t}C^{t}&\omega^{t}C+\gamma^{t}B\\ \chi^{t}A+\zeta^{t}C^{t}&\chi^{t}C+\zeta^{t}B\end{bmatrix}\begin{bmatrix}\omega&\chi\\ \gamma&\zeta\end{bmatrix}=\begin{bmatrix}\omega^{t}A\omega+\gamma^{t}C^{t}\omega+\omega^{t}C\gamma+\gamma^{t}B\gamma&\omega^{t}A\chi+\gamma^{t}C^{t}\chi+\omega^{t}C\zeta+\gamma^{t}B\zeta\\ \chi^{t}A\omega+\zeta^{t}C^{t}\omega+\chi^{t}C\gamma+\zeta^{t}B\gamma&\chi^{t}A\chi+\zeta^{t}C^{t}\chi+\chi^{t}C\zeta+\zeta^{t}B\zeta\end{bmatrix} \tag{16}\]
so that
\[\omega^{t}A\chi+\gamma^{t}C^{t}\chi+\omega^{t}C\zeta+\gamma^{t}B\zeta=0. \tag{17}\]
Finally we obtain
\[0=(\omega U_{3})^{t}A(\chi V_{3})+(\gamma U_{3})^{t}C^{t}(\chi V_{3})+(\omega U_{3})^{t}C(\zeta V_{3})+(\gamma U_{3})^{t}B(\zeta V_{3})=\det(W)\det(Z)(U_{3})^{t}CV_{3}=\det(W)\det(Z)g(U_{3},V_{3}) \tag{18}\]
where \(\det(W)\det(Z)\neq 0\), so that \(g(U_{3},V_{3})=0\).

If \(X\) is non-singular, then by Lemma 3.2.1 \(W\) and \(Z\) must be singular. In order for \(T\) to remain non-singular, \(Y\) must be nonsingular. If we post-compose \(T\) with the Lie isomorphism \(S=\begin{bmatrix}0&I\\ I&0\end{bmatrix}\), then an argument similar to the case where \(X\) is singular shows that \(g(U_{3},V_{3})=0\).

The contrapositive of the previous proposition states that if \(g(U_{3},V_{3})\neq 0\), then there is no linear operator \(T\in\operatorname{Aut}(\mathfrak{g})\) such that \(g(TU_{i},TV_{j})=0\) for all \(i,j\).
**Proposition 3.2.2**.: _There exists a metric \(g\) on the Lie algebra \(\mathfrak{h}^{3}\oplus\mathfrak{h}^{3}\) such that \((\mathfrak{h}^{3}\oplus\mathfrak{h}^{3},g)\) does not admit an orthonormal Milnor frame._

Proof.: Let \((\mathfrak{g}=\mathfrak{h}^{3}\oplus\mathfrak{h}^{3},g)\) be a metric Lie algebra with a Milnor frame \(\{X_{1},\ldots,X_{6}\}\), and let \(T:\mathfrak{g}\to\mathfrak{g}\) be a linear operator such that \(\{TX_{1},\ldots,TX_{6}\}\) forms a Milnor frame with two non-trivial structure constants. Denote these structure constants as \(\lambda_{1},\ldots,\lambda_{6}\in\{0,1\}\). Then either \(\lambda_{i}=\lambda_{\sigma(i)}=1\), \(\lambda_{i}=\lambda_{\sigma^{2}(i)}=1\), or \(\lambda_{i}=\lambda_{\sigma^{3}(i)}=1\) for some \(i\in\{1,\ldots,6\}\). It cannot be the case that \(\lambda_{i}=\lambda_{\sigma^{2}(i)}=1\) for some \(i\) by Proposition 2.3.1, which leaves us with two cases.

If there is some \(i\) such that \(\lambda_{i}=\lambda_{\sigma(i)}=1\), then \(\mathfrak{h}^{4}\subset\mathfrak{h}^{3}\oplus\mathfrak{h}^{3}\). By Theorem 3.2.1, we may choose a metric \(g\) so that \((\mathfrak{h}^{4},g|_{\mathfrak{h}^{4}})\) does not admit an orthonormal Milnor frame, and so \((\mathfrak{h}^{3}\oplus\mathfrak{h}^{3},g)\) does not admit an orthonormal Milnor frame.

If \(\lambda_{i}=\lambda_{\sigma^{3}(i)}=1\) for some \(i\), then \(T\in\operatorname{Aut}(\mathfrak{g})\). Choose a metric \(g\) which makes \(g(X_{3},X_{6})\neq 0\). For example, if \(g=I+\epsilon(E_{36}+E_{63})\) for sufficiently small \(\epsilon>0\) (so that \(g\) is positive definite), then \(g(X_{3},X_{6})=\epsilon>0\). By Proposition 3.2.1, there is no linear operator \(T\) which makes \(T(\mathfrak{h}^{3}\oplus\{0\})\) the orthogonal complement of \(T(\{0\}\oplus\mathfrak{h}^{3})\). Thus \(\mathfrak{h}^{3}\oplus\mathfrak{h}^{3}\) does not necessarily admit an orthonormal Milnor frame.

Now we prove Theorem 1.0.2.

Proof.: Let \(\mathfrak{g}\) be a Lie algebra with a Milnor frame. Suppose that every metric \(g\) on \(\mathfrak{g}\) admits an orthonormal Milnor frame. Then \(\mathfrak{h}^{4}\) cannot be a subalgebra of \(\mathfrak{g}\), and \(\mathfrak{h}^{3}\oplus\mathfrak{h}^{3}\) cannot be a subalgebra of \(\mathfrak{g}\). By Theorem 1.0.1, \(\mathfrak{g}=\mathfrak{h}^{3}\oplus\mathfrak{a}\).
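As a final sanity check, the metric used in the proof of Proposition 3.2.2 is indeed positive definite for small \(\epsilon\). A minimal numerical sketch (our own, 0-based indices):

```python
# Quick check (own sketch, 0-based indices): g = I + eps*(E_36 + E_63) is
# positive definite for small eps, while g(X_3, X_6) = eps != 0.
import numpy as np

eps = 0.1
g = np.eye(6)
g[2, 5] = g[5, 2] = eps                      # the (3,6) and (6,3) entries
assert np.all(np.linalg.eigvalsh(g) > 0)     # eigenvalues: 1 (x4), 1 - eps, 1 + eps
```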
2307.12976
Evaluating the Ripple Effects of Knowledge Editing in Language Models
Modern language models capture a large body of factual knowledge. However, some facts can be incorrectly induced or become obsolete over time, resulting in factually incorrect generations. This has led to the development of various editing methods that allow updating facts encoded by the model. Evaluation of these methods has primarily focused on testing whether an individual fact has been successfully injected, and if similar predictions for other subjects have not changed. Here we argue that such evaluation is limited, since injecting one fact (e.g. ``Jack Depp is the son of Johnny Depp'') introduces a ``ripple effect'' in the form of additional facts that the model needs to update (e.g.``Jack Depp is the sibling of Lily-Rose Depp''). To address this issue, we propose a novel set of evaluation criteria that consider the implications of an edit on related facts. Using these criteria, we then construct RippleEdits, a diagnostic benchmark of 5K factual edits, capturing a variety of types of ripple effects. We evaluate prominent editing methods on RippleEdits, showing that current methods fail to introduce consistent changes in the model's knowledge. In addition, we find that a simple in-context editing baseline obtains the best scores on our benchmark, suggesting a promising research direction for model editing.
Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, Mor Geva
2023-07-24T17:52:46Z
http://arxiv.org/abs/2307.12976v2
# Evaluating the Ripple Effects of Knowledge Editing in Language Models

###### Abstract

Modern language models capture a large body of factual knowledge. However, some facts can be incorrectly induced or become obsolete over time, resulting in factually incorrect generations. This has led to the development of various editing methods that allow updating facts encoded by the model. Evaluation of these methods has primarily focused on testing whether an individual fact has been successfully injected, and if similar predictions for other subjects have not changed. Here we argue that such evaluation is limited, since injecting one fact (e.g. _"Jack Depp is the son of Johnny Depp"_) introduces a "ripple effect" in the form of additional facts that the model needs to update (e.g., _"Jack Depp is the sibling of Lily-Rose Depp"_). To address this issue, we propose a novel set of evaluation criteria that consider the implications of an edit on related facts. Using these criteria, we then construct RippleEdits, a diagnostic benchmark of 5K factual edits, capturing a variety of types of ripple effects. We evaluate prominent editing methods on RippleEdits, showing that current methods fail to introduce consistent changes in the model's knowledge. In addition, we find that a simple in-context editing baseline obtains the best scores on our benchmark, suggesting a promising research direction for model editing.1

Footnote 1: We release RippleEdits and our code at [https://github.com/edenbiran/RippleEdits/](https://github.com/edenbiran/RippleEdits/)

## 1 Introduction

Modern language models (LMs) capture a large volume of factual knowledge in their parameters, and this can be effectively utilized in downstream tasks Petroni et al. (2019); Roberts et al. (2020); Shin et al. (2020); Razniewski et al. (2021); Heinzerling and Inui (2021); Kadavath et al. (2022); Cohen et al. (2023). However, factual beliefs captured by the model may be incorrect or become outdated over time, potentially affecting the performance of the model on downstream tasks, its reliability and its usability Dhingra et al. (2022); Lazaridou et al. (2021); Jang et al. (2022). This limitation has prompted research on knowledge editing (KE) methods, which modify LMs in order to fix their factual errors (we provide a formal definition in §2).

Concretely, knowledge editing work has focused on injecting factual updates to LMs. Given an entity-relation-object triplet \((e,r,o)\) representing a fact (e.g. _"The Eiffel Tower is in the city of Paris"_), recent work proposed various methods (Mitchell et al., 2022; Meng et al., 2022, 2023; Hernandez et al., 2023; Si et al., 2023) to inject this fact into a given LM, while "overriding" previous beliefs the model might have on \(e\) and \(r\) (e.g. that the Eiffel Tower is in London).

Figure 1: Illustration of the evaluation scope of RippleEdits, compared to existing knowledge editing benchmarks. For a given factual edit, we consider the "ripple effect" of the edit on the model's knowledge.

A key question with KE is how to evaluate the success of such editing operations. The most basic "sanity-check" is that the model correctly completes \((e,r,?)\), as well as other paraphrases of this task, with \(o\). However, this is not enough as an evaluation, since one needs to check that the model did not distort other facts.
Indeed, the standard evaluation protocol for KE (Mitchell et al., 2022; Meng et al., 2022, 2023) focuses on these two aspects of correctly completing various paraphrases of the new fact, and ensuring that other unrelated facts haven't been changed. In this work, we argue that to evaluate model edits, one should go beyond the single fact that was edited and check that other facts that are logically derived from the edit were also changed accordingly. For example, if \(z\) is the mother of \(e\), then the children of \(z\) are the siblings of \(e\). Consequently, once we modify the belief of a certain model that \(z\to z^{\prime}\) is the mother of \(e\), then we should also ensure that the model's belief regarding the siblings of \(e\) is also correct. Fig. 1 illustrates another example, where editing the City in which the Eiffel Tower is located modifies other related facts, such as its country, while other facts should be retained. We refer to such changes that are implied by a factual edit as _"ripple effects"_.

To account for ripple effects in the evaluation of factual edits, we propose six concrete evaluation criteria (§3, Fig. 2) for testing which facts other than the edit itself should be modified or retained post-editing. Our tests allow evaluating how well the model integrates the edit with the rest of its knowledge, through queries that involve logical reasoning, complex composition of facts with the edit as an intermediate step, and specificity across relations. Building upon these criteria, we create RippleEdits, a new benchmark for comprehensive evaluation of KE of LMs (see §4). RippleEdits includes \(5\)K entries, each consisting of a factual edit, along with a set of test queries that check if the edit was successful in terms of its ripple effect. Moreover, RippleEdits contains metadata for each edit, including information about the timestamp of the edit (i.e., recent vs old), and the frequency of the entities (i.e., head vs tail).

We use RippleEdits to evaluate three popular editing methods on five recent LMs (see §5). We find that, even though current KE methods are effective in modifying a particular fact, they often fail to capture the ripple effects entailed by that fact, and demonstrate poor performance on most of our evaluation criteria. In addition, we consider a simple in-context editing baseline for KE, that leverages the causal attention mechanism rather than explicitly updating the model parameters. While this method achieves improved results on our benchmark, outperforming current parametric KE methods, there is still ample room for improvement that calls for future research. Further, we analyze how editing performance varies across different model sizes and entity frequencies, finding that (a) larger models handle ripple effects better, and (b) success on RippleEdits is influenced by entity frequency, where editing frequent entities results in more logical reasoning errors.

To conclude, our work makes the following contributions: (a) it highlights key limitations of KE evaluation, specifically regarding "ripple effects", (b) it introduces comprehensive evaluation criteria that aim to mitigate those limitations, (c) it proposes RippleEdits, a benchmark inspired by these criteria, (d) it evaluates current methods for KE and shows that they do not perform well on this task, while demonstrating that in-context editing is a promising direction for KE. We release RippleEdits and our code to facilitate future work on KE.
## 2 Problem Setting

We consider editing of _factual knowledge_, where facts are expressed as triplets \((e,r,o)\) of a subject entity \(e\) (e.g. Eiffel Tower), a relation \(r\) (e.g. City), and an object \(o\) (e.g. Paris). In a standard KE setting, an edit request \((e,r,o)\rightarrow(e,r,o^{\prime})\) is made to modify a fact encoded by the model, setting a new target object \(o\to o^{\prime}\) for a given subject-relation pair. We follow this setting, while distinguishing between the following cases, based on the knowledge encoded in the model before the edit and the relation of the edit: (a) modification of facts that are already encoded in the model \((e,r,o)\rightarrow(e,r,o^{\prime})\), that is, updating the object \(o\to o^{\prime}\) for a given subject entity \(e\) and relation \(r\), and (b) injection of new facts \((e,r,o^{\prime})\) that are not captured by the model. For one-to-one relations like Date of birth, where there is a single object for a given subject, an injection edit can be viewed as populating an empty object \((e,r,\emptyset)\rightarrow(e,r,o^{\prime})\). For one-to-many relations, such as Sibling and Occupation, an injection edit augments the set of objects \((e,r,\{o_{1},..,o_{n}\})\rightarrow(e,r,\{o_{1},..,o_{n},o^{\prime}\})\). Whether an edit is viewed as a modification or injection depends on whether that information was captured in the model before the edit. Evaluating whether a specific fact (before or after an edit) is encoded by a model is typically done by testing if the model predicts the object for various input queries that represent the subject and relation (see more details in §3.2).

## 3 Ripple Effects of Factual Edits

Our focus is on evaluating the downstream effects of a given edit. Namely, given an edit \((e,r,o)\rightarrow(e,r,o^{\prime})\), we expect certain facts related to the edit to change as well. Consider, for example, the edit shown in Fig. 1. Changing the city in which the Eiffel Tower is located might also affect its country location and time zone.

Formally, for a given model, assume a knowledge-graph \(\mathcal{K}:=\{(e_{i},r_{i},o_{i})\}_{i=1}^{N}\) of \(N\) factual triplets, representing the model's internal knowledge, and let \(\delta:(e,r,o)\rightarrow(e,r,o^{\prime})\) be an edit request for \(\mathcal{K}\). We define the _ripple effect_ of \(\delta\) on \(\mathcal{K}\) as the set of triplets \(\mathcal{R}(\delta)\) that the model implicitly needs to inject, modify, or delete from \(\mathcal{K}\) to reflect the world state after the edit \(\delta\).

Notably, different edits can cause ripple effects of varying magnitudes. For example, changing the country of Rome from Italy to France will entail many changes, such as the country in which the Colosseum is located, the language spoken in Rome, inter alia. On the other hand, updating the Siblings of Prince (Fig. 2) is both more realistic and should result in a more local effect. We refer to the number of facts affected by a single edit \(\delta\) (i.e. \(|\mathcal{R}(\delta)|\)) as the severity of the edit. In general, popular facts that were seen many times during training are likely to entail more changes, and thus editing their properties has a higher severity level.

### Evaluation Criteria

We wish to evaluate how well models capture the ripple effects of factual edits. However, ripple effects potentially can span a large number of follow-up edits. Therefore, we focus on evaluating modified facts that are within 2-hop distance from the subject or object of the original edit.
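To make this setting concrete, the following minimal sketch (our own illustration, not the authors' released code; the class and field names are ours) represents facts and edit requests programmatically:

```python
# Minimal sketch (not the paper's released code) of the editing notation:
# a fact is a triplet (e, r, o); an edit replaces o with a new target o'.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str     # e, e.g. "Eiffel Tower"
    relation: str    # r, e.g. "City"
    obj: str         # o, e.g. "Paris"

@dataclass(frozen=True)
class EditRequest:
    subject: str
    relation: str
    target: str                  # the new object o'
    one_to_many: bool = False    # e.g. Sibling, Occupation

def apply_edit(kb: set[Fact], edit: EditRequest) -> set[Fact]:
    """Modification for one-to-one relations; injection (augmentation) otherwise."""
    new_fact = Fact(edit.subject, edit.relation, edit.target)
    if edit.one_to_many:
        return kb | {new_fact}   # augment the existing object set
    kept = {f for f in kb
            if not (f.subject == edit.subject and f.relation == edit.relation)}
    return kept | {new_fact}     # override the previous object, if any
```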
Concretely, for a single edit \(\delta:(e,r,o)\rightarrow(e,r,o^{*})\), we evaluate the ripple effect \(\mathcal{R}(\delta)\) via the following evaluation criteria (examples are shown in Fig. 2):

1. _Logical generalization_ **(LG)**: we test if facts about the subject \(e\), the original object \(o\), and the new object \(o^{*}\) that are semantically related to the modified fact and expected to change by the edit, were indeed modified. Concretely, we test whether facts \((x,r^{\prime},z)\), with \(x\in\{e,o,o^{*}\}\) and \(r^{\prime}\) that is semantically related to \(r\), were modified consistently with respect to \(\delta\). For instance (Fig. 2A), since the relation Sibling is symmetric, the triplet (Prince, Sibling, Nicholas Carminowe) directly implies that the symmetric fact (Nicholas Carminowe, Sibling, Prince) also holds.
2. _Compositionality I_ **(CI)**: tests if the model can compose the edited fact with other facts about the target object \(o^{*}\). Let \((o,r^{\prime},z)\) and \((o^{*},r^{\prime},z^{*})\) be two facts of the same relation about \(o\) and \(o^{*}\), respectively. Also, denote by \(r^{\prime\prime}\) the complex relation expressing the composition of \(r\) and \(r^{\prime}\), e.g. \(r^{\prime\prime}=\texttt{Profession of sibling}\) for \(r=\texttt{Sibling}\) and \(r^{\prime}=\texttt{Profession}\). Then, after the edit \(\delta\), we expect the following change: \((e,r^{\prime\prime},z)\rightarrow(e,r^{\prime\prime},z^{*})\). As illustrated in Fig. 2B, we ask about the profession of the siblings of Prince, after modifying his sibling.
3. _Compositionality II_ **(CII)**: tests if the model can compose a fact about a different subject \(e^{\prime}\neq e\) with the edited fact. Formally, let \((e^{\prime},r^{\prime},e)\) be a fact about \(e^{\prime}\) with the subject \(e\) as its object, and denote by \(r^{\prime\prime}\) the complex relation expressing the composition of \(r^{\prime}\) and \(r\) (see an example above). After the edit \(\delta\), the following change is expected for the subject \(e^{\prime}\): \((e^{\prime},r^{\prime\prime},o)\rightarrow(e^{\prime},r^{\prime\prime},o^{*})\). As shown in Fig. 2C, after modifying his sibling, we refer to Prince as the founder of Paisley Park Records, and inquire about the Sibling of Founder of Paisley Park Records.
4. _Subject Aliasing_ **(SA)**: tests that the edit to the fact about \(e\) was also applied to any other entity \(e^{\prime}\) that is an alias for \(e\), namely, \((e^{\prime},r,o)\rightarrow(e^{\prime},r,o^{*})\). For instance, as in Fig. 2D, after modifying the sibling of Prince, we verify whether his sibling was also modified for his alias, Prince Roger Nelson.
5. _Forgetfulness_ **(FN)**: for a one-to-many relation \(r\), there could be multiple objects for a given subject. In such cases, adding a new object should not affect the other objects encoded for this subject and relation. Therefore, for an object \(o^{\prime}\neq o^{*}\) for which there exists a triplet \((e,r,o^{\prime})\), we use this triplet as a test query. For example (Fig. 2E), after inserting the sibling Nicholas Carminowe for Prince, we check that the model retains the fact that Tyka Nelson is also one of his siblings.
6. _Relation Specificity_ **(RS)**: we test that facts about the subject \(e\), with relations whose objects are not influenced by the object of \(r\), are indeed not affected by the edit. As an example, in Fig. 2F, we check whether the model still correctly outputs the name of the Mother of Prince, after modifying his sibling.
In §4.1, we describe how we utilize a Knowledge Graph to generate factual editing evaluations, based on the above criteria.

Figure 2: An example test for each of our 6 evaluation criteria. The edit itself simulates adding a sibling to the entity Prince, and is shown at the top of each criterion with a bold arrow and an edit sign over the Sibling relation. For each test, the input subject is displayed in blue, target objects in green, and other nodes in orange. The color of an edge is derived from its target node. For Logical Generalization (A), the additional fact that needs to be inserted to the KG is presented with an edit sign next to the relation. For Compositionality I (B) and Compositionality II (C) the model needs to hop over the edit to arrive at the target. In Subject Aliasing (D) we verify the edit also propagates to paraphrases of the input. In Forgetfulness (E), we verify that additional targets that share the input subject and relation are not forgotten, for one-to-many relations. In Relation Specificity, we verify other relations for the subject are not modified.

### Related Work

**Knowledge Editing Methods.** Several methods have been proposed to edit the factual knowledge encoded in a model. De Cao et al. (2021) and Mitchell et al. (2022) suggested to use hyper-networks to update the model weights. In addition, Meng et al. (2022, 2023) proposed to modify encoded facts by updating the weights of MLP layers, following recent observations that these layers can be cast as key-value memories (Geva et al., 2021) that store factual knowledge (Dai et al., 2022). Instead of updating the weights, other methods learn encodings that directly update the hidden representations of the models (Hernandez et al., 2023), or augment the input context with retrieved edits (Zhong et al., 2023). In §5.1, we discuss state-of-the-art KE methods used in this work in greater detail. Separately from factual knowledge editing, recent works have also studied how to inject new facts into a model. Previous methods suggested unsupervised pre-training (Roberts et al., 2020; Zhang et al., 2021), semi-parametric methods where external information is added from a knowledge-base (Zhang et al., 2019; Peters et al., 2019; Lewis et al., 2020; Zhang et al., 2022), using adapters to store knowledge (Wang et al., 2021), or more recently directly updating FFN layers (Yao et al., 2022).

**Knowledge Editing Evaluation.** Recently, there has been a growing interest in KE evaluation. The main benchmarks currently used to evaluate KE methods are the Zero-Shot Relation Extraction (zsRE) (Levy et al., 2017; De Cao et al., 2021) and CounterFact (Meng et al., 2022). zsRE is a question-answering dataset for relation-specific queries, which includes human-generated paraphrases that can be used to measure robustness to semantically equivalent inputs. For example, for the triplet (x, Country, y), zsRE contains queries such as "_In which country is x?_". CounterFact offers a more challenging setting, where the edited facts are counterfactuals that are assigned a low probability by the LLM, such as editing the City of The Louvre from Paris to Rome.
Evaluation in both zsRE and CounterFact primarily focuses on three important aspects: (a) _efficacy_, that is, the model generates the target object after the edit; (b) _paraphrasing_, testing whether the model is robust in generating the target for paraphrases of the input; and (c) _specificity_, i.e., facts that are not related to the edit are unaffected. In addition to _efficacy_ and _specificity_, CounterFact measures the generation quality of the edited model when prompted with the edit's subject on two additional aspects: _consistency_ measures the similarity with subjects that share the same property as the edited object, and _fluency_ measures the repetitiveness of the generated text. Recently, Onoe et al. (2023) introduced the task of _entity knowledge propagation_, aiming to examine the extent to which models are able to reason about emergent entities that did not appear in pretraining. In addition, Hoelscher-Obermaier et al. (2023) show that existing KE methods can have unwanted side effects and suffer from low specificity. Gupta et al. (2023) focus on editing commonsense knowledge and introduce MEMIT-CSKPROBE, a dataset for semantic generalization of commonsense edits. A concurrent work by Zhong et al. (2023) introduces MQUAKE, a benchmark that tests the ability of models to perform multi-hop reasoning after edits. While each of these benchmarks focuses on a single consequence of editing, RippleEdits enables comprehensive evaluation across six different evaluation criteria. For a detailed review of KE in LLMs, see Yao et al. (2023).

## 4 The RippleEdits Benchmark

In this section, we first describe a data generation pipeline (§4.1) for factual edit requests and test queries that evaluate their ripple effects. Then, in §4.2, we apply this data generation process to create RippleEdits, a new benchmark for comprehensive KE evaluation.

### Data Generation Pipeline

We describe our four-step data generation process, which is illustrated in Fig. 3. This process creates KE evaluation examples, each consisting of a factual edit request and a set of test queries (following our evaluation criteria). Since some steps in the pipeline involve manual writing of templates and logical rules, we restrict the edits and test queries to a fixed set of \(N_{rel}\) basic relations.2 Footnote 2: The full list of relations is available in our codebase; example relations are shown in Fig. 4.

**Step 1: Factual triplets collection.** The first stage of the pipeline (Fig. 3A) is to collect facts for creating injection and edit requests. To this end, we use WikiData, a relational knowledge base consisting of facts that are expressed as triplets \((e,r,o)\), where \(e\) is a subject entity, \(r\) is a relation, and \(o\) is the object entity. We collect triplets of the following three types:

* **Recent**: To create "real" plausible edit requests, we collect triplets that were recently inserted into WikiData. Such triplets represent facts that changed only recently. Therefore, they can be used to create injection edit requests for models that were trained before these facts were introduced, to simulate an out-of-date model that requires factual updates. We collect such facts by randomly sampling triplets that were modified during a range of 250 days after July 2022.
* **Random**: We collect triplets corresponding to random facts, for which we will later generate modification edits (similarly to Meng et al. (2022)).
Concretely, we divide the entities in WikiData into 10 groups, based on the number of triplets associated with each entity. Intuitively, this can be viewed as a popularity measure. Then, we sample \(N_{ent}\) random entities from each group and randomly collect one triplet about each entity, which we will later convert to a factual edit request. These edits simulate factual edits that are meant to fix incorrect model predictions (e.g., when a model predicts that the capital of Germany is Frankfurt).
* **Popular**: The two previous triplet types rely on random sampling from the entire knowledge graph, and so many of them are likely to represent facts about tail entities. Tail entities are often not captured by models (Mallen et al., 2023), and therefore are not suitable for testing modification edits. To address this, we collect triplets about _popular entities_, where the subject corresponds to one of the top-viewed pages in Wikipedia.3 For a given popular subject, we then sample WikiData triplets of which it is the subject. Importantly, this type of triplet enables controlling for the severity of the ripple effect (§3), i.e., how models handle the ripple effects of popular entities versus tail entities. Footnote 3: We extracted the entities whose corresponding Wikipedia page was included in the top-1000 most viewed pages in at least one month during 2020-2022.

**Step 2: Edits generation.** Once we obtain factual triplets, we turn to generating edit requests for them (Fig. 3B). Recent triplets represent new facts that are meant to be injected into the model, assuming that it was trained before these facts were introduced to the world. Hence, for Recent, the target triplet for injection is the triplet itself. For Random and Popular triplets, we create an edit by generating a target triplet as follows. First, given a certain relation \(r\), we create a set of candidate object entities \(O_{r}\) by sampling \(N_{\textit{cand}}\) triplets involving \(r\) and extracting their objects. Then, for every triplet \((e,r,o)\) in Random and Popular, we sample a target object \(o^{\prime}\neq o\) from \(O_{r}\). The fact that the target object is sampled from triplets with the same relation makes the edit request consistent with the original triplet: the target object is of the same "type" as the original object (for example, a triplet with the relation Capital will get a new object of type City).

**Step 3: Evaluation tests generation.** The next step in the generation process (Fig. 3C) is to create ripple-effect evaluations for the factual edits we collected. To this end, we generate test queries for each of the evaluation criteria introduced in §3.1. Each test query corresponds to a factual triplet that is expected to be true post-editing.

1. _Logical generalization_: For every relation \(r\), we define a set \(D_{r}\) of relations that semantically depend on it - changing \(r\)'s target object for a given subject is expected to change the target objects of the relations in \(D_{r}\) for that subject. For instance, for the relation \(r=\texttt{Mother}\), the set \(D_{r}\) includes the relations Sibling, Sister, Brother, Aunt, and Uncle, among others. Then, for every relation \(r^{\prime}\in D_{r}\), we craft a logical rule for obtaining the new target for that relation post-editing.
Given an edit \((e,r,o)\rightarrow(e,r,o^{*})\), we apply the logical rule corresponding to each relation \(r^{\prime}\in D_{r}\) to obtain a set of test queries \((e,r^{\prime},z^{\prime})\) about the subject \(e\).
2. _Compositionality I_: Let \(\mathcal{S}(e)\) be the set of WikiData triplets in which \(e\) is the subject. Given an edit \((e,r,o)\rightarrow(e,r,o^{*})\), we iterate through \(\mathcal{S}(o^{*})\) and for each triplet \((o^{*},r^{\prime},z)\in\mathcal{S}(o^{*})\), we construct a two-hop query about \(e\), with \(z\) as the answer.
3. _Compositionality II_: Let \(\mathcal{T}(e)\) be the set of WikiData triplets in which the entity \(e\) is the object. Given an edit \((e,r,o)\rightarrow(e,r,o^{*})\), we iterate through \(\mathcal{T}(e)\) and for each triplet \((z,r^{\prime},e)\in\mathcal{T}(e)\), we construct a two-hop query about \(z\), with \(o^{*}\) as the answer.
4. _Subject Aliasing_: WikiData maintains a set \(\mathcal{A}(e)\) of aliases for every entity \(e\). Given an edit \((e,r,o)\rightarrow(e,r,o^{*})\), we use this information to create a test query \((e^{\prime},r,o^{*})\) for every \(e^{\prime}\in\mathcal{A}(e)\).
5. _Forgetfulness_: We focus on counterfactual edits (Random and Popular) with one-to-many relations. To test whether the model retains the original objects \(\{o_{1},...,o_{n}\}\) in addition to the new edited object \(o^{*}\), we use the triplets \((e,r,o_{1}),...,(e,r,o_{n})\) as test queries. For an example, see Fig. 2E.
6. _Relation Specificity_: Given an edit \((e,r,o)\rightarrow(e,r,o^{*})\), we test whether facts about \(e\) that are semantically not influenced by the edit are retained after the edit. Concretely, we use as a test query every triplet in \(\mathcal{S}(e)\) (the set of triplets about \(e\)) with a relation that is not in \(D_{r}\) (the set of relations that semantically depend on \(r\)).

Figure 3: Illustration of generating a RippleEdits test for the following modification edit: \((\texttt{Bill Gates},\texttt{Spouse},\texttt{Melinda Gates})\rightarrow(\texttt{Bill Gates},\texttt{Spouse},\texttt{Ricciarda Cybo Malaspina})\). We start by sampling the original fact from a KG (A). For modification edits, we create counterfactuals by choosing an object that shares the same type as the original object (B). The main step is (C), where we generate evaluation tests by using the KG and sampling new triplets that should be retained or modified post-edit. Finally, we utilize predefined templates to translate the KG triplets into natural language phrases (D).

**Step 4: Phrasing in natural language.** At this point (Fig. 3D), we have factual edit requests and their corresponding test queries. To use them as inputs to LMs, we convert them from triplet form to natural language (NL). To this end, we manually craft a template NL phrase per relation in the data (this is feasible since we use a fixed set of relations), and use it to convert all the triplets with this relation. For instance, we use the template "The date of birth of <\(e\)> is" to obtain prompts for triplets with the relation \(r=\texttt{Date of Birth}\) and a subject entity \(e\). For the _Forgetfulness_ test triplets generated for an edit \((e,r,o)\rightarrow(e,r,o^{*})\) (where \(o\) is one of possibly multiple objects for the subject-relation pair), we form a single NL query that asks about objects other than the edited one, for example, "The award received by <\(e\)> which is not <\(o^{*}\)> is".
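As a rough illustration of Step 4, the snippet below phrases triplet queries with per-relation templates. The template dictionary is a small hypothetical stand-in for the manually crafted templates described above (the actual resource covers all 54 relations).

```python
# Hypothetical per-relation templates; the two shown are taken from the text.
NL_TEMPLATES = {
    "Date of Birth": "The date of birth of {e} is",
    "Mother": "The mother of {e} is",
    "Award Received": "The award received by {e} which is not {o} is",  # FN form
}

def phrase_query(subject: str, relation: str, excluded_obj: str = "") -> str:
    """Convert a (subject, relation, ?) test query into an NL prompt."""
    return NL_TEMPLATES[relation].format(e=subject, o=excluded_obj)

print(phrase_query("Prince", "Date of Birth"))
# -> "The date of birth of Prince is"
print(phrase_query("Prince", "Award Received", excluded_obj="Grammy Award"))
# -> "The award received by Prince which is not Grammy Award is"
```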
### Data Statistics

We used our data generation pipeline to collect edits for 2,000 Recent facts, 1,000 Random facts, and 1,000 Popular facts. Specifically, to obtain the Recent subset, we set \(N_{ent}=200\) to sample 200 facts from each entity group in WikiData. For the target triplet generation of Random and Popular, we set \(N_{cand}=100{,}000\). Last, we manually craft NL templates and logical rules for \(N_{rel}=54\) basic relations. We call our diagnostic benchmark RippleEdits, and publicly release it to the research community. Statistics on RippleEdits are presented in Tab. 1, showing that our generation process resulted in multiple (18-26) test queries per edit, with test queries of each type on average. Moreover, Popular edits contain more popular subjects (as intended), while Recent edits have the most popular objects. Fig. 4 shows the top relations and their frequency in each subset of RippleEdits, demonstrating the diversity of the generated facts.

\begin{table} \begin{tabular}{l r r r} & Recent & Random & Popular \\ \hline \# of factual edits & 2,000 & 1,000 & 1,000 \\ \# of tests per edit & \(26.2\) & \(18.7\) & \(25.3\) \\ \# of queries per test & \(5.24\) & \(3.1\) & \(4.2\) \\ \hline \# of LG queries & \(2.5\) & \(3.6\) & \(2.6\) \\ \# of CI queries & \(11.7\) & \(4.7\) & \(6.1\) \\ \# of CII queries & \(5.1\) & \(5.1\) & \(3.9\) \\ \# of SA queries & \(1.8\) & \(1.3\) & \(4.7\) \\ \# of FN queries & \(0\) & \(0.3\) & \(0.2\) \\ \# of RS queries & \(5.1\) & \(3.7\) & \(7.8\) \\ \hline Subject triplets count & \(31.7\) & \(13.3\) & \(115.2\) \\ Subject page back-links & \(278.1\) & \(121.6\) & \(3934.5\) \\ Subject page views & \(189.6\) & \(67.91\) & \(7376.5\) \\ \hline Object triplets count & \(192.4\) & \(46.4\) & \(39.5\) \\ Object page back-links & \(18634.2\) & \(3065.0\) & \(2136.0\) \\ Object page views & \(2852.4\) & \(1379.7\) & \(1176.7\) \\ \hline \end{tabular} \end{table} Table 1: Statistics on the three subsets of RippleEdits, showing the average of different metrics. For a given subject/object entity, triplets count is the number of WikiData facts it is associated with, page back-links is the number of Wikipedia pages with a link to the entity’s page, and page views is the average daily view count of the entity’s Wikipedia page over the week RippleEdits was created.

## 5 Experiments

We use RippleEdits to evaluate recent KE methods on multiple LMs, and show that despite substantial progress on existing benchmarks, current KE methods struggle to introduce consistent changes to the model's knowledge after an edit. Moreover, a simple in-context editing baseline, where generation is conditioned on the edited facts, is more consistent, yet still leaves ample room for improvement for future methods.

### Evaluation Setting

**Data.** RippleEdits is meant to be used as a diagnostic dataset to evaluate the ripple effects resulting from an editing operation. Therefore, to evaluate the performance of an editing method on a given model, the data first needs to be adjusted such that (a) only cases of successful edits are evaluated, and (b) only test queries that the model answered correctly pre-editing are used for evaluation.
Concretely, for a given editing method \(\mathcal{F}\) and a model \(\mathcal{M}\), an edit request \(x:(e,r,o)\rightarrow(e,r,o^{\prime})\) is included in the evaluation if the following conditions are met when applying \(\mathcal{F}\) to \(\mathcal{M}\) and \(x\): (a) \(\mathcal{M}\) successfully generates \(o^{\prime}\) when queried about \(e\) and \(r\), namely, the edit has successfully been applied, and (b) \(\mathcal{M}\) successfully generates the correct objects for the queries corresponding to the tests before applying the edit. For example, we check that the model can predict the children of \(o^{\prime}\) before asking about \(e\)'s new siblings, and that it predicts the mother of \(o^{\prime}\) before asking about the new maternal grandmother of \(e\).

\begin{table} \begin{tabular}{l c c c c c c} & \multicolumn{2}{c}{Recent} & \multicolumn{2}{c}{Random} & \multicolumn{2}{c}{Popular} \\ & Edits & Tests & Edits & Tests & Edits & Tests \\ \hline GPT-2 & \(853\) & \(29\%\) & \(689\) & \(33\%\) & \(722\) & \(71\%\) \\ GPT-J & \(801\) & \(33\%\) & \(717\) & \(34\%\) & \(760\) & \(76\%\) \\ GPT-Neo & \(989\) & \(45\%\) & \(801\) & \(46\%\) & \(828\) & \(86\%\) \\ LLaMA & \(847\) & \(44\%\) & \(796\) & \(49\%\) & \(784\) & \(87\%\) \\ GPT-3 & \(822\) & \(55\%\) & \(760\) & \(74\%\) & \(665\) & \(94\%\) \\ \hline \end{tabular} \end{table} Table 2: (a) Number of edits considered in our evaluation (that is, edits that were successfully applied), for each of the data types, averaged over ROME, MEMIT, and MEND for the models GPT-2, GPT-J, GPT-Neo, and LLaMA, and using the ICE baseline for GPT-3. (b) Portion of queries, on average, that were considered during our evaluation, namely, whose conditions have been met.

**Editing methods.** We evaluate three KE methods: MEND (Mitchell et al., 2022), ROME (Meng et al., 2022), and MEMIT (Meng et al., 2023). MEND trains a network that modifies gradients to produce local edits when presented with a desirable input-output pair. ROME applies rank-one updates to the weights of the Transformer's MLP layers to modify specific factual associations. MEMIT is an extension of ROME that is also capable of editing many facts at once.

**Baseline.** Motivated by the recent success of LMs in learning in-context and following instructions (Brown et al., 2020; Ouyang et al., 2022; Liu et al., 2023), we propose an in-context editing (ICE) baseline for factual editing. Unlike the above methods, it does not introduce changes to the model parameters; rather, generation is conditioned on the new fact. Concretely, given an edit \((e,r,o)\rightarrow(e,r,o^{*})\) and a test query \(q\), we use the following prompt to obtain an answer from the model: "Imagine that <\(o^{*}\)> would have been <\(P_{r}\)>", where <\(P_{r}\)> is a manually phrased proposition corresponding to \(r\), such as _"The mother of <e>"_ when \(r=\texttt{Mother}\) and \(e\) is the subject. An example is illustrated in Fig. 5.

Figure 4: Most frequent relations and their frequency in each subset of RippleEdits.

Figure 5: An example modification edit from our ICE baseline. The color code of the KG is similar to that described in Fig. 2. We prepend the prefix "_Imagine that_" to the input prompt, as counterfactuals can contradict knowledge embedded in a model's parameters.
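The snippet below sketches how such an ICE prompt can be assembled; the proposition strings are illustrative stand-ins for the manually phrased \(P_{r}\) described above, and the example edit follows Fig. 3.

```python
# Manually phrased propositions P_r per relation (illustrative examples).
PROPOSITIONS = {
    "Mother": "The mother of {e}",
    "Spouse": "The spouse of {e}",
}

def ice_prompt(subject: str, relation: str, new_obj: str, test_query: str) -> str:
    """Condition generation on the edited fact instead of updating weights."""
    p_r = PROPOSITIONS[relation].format(e=subject)
    return f"Imagine that {new_obj} would have been {p_r}. {test_query}"

print(ice_prompt("Bill Gates", "Spouse", "Ricciarda Cybo Malaspina",
                 "The spouse of Bill Gates is"))
```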
ModelsWe use 4 recent auto-regressive decoder-only LMs of different sizes: GPT-2 XL (Radford et al., 2019) with 1.5B parameters, GPT-J (Chen et al., 2021) with 6B parameters, LLaMA with 7B parameters, (Touvron et al., 2023), and GPT-NeoX with 20B parameters (Black et al., 2022). In addition, as our baseline does not require access to the model parameters, we also evaluate it on the closed-source models GPT-3 text-davinci-003 with 175B parameters (Brown et al., 2020). For all model-method combinations, except for ROME with LLaMA, we use the official implementation and hyperparameters from Meng et al. (2022). Also, we adjust ROME to LLaMA by following the method and utilizing the authors' codebase. Tab. 2 shows the number of edits and test queries left, for every model, after filtering out non-successful edits and inapplicable test queries (as described above). EvaluationEach model-method pair is evaluated separately, on every subset of RippleEdits. For each evaluation criteria, we first compute the average accuracy over the test queries per example, and then average over all the examples. For every test query, we let the model generate a maximum of 20 token. We consider a generation as successful if one of the target object's aliases appears in the text. In cases of multiple gold target objects (as in _Forgetfulness_ test queries), we evaluate each target object separately and consider the generation as correct if the generation was correct with respect to at least one object. ### Results Tab. 3, 4, 5 show the evaluation results on the Recent, Random, and Popular subsets, respectively. Considering the average scores across all subsets, we observe that existing editing methods struggle to handle the ripple effect induced by editing facts, with low average accuracy of \(38-66\) across all models. This suggests that, while KE methods demonstrate high capability in making local updates to the model's knowledge, these changes are mostly applied at a surface-level without propagating to other related facts. Moreover, comparing results across test criteria shows that some RippleEdits criteria are handled better than others. For example, while results for the _Subject Aliasing_ criteria that measures generalization to paraphrases are high (86.8 or higher across all settings), results for the other criteria are lower and vary between models, methods, and splits (e.g for the _Logical generalization_ criteria, results are at a very low \(5.5\) on the Popular split with GPT-J and ROME, but are much higher at \(71.1\) on the Random split with LLaMA and in-context editing). Next, we analyze our results across the different dimensions, to reveal fine-grained insights on when current KE methods succeed and fail. Importantly, we observe that our in-context editing baseline obtains the best overall results. Specifically, ICE outperforms ROME by more than 10 points for GPT-Neo and 30 points for LLaMA, on average. Although GPT-3 with ICE performs best on average, the 7B LLaMA is highly competitive, performing better or similarly on the Recent and Popular splits. Results across model sizeWe analyze how editing performance on RippleEdits is influenced by the model size. To this end, we further evaluate ROME on smaller versions of GPT-2 - with 345M (GPT2-M) and 762M (GPT2-L) parameters, and plot the average accuracy over the three subsets of RippleEdits as a function of model size. Fig. 
**Results across model size.** We analyze how editing performance on RippleEdits is influenced by model size. To this end, we further evaluate ROME on smaller versions of GPT-2, with 345M (GPT2-M) and 762M (GPT2-L) parameters, and plot the average accuracy over the three subsets of RippleEdits as a function of model size. Fig. 6 presents the results, showing that editing performance increases with model size, with ROME obtaining substantially higher accuracy when applied to larger models. Nevertheless, our results (Tabs. 3, 4, and 5) show that when using ICE, the 7B LLaMA is competitive with the much larger GPT-3, suggesting that simply scaling the model size may not be sufficient to fix the drawbacks of current editing methods.

\begin{table} \begin{tabular}{l l l l l l l|l} & & LG & CI & CII & SA & RS & Avg. \\ \hline \multirow{3}{*}{GPT-2} & ROME & 20.2 & 35.6 & 46.8 & 86.8 & 55.4 & 49.0 \\ & MEMIT & 21.8 & 30.3 & 46.2 & 92.9 & 56.8 & 49.6 \\ & MEND & 28.9 & 23.7 & 20.7 & 87.1 & 51.9 & 42.5 \\ \hline \multirow{2}{*}{GPT-J} & ROME & 15.2 & 29.5 & 50.5 & 90.3 & 60.0 & 49.1 \\ & MEMIT & 18.0 & 35.0 & 48.1 & 88.4 & 42.2 & 46.3 \\ \hline \multirow{2}{*}{GPT-Neo} & ROME & 27.2 & 54.3 & 69.4 & 98.9 & 80.3 & 66.0 \\ & ICE & 48.3 & 29.0 & 62.2 & 100 & 80.7 & 64.0 \\ \hline \multirow{2}{*}{LLaMA} & ROME & 16.7 & 47.8 & 50.0 & 93.6 & 59.3 & 53.5 \\ & ICE & 59.6 & 74.8 & 85.0 & 100 & 77.9 & 79.5 \\ \hline GPT-3 & ICE & 33.3 & 100 & 91.3 & 100 & 73.1 & 79.5 \\ \hline \end{tabular} \end{table} Table 3: Accuracy on the Recent subset of RippleEdits, by MEND, ROME, MEMIT, and the ICE baseline, on GPT-2, GPT-J, GPT-Neo, LLaMA, and GPT-3. Evaluation on Forgetfulness is not applicable for Recent edits (see §4.1).

**Results across editing methods.** Tab. 6 shows the results of MEND, ROME, and MEMIT with GPT-2 across the RippleEdits evaluation criteria, averaged over the three data subsets. Interestingly, MEND outperforms ROME and MEMIT in _Logical generalization_, but is worse in _Compositionality I_ and _Compositionality II_, suggesting that different methods might better capture different types of ripple effects.

\begin{table} \begin{tabular}{l c c c} & MEND & ROME & MEMIT \\ \hline _Relation Specificity_ & \(34.4\) & \(37.6\) & \(39.1\) \\ _Logical generalization_ & \(39.1\) & \(26.5\) & \(29.0\) \\ _Compositionality I_ & \(17.0\) & \(37.9\) & \(35.3\) \\ _Compositionality II_ & \(13.6\) & \(37.7\) & \(39.1\) \\ \hline \end{tabular} \end{table} Table 6: Accuracy of MEND, ROME, and MEMIT, using GPT-2, averaged over the three RippleEdits splits: Recent, Random, and Popular.

**Results across data splits.** Fig. 7 displays results across evaluation splits and criteria. Splits differ in whether the edited facts are counterfactual or real, and in the popularity of the edited entities. When comparing the Recent split, which examines injection of real recent facts, to the counterfactual Random and Popular splits, we observe that for _Relation Specificity_, performance is best on the Recent split. Comparing the Random and Popular splits, which differ in the popularity of the edited entities, we see that while _Logical generalization_ is higher for Random, _Forgetfulness_ is higher for Popular. These results suggest that, although retaining correct knowledge is easier for popular entities, updating other facts that logically follow from an edit is harder for popular entities.
\begin{table} \begin{tabular}{l l c c c c c|c|c} & & LG & CI & CII & SA & FN & RS & Avg. \\ \hline \multirow{3}{*}{GPT-2} & ROME & \(53.6\) & \(31.6\) & \(44.4\) & \(94.9\) & \(9.9\) & \(38.9\) & \(45.5\) \\ & MEMIT & \(58.4\) & \(30.5\) & \(49.8\) & \(100\) & \(20.0\) & \(36.2\) & \(49.1\) \\ & MEND & \(62.5\) & \(16.7\) & \(14.6\) & \(91.3\) & \(17.7\) & \(30.1\) & \(38.8\) \\ \hline \multirow{2}{*}{GPT-J} & ROME & \(53.8\) & \(40.8\) & \(49.9\) & \(93.8\) & \(15.2\) & \(39.4\) & \(48.8\) \\ & MEMIT & \(53.0\) & \(35.7\) & \(48.2\) & \(95.6\) & \(18.2\) & \(39.9\) & \(48.4\) \\ \hline \multirow{2}{*}{GPT-Neo} & ROME & \(61.6\) & \(49.4\) & \(57.1\) & \(100\) & \(30.8\) & \(50.7\) & \(58.3\) \\ & ICE & \(78.6\) & \(90.0\) & \(55.6\) & \(100\) & \(100\) & \(61.9\) & \(81.0\) \\ \hline \multirow{2}{*}{LLaMA} & ROME & \(54.3\) & \(35.5\) & \(49.5\) & \(96.0\) & \(17.8\) & \(38.9\) & \(48.7\) \\ & ICE & \(71.1\) & \(73.8\) & \(80.3\) & \(100\) & \(100\) & \(69.6\) & \(82.5\) \\ \hline GPT-3 & ICE & \(69.0\) & \(83.3\) & \(89.7\) & \(100\) & \(100\) & \(100\) & \(90.3\) \\ \hline \end{tabular} \end{table} Table 4: Accuracy on the Random subset of RippleEdits, by MEND, ROME, MEMIT, and the ICE baseline, on GPT-2, GPT-J, GPT-Neo, LLaMA, and GPT-3.

\begin{table} \begin{tabular}{l l c c c c c|c|c} & & LG & CI & CII & SA & FN & RS & Avg. \\ \hline \multirow{3}{*}{GPT-2} & ROME & \(5.7\) & \(46.4\) & \(21.8\) & \(100\) & \(100\) & \(18.5\) & \(48.7\) \\ & MEMIT & \(6.7\) & \(45.2\) & \(21.2\) & \(100\) & \(100\) & \(24.3\) & \(49.6\) \\ & MEND & \(25.9\) & \(10.7\) & \(5.4\) & \(100\) & \(100\) & \(21.2\) & \(43.9\) \\ \hline \multirow{2}{*}{GPT-J} & ROME & \(5.5\) & \(44.1\) & \(21.0\) & \(98.6\) & \(99.0\) & \(22.3\) & \(48.4\) \\ & MEMIT & \(7.0\) & \(45.9\) & \(23.7\) & \(100\) & \(100\) & \(24.8\) & \(50.2\) \\ \hline \multirow{2}{*}{GPT-Neo} & ROME & \(36.4\) & \(29.4\) & \(41.6\) & \(100\) & \(100\) & \(50.8\) & \(59.7\) \\ & ICE & \(37.5\) & \(92.4\) & \(40.1\) & \(100\) & \(100\) & \(74.4\) & \(74.1\) \\ \hline \multirow{2}{*}{LLaMA} & ROME & \(22.0\) & \(37.4\) & \(16.2\) & \(100\) & \(100\) & \(20.6\) & \(49.4\) \\ & ICE & \(57.2\) & \(85.1\) & \(67.6\) & \(100\) & \(100\) & \(78.0\) & \(81.3\) \\ \hline GPT-3 & ICE & \(31.0\) & \(86.1\) & \(65.6\) & \(100\) & \(100\) & \(83.8\) & \(77.7\) \\ \hline \end{tabular} \end{table} Table 5: Accuracy on the Popular subset of RippleEdits, by MEND, ROME, MEMIT, and the ICE baseline, on GPT-2, GPT-J, GPT-Neo, LLaMA, and GPT-3.

## 6 Conclusion and Discussion

We introduce the concept of the ripple effects of knowledge editing, suggesting that editing a particular fact implies that many other facts need to be updated. We additionally propose RippleEdits, a diagnostic benchmark designed to evaluate the ripple effects of realistic edits. We further evaluate various KE methods on multiple models, and show that models often fail to capture the ripple effects of a given knowledge edit. We thus suggest that future development of KE methods should consider those effects more carefully. Finally, we show that a simple in-context editing method achieves the best results on our benchmark, highlighting the potential of such editing approaches. Our benchmark covers only a small fraction of all possible ripple edits. For example, one could consider ripples that involve more than two hops, and explore the graph structure of different edits. It would also be interesting to consider cases where models succeed in capturing ripple edits, and to analyze how these are implemented mechanistically in the transformer architecture (Geva et al., 2023).

## 7 Acknowledgments

This work is supported in part by the Israeli Science Foundation.
2302.09236
Scalable Prompt Generation for Semi-supervised Learning with Language Models
Prompt-based learning methods in semi-supervised learning (SSL) settings have been shown to be effective on multiple natural language understanding (NLU) datasets and tasks in the literature. However, manually designing multiple prompts and verbalizers requires domain knowledge and human effort, making it difficult and expensive to scale across different datasets. In this paper, we propose two methods to automatically design multiple prompts and integrate automatic verbalizer in SSL settings without sacrificing performance. The first method uses various demonstration examples with learnable continuous prompt tokens to create diverse prompt models. The second method uses a varying number of soft prompt tokens to encourage language models to learn different prompts. For the verbalizer, we use the prototypical verbalizer to replace the manual one. In summary, we obtained the best average accuracy of 73.2% (a relative improvement of 2.52% over even the previous state-of-the-art SSL method with manual prompts and verbalizers) in different few-shot learning settings.
Yuhang Zhou, Suraj Maharjan, Beiye Liu
2023-02-18T05:06:28Z
http://arxiv.org/abs/2302.09236v1
# Scalable Prompt Generation for Semi-supervised Learning with Language Models

###### Abstract

Prompt-based learning methods in semi-supervised learning (SSL) settings have been shown to be effective on multiple natural language understanding (NLU) datasets and tasks in the literature. However, manually designing multiple prompts and verbalizers requires domain knowledge and human effort, making it difficult and expensive to scale across different datasets. In this paper, we propose two methods to automatically design multiple prompts and integrate automatic verbalizer in SSL settings without sacrificing performance. The first method uses various demonstration examples with learnable continuous prompt tokens to create diverse prompt models. The second method uses a varying number of soft prompt tokens to encourage language models to learn different prompts. For the verbalizer, we use the prototypical verbalizer to replace the manual one. In summary, we obtained the best average accuracy of 73.2% (a relative improvement of 2.52% over even the previous state-of-the-art SSL method with manual prompts and verbalizers) in different few-shot learning settings.

## 1 Introduction

Pre-training large language models on huge amounts of text corpora with masked language modeling tasks and then fine-tuning the pre-trained language model (PLM) on downstream tasks has shown superior performance in many natural language processing tasks. However, the discrepancy between the pre-training task (masked language modeling objective) and the downstream fine-tuning task (a task without a MASK token) can lead to unexpected behaviors. Recently, there has been growing research interest in the area of prompt-tuning, where any NLU task is transformed into a cloze task to mimic the pre-training objective of a large masked language model Kumar et al. (2016); McCann et al. (2018); Radford et al. (2018). Prompt-based learning transforms an input \(\mathbf{x}\) into \(\mathbf{x}^{\prime}\) using a prompt function. It makes use of the vast amount of acquired knowledge of PLMs to predict a distribution of tokens at the masked position. The verbalizer then maps the predicted tokens to classes. The main advantage of this approach is that it works well in a few-shot learning environment Schick and Schutze (2021). However, its main disadvantage is the limitation posed by the prompt and verbalizer functions, which require human knowledge to carefully craft them. Such handcrafting work is expensive and does not scale as the variety of tasks and datasets increases. For example, in Alexa, there are thousands of domains, and manually designing prompts and verbalizers for intent classification for each of them according to the dataset content demands human expertise, which is time-consuming and impractical. It is essential to reduce the human effort in the process of prompt generation. Prompt-based learning requires finding the right tokens in the prompts that align with the task requirements and dataset content. However, since the objective of these prompt tokens is only for the language models to perform the task at hand, it is not necessary for them to be a sequence of words that humans can understand. Continuous prompt-based learning alleviates the need for human intervention to determine prompt tokens. Instead, it automates the prompt design process. In the literature, there are mainly two methods: i) automatically search for discrete prompt text tokens Shin et al.
(2020) ii) automatically learn numerical prompt embeddings Lester et al. (2021); Li and Liang (2021); Liu et al. (2021, 2021); Hambardzumyan et al. (2021). The main difference between these two approaches is that the first searches for actual discrete tokens from the language model vocabulary, whereas the second directly learns the embeddings for prompt tokens, which may not be human-comprehensible. Similarly, automatic selection of label words Shin et al. (2020); Schick et al. (2020); Gao et al. (2021), soft verbalizers Hambardzumyan et al. (2021); Liu et al. (2021), and prototypical verbalizers Cui et al. (2022) are methods proposed to eliminate the tedious process of manually defining verbalizer mapping functions. Most of these continuous prompt and automatic verbalizer methods focus on supervised learning (SL) settings but ignore their generalization to semi-supervised learning (SSL) settings. The previous state-of-the-art (SoTA) SSL method with various manual prompts and verbalizers has shown superiority over SL language models with a single manual prompt Schick and Schutze (2021). In this SSL pipeline, we normally train several labeler models with different manual prompts to capture diverse information from the limited training data and make use of them to annotate a huge amount of unlabeled data. Having to design several manual prompt and verbalizer models for SSL settings and applying them across multiple datasets and tasks exacerbates the scalability and cost problem. In this paper, we tackle the problem posed by manual prompt and verbalizer design and propose automatic methods to fully automate the design of diverse prompts and verbalizers in SSL settings. Our main contributions are as follows.

* We propose methods to generate various prompts by adding multiple demonstration examples with continuous prompt tokens for use in SSL settings.
* To the best of our knowledge, we are the first to completely eliminate human involvement in designing multiple prompts and verbalizers in SSL settings and obtain similar and even better performance than the SoTA methods with manual prompts and verbalizers.
* We empirically show that using the automatic verbalizer with manual prompts can achieve a similar performance to manual verbalizers' performance in the SSL pipeline.

## 2 Methodology

Our overall prompt-based SSL workflow follows the Pattern-Exploiting Training (PET) semi-supervised learning setting Schick and Schutze (2021). PET first transforms the input sequence \(x\) into a cloze question containing a single MASK token. Next, it uses a PLM to fill in the value of the MASK token and applies verbalizers to map the output tokens to the class labels \(y\in Y\). They devise a semi-supervised framework to produce soft labels on a large amount of unlabeled data, which are later used to train a final supervised classifier \(\mathbf{F}\). They report strong performance over other supervised prompt-tuning methods and other semi-supervised approaches without prompts across multiple NLU tasks. Before this paper, the PET approach was the state-of-the-art (SoTA) framework integrating the prompt-tuning method into the SSL pipeline. The PET method fine-tunes multiple PLMs with different prompts. It introduces diversity in the prompts by manually designing several prompts using domain and task knowledge. Similarly, it uses human expertise to design verbalizer mappings for each of the datasets based on the knowledge of the tasks. A toy example of such a manual prompt and verbalizer is sketched below.
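The following sketch illustrates a manual cloze prompt and verbalizer for a topic-classification input; both the template and the label-word mapping are hypothetical examples, not the ones used in the experiments.

```python
def manual_prompt(x: str) -> str:
    # Cloze transformation with a single MASK token.
    return f"{x} This news is about [MASK]."

# Manual verbalizer: predicted MASK tokens -> class labels.
VERBALIZER = {"sports": "Sports", "business": "Business",
              "politics": "World", "science": "Sci/Tech"}

cloze = manual_prompt("Stocks rallied after the earnings report.")
predicted_token = "business"  # in practice, the PLM's top token at [MASK]
print(cloze, "->", VERBALIZER[predicted_token])
```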
Here, we use continuous and automatic prompts and verbalizers, thus eliminating the need for human involvement in designing manual prompts and verbalizers.

### Overall Pipeline

Figure 1 shows the overall pipeline of our proposed methods. Unlike the original PET pipeline with manual prompts and verbalizers, we use a prompt generation function to generate multiple automatic prompts. Each PLM with automatic prompts serves as a labeler model. We train each of these prompt \(+\) automatic verbalizer models with a labeled dataset \(\mathcal{T}\) in few-shot settings. With an input sequence \(x_{t}\in\mathcal{T}\) and the given label \(y_{t}\), we first use the prompt function \(P\) to transform \(x_{t}\) into a sequence \(P(x_{t})\) with a MASK token. The verbalizer then maps the predicted word probability at the masked position to the label probability. For each PLM \(m\), the predicted probability \(p_{m}(y_{t}|x_{t})\) is defined as

\[p_{m}(y_{t}|x_{t})=\frac{\exp m(y_{t}|x_{t})}{\sum_{y^{\prime}\in Y}\exp m(y^{\prime}|x_{t})} \tag{1}\]

where \(m(y|x)\) is the raw score of PLM \(m\) at the masked position. After obtaining the probability, we minimize the cross-entropy loss \(\mathcal{L}_{c}\) between \(p_{m}(y|x)\) and \(y\). We apply the trained labeler models to each sentence \(x_{d}\in\mathcal{D}\) in the unlabeled dataset \(\mathcal{D}\) and get the probability \(p_{m}(y_{d}|x_{d})\) for each trained model. We then take the average of these probabilities over the trained models \(m\) as the ground-truth probability,

\[p_{t}(y_{d}|x_{d})=\frac{1}{Z}\sum_{m\in M}p_{m}(y_{d}|x_{d})\]

where \(Z\) is the total number of trained PLMs with different automatic prompts. Eventually, we fine-tune a final pre-trained language model \(\mathbf{F}\) with a standard sequence classification head. We use the Kullback-Leibler (KL) divergence as our loss function. Given \(p_{t}(y_{d}|x_{d})\) and the predicted probability \(\hat{p}(y_{d}|x_{d})\) of the final classifier \(\mathbf{F}\), the divergence loss \(\mathcal{L}_{div}\) for this input is:

\[\mathcal{L}_{div}(x_{d})=\sum_{y^{\prime}\in Y}p_{t}(y^{\prime}|x_{d})\log\left(\frac{p_{t}(y^{\prime}|x_{d})}{\hat{p}(y^{\prime}|x_{d})}\right) \tag{2}\]

The final classifier \(\mathbf{F}\) is then applied to the test set to obtain the results. Schick and Schutze (2021) introduce diversity in their SSL pipeline by training several models with different manual prompts and applying them to softly label a large amount of unlabeled data. The diversity between manual prompts brings consistent improvements. We observe that the diverse knowledge learned by the language model is mostly introduced by the prompts rather than the manual verbalizers, since for most datasets they prepare only one manual verbalizer but multiple prompts. Thus, we propose replacing manual prompts with multiple automatic prompts and using the same automatic verbalizer for all labeler models.
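A minimal PyTorch sketch of Equations 1-2, assuming each labeler model returns raw class scores at the MASK position; shapes and names are illustrative.

```python
import torch
import torch.nn.functional as F

def soft_labels(labeler_scores: list[torch.Tensor]) -> torch.Tensor:
    """Average the per-labeler class distributions (Eq. 1) into soft targets."""
    probs = [F.softmax(scores, dim=-1) for scores in labeler_scores]
    return torch.stack(probs).mean(dim=0)

def distillation_loss(classifier_logits: torch.Tensor,
                      targets: torch.Tensor) -> torch.Tensor:
    """KL divergence between the soft targets and the final classifier (Eq. 2)."""
    log_q = F.log_softmax(classifier_logits, dim=-1)
    return F.kl_div(log_q, targets, reduction="batchmean")

# Toy usage: 3 labeler models, a batch of 2 unlabeled sentences, 4 classes.
scores = [torch.randn(2, 4) for _ in range(3)]
targets = soft_labels(scores)
loss = distillation_loss(torch.randn(2, 4, requires_grad=True), targets)
loss.backward()
```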
### Continuous Prompt Design

Several researchers have proposed methods to automate the prompt design process Liu et al. (2021); Li and Liang (2021); Lester et al. (2021). In most of these methods, continuous trainable prompt tokens are inserted into the input sentence and their embeddings are learned during the training process. However, existing continuous prompt-based learning methods do not consider their application in the PET pipeline, which requires training several labeler models Schick and Schutze (2021) in order to learn diverse knowledge from the datasets. Therefore, most methods do not define strategies to compose multiple continuous prompts. We propose two scalable solutions that introduce different variables into the design of continuous prompt labeler models (various demonstration examples, or varying numbers of continuous prompt tokens). We expect that with these diverse continuous prompts, the trained language models can fully learn different aspects of knowledge from the training dataset.

Figure 1: Semi-Supervised Learning (SSL) Training. Multiple diverse prompt-based learning models are trained on labeled data to soft-label huge amounts of unlabeled data. The soft labels serve as ground truth to train the final classifier. \(P_{0},P_{1},\ldots\) are continuous prompt tokens and \(Demo\_A,Demo\_B,\ldots\) are demonstration examples randomly sampled from the training data.

#### 2.2.1 Scalable Prompt Generation

Inspired by the P-tuning Liu et al. (2021) method, we insert multiple continuous prompt tokens into the input sentence \(x\), transforming it into \([\mathbf{x}][p_{0},p_{1},\ldots,p_{n}][\text{MASK}]\). Different from the original P-tuning method, we invent two scalable designs to make it suitable for the prompt-based SSL pipeline.

**Add Demonstration Examples**: In this method, we add different demonstration examples to construct diverse prompts. This is similar to the prompt augmentation method, in which one chooses to add additional answered prompts to demonstrate what kind of answer the language model should produce for the MASK token Liu et al. (2021). These additional answered prompts are called demonstration examples \([demo]\). To reduce the discrepancy between the demonstration examples and the input sentences, we also add a fixed number of continuous prompt tokens \(p\) between the demonstration sentence and its true label. Thus, given the labeled input \(\mathbf{x_{d}}\) and its corresponding ground-truth label \(\mathbf{y_{d}}\) from the labeled training dataset, we construct the demonstration example as \([demo]=[\mathbf{x_{d}}][p_{0},p_{1},\ldots,p_{n}][\mathbf{y_{d}}]\), where \(p_{0},p_{1},\ldots,p_{n}\) are continuous prompt tokens. After composing the demonstration examples \([demo]\), given a training input from the labeled dataset \(x_{t}=(s_{1},s_{2},\ldots,s_{k})\in\mathcal{T}\) and label \(y_{t}\), where \(s_{1},s_{2},\ldots,s_{k}\) are input tokens for the PLM \(m\), the prompt template function \(P_{1}(x_{t})\) is formally defined as

\[P_{1}(x_{t})_{1}=[demo_{1}][\mathbf{x_{t}}][p_{0},\ldots,p_{n}][\text{MASK}] \tag{3}\]
\[\ldots\]
\[P_{1}(x_{t})_{k}=[demo_{k}][\mathbf{x_{t}}][p_{0},\ldots,p_{n}][\text{MASK}]\]

We create multiple prompts by adding different demonstration examples with exactly \(n\) continuous soft tokens to the input sentence. Demonstration examples are randomly sampled from the labeled datasets. For longer input sentences, we first truncate the length of \([demo]\) to fit the PLM requirement. Our intuition is that different demonstration examples will introduce the diversity necessary for SSL experimentation (see the sketch below).
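The sketch below assembles the \(P_{1}\) template of Eq. 3 at the string level; the [SOFT] placeholder stands in for positions later mapped to trainable embeddings, and all names and example sentences are illustrative.

```python
SOFT, MASK = "[SOFT]", "[MASK]"

def build_demo(demo_text: str, demo_label: str, n_soft: int) -> str:
    """[demo] = [x_d][p_0..p_n][y_d], with continuous tokens before the label."""
    return f"{demo_text} {' '.join([SOFT] * n_soft)} {demo_label}"

def p1_template(x: str, demo_text: str, demo_label: str, n_soft: int = 5) -> str:
    """P1(x) = [demo][x][p_0..p_n][MASK]; each labeler gets a different demo."""
    demo = build_demo(demo_text, demo_label, n_soft)
    return f"{demo} {x} {' '.join([SOFT] * n_soft)} {MASK}"

print(p1_template("Stocks rallied after the earnings report.",
                  "The team won the championship game.", "sports"))
```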
**Vary Soft Token Numbers**: In this method, we vary the number of continuous prompt tokens between different labeler models. In other words, this prompt function \(P_{2}(x_{t})\) with input sentence \(x_{t}\) is defined as

\[P_{2}(x_{t})_{1}=[\mathbf{x_{t}}][p_{0},p_{1},\ldots,p_{n_{1}}][\text{MASK}] \tag{4}\]
\[\ldots\]
\[P_{2}(x_{t})_{k}=[\mathbf{x_{t}}][p_{0},p_{1},\ldots,p_{n_{k}}][\text{MASK}]\]

and each of the labeler models uses a different number \(n_{1},\ldots,n_{k}\) of continuous prompt tokens \(p\). Here, we do not prepend the demonstration example. Our intuition is that given different numbers of continuous prompt tokens, the optimized learned continuous prompts may also be different. For example, for the AG's News dataset Zhang et al. (2015) about news topics, the optimized prompt with two continuous prompt tokens could be \([\mathbf{x}][\text{News :}][\text{MASK}]\), while the optimized prompt with three continuous prompt tokens could be \([\mathbf{x}][\text{the category is}][\text{MASK}]\). We expect that varying the number of continuous prompt tokens will have a similar impact to manually constructing different prompts.

#### 2.2.2 Reparameterization Block

Li and Liang (2021) and Liu et al. (2021) empirically show that directly updating the parameters in continuous prompts leads to unstable optimization. Hence, we first feed prompt embeddings through a reparameterization block rather than directly feeding them into the PLM. Our reparameterization block uses a bidirectional LSTM Hochreiter and Schmidhuber (1997) network with a two-layer ReLU-activated multilayer perceptron (MLP) Liu et al. (2021); Li and Liang (2021). We denote the randomly initialized tokens as \(p^{\prime}_{i}\) and the real input embeddings, which are fed into the PLM, as \(p_{i}\). The \(p_{i}\) are the output of the bidirectional LSTM network and the MLP:

\[p_{i}=\text{MLP}([\text{LSTM}(p^{\prime}_{0:i}),\text{LSTM}(p^{\prime}_{i:n})])\]

where \(p_{i}\) is also the soft token used in Equations 3 and 4. We learn the optimized continuous prompt tokens \(\hat{p}_{0:n}\) during the training process. With the downstream cross-entropy loss \(\mathcal{L}_{c}\), we can differentially optimize the continuous prompts by:

\[\hat{p}_{0:n}=\underset{p}{\operatorname{argmin}}\,\mathcal{L}_{c}(p_{m}(y|x),y) \tag{5}\]
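A sketch of the reparameterization block in PyTorch, under the stated setup (bidirectional LSTM followed by a two-layer ReLU MLP, hidden size 768 per the implementation details in §3.4); the module and variable names are our own.

```python
import torch
import torch.nn as nn

class Reparameterization(nn.Module):
    """Maps freely initialized prompt embeddings p' to the embeddings p
    that are fed into the PLM (a sketch of the block described above)."""

    def __init__(self, n_tokens: int, hidden_dim: int = 768):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(n_tokens, hidden_dim))  # p'
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, bidirectional=True,
                            batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self) -> torch.Tensor:
        # Bi-LSTM over the prompt-token sequence; both directions concatenated.
        out, _ = self.lstm(self.raw.unsqueeze(0))  # (1, n_tokens, 2*hidden)
        return self.mlp(out).squeeze(0)            # p, fed into the PLM

prompts = Reparameterization(n_tokens=5)
print(prompts().shape)  # torch.Size([5, 768])
```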
### Automatic Verbalizers

There are several automatic verbalizer methods that eliminate the need for human intervention and expertise in building the mapping functions. We experiment with three types of automatic verbalizers: i) the soft verbalizer Hambardzumyan et al. (2021), ii) the prototypical verbalizer Cui et al. (2022), and iii) the search-based verbalizer Schick et al. (2020). Cui et al. (2022) experimentally show the superiority of the prototypical verbalizer in a supervised learning environment. However, they did not conduct such experiments for SSL settings. Our experiment with the SSL PET method (details in Section 3.5) with different automatic verbalizers showed that the prototypical verbalizer performed better than the soft verbalizer and the search-based verbalizer on multiple datasets. Thus, we choose the prototypical verbalizer as a replacement for the manual verbalizer. With the optimized embedding of the MASK token from PLM \(m\) and the ground-truth labels \(y\), the prototypical verbalizer learns prototype vectors for each class using contrastive learning (Oord et al., 2018). The prototypical verbalizer first initializes a prototype embedding for each class label and then uses the embedding of the MASK token as the instance embedding. It uses an instance-instance loss \(\mathcal{L}_{ins}\) to maximize intra-class similarity and minimize inter-class similarity. Similarly, it uses an instance-prototype loss \(\mathcal{L}_{proto}\) to maximize the similarity between the prototype and instances belonging to the same class and minimize the similarity to instances belonging to other classes. The probability distribution of the MASK token over the classes is calculated by the cosine similarity between the instance embedding and each optimized prototype embedding. For inference, it assigns to the instance the class of the prototype vector with the highest probability score, which is computed by taking the similarity scores of the instance vector with the prototype vectors and normalizing them.

### Training and Inference Strategy

All model parameters to be optimized are randomly initialized. As mentioned in Sections 2.2.2 and 2.3, we update the parameters in the continuous prompts and PLMs with the loss \(\mathcal{L}_{c}\) and optimize the parameters in the verbalizers with the losses \(\mathcal{L}_{ins}\) and \(\mathcal{L}_{proto}\). Instead of summing all losses together, our training strategy is to first freeze the parameters in the prototypical verbalizer and train the parameters in the reparameterization block and the PLM together with the cross-entropy loss \(\mathcal{L}_{c}\). Then we freeze the learned parameters and train the parameters in the prototypical verbalizer with the instance-instance loss \(\mathcal{L}_{ins}\) and the instance-prototype loss \(\mathcal{L}_{proto}\). After training all labeler models and obtaining the class probabilities on the unlabeled dataset, we use \(\mathcal{L}_{div}\) to fine-tune the final language model classifier. During inference, we do not rely on any prompt-based labeler models and directly use the final fine-tuned language model \(\mathbf{F}\) to predict on the test dataset.

## 3 Experiments

To verify the effectiveness of our framework, we conduct multiple semi-supervised learning experiments with several strong baseline frameworks on commonly used NLU benchmarks.

### Dataset Collection

We experiment with five different datasets1: AG's News (Zhang et al., 2015), Yahoo Answers (Zhang et al., 2015), MNLI (MultiNLI, Multi-Genre Natural Language Inference, Williams et al. (2018)), RTE (Recognizing Textual Entailment, Dagan et al. (2006)), and CB (CommitmentBank, de Marneffe et al. (2019)). AG's News and Yahoo Answers are topic classification (TC) datasets, while MNLI, RTE, and CB are natural language inference (NLI) datasets. In Table 1, we provide the number of distinct classes, the unlabeled dataset size used for SSL, and the test size for all five datasets. Details about the design of prompts and verbalizers can be found in Appendix A. Footnote 1: We downloaded these datasets using the script provided by OpenPrompt: https://github.com/thunlp/OpenPrompt We perform multiple experiments in few-shot settings for all datasets. For the few-shot experiments, we use 1, 5, 10, and 20 examples per class for all datasets except for CB and RTE, where we experiment with 32 examples to align with earlier research work (Schick and Schutze, 2021). We report the average accuracy across three runs of each experiment with three different random seeds.

### Proposed Models

**Demo+Soft Tokens PET**: The first method is to replace the manual verbalizer with the prototypical verbalizer and manual prompts with demonstration examples and continuous prompt tokens.
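Both proposed models rely on the prototypical verbalizer of §2.3. Its scoring step can be sketched as follows, with illustrative shapes and randomly initialized tensors standing in for the learned embeddings.

```python
import torch
import torch.nn.functional as F

def proto_probs(mask_embedding: torch.Tensor,
                prototypes: torch.Tensor) -> torch.Tensor:
    """Class probabilities from cosine similarity between the MASK-token
    embedding (hidden,) and the prototypes (n_classes, hidden)."""
    sims = F.cosine_similarity(mask_embedding.unsqueeze(0), prototypes, dim=-1)
    return F.softmax(sims, dim=-1)  # normalize similarities into probabilities

probs = proto_probs(torch.randn(768), torch.randn(4, 768))
print(probs.argmax().item(), probs.sum().item())  # predicted class, ~1.0
```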
\begin{table} \begin{tabular}{|l c c c c|} \hline **Dataset** & **Task** & **\#Class** & **\#Unlabeled** & **\#Test** \\ \hline \hline AG’s News & TC & 4 & 40,000 & 7,600 \\ Yahoo & TC & 10 & 100,000 & 60,000 \\ \hline \hline CB & NLI & 3 & 30,000 & 56 \\ RTE & NLI & 2 & 20,000 & 277 \\ MNLI & NLI & 3 & 30,000 & 9,815 \\ \hline \end{tabular} \end{table} Table 1: Data statistics. TC = Topic Classification, NLI = Natural Language Inference.

**Vary Soft Tokens PET**: The second method introduces diversity by varying the number of continuous prompt tokens, and we use the prototypical verbalizer across multiple labeler models.

### Models for Comparison

We design several strong baseline experiments in addition to our proposed models and also perform an ablation study to show the superiority of our proposed models on multiple NLU tasks.

#### 3.3.1 Baseline Models

**Fine-tune**: This is a supervised method, where we directly fine-tune the RoBERTa-large PLM with training examples in different few-shot settings. In this method, we do not leverage the unlabeled data.

**Prototypical Verbalizer PET**: This is a semi-supervised learning method similar to Schick and Schutze (2021), but we replace the manual verbalizer with the prototypical verbalizer and keep the manual prompts. Experiments with this setup show the benefits of applying an automatic verbalizer in the PET framework.

**Manual PET**: This is the semi-supervised learning method from Schick and Schutze (2021). Our main goal is to show that, with our proposed methods, we can achieve similar or better results than this manual method. There are other SSL methods that rely on data augmentation without prompt tuning, such as UDA Xie et al. (2020) and MixText Chen et al. (2020). Since their performance is consistently worse than the Manual PET model across multiple datasets Schick and Schutze (2021), we do not choose these models for comparison in this work.

#### 3.3.2 Model Intervention for Ablation Study

**Fixed Soft Tokens PET**: This semi-supervised learning method is similar to our second proposed method, in which we vary the number of continuous tokens to create multiple prompts. However, here we keep the number of continuous tokens fixed and do not add demonstration examples either. This experiment helps us understand the importance of the diversity introduced by varying continuous tokens in prompt design.

**Demo+Soft in SL**: This is a supervised method, where we use a prompt template to transform the input by adding a randomly selected demonstration example from the training data and a fixed number of continuous prompt tokens, and we use the prototypical verbalizer for classification. We use RoBERTa-large as the PLM. With this experiment, we try to understand the power of semi-supervised learning methods with multiple prompts over supervised training.

### Implementation Details

We use the RoBERTa-Large model Liu et al. (2019) as our PLM for all of our experiments. We use AdamW as our optimizer with a learning rate of \(1\mathrm{e}{-5}\), a weight decay of \(0.01\), a linear scheduler, and a batch size of \(2\), and we train for \(5\) epochs. The reparameterization block contains a 2-layer bidirectional LSTM and 2 linear layers with a ReLU activation function. The hidden dimension of the linear and LSTM layers is 768, the same as the hidden dimension of RoBERTa-Large. We train the parameters in the reparameterization block and the PLM together.
For the prototypical verbalizer, we base our implementation on the PyTorch2, Huggingface Transformers3, and OpenPrompt4 frameworks Ding et al. (2021). For our Demo+Soft Tokens PET, each labeler model learns 5 soft tokens with a different demonstration example. For our Vary Soft Tokens PET, we prepare 5 prompts for each dataset, and the number of soft tokens in each prompt ranges from 1 to 5. Footnote 2: https://pytorch.org/ Footnote 3: https://huggingface.co/ Footnote 4: https://github.com/thunlp/OpenPrompt

### Results of Multiple Automatic Verbalizers

To understand which automatic verbalizer is a better replacement for the manual verbalizer, we first experiment with three automatic verbalizers: the soft verbalizer Hambardzumyan et al. (2021); Liu et al. (2021), the search verbalizer Gao et al. (2021); Shin et al. (2020); Schick et al. (2020), and the prototypical verbalizer Cui et al. (2022).

\begin{table} \begin{tabular}{|l|c|c c c|} \hline **Datasets** & & \multicolumn{3}{c|}{**SSL PET**} \\ \hline & **\# instances** & **SoftVerb** & **SearchVerb** & **ProtoVerb** \\ \hline \hline AG’s News & 10 & 49.4 & **80.5** & 77.2 \\ Yahoo & 10 & 11.8 & 34.0 & **51.9** \\ \hline \hline CB & 32 & **88.7** & 73.2 & 85.7 \\ RTE & 32 & 48.2 & 50.2 & **52.8** \\ MNLI & 10 & 39.0 & 37.0 & **50.0** \\ \hline \end{tabular} \end{table} Table 2: Average accuracy on different datasets when replacing manual verbalizers with automatic verbalizers in the PET SSL setup. For CB and RTE, we use 32 training examples, whereas for the other datasets, we use 10 training examples to train labeler models. The best performance is marked in bold.

For all of these
The first method (Demo+Soft Tokens PET), which adds randomly sampled demonstration examples from training data with a fixed number of trainable continuous prompt tokens with input, achieves better performance than Manual PET method. The next method (Vary Soft PET), in which we vary the number of continuous trainable tokens, also achieves better performance than Manual PET method. For topic classification tasks, under multiple few-shot settings, the average accuracy of Demo+Soft and Vary Soft PET are \(77.0\) and \(77.3\), respectively, while the average accuracy of Manual PET method is \(77.1\). Similarly, for NLI datasets under different few-shot settings, the average accuracy of our Vary Soft PET method is \(69.6\) and Demo+Soft Tokens PET method is \(70.7\). Both of these results are better than Manual PET method (\(67.7\)). Furthermore, across all these datasets, Demo+Soft Tokens PET and Vary Soft PET achieve an average performance of \(73.2\) and \(72.6\), respectively. These results are better than Manual PET (\(71.4\)) method. This experiment shows that it is possible to completely eliminate human involvement and expertise in designing prompts and verbalizers for the SSL pipeline with even better performance. We also observe that for the case of one-shot experiments with MNLI dataset, Demo + Soft PET method obtains an accuracy of \(36.1\), which is much worse than other prompt baseline models. This may be due to randomly sampled \([demo]\) examples, as previous studies have shown that the choice of examples in the few-shot setting can result in high-variance performance (Lu et al., 2021). In future work, we can utilize sentence embeddings to make intelligent decisions while selecting demonstration examples. ### Ablation Study #### 3.7.1 Impact of Semi-supervised Learning We compare our proposed methods with supervised learning methods: fine-tuning and prompt-based tuning methods (Demo+Soft in SL). All semi-supervised learning methods perform significantly better than supervised learning methods. Traditional fine-tuning methods perform the worst (\(45.1\) average accuracy) on different datasets and tasks. Demo+Soft in SL method is similar to our proposed Demo+Soft Tokens PET method but does not make use of unlabeled data. Demo+Soft in SL performs better than the fine-tuning method and achieves an average accuracy of \(68.7\) on multiple datasets and tasks in different few-shot settings. Both of the supervised learning methods perform worse than any SSL prompting model, indicating the necessity of the SSL pipeline in NLU tasks. #### 3.7.2 Impact of Diversity in the Prompts In order to understand the effect of introducing diversity through multiple prompts in SSL, we devise another experiment, where we use the SSL setup but use only **one** prompt labeler model (not adding a demonstration example but using trainable soft tokens) to label unlabeled data. We name this method as Fixed Soft Tokens PET. Table 3 shows that in most comparisons (13/14), our proposed Vary Soft PET or Demo+Soft PET method achieves better performance. When comparing with the Fixed Soft PET, our proposed Demo+Soft PET shows an improvement of average accuracy from \(72.3\) to \(73.2\) (\(p<0.05\) by paired \(t\) test) Hsu and Lachenbruch (2014). Moreover, both Demo+Soft and Vary Soft PET methods obtain better average performance than the Fixed Soft Tokens PET in NLI and topic classification tasks. These results show the importance of diversity introduced by multiple prompt labeler models. 
## 4 Related Work

### Language Model Prompting

Cui et al. (2021) fine-tuned the pre-trained generative language model BART with a predefined template ("\(candidate\_span\) is a \(entity\_type\) entity") for NER classification. Wang et al. (2021) proposed the Entailment as Few-shot Learner (EFL) method, which transforms classification tasks into natural language textual entailment tasks and then fine-tunes the LM. The transformation also makes it easy to leverage unsupervised contrastive data augmentation methods to add pairwise examples to the limited annotated data. This setting showed an average improvement of 2.7% across 15 different NLP tasks. Beyond using prompts for supervised learning, PET is the SoTA method combining manual prompts with semi-supervised learning to obtain strong performance across multiple NLU tasks (Schick and Schütze, 2021).

### Automatic Prompts and Verbalizers

Shin et al. (2020) used a gradient-guided search to find discrete prompt tokens based on task accuracy, initialized the tokens, and then fine-tuned the LM. For automatic label token selection, they first train a logistic regression classifier from the contextualized embedding of the MASK token, score the MLM's output word embeddings with it, and select the top-k highest-scoring words for each label. They showed better performance than manual prompting methods on sentiment classification and textual entailment tasks. Instead of using a gradient-guided search for prompt tokens, Li and Liang (2021) and Lester et al. (2021) attached prefix vectors and learned the embeddings of these prefix vectors while keeping the LM parameters frozen. Liu et al. (2021) proposed P-tuning, which replaces the input embeddings of pre-trained language models with differentiable output embeddings, using a pattern based on human design.

\begin{table} \begin{tabular}{l|c|c c c c c|c c} \hline & & \multicolumn{5}{c|}{**Semi Supervised Learning PET**} & \multicolumn{2}{c}{**Supervised**} \\ \hline **Dataset** & **\# Training** & **Demo+Soft** & **Vary Soft** & **Fixed Soft** & **Protoverb** & **Manual** & **Fine-Tune** & **Demo+Soft** \\ \hline \hline \multicolumn{9}{c}{**Topic Classification**} \\ \hline AG’s News & 1 & **83.5** & 81.3 & 82.8 & 80.0 & 80.7 & 25.7 & 62.2 \\ AG’s News & 5 & 87.6 & **88.0** & 87.3 & 87.3 & 87.8 & 32.6 & 84.9 \\ AG’s News & 10 & 88.3 & 88.3 & 86.5 & 88.7 & **88.8** & 58.3 & 87.2 \\ AG’s News & 20 & 88.8 & **89.3** & 88.9 & 89.2 & 89.2 & 86.1 & 88.0 \\ \hline Yahoo & 1 & 61.1 & **62.9** & 59.6 & 62.0 & 62.3 & 10.7 & 55.6 \\ Yahoo & 5 & 67.4 & 67.9 & 67.1 & 67.8 & **68.0** & 12.1 & 65.2 \\ Yahoo & 10 & 68.9 & 69.5 & 69.1 & **70.0** & 69.5 & 37.8 & 67.0 \\ Yahoo & 20 & 70.7 & **71.0** & 70.4 & 70.9 & 70.7 & 66.7 & 66.5 \\ \hline \hline **TC Avg** & - & 77.0 & **77.3** & 76.5 & 77.0 & 77.1 & 41.2 & 72.1 \\ \hline \hline \multicolumn{9}{c}{**Natural Language Inference**} \\ \hline MNLI & 1 & 36.1 & 51.7 & **52.7** & 44.2 & 44.8 & 34.3 & 35.1 \\ MNLI & 5 & 51.2 & **58.1** & 57.7 & 55.3 & 55.2 & 33.5 & 46.9 \\ MNLI & 10 & 60.4 & 57.8 & 58.4 & **62.3** & 60.5 & 34.3 & 54.4 \\ MNLI & 20 & 64.0 & 64.7 & 60.5 & **69.6** & 68.6 & 35.0 & 41.9 \\ \hline CB & 32 & **88.7** & 88.1 & 88.7 & 85.7 & 86.9 & 60.7 & 87.6 \\ \hline RTE & 32 & **70.4** & 62.5 & 62.6 & 52.8 & 58.8 & 48.1 & 67.4 \\ \hline \hline **NLI Avg** & - & **70.7** & 69.6 & 69.5 & 65.5 & 67.7 & 47.7 & 66.5 \\ \hline \hline **Overall Avg** & - & **73.2** & 72.6 & 72.3 & 70.1 & 71.4 & 45.1 & 68.7 \\ \hline \end{tabular} \end{table} Table 3: Few-shot experiment results (average accuracy) on different datasets with our proposed methods in the PET SSL setup. For CB and RTE, we use \(32\) training examples, whereas for the other datasets we use \(\{1,5,10,20\}\) randomly selected examples per class for the few-shot learning experiments. The best performance is marked in bold. Note that to report the average results for the NLI task, we first average over the MNLI results under the different few-shot settings, and then average over the three NLI datasets to give each task equal weight. The overall average results are computed following a similar approach, giving each dataset equal weight.
Liu et al. (2021b) optimized and adapted the Prefix Tuning model for NLU. Vu et al. (2021) proposed learning soft prompt embeddings from one or more source tasks and then transferring them to initialize the prompts for a target task. In addition, they proposed an efficient retrieval approach to find task embeddings and predict the most transferable source tasks for a given novel target task. Several automatic verbalizers, such as search-based verbalizers, soft verbalizers, and prototypical verbalizers, have been proposed to automate the design of the verbalizer mapping function. Search-based verbalizers aim to find appropriate tokens to replace human selection (Schick et al., 2020a; Shin et al., 2020b; Gao et al., 2020). Both soft verbalizers and prototypical verbalizers learn trainable class or prototype embeddings during training (Cui et al., 2022; Zhang et al., 2021; Hambardzumyan et al., 2021). Mahabadi et al. (2022) proposed a prompt-free method (PERFECT) to train the language model, which relies on neither manual prompts nor verbalizers. PERFECT reported performance similar to that of PET (Schick and Schütze, 2021) in the few-shot setting. However, they used a supervised learning setup and compared their results against a single labeler model with one prompt rather than against the final classifier. Here, we use an SSL setting similar to Schick and Schütze (2021) and report the results of the final classifier.

## 5 Conclusions

In this paper, we successfully use automatic prompts and verbalizers in semi-supervised learning settings. We show that our proposed automatic prompt generation methods, combined with the prototypical verbalizer, can eliminate human engineering from the prompt-based SSL setup while achieving similar or better performance than the SoTA Manual PET method. Our methods have the added advantage of scaling across multiple tasks and datasets. We also empirically verify the advantage of semi-supervised learning methods, which exploit large amounts of unlabeled data, over supervised methods. As next steps, we plan to investigate whether we can achieve similar performance by freezing the PLM's parameters and tuning only the verbalizer and prompt parameters; this setup would save a tremendous amount of space by making it easy to share and reuse PLMs. Moreover, we plan to explore ways of combining the two proposed methods, Demo+Soft PET and Vary Soft PET, to take advantage of both.

## 6 Limitations

Although we experiment with multiple NLU tasks and datasets, these datasets are only in the English language. Prompt-based learning relies on large language models, which have acquired knowledge through pre-training on huge corpora.
For low-resource languages, it may be difficult to obtain PLMs pre-trained on a comparably large corpus, which could make it hard to reproduce performance similar to that obtained on English corpora. Moreover, fine-tuning and inference with PLMs require multiple large GPUs, which might not be accessible to everyone.

## Acknowledgments

We would like to thank the anonymous reviewers as well as Wei Ai, Paiheng Xu, Akram Almatarky, Jangwon Kim, Morteza Ziyadi, and Giannis Karamanolakis for reviewing the paper and providing helpful comments and suggestions.
2301.08478
Interaction between the turbulent solar wind and a planetary magnetosphere: a 2D comet example
Using the newly developed code \emph{Menura}, we present the first global picture of the interaction between a turbulent solar wind and a planetary obstacle in our solar system, namely a comet. This first publication aims at shedding light on the macroscopic effect of the upstream solar wind turbulence on the induced magnetosphere of a comet. Using a hybrid Particle-In-Cell simulation code, we model a medium-activity comet with both a turbulent and a laminar solar wind input, for a direct comparison between the two regimes. We show how the turbulent characteristics of the solar wind lead to a smaller obstacle size. We then present how the upstream turbulent structures, traced by the perpendicular magnetic field fluctuations absent in the laminar case, self-consistently drape and pile up around the denser inner coma, forming intense plasmoids downstream of the nucleus and pulling away dense cometary ion bubbles. This pseudo-periodic erosion phenomenon re-channels the global cometary ion escape and, as a result, the innermost coma is found to be on average 45\% less dense in the turbulent case than predicted by simulating a laminar upstream flow.
Behar Etienne, Henri Pierre
2023-01-20T09:14:44Z
http://arxiv.org/abs/2301.08478v1
# Interaction between the turbulent solar wind and a planetary magnetosphere: a 2D comet example

###### Abstract

Context: Using the newly developed code _Menura_, we present the first global picture of the interaction between a turbulent solar wind and a planetary obstacle in our solar system, namely a comet. Aims: This first publication aims at shedding light on the macroscopic effect of the upstream solar wind turbulence on the induced magnetosphere of a comet. Methods: Using a hybrid Particle-In-Cell simulation code, we model a medium-activity comet with both a turbulent and a laminar solar wind input, for a direct comparison between the two regimes. Results: We show how the turbulent characteristics of the solar wind lead to a smaller obstacle size. We then present how the upstream turbulent structures, traced by the perpendicular magnetic field fluctuations absent in the laminar case, self-consistently drape and pile up around the denser inner coma, forming intense plasmoids downstream of the nucleus and pulling away dense cometary ion bubbles. This pseudo-periodic erosion phenomenon re-channels the global cometary ion escape and, as a result, the innermost coma is found to be on average 45% less dense in the turbulent case than predicted by simulating a laminar upstream flow. Conclusions:

## 1 Introduction

The solar wind - a supersonic, radially expanding plasma escaping from the Sun - can be described with two levels of complexity. First, by considering its average, background values, providing a global laminar picture on which most of the seminal studies of the heliosphere and planetary magnetospheres were founded (Biermann, 1951; Alfvén, 1957; Parker, 1958; Dungey, 1961). A second level of complexity introduces the turbulent nature of the flow: a combination of chaotic fluctuations in the magnetic field and in the particle density and velocity, adding up to their background, average values (Bruno & Carbone, 2005). In this phenomenology, energy (magnetic and kinetic) cascades from large to small scales, much as described by its neutral-fluid analogue (Kolmogorov, 1941), corresponding to a spectrum of fluctuations ranging over several decades of temporal and spatial scales. Eventually, this cascade leads to the dissipation of the energy at the smallest scales involved. Turbulence is suggested to play a key role in the acceleration of the solar wind, providing a continuous heating of the plasma from the solar corona and beyond (Cranmer et al., 2015). When it comes to the formation and dynamics of planetary magnetospheres, the overwhelming majority of our knowledge was built on the laminar description of the solar wind. Specifically, all global numerical simulations of these interactions involve a steady, homogeneous plasma flow upstream of the obstacle (Schunk & Nagy, 2009; Ma et al., 2008; Kallio et al., 2012). A few recent exceptions can be pointed out, for instance the magnetohydrodynamics (MHD) simulation using time-varying upstream conditions at the Earth (Lakka et al., 2017), two studies of a Coronal Mass Ejection interacting with the Earth, using either an MHD model (Lakka et al., 2019) or a hybrid Particle-In-Cell (PIC) model (Moissard et al., 2022), and the hybrid PIC simulation of the effect of a pivoting magnetic field upstream of Mars (Romanelli et al., 2019). However, a growing interest in the relationship between turbulence and magnetospheres is now emerging.
Aside from the plethora of publications focused on turbulence within the Earth's magnetosheath (Rakhmanova et al. (2021) and references therein), several studies have appeared on the topic of turbulence in the magnetospheres of outer planets and comets (Saur, 2021; Ruhunusiri et al., 2020). Turbulence within the geomagnetic tail has also been at the centre of many investigations, reviewed by Antonova & Stepanova (2021) and El-Alaoui et al. (2021). More specifically, the effect of upstream turbulence on the terrestrial magnetosphere and its dynamics has been investigated for decades, with results reviewed by D'Amicis et al. (2020) and Guio & Pecseli (2021). This sizable literature outlines one main missing element: a numerical tool for the global simulation of these turbulent interactions. This is where the recently developed code _Menura_ (Behar et al., 2022) positions itself, allowing the injection of a fully turbulent flow upstream of an obstacle. This publication presents the first application of the code and focuses on a cometary magnetosphere characterised by a neutral outgassing rate typical of a heliocentric distance of about two astronomical units (au). Since the dawn of solar and space physics, comets have been emblematic tracers of the solar wind, hinting at its very existence (Biermann, 1951; Alfvén, 1957), while interplanetary sector boundaries as well as CMEs were notably analysed using remote observations based on the intermittent disconnections of comet tails (Niedner and Brandt, 1978; Vourlidas et al., 2007). More recently, comets were used once more to trace some of the solar wind turbulent parameters (DeForest et al., 2015) as well as its speed (Cheng et al., 2022). In addition to this historic role, a great amount of knowledge on cometary magnetospheres was produced during the last decade in the context of the European mission Rosetta (Goetz, 2022). This provides us with a solid background for a first global exploration of such a turbulent interaction. Because of the multi-scale nature of turbulence, its numerical simulation is intrinsically expensive. For this first study, we made the choice of properly resolving a wide range of scales, from the magnetospheric scales down to the ion scales, below the ion inertial length. To make this problem tractable, and specifically to iterate quickly between the simulation and its analysis, we chose to work in a two-dimensional spatial domain, with velocities and field components described in three-dimensional space. This inherently limits the generality of our findings. To this extent, the aims of this first study are not to bring definitive results, but to properly illustrate the capacity of this new numerical approach, to give a first example of how its products can be analysed, and most importantly to highlight new aspects of the interaction, to be verified later on by a three-dimensional approach, within the limits of realistic computing resources. It should not be underestimated, however, that the resolution achieved by a 2D approach, and therefore the corresponding plasma mechanisms, cannot be matched by a 3D approach with equal computational power. A 2-dimensional approach is therefore not limited to the role of pathfinder: it may very well demonstrate mechanisms otherwise not reproducible. This first publication focuses on the effect of turbulence on the obstacle itself, looking at scales at and above the ion scale, while the characteristics of the turbulent flow within the obstacle are left for future studies.
## 2 The Model

The numerical model used to investigate the interaction of the solar wind - either laminar or turbulent - with a planetary obstacle is described and tested in Behar et al. (2022). It is based on a hybrid Particle-In-Cell (PIC) implementation of the Vlasov-Maxwell equations, including a source term for the distribution function: the ions are described as massive particles, the electrons are treated as a massless, charge-neutralising fluid, while the electromagnetic fields together with the particles' moments are gathered at the nodes of a regular grid. At the core of the model is a generalised Ohm's law that computes the electric field given the magnetic field and the particles' moments. _Menura_ uses the following formulation:

\[\mathbf{E}=-\mathbf{u_{i}}\times\mathbf{B}+\frac{1}{en}\mathbf{J}\times\mathbf{B}-\frac{1}{en}\nabla p_{e}-\eta_{h}\nabla^{2}\mathbf{J} \tag{1}\]

with \(\mathbf{u_{i}}\) the ion bulk velocity, \(n\) the plasma density, \(\mathbf{J}\) the charge current, and \(p_{e}\) the electron pressure. Additionally, a hyper-resistive term is used to dampen small-scale numerical oscillations, with the coefficient \(\eta_{h}\) multiplying the Laplacian of the current; through Faraday's law, this corresponds to a diffusion term. \(\eta_{h}\) is taken to be \(2.5\times10^{-4}\) in the entire study. The electron pressure is obtained assuming it results from a polytropic process, with an index of 1 used throughout the study, corresponding to an isothermal process. The code uses normalised units, with distances expressed in units of the background proton inertial length \(d_{0}\) and time in units of the inverse of the background ion cyclotron frequency \(\omega_{c0}^{-1}\). The model is based on a two-step procedure. First, a turbulent flow is generated in the absence of the obstacle, as further described in Section 4. Second, this turbulent solar wind is injected into a simulation domain containing the obstacle. Since Step 1 is solved in a fully periodic domain, the injection of the Step 1 outputs into the domain of Step 2 can be done periodically as well (see Behar et al. (2022) for more details). For such a medium-activity comet, the domain boundaries parallel to the flow are also kept periodic. During both steps, the equations and all variables are solved and expressed in the solar wind reference frame: it is therefore the obstacle that moves through the solar wind. The domain is kept centred on the obstacle by regular copies and shifts of the fields and the particles, as illustrated in Behar et al. (2022). To that extent, the solar wind is not really _injected_ into the domain but _laid down_ in front of the moving object. Because the object reference frame is the one used in all planetary simulation codes we have encountered, the vocabulary unavoidably presents some ambiguities between the two frames. In the following sections, we describe results and mechanisms in either the solar wind reference frame or the object reference frame, making sure to specify which. During Step 2, in order to simulate a comet, a collection of cometary ions is added at each time step, as described in the next section. In order to properly appreciate the influence of the incoming turbulent flow on the interaction, the model can also be used to send a laminar flow at the obstacle, all other parameters kept equal. We first describe this laminar case in Section 3, before considering the turbulent case in Sections 4 and 5.
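As an illustration of how Eq. (1) can be evaluated in practice, the following numpy sketch computes the electric field on a 2D periodic grid in normalised units (\(e=1\), \(\mu_0=1\)). The finite-difference layout, the Ampère-law current \(\mathbf{J}=\nabla\times\mathbf{B}\), and the scalar isothermal closure \(p_e\propto n\) are assumptions of this sketch, not Menura's actual discretisation.

```python
import numpy as np

def ddx(f, dx):  # periodic central difference along axis 0 (x)
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)

def ddy(f, dy):  # periodic central difference along axis 1 (y)
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dy)

def laplacian(f, dx, dy):  # periodic 5-point Laplacian
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) - 2 * f) / dx**2 \
         + (np.roll(f, 1, 1) + np.roll(f, -1, 1) - 2 * f) / dy**2

def electric_field(B, u, n, dx, dy, eta_h=2.5e-4, pe_coeff=1.0):
    """Generalised Ohm's law, Eq. (1), in normalised hybrid units.

    B, u : arrays of shape (3, nx, ny); n : (nx, ny) plasma density.
    Massless isothermal electrons: p_e = pe_coeff * n (polytropic index 1).
    """
    # charge current from Ampere's law, displacement current neglected
    J = np.stack([ddy(B[2], dy),
                  -ddx(B[2], dx),
                  ddx(B[1], dx) - ddy(B[0], dy)])
    E = -np.cross(u, B, axis=0) + np.cross(J, B, axis=0) / n
    grad_pe = pe_coeff * np.stack([ddx(n, dx), ddy(n, dy), np.zeros_like(n)])
    E -= grad_pe / n
    # hyper-resistive damping of grid-scale oscillations
    E -= eta_h * np.stack([laplacian(J[i], dx, dy) for i in range(3)])
    return E
```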
## 3 The Obstacle: a comet

Figure 1: Schematic of a medium activity comet, highlighting the asymmetric dynamics of the solar wind ions, and indicating the two major escape channels for cometary ions.

Without gravity, an intrinsic magnetic field, or a solid central body (the size of the nucleus being negligible with respect to the dynamical scales of the system), and without ion collisions, the numerical modelling of such an obstacle is fairly straightforward: cometary ions are introduced at each iteration, according to a given ion production rate

\[q_{i}(r)=\nu_{i}\cdot n_{0}(r)=\frac{\nu_{i}Q}{4\pi u_{0}r^{2}}, \tag{2}\]

with \(r\) the distance from the comet nucleus, \(\nu_{i}\) the ionisation rate of cometary neutral molecules, \(n_{0}\) the neutral cometary density, \(Q\) the neutral outgassing rate, and \(u_{0}\) the radial expansion speed of the escaping neutral cometary atmosphere. At the beginning of the run, the expected number \(\alpha\) of particles to be added in the vicinity of each grid node is calculated once, using Eq. 2 together with the duration of one iteration. At each time step, we inject a constant number of particles with random positions within the cell surrounding each grid node, a number given by the integer part of \(\alpha\). At each iteration, we additionally draw a random number between 0 and 1 and compare it to the fractional part of \(\alpha\) to decide whether one additional particle is created. In the sub-region surrounding the centre of the comet, where the distribution of neutral molecules has the highest (radial) derivative, it is necessary to use a finer sub-grid to estimate these \(\alpha\) values, and then average them over the main grid nodes closest to the nucleus, in order not to underestimate the local creation of ions. We use a sub-grid ten times finer than the main grid, after having verified that an even finer sub-grid produces no significant change. The highest cometary ion density is found close to the nucleus, a region within which noticeable plasma structures appear: this is the interaction region we simulate. At 2 au from the Sun, these plasma structures and boundaries appear at the ion kinetic scales, characterised by the cometary ion gyroradius, discussed in further detail in Section 5. This interaction region is sketched in Figure 1. Upstream of the nucleus, sparse newly born cometary ions are _picked up_ by the solar wind electric field and start their cycloidal motion. Closer to the nucleus, where their density is much higher, they form a noticeable density structure, which we hereafter refer to as the _"pick-up plume channel"_ (analogous to the so-called pick-up plume at Mars (Dong et al. 2015)), the first significant escape channel for cometary ions. This early phase of the gyration is represented in Figure 1. As the solar wind permeates the ionised cometary atmosphere, the total bulk velocity of the plasma decreases while the density of cometary ions increases. The frozen-in magnetic field _piles up_ on the denser coma, its amplitude increasing. Because the coma is denser close to the nucleus than elsewhere, the magnetic field additionally _drapes_ around the nucleus, in the iconic shape established by Alfvén (1957).
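The injection procedure just described (deterministic integer part of \(\alpha\), plus one extra particle drawn with probability equal to its fractional part) can be sketched as follows; array names and the returned position convention are ours.

```python
import numpy as np

def inject_cometary_ions(alpha, rng, dx):
    """Stochastic cometary ion injection for one iteration.

    alpha : (nx, ny) expected number of new macro-particles per cell,
            precomputed once from Eq. (2) and the time step.
    Returns an (N, 2) array of particle positions: the integer part of
    alpha is injected deterministically, the fractional part by
    comparison with a uniform random draw.
    """
    base = np.floor(alpha).astype(int)
    extra = (rng.random(alpha.shape) < (alpha - base)).astype(int)
    counts = base + extra
    ix, iy = np.nonzero(counts)
    # repeat each injecting cell index by its particle count
    cells = np.repeat(np.stack([ix, iy], axis=1), counts[ix, iy], axis=0)
    # uniform random positions inside each cell
    return (cells.astype(float) + rng.random(cells.shape)) * dx

# rng = np.random.default_rng(0)
# new_positions = inject_cometary_ions(alpha, rng, dx=0.25)
```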
Close to the nucleus, the magnetic field strength and its distortion become so intense that eventually, through the Hall component of the electric field (given by the local curl of the magnetic field under the Darwin approximation, \(\partial_{t}\mathbf{E}\ll\mathbf{J}\)), the dense inner coma is accelerated downstream, presenting a second escape channel for cometary ions, also indicated in Figure 1 (Behar et al. 2018). In the following, we refer to this cometary ion escape channel as the _"Hall escape channel"_. In the schematic, the cometary pick-up ions are accelerated upward and, as a result of momentum conservation, solar wind ions are deflected downward. This kinetic effect results in the formation of a solar wind over-density which is highly asymmetric far from the Sun (Behar et al. 2018), and which transitions to a more symmetric structure at smaller heliocentric distances (Hansen et al. 2007). Together with the structures formed by the pick-up plume and the Hall escape channels, this solar wind over-density is the third main plasma structure generated by the solar wind-comet interaction at such a heliocentric distance. Each of them will be examined in the rest of this study to diagnose the effect of upstream solar wind turbulence on the plasma environment of the obstacle.

## 4 The incoming flow: a turbulent solar wind

During the first of the two simulation steps, a turbulent plasma is obtained by letting the energy of initial perturbations cascade from large to small scales. Sine-mode perturbations are initialised at time 0 in both the in-plane magnetic and velocity fields, on top of a guiding magnetic field \(\mathbf{B}_{0}\) purely out of the simulation plane. All particles are created with velocities following a Maxwellian distribution, using a thermal speed equal to the Alfvén speed. At time 500 \(\omega_{c0}^{-1}\) (corresponding to a physical time of 1740 s with the values of Table 1), the turbulence has developed into the omni-directional Power Spectral Density shown in Figure 2, defined and used in the previous studies of Franci et al. (2015) and Behar et al. (2022). The spectrum displays a Kolmogorov-like scaling over the inertial, MHD scales, following the black guide line with slope -5/3, and then adopts a much steeper slope at ion kinetic scales, similarly to the results of Franci et al. (2015) for a very similar simulation. At high spatial frequencies, a flattening of the spectrum is found, corresponding to an energy range in which the noise of the particles adds to the cascading energy. At the highest frequencies, we find a sharp increase in energy due to the noise of the finite differences used by the algorithm, a feature shared with the results of Franci et al. (2015). The background values of this run are given in Table 1. The simulation domain is 500 \(d_{0}\) (corresponding to 65 733 km) wide, and a regular grid with 2000 × 2000 nodes is used, corresponding to a grid spacing \(\Delta x\) of 0.25 \(d_{0}\). The resolved wave vectors are consequently within \([0.0062,12.4]\,d_{0}^{-1}\). The initial perturbations are injected with wave vectors in the range \([0.0062,0.1]\,d_{0}^{-1}\), and only the remaining, non-perturbed spatial scales are shown in Figure 2. This simulation uses 4000 particles per grid node (equivalently, per cell).
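For reference, the omni-directional power spectral density of Figure 2 can be obtained from a shell average of the 2D Fourier transform of the in-plane field; a minimal numpy sketch follows, with normalisation conventions that are ours rather than those of Franci et al. (2015).

```python
import numpy as np

def omnidirectional_psd(bx, by, dx):
    """Omni-directional PSD of the in-plane (perpendicular) field.

    Sums |FFT|^2 of both perpendicular components over shells of
    constant |k|; treat the amplitude as arbitrary and only the
    slope as meaningful.
    """
    nx, ny = bx.shape
    power = np.abs(np.fft.fft2(bx))**2 + np.abs(np.fft.fft2(by))**2
    kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    ky = np.fft.fftfreq(ny, d=dx) * 2 * np.pi
    kmag = np.sqrt(kx[:, None]**2 + ky[None, :]**2)
    # shell-average up to the largest isotropically resolved |k|
    kbins = np.linspace(kmag[kmag > 0].min(), kmag.max() / np.sqrt(2), 200)
    psd, _ = np.histogram(kmag, bins=kbins, weights=power)
    counts, _ = np.histogram(kmag, bins=kbins)
    return 0.5 * (kbins[1:] + kbins[:-1]), psd / np.maximum(counts, 1)
```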
A time step \(\Delta t=0.025\,\omega_{c0}^{-1}\) is used during this first step, while a twice finer time resolution \(\Delta t=0.0125\,\omega_{c0}^{-1}\) is needed to properly resolve the physics of the tail during Step 2 (cf. the next sections). This turbulent plasma has an average out-of-plane magnetic field of precisely \(<B_{z}>=1\), but an average total magnitude of \(<B>=1.06\). Compared to its laminar analogue, which is defined to be out-of-plane with amplitude 1 everywhere in the domain, we find a magnetic energy density (or magnetic pressure) 12% larger in the turbulent plasma, due to its additional in-plane component.

\begin{table} \begin{tabular}{c|c} \(B_{0}\) & 3.0 nT \\ \(n_{0}\) & 3.0 cm\({}^{-3}\) \\ \(\omega_{c0}\) & 0.29 s\({}^{-1}\) \\ \(d_{0}\) & 131 km \\ \(v_{A0}\) & 38 km/s \\ \(v_{th0}\) & 38 km/s \\ \(\beta_{i0}=\beta_{e0}\) & 1 \\ rms(\(B_{\perp}\))/\(<B>\) & 0.32 \\ \end{tabular} \end{table} Table 1: Characteristics of the turbulent solar wind, with \(\mathrm{rms}(B_{\perp})=\sqrt{\mathrm{rms}(B_{x})^{2}+\mathrm{rms}(B_{y})^{2}}\) and \(<B>\) the average value of the total amplitude over the domain.

## 5 General comparison

During the first time interval of Step 2, the cometary ions steadily increase in number as the magnetosphere develops, before reaching a pseudo-steady state: the total number of particles in the domain evolves around a constant value. After this dynamical equilibrium is reached, we simulate the interaction further, for an additional 150 \(\omega_{c0}^{-1}\). The densities of both species are displayed in Figure 3, showing one snapshot after dynamical equilibrium is reached, for both the laminar and the turbulent upstream conditions. The position of the solar wind over-density as well as the pick-up plume, taken in the laminar case (left column), are reported in the turbulent results using dashed lines. We find that both the solar wind over-density and the plume are reduced in size in the turbulent case. On average, the nose of the over-density is 4 \(d_{0}\) further upstream under laminar upstream conditions. As for the plume, we can estimate the gyration of the ions using their simple upstream gyroradius (the gyroradius a cometary test particle would have in the upstream wind). For the laminar conditions, with a homogeneous magnetic field, the value is \(R_{\rm laminar}=180\ d_{0}\) everywhere in the domain. In the turbulent case, however, the magnetic field has an additional in-plane component, which needs to be accounted for when calculating the gyroradius, as the latter involves the ion velocity component perpendicular to the magnetic field. In addition, the amplitude of the magnetic field is also larger on average in the domain, as described in the previous section. One can compute the local value of the gyroradius and find an average value over the domain of \(R_{\rm turbulent}=147\ d_{0}\), significantly smaller than the laminar value. The lower row of Figure 3 uses a cycloid of radius 180 \(d_{0}\), shown with a blue dashed line, which matches the laminar plume well, while the turbulent plume is found to be smaller. Whether we look at this interaction from a kinetic point of view (Behar et al., 2018), involving these gyration scales, or from a fluid point of view, involving the upstream magnetic pressure, we expect in both cases the plasma structures around such a comet to be smaller in the turbulent case, which we indeed verify with the present simulations.
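The local gyroradius map used for the \(R_{\rm turbulent}\) estimate can be sketched as below; in normalised units the gyroradius of a singly charged ion is \(r_g=(m/m_p)\,v_\perp/B\) in units of \(d_0\). The ion-to-proton mass ratio is a parameter of the sketch (water-group ions would give roughly 18), not a value quoted in the text.

```python
import numpy as np

def local_gyroradius(B, u, mass_ratio=18.0):
    """Local gyroradius of a newly picked-up cometary test ion, in d_0.

    B : (3, nx, ny) magnetic field in units of B_0.
    u : (3, nx, ny) local plasma bulk velocity in units of v_A0 (the
        newborn ion is at rest, so only the bulk-velocity component
        perpendicular to B contributes to its gyration).
    mass_ratio : ion-to-proton mass ratio, an assumption of this sketch.
    """
    Bmag = np.linalg.norm(B, axis=0)
    b = B / Bmag                                   # unit field vectors
    u_par = np.sum(u * b, axis=0) * b              # parallel component
    v_perp = np.linalg.norm(u - u_par, axis=0)
    return mass_ratio * v_perp / Bmag              # r_g = m v_perp / (q B)

# R_turbulent = local_gyroradius(B, u).mean()  # domain average, cf. 147 d_0
```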
It should be noted, however, that the definition of the laminar plasma is a choice, arbitrary to some extent, and one may argue that it could very well be defined in such a way that the magnetic energy density or the gyroradius are equal in the laminar and the turbulent cases. It should also be pointed out that the laminar interaction already shows some level of complexity, with obvious wave patterns within the magnetosphere in the upper part of the cometary ion density (likely similar to the bi-ion acoustic waves also found by Bagdonat & Motschmann (2002)), but also in the lower part of the solar wind density (see also the work of Koenders et al. (2016) on low frequency waves at comet 67P/CG). These fluctuations and their fate when considering a turbulent upstream input, as well as their potential contribution to the inner-magnetosphere turbulence, are yet another important future direction to explore.

Figure 2: The left-hand panel provides a map of the perpendicular (or in-plane) magnetic field fluctuations squared. The right-hand panel shows their Power Spectral Density, with a guide line of slope -5/3.

The Hall escape channel is shown stemming from the comet's inner region in the lower panels of Figure 3. In the laminar case (bottom left panel), the homogeneous incoming magnetised solar wind results in a continuous cometary ion acceleration and, in turn, in a continuous cometary ion structure forming the Hall escape channel. On the contrary, in the turbulent case (bottom right panel), the turbulent nature of the incoming magnetised solar wind is responsible for the generation of discontinuities in this same Hall escape channel, which is now found to be composed of discrete, high-density cometary ion bubbles, of density similar to that of the inner coma. The modifications of the solar wind turbulent flow are highlighted in Figure 4, which shows the in-plane magnetic field lines using a Line Integral Convolution (LIC) representation (Loring et al. 2015). In the upper panel, the in-plane magnetic field lines are coloured by the amplitude of the same in-plane field, while the lower panel superposes the density of cometary ions on the LIC representation. Cometary ion bubbles are found to be enclosed in magnetic islands downstream of the nucleus, islands which were not present in the upstream turbulent wind. We identify two main regions in this magnetosphere: first the "wings", in which deformed upstream perpendicular field structures can be recognised, and second the comet tail, in which most of the cometary ions are confined, corresponding to the very low solar wind densities in Figure 3. There, newly formed magnetic field islands of various sizes can be observed, dubbed "loops" in Figure 4. In the next two sections, we explore the origin of these two regions.

Figure 3: The upper row shows the density of the solar wind ions while the lower row presents the density of the cometary ions. The left column corresponds to laminar upstream conditions, while the right column corresponds to turbulent upstream conditions. The position of the solar wind overdensity in the laminar case is reported in the turbulent case with the dashed red line. Similarly, the estimated gyration of the cometary ions in the plume in the laminar case is shown in the lower row with a blue dashed line.
## 6 Disconnection of the comet's head

The pile-up and draping of the magnetic field, so far discussed in the literature in the context of a laminar interaction with a homogeneous upstream magnetic field, in turn affect the turbulent structures of the solar wind magnetic field. To our knowledge, the consequences of the comet-induced pile-up and draping of the magnetic field on the incoming turbulent, heterogeneous plasma flow have not been considered or investigated so far. To illustrate the mechanism at the origin of the formation of these high-density bubbles and of the complex magnetic field structures within the comet's tail, we have designed a dedicated numerical experiment that isolates the interaction between a single upstream perpendicular magnetic field structure and the coma, in an otherwise laminar upstream flow. The laminar version of the interaction is resumed from its steady state shown in Figure 3, and an ersatz of a magnetic island (i.e. of a flux rope in the 3-dimensional case) is introduced upstream of the comet. Eventually, the comet meets and interacts with this artificial structure, just as it does with the structures of the fully turbulent flow.

Figure 4: The upper panel provides a Line Integral Convolution of the in-plane magnetic field component, coupled to its colour-coded amplitude. The lower panel shows the same LIC representation, coupled to the cometary ion density along the comet tail. In the lower panel, lower-right corner, the horizontal line length corresponds to 100 \(d_{i}\).

The magnetic island ersatz is defined by adding a perpendicular component to the otherwise homogeneous and out-of-plane background magnetic field. This additional perpendicular component is characterised by circular in-plane field lines, and an amplitude depending on the distance to the centre of these circular lines, with a maximum value at some distance from the centre. We have chosen a Gaussian profile with a maximum value of 0.5 times the background magnetic field at 50 \(d_{0}\) from the centre of the structure, and a standard deviation of 25 \(d_{0}\). This artificial structure is meant to mimic the large-scale vortices found in the perpendicular magnetic field shown in the left-hand side of Figure 2, with diameters of about 50 to 100 \(d_{0}\) and a larger amplitude found away from their centres; such a structure is indicated by a dashed circle in the left panel of Figure 2. The ersatz is not meant to be a perfect replica of such structures, however: no considerations are taken into account other than a divergence-free, circular structure of about 100 \(d_{0}\) in diameter, with a realistic maximum found at some distance from its centre (a construction sketch is given below). This ersatz can be seen, still undisturbed, upstream of the comet in the uppermost panel of Figure 5, defining the relative time \(t=0\) of this experiment. The figure uses the same LIC representation as Figure 4, with a threshold of 0.01 on the in-plane (perpendicular) magnetic field: if the perpendicular magnetic field is smaller than 0.01, no LIC is shown. The background colourmaps of the first four panels from the top, showing four successive and equally separated times of the experiment, additionally give the amplitude of the perpendicular magnetic field, while for the final time of the experiment, given in the bottom panel, the density of cometary ions is used. The time \(t=0\) is chosen as the magnetic island is just starting to interact with the dense inner coma.
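A minimal construction of such a divergence-free island ersatz, under the stated parameters (peak amplitude 0.5 \(B_0\) at \(r=50\,d_0\), standard deviation \(25\,d_0\)), might read as follows; since the added field is purely azimuthal around the island centre, it is divergence-free by construction in 2D.

```python
import numpy as np

def island_ersatz(nx, ny, dx, x0, y0, b_max=0.5, r_peak=50.0, sigma=25.0):
    """Divergence-free in-plane magnetic island ersatz.

    Circular field lines around (x0, y0) with a Gaussian amplitude
    profile peaking at b_max (in units of B_0) at radius r_peak, with
    standard deviation sigma (both in d_0). A purely azimuthal field
    A(r) * phi_hat satisfies div(B) = 0 identically in 2D.
    """
    x = (np.arange(nx) * dx)[:, None] - x0
    y = (np.arange(ny) * dx)[None, :] - y0
    r = np.sqrt(x**2 + y**2) + 1e-12
    amp = b_max * np.exp(-0.5 * ((r - r_peak) / sigma)**2)
    # azimuthal unit vector phi_hat = (-y/r, x/r)
    return -amp * y / r, amp * x / r   # (Bx, By) to add to the background Bz
```

These two in-plane components are then simply added to the otherwise homogeneous, out-of-plane background field upstream of the comet.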
As a result, the left-most field lines are deformed, draped towards the interior of the island, with a corresponding increase of the field amplitude - pile-up - seen as red tones at the left-most border of the structure. As the magnetic field amplitude increases and the length scale between the anti-parallel magnetic field lines shortens, a strong current sheet forms around the location of the comet nucleus (second panel). This strong current sheet is likely unstable to the tearing instability, eventually forming shorter-scale magnetic islands (third panel). This series of snapshots gives us a very tangible representation of how the comet pierces through an upstream magnetic field structure, similar to a projectile through an obstacle. After "impact", we find two remains of the initial structure, with high perpendicular magnetic field amplitudes looping around two distinct centres, one on each side of the comet's head. Within these two wings, the upstream information is conserved to some extent, with field structures intensified and deformed. However, closer to the comet's head and downstream of it, new, smaller structures of even higher perpendicular magnetic field amplitude are produced, seen as smaller-scale closed magnetic field loops. Whereas the two wings are found to be more or less static in the solar wind reference frame, just like their parent upstream structure, these smaller-scale intense loops have a speed closer to that of the comet itself. Equivalently, described in the object reference frame: whereas the wings are advected downstream at more or less the speed of the solar wind, the new loops are transported downstream at a much lower speed. The difference between these two types of structures originates in the density that the upstream parent structure meets during its interaction with the coma. On the one hand, the sides, or wings, of the parent island interact with the comet in regions where the plasma is still dominated by solar wind protons in terms of number density, and the two wings remain frozen into the solar-wind-dominated flow, conserving some information from upstream. On the other hand, at the centre of the interaction region, the plasma is largely dominated by the cometary ions. The upstream magnetic field, now with its additional perpendicular component, piles up in this region of drastically reduced plasma mean velocity, reaching higher amplitudes than in the wings. Most interestingly, this rising magnetic tension, through the Hall electric field, eventually pulls off the dense inner coma, resulting in one main and two secondary high-density bubbles found downstream of the comet's head at time \(t=56.25\) (bottom panel), enclosed in the newly formed, intense magnetic field loops. Note that at this time, taken much later than the first four snapshots, the wings, which do not carry any significant cometary ion density, are by then long gone downstream of the comet. After this build-up and release phenomenon, and in the absence of additional heterogeneous perpendicular magnetic field structures, the inner coma resumes its laminar, continuous Hall escape. We will now have a closer look at this disconnection event, zooming into the inner interaction region and focusing on times around \(t=6.25\).

## 7 Disconnection at the ion scale

Figure 6 shows the comet's head during the same experiment, zooming somewhat closer into the spatial domain than the previous representation, using a more classic, oriented field-line representation superposed on the cometary ion density.
These three snapshots are focused around the time when the centre of the upstream magnetic island passes through the inner coma, corresponding to the second panel of Figure 5. In the upper panel, the magnetic island can be identified, already significantly piled up, i.e. strongly compressed along the \(x\)-direction. As the plasma is faster above and below the inner coma (red tones of the colourmap), the island is additionally draped, and with the upper and lower parts of the island advected downstream much faster than the central part, an elongation of the structure is also seen. Field lines which were initially circular are now found to describe highly eccentric, ellipse-like figures, along a main axis highlighted with a manually added red band, describing a bow. Because of this combined compression and elongation, along the red band we find a line separating field lines that are parallel and of opposite sense. As the comet continues piercing through the structure, the red band gets significantly draped. In the second panel,

Figure 5: Each panel provides a snapshot of the numerical experiment. All panels use a LIC representation of the in-plane magnetic field lines. The first four rows use the amplitude of the in-plane magnetic field for the colours, while the last panel uses the cometary ion density.
2302.11479
Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks
The rise of graph representation learning as the primary solution for many different network science tasks led to a surge of interest in the fairness of this family of methods. Link prediction, in particular, has a substantial social impact. However, link prediction algorithms tend to increase the segregation in social networks by disfavoring the links between individuals in specific demographic groups. This paper proposes a novel way to enforce fairness on graph neural networks with a fine-tuning strategy. We Drop the unfair Edges and, simultaneously, we Adapt the model's parameters to those modifications, DEA in short. We introduce two covariance-based constraints designed explicitly for the link prediction task. We use these constraints to guide the optimization process responsible for learning the new "fair" adjacency matrix. One novelty of DEA is that we can use a discrete yet learnable adjacency matrix in our fine-tuning. We demonstrate the effectiveness of our approach on five real-world datasets and show that we can improve both the accuracy and the fairness of the link prediction task. In addition, we present an in-depth ablation study demonstrating that our training algorithm for the adjacency matrix can be used to improve link prediction performance during training. Finally, we compute the relevance of each component of our framework to show that the combination of both the constraints and the training of the adjacency matrix leads to optimal performance.
Indro Spinelli, Riccardo Bianchini, Simone Scardapane
2023-02-22T16:28:08Z
http://arxiv.org/abs/2302.11479v1
# Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks

###### Abstract

The rise of graph representation learning as the primary solution for many different network science tasks led to a surge of interest in the fairness of this family of methods. Link prediction, in particular, has a substantial social impact. However, link prediction algorithms tend to increase the segregation in social networks by disfavoring the links between individuals in specific demographic groups. This paper proposes a novel way to enforce fairness on graph neural networks with a fine-tuning strategy. We **D**rop the unfair **E**dges and, simultaneously, we **A**dapt the model's parameters to those modifications, DEA in short. We introduce two covariance-based constraints designed explicitly for the link prediction task. We use these constraints to guide the optimization process responsible for learning the new "fair" adjacency matrix. One novelty of DEA is that we can use a discrete yet learnable adjacency matrix in our fine-tuning. We demonstrate the effectiveness of our approach on five real-world datasets and show that we can improve both the accuracy and the fairness of the link prediction task. In addition, we present an in-depth ablation study demonstrating that our training algorithm for the adjacency matrix can be used to improve link prediction performance during training. Finally, we compute the relevance of each component of our framework to show that the combination of both the constraints and the training of the adjacency matrix leads to optimal performance.

keywords: Graph Neural Network; Fairness; Link Prediction

## 1 Introduction

The fairness of graph representation learning algorithms is quickly becoming a crucial area of research. Of particular interest is the fairness issue associated with the link prediction task, which is heavily applied in two of the most influential AI-powered domains of our digital life: social networks and product recommendation. Social network topologies define the stream of information we receive, often influencing our opinions (McPherson et al., 2001; Halberstam and Knight, 2016; Lee et al., 2019; Abbass, 2018), and malicious users can modify these topologies to spread false information (Roy and Chahar, 2021). Similarly, recommender systems suggest products tailored to our characteristics and purchase history. However, the pursuit of the highest accuracy has led to the discrimination of minorities in the past (Corbett-Davies et al., 2017; Obermeyer et al., 2019), despite laws prohibiting unfair treatment based on sensitive traits such as race, religion, and gender. Unfairness arises even if the sensitive attributes are not used explicitly in the learning model. For example, most social networks are homophily-dominant: nodes in a local neighbourhood tend to belong to the same sensitive class, with minimal connections across nodes of differing sensitive attributes. Communities therefore isolate themselves, polarizing the opinions expressed within them; this effect is also known as the filter bubble problem. The same issue affects the bipartite graphs of users and items used in product recommendation. In Nguyen et al. (2014), the authors concluded that recommender systems reduce, over time, the user's exposure to a narrowing subset of the available items. For example, streaming services may recommend movies from a particular genre to users of a specific gender.
Thus, link prediction algorithms have a substantial social impact and can worsen existing biases in the data. However, enforcing fairness on the prediction of new links can mitigate the issue. Graph neural networks (GNNs) (Bronstein et al., 2017; Bacciu et al., 2020; Spinelli et al., 2021) provide state-of-the-art link prediction results with an end-to-end learning paradigm. A common approach to improving the fairness of these algorithms is the introduction of fairness-enforcing constraints during a model's training (Bose and Hamilton, 2019). Another strategy involves the modification of the graph's topology or the post-processing of the model's predictions (Spinelli et al., 2022; Dai and Wang, 2020; Loveland et al., 2022). Alongside, the community is studying how to measure the fairness actually introduced in the system by these methods. Link prediction requires a dyadic fairness measure that considers the influence of both sensitive attributes associated with a connection (Masrour et al., 2020). However, most works on fairness measures focus on independent and identically distributed (i.i.d.) data. A common solution consists in defining new groups over the edges; it is then possible to measure the level of equity of a new edge added to the graph by applying the known fairness metrics to these new groups. Since training is the most expensive phase of the modern machine learning pipeline (excluding data harvesting and labelling), we designed a fine-tuning strategy named DEA, in which we learn to modify the graph's topology and adapt the parameters of the network to those modifications. A novel covariance-based constraint designed for the link prediction task guides the fine-tuning. We introduce a novel parametrization that allows the optimization of the new adjacency matrix in its discrete form: we apply a variation of the Gumbel-max trick (Jang et al., 2017), paired with a small multilayer perceptron, that allows us to sample the edges from the original adjacency matrix.

## 2 Related Works

In this section, we focus on the recent contributions to the fair graph representation learning field. Despite the extensive and interdisciplinary literature on algorithmic bias (Chiappa, 2019; Chiappa et al., 2020), the study of fairness in graph representation learning is recent. The surge of interest is due to the state-of-the-art results of graph neural networks (GNNs) in many graph-based tasks. Some works focus on the node embedding task, creating fair embeddings to be used as the input of a downstream link prediction task. Compositional fairness constraints (Bose and Hamilton, 2019) learn a set of adversarial filters that remove information about particular sensitive attributes. GUIDE (Song et al., 2022) maximizes overall individual fairness while minimizing the group disparity of individual fairness across different groups. FairWalk (Rahman et al., 2019) is an adaptation of Node2Vec (Grover and Leskovec, 2016) that aims to increase the fairness of the resulting embeddings; it modifies the transition probabilities of the random walks at each step by weighing the neighbourhood of each node according to the nodes' sensitive attributes. The recent work of Li et al. (2021) learns a fair adjacency matrix during an end-to-end link prediction task. FairAdj uses a graph variational autoencoder (Kipf and Welling, 2016) as its base architecture and introduces two different optimization processes: one for learning a fair version of the adjacency matrix and one for the link prediction. Similarly, FairDrop (Spinelli et al., 2022) modifies the adjacency matrix during training using a biased edge dropout targeting the homophily with respect to the sensitive attribute.
However, this biased procedure is non-trainable. FairMod (Current et al., 2022) and FairEdit (Loveland et al., 2022) consider debiasing the input graph during training through the addition of artificial nodes and edges, not just deletion. Except for FairDrop and FairAdj, the other solutions explicitly target the computation of node embeddings or node classification. To our knowledge, we are the first to propose a model-agnostic fine-tuning strategy that solves link prediction end-to-end while optimizing both the model's utility and its fairness. Our contribution contains two novelties. On one side, we introduce two covariance-based constraints designed explicitly to enforce the fairness of the link prediction classification. On the other, we propose a novel way to parametrize a discrete yet trainable adjacency matrix. The latter aspect is of particular interest to the community for improving the quality of the messages sent across the graph (Gasteiger et al., 2019; Kazi et al., 2022). DropEdge (Rong et al., 2020) is a dropout mechanism that randomly removes a certain number of edges from the input graph at each training epoch to sparsify the connectivity. In the Sparsified Graph Convolutional Network (SGCN) (Li et al., 2022), the authors first pre-train a GCN to solve a node classification task; a neural network then sparsifies the graph by pruning some of its edges, and a new GCN trained on the sparsified graph improves the classification performance. Rather than sparsifying the topology, another family of approaches rewires the connections. GraphSage (Hamilton et al., 2017) performs neighbourhood sampling with the aim of scaling to larger graphs. The solution proposed in Gasteiger et al. (2019) alleviates the problem of noisy and often arbitrarily defined edges in real graphs by combining spectral and spatial techniques. DGM (Kazi et al., 2022) and IDGL (Chen et al., 2020) jointly learn the graph structure and the graph embedding for a specific task. Finally, taking distance from the message-passing framework and using tools from differential geometry, the authors of Topping et al. (2022) present a new curvature-based method for graph rewiring. Our solution is closely related to the first family of approaches, those sparsifying the topology; in future work, we plan to rewire the graph's topology with the same underlying objective.

## 3 Preliminaries

### Graph representation learning

In this work we consider an undirected and unweighted graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{1,\ldots,n\}\) is the set of node indexes and \(\mathcal{E}=\{(i,j)\mid i,j\in\mathcal{V}\}\) is the set of arcs (_edges_) connecting pairs of nodes. The meaning of a single node or edge depends on the application. For some tasks, a node \(i\) is endowed with a vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) of features. Each node is also associated with a categorical sensitive attribute \(s_{i}\in S\) (e.g., political preference, ethnicity, gender), which may or may not be part of its features. Connectivity in the graph can be summarized by the adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\). This matrix is used to build the different operators that define the communication protocols across the graph; the vanilla operator is the symmetrically normalized graph Laplacian (Kipf and Welling, 2017).
A Graph Neural Network \(\mathrm{GNN}(\mathbf{X},\mathbf{A})\) can combine the node features with the structural information of the graph by solving an end-to-end optimization problem. We focus on the link prediction task, where the objective is to predict whether two nodes in a network are likely to form a link (Liben-Nowell and Kleinberg, 2007). The output of the GNN is a matrix of node embeddings \(\mathbf{H}\), from which we compute a new \(n\times n\) matrix containing a probability score for each possible link in the graph, \(\mathbf{\hat{Y}}=\mathrm{sigmoid}(\mathbf{H}\mathbf{H}^{T})\). The optimization objective is a binary cross-entropy loss over a subset of positive training edges and of negative ones (sampled once).

### Dyadic group fairness metrics

Fairness in decision-making is broadly defined as the absence of any advantage or discrimination towards an individual or a group based on their traits (Saxena et al., 2019). Given the broadness of this definition, there are several different fairness metrics, each focused on another type of discrimination (Mehrabi et al., 2019). We focus on group fairness metrics, which measure whether the model's predictions disproportionately benefit or damage people of different groups defined by their sensitive attributes. These measures are usually expressed in the context of a binary classification problem. In the notation of the previous section, denote by \(Y\in\{0,1\}\) a binary target variable defined for each node of the graph, and by \(\hat{Y}=f(\mathbf{x})\) a predictor that does not exploit the graph structure. As before, we associate with each \(\mathbf{x}\) a categorical sensitive attribute \(S\). For simplicity's sake, we assume \(S\) to be binary, but the following definitions extend easily to the multi-class case. Two widely used criteria belonging to this group are:

* _Demographic Parity_ (\(DP\)) (Dwork et al., 2012): a classifier satisfies \(DP\) if the likelihood of a positive outcome is the same regardless of the value of the sensitive attribute \(S\): \[P(\hat{Y}|S=1)=P(\hat{Y}|S=0)\] (1)
* _Equalized Odds_ (\(EO\)) (Hardt et al., 2016): a classifier satisfies \(EO\) if it has equal rates of true positives and false positives between the two groups defined by the protected attribute \(S\): \[P(\hat{Y}=1|S=1,Y=y)=P(\hat{Y}=1|S=0,Y=y)\] (2)

These definitions trivially extend to cases where the categorical sensitive attribute takes more than two values, \(|S|>2\); for the rest of the paper, we consider this scenario. In the link prediction task, the predictive relationship between two nodes should be independent of both sensitive attributes. Therefore, in Masrour et al. (2020) and Spinelli et al. (2022), the authors introduced three dyadic criteria to map the sensitive attributes from the nodes to the edges. The original groups defined by \(S\) generate different dyadic subgroups \(D\) associated with the edges. The dyadic groups can be summarized as follows:

* **Mixed dyadic** (\(|D|=2\)): the original groups generate two dyadic groups, independently of the cardinality of the sensitive attribute. An edge is in the intra-group if it connects a pair of nodes with the same sensitive attribute; otherwise, it is part of the inter-group.
* **Group dyadic** (\(|D|=|S|\)): creates a one-to-one mapping between the dyadic and the node-level groups. Each edge is counted twice, once for each sensitive attribute involved.
This dyadic definition ensures that the nodes participate in the creation of links regardless of the value of their sensitive attribute.
* **Sub-group dyadic** (\(|D|=\frac{(|S|+2-1)!}{2!(|S|-1)!}\)): enumerates all the possible combinations of sensitive attributes. The fairness criteria protect the balance between all the possible inter-group and intra-group combinations.

## 4 Drop Edges and Adapt

In this work, we aim to improve the fairness of a trained GNN. In our fine-tuning strategy, we simultaneously optimize the model and the adjacency matrix to solve the main task subject to a fairness constraint. To optimize the adjacency matrix, we learn a latent variable for each edge in the original graph with a neural network. The number of introduced parameters is negligible with respect to the size of the input graph, which makes our approach applicable to large-scale datasets. We focus our evaluation on the task of end-to-end link prediction and therefore design the constraint accordingly. We show the general framework of our method in Figure 1. We fine-tune a trained model with an additional regularization term enforcing fairness, changing the adjacency matrix and adapting the network weights to these modifications. To do so, we introduce an additional architecture, called the Sampler, containing an MLP. The Sampler takes as input the node embeddings produced by the GNN and builds representations for the edges in the graph. It then outputs a new adjacency matrix, which the GNN uses to make its predictions. The fine-tuning loss comprises the cross-entropy loss and a fairness constraint, and updates both the Sampler and the GNN. Below, we introduce each element in a separate section.

Figure 1: DEA schematics. The pre-trained GNN extracts the node embeddings \(\mathbf{H}\). The Sampler takes them as input and returns a new, fairness-enforcing, discrete adjacency matrix \(\widehat{\mathbf{M}}\). The new matrix is used as input for a new feedforward step of the GNN. Finally, we update the Sampler and the GNN with a combination of the binary cross-entropy loss and our covariance-based fairness constraint.

### Sampler

The Sampler is one of the two key contributions of our proposed approach. We want to sample the edges from the original adjacency matrix in a way that helps the GNN produce fairer predictions, while preserving the discrete nature of the graph during the training process. The Sampler contains an MLP taking as input an edge embedding, defined as the concatenation of the two node embeddings produced by the last layer of the GNN. The output of the MLP is an unnormalized probability vector \(\mathbf{z}\), where each element is associated with an edge of the graph. To sample the edges, we use the Gumbel-max trick (Jang et al., 2017), a method for drawing a sample from a categorical distribution given its unnormalized (log-)probabilities. The community has proposed several extensions of this trick, including a Gumbel-sigmoid (Geng et al., 2020). We apply this function to the vector \(\mathbf{z}\):

\[\widetilde{m}_{(i,j)}=\text{sigmoid}\left(\frac{z_{(i,j)}+G^{\prime}}{\tau}\right) \tag{3}\]

where \(G^{\prime}\) is independent Gumbel noise and \(\tau\in(0,\infty)\) is a temperature parameter. As \(\tau\) diminishes to zero, samples from the Gumbel-sigmoid distribution become cold and resemble one-hot samples. The procedure generates a new vector of soft, noisy weights \(\widetilde{m}_{(i,j)}\)\(\forall(i,j)\in\mathcal{E}\).
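A possible PyTorch transcription of the Sampler, combining the MLP edge scoring, the Gumbel-sigmoid of Eq. (3), and the straight-through estimator discussed next, is sketched here. The layer sizes and names are illustrative, and the logistic noise (the difference of two independent Gumbel variables) is the standard binary form of the trick.

```python
import torch
import torch.nn as nn

class EdgeSampler(nn.Module):
    """Gumbel-sigmoid edge sampler with a straight-through estimator.

    Scores each existing edge with an MLP over the concatenated endpoint
    embeddings, adds noise, and thresholds at 0.5 to obtain a discrete
    mask that still passes gradients to the MLP.
    """

    def __init__(self, emb_dim: int, hidden: int = 64, tau: float = 1.0):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * emb_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))
        self.tau = tau

    def forward(self, h, edge_index):
        src, dst = edge_index                       # (2, |E|) existing edges
        z = self.mlp(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1)
        # binary Gumbel trick: logistic noise added to the logits, Eq. (3)
        u = torch.rand_like(z).clamp(1e-6, 1 - 1e-6)
        g = torch.log(u) - torch.log1p(-u)
        m_soft = torch.sigmoid((z + g) / self.tau)
        m_hard = (m_soft >= 0.5).float()
        # straight-through: forward uses m_hard, backward sees m_soft
        return m_hard + m_soft - m_soft.detach()
```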
Finally, we build the new adjacency matrix \(\widehat{\mathbf{M}}\) where each element is defined as follows: \[\widehat{m}_{(i,j)}=\begin{cases}1&\text{ if }\widetilde{m}_{(i,j)}\geqslant 0.5\text{ and }(i,j)\in\mathcal{E}\\ 0&\text{ otherwise}\end{cases}\;, \tag{4}\] The flow of the gradient is guaranteed thanks to the use of a straight-through estimator Hinton et al. (2012).

### Constraints

In Zafar et al. (2019), the authors introduce a constraint to design convex boundary-based classifiers free of disparate impact. They use the covariance between the sensitive attribute \(s\) and the signed distance from the feature vectors to the decision boundary. Although this measure is only a proxy for disparate impact, it has led to good empirical results. Neural networks, however, are not convex boundary-based classifiers, so we cannot apply the constraint in its original formulation. To this end, we propose to exploit the prediction margin instead of the distance from the decision boundary. We recall that the prediction margin for a model parametrized by \(\theta\) is defined as follows: \[\beta_{\theta}(i,j)=\hat{y}_{(i,j)}-\delta \tag{5}\] where \(\hat{y}_{(i,j)}\) is the predicted probability for the edge between node \(i\) and node \(j\), and \(\delta\) is the threshold assigning the edge to the positive class if \(\hat{y}_{(i,j)}\geqslant\delta\) and to the negative class otherwise. In our definition of the constraint, we consider the dyadic nature of the link prediction task. The first and simplest approach consists of building a constraint replicating the mixed dyadic definition. We create a new vector in which we assign a single value to each edge: we let \(e_{(i,j)}=1\) if the nodes at the ends of the edge have the same sensitive attribute and \(e_{(i,j)}=0\) otherwise. The covariance mixed dyadic constraint can be written as: \[\text{CovM}=\left|\frac{1}{|\mathcal{E}|}\sum_{(i,j)\in\mathcal{E}}(e_{(i,j)}-\bar{e})\beta_{\theta}(i,j)\right|\leqslant c \tag{6}\] where \(\bar{e}\) is the mean of the \(e\) vector. We then propose a second, more expressive version of the constraint, mimicking the group dyadic definition. We create as many vectors as there are values of the sensitive attribute \(S\). The first vector \(e^{1}\) is associated with the first possible value of the sensitive attribute \(S\), denoted as \(s^{1}\), and so on. We then let \(e^{k}_{(i,j)}=1\) if at least one of \(i\) and \(j\) has \(s^{k}\) as sensitive attribute. We end up with \(|S|\) different \(e\) vectors and the same number of covariance constraints. We can minimize the constraints independently, by assigning a different threshold \(c\) to each one of them, or by averaging them together. We can express the latter approach as: \[\text{CovG}=\left|\frac{1}{|\mathcal{E}||S|}\sum_{(i,j)\in\mathcal{E}}\sum_{k\in|S|}(e^{k}_{(i,j)}-\bar{e}^{k})\beta_{\theta}(i,j)\right|\leqslant c \tag{7}\] In our evaluation, we opted for the latter solution, leaving the former approach for future work. In the end, this last approach can be viewed as a one-vs-all fairness constraint where we try to maximize the fairness of all groups at once.
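In practice, both constraints can be computed directly from the predicted edge probabilities, as in the sketch below (the function names are ours); during fine-tuning the returned value is added to the binary cross-entropy loss with weight \(\lambda\), as described next:

```
import torch

def cov_mixed(y_prob, e, delta=0.5):
    # CovM, Eq. (6): covariance between the intra/inter indicator e
    # and the prediction margin beta = y_hat - delta (Eq. 5).
    margin = y_prob - delta
    e = e.float()
    return torch.abs(((e - e.mean()) * margin).mean())

def cov_group(y_prob, e_list, delta=0.5):
    # CovG, Eq. (7): average the signed covariances over the |S|
    # one-vs-all indicator vectors e^k, then take the absolute value.
    margin = y_prob - delta
    covs = [((e.float() - e.float().mean()) * margin).mean() for e in e_list]
    return torch.abs(sum(covs) / len(covs))
```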
### Fine-tuning

Fine-tuning a model has several advantages over training from scratch when one is trying to impose some constraints. First, it is easy to compare the fairness of the predictions of the original model and of the fine-tuned one. Second, it is possible to obtain a fairer model and a more equitable adjacency matrix without retraining the model from scratch. Ideally, we would like to optimize only the adjacency matrix. However, as shown in the ablation section, the model suffers from such drastic changes in its inputs. We found that adapting the model's parameters while learning the adjacency stabilizes the predictive performance while improving its fairness. We start with a trained model parameterized by \(\theta\) and a threshold value \(\delta\) used to assign an edge to the positive or negative class. Next, we sample a negative set of edges for the link prediction loss. For each epoch, we compute the node embeddings. The Sampler takes them as input and outputs \(\widehat{\mathbf{M}}\). The network combines this discrete and trainable adjacency with the negative samples for the final feedforward step. Next, we compute the standard cross-entropy for the link prediction task and our covariance-based fairness-enforcing constraint. The constraint is balanced with an additional hyperparameter \(\lambda\). Finally, we update the GNN and the MLP inside the Sampler.

## 5 Experimental section

We focus our experiments on measuring the impact of our fine-tuning strategy for enhancing fairness on the link prediction task. We use six fairness metrics (i.e. two for each dyadic group) together with the AUC and accuracy on the main task. In addition, we report the average and standard deviations of ten runs with random data splits. We monitor the Demographic Parity difference (\(\Delta DP\)) and the Equalized Odds difference (\(\Delta EO\)). The first measures the difference between the largest and the lowest group-level selection rate: \[\Delta DP=\max_{d}E[\widehat{\mathbf{Y}}|D=d]-\min_{d}E[\widehat{\mathbf{Y}}|D=d] \tag{8}\] The latter reports the maximum between the true positive rate (TPR) difference and the false positive rate (FPR) difference across the groups: \[\Delta TPR=\max_{d}E[\widehat{\mathbf{Y}}=1|D=d,\mathbf{Y}=1]-\min_{d}E[\widehat{\mathbf{Y}}=1|D=d,\mathbf{Y}=1]\,, \tag{9}\] \[\Delta FPR=\max_{d}E[\widehat{\mathbf{Y}}=1|D=d,\mathbf{Y}=0]-\min_{d}E[\widehat{\mathbf{Y}}=1|D=d,\mathbf{Y}=0]\,, \tag{10}\] \[\Delta EO=\max(\Delta TPR,\Delta FPR)\,. \tag{11}\]
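For reference, these dyadic metrics can be computed from binary edge predictions as in the following sketch (the helper names are ours; it assumes every dyadic group contains both positive and negative edges):

```
import numpy as np

def dyadic_dp_eo(y_true, y_pred, d):
    # d holds the dyadic group id of each edge; y_pred is binary.
    sel, tpr, fpr = [], [], []
    for g in np.unique(d):
        m = d == g
        sel.append(y_pred[m].mean())                  # E[Y_hat | D=g]
        tpr.append(y_pred[m][y_true[m] == 1].mean())  # TPR of group g
        fpr.append(y_pred[m][y_true[m] == 0].mean())  # FPR of group g
    delta_dp = max(sel) - min(sel)                            # Eq. (8)
    delta_eo = max(max(tpr) - min(tpr), max(fpr) - min(fpr))  # Eqs. (9)-(11)
    return delta_dp, delta_eo
```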
Our evaluation comprises five datasets; we report their statistics in Table 1.

\begin{table} \begin{tabular}{l|c|c|c|c|c} Dataset & \(S\) & \(|S|\) & Features & Nodes & Edges \\ \hline Citeseer & paper class & 6 & 3703 & 2110 & 3668 \\ Cora-ML & paper class & 7 & 2879 & 2810 & 7981 \\ PubMed & paper class & 3 & 500 & 19717 & 44324 \\ DBLP & continent & 5 & None & 3980 & 6965 \\ FB & gender & 2 & None & 4039 & 88234 \\ \end{tabular} \end{table} Table 1: Dataset statistics.

DBLP is a co-authorship network built in Buyl and De Bie (2020) from the original dataset introduced in Tang et al. (2008). Nodes represent authors and are connected if they have collaborated at least once. The sensitive attribute is the continent of the author's institution, excluding Africa and Antarctica because of their under-representation in the data. Facebook (FB) Leskovec and Mcauley (2012) is a combination of ego-networks obtained from a social network, introduced in Spinelli et al. (2022). The graph encodes users as nodes, with gender as a sensitive attribute and friendships as links. These two datasets do not have feature vectors associated with the nodes; therefore, we used the eigenvectors of the Laplacian matrix as input features. We also included three benchmark citation networks: Citeseer, Cora-ML, and PubMed. In these graphs, nodes are articles associated with a bag-of-words representation of the abstract. Links represent citations regardless of direction. We used the category of the article as a sensitive attribute. We recall that the value of the sensitive attribute arises naturally from the graph topology but is never used directly in the learning pipeline. We tested our fine-tuning strategy on a GCN Kipf and Welling (2017) and a GAT Velickovic et al. (2018). We used an embedding size of 128 for the GCN. The GAT uses an embedding size of 16 with eight attention heads, which are concatenated. We used two layers for the citation datasets and four for the two more complex datasets. We chose the threshold for computing the accuracy and the corresponding fairness with a grid search in the interval \([0.4,0.7]\) for each algorithm. In our covariance constraints, we set \(c=0\) and choose \(\lambda\), balancing the regularization term, via grid search. The temperature \(\tau\) of the Gumbel-sigmoid followed a linear decay from 5 to 1 for each dataset. The MLP in the Sampler has two layers of 128 elements across all experiments. We trained the models using the Adam optimizer Kingma and Ba (2014) for 100 epochs on every dataset except FB, which required 200 epochs. Our fine-tuning required an additional 100 epochs. We compare against competitors designed to enforce the fairness of link prediction tasks. We build upon the experimental evaluation proposed in Spinelli et al. (2022). Therefore we include DropEdge and FairDrop as plain and biased sparsification techniques, and FairAdj as a more complex approach. We used the two configurations suggested in the original implementation for the latter method: the one with the hyperparameter \(T_{2}=20\) provides a more robust regularization towards fairness with respect to the model trained with \(T_{2}=5\), at the cost of lowering the model's utility. In Tables 2, 3 and 4, DEA provides slightly better protection than FairAdj. However, the latter loses in accuracy and AUC, with severe losses in Table 4, where FairAdj drops about 15% in accuracy and 10% in AUC. FairAdj fails to solve the link prediction task on complex datasets like DBLP (Tab. 5) and FB (Tab. 6). In the end, DEA removes around 10% of the edges, considerably less than DropEdge and FairDrop. In Figure 2, we show the intermediate steps resulting in the final version of the fair adjacency matrix \(\widehat{\mathbf{M}}\). There is little difference between the learned edge distribution \(\mathbf{z}\) and its noisy version \(\widetilde{\mathbf{m}}\) after the Gumbel-sigmoid trick. Also, it is possible to see that CovM is more peaked at the extreme values. Finally, Figure 2(c) shows the number of edges removed from the original adjacency matrix to obtain a fairer link prediction.

Figure 2: Edge distribution at different stages of our pipeline. In blue, we depict the results obtained using the CovM constraint; in orange, the ones with CovG. Figure (a) shows the distribution \(\mathbf{z}\) learnt by the MLP inside our Sampler. Figure (b) shows the approximation after the Gumbel-sigmoid trick \(\widetilde{\mathbf{m}}\). Finally, Figure (c) shows the number of edges removed and kept in the new fairness-enforcing adjacency matrix \(\widehat{\mathbf{M}}\), thresholding the values in \(\widetilde{\mathbf{m}}\) at 0.5.

We conclude with an ablation study on the contribution of each component of our framework.
In the first experiment, we train, for the same number of epochs, a standard GCN and one paired with the Sampler, which learns a new adjacency matrix with the sole objective of maximizing accuracy. That is, we optimize the adjacency matrix and the model's parameters to solve the main task without additional fairness constraints. In Table 7, we show that those modifications to the adjacency matrix improve the link prediction performance. In the second experiment, we focus on the various components of our architecture on the Citeseer dataset. Results are visible in Table 8. Each time, we disable a different component of our framework. In the second and third rows, we train everything from scratch instead of fine-tuning a model. In "Training w X", we feed the Sampler the concatenation of the feature vectors associated with the nodes instead of the node embeddings generated by the GNN. We then proceed to fine-tune the model while disabling some components. In "w/o Sampler", we keep the covariance constraint but remove the learning of the adjacency matrix. In "w/o CovM", we do the opposite. Finally, we fine-tune the model without any modification. The latter solution has comparable performance in terms of accuracy, but it has significantly worse fairness metrics. Training from scratch has similar results. Fine-tuning with the covariance constraint or the Sampler alone improves the fairness, but we obtain the best results when both are active.

\begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c} Method & Accuracy \(\uparrow\) & AUC \(\uparrow\) & \(\Delta DP_{m}\downarrow\) & \(\Delta EO_{m}\downarrow\) & \(\Delta DP_{g}\downarrow\) & \(\Delta EO_{g}\downarrow\) & \(\Delta DP_{s}\downarrow\) & \(\Delta EO_{s}\downarrow\) \\ \hline GCN & 82.3 \(\pm\) 1.4 & 90.8 \(\pm\) 1.1 & 0.7 \(\pm\) 0.5 & 2.8 \(\pm\) 0.7 & 3.5 \(\pm\) 0.7 & 3.4 \(\pm\) 1.2 & 7.5 \(\pm\) 1.2 & 5.6 \(\pm\) 2.0 \\ GAT & 80.2 \(\pm\) 2.7 & 86.3 \(\pm\) 2.4 & 0.8 \(\pm\) 0.6 & 1.8 \(\pm\) 0.9 & 3.3 \(\pm\) 1.3 & 2.5 \(\pm\) 1.5 & 7.2 \(\pm\) 2.5 & 4.3 \(\pm\) 2.5 \\ \hline GCN+DropEdge & 77.9 \(\pm\) 1.5 & 87.7 \(\pm\) 0.9 & 0.9 \(\pm\) 0.1 & 4.9 \(\pm\) 0.4 & 5.5 \(\pm\) 0.5 & 6.9 \(\pm\) 0.8 & 11.0 \(\pm\) 1.0 & 11.7 \(\pm\) 1.5 \\ GAT+DropEdge & 71.7 \(\pm\) 4.6 & 83.9 \(\pm\) 1.5 & 1.1 \(\pm\) 0.2 & 1.4 \(\pm\) 1.4 & 2.3 \(\pm\) 1.3 & 2.4 \(\pm\) 2.6 & **5.3** \(\pm\) 2.0 & 4.4 \(\pm\) 4.5 \\ \hline GCN+FairDrop & 77.4 \(\pm\) 1.9 & 87.7 \(\pm\) 1.0 & **0.6** \(\pm\) 0.3 & 4.3 \(\pm\) 0.2 & 4.9 \(\pm\) 0.2 & 5.9 \(\pm\) 0.7 & 10.0 \(\pm\) 0.5 & 10.1 \(\pm\) 1.2 \\ GAT+FairDrop & 75.1 \(\pm\) 2.1 & 83.7 \(\pm\) 1.2 & 1.3 \(\pm\) 0.7 & 1.8 \(\pm\) 1.5 & 2.3 \(\pm\) 1.4 & 2.5 \(\pm\) 2.4 & 5.6 \(\pm\) 2.4 & 4.7 \(\pm\) 4.0 \\ \hline GCN+DEA+CovM & **82.9** \(\pm\) 1.2 & **93.5** \(\pm\) 0.6 & 1.6 \(\pm\) 0.3 & 1.9 \(\pm\) 0.7 & 2.6 \(\pm\) 0.3 & 1.6 \(\pm\) 0.6 & 6.1 \(\pm\) 0.7 & 2.9 \(\pm\) 0.9 \\ GCN+DEA+CovG & **82.9** \(\pm\) 1.2 & 93.2 \(\pm\) 0.9 & 1.9 \(\pm\) 0.3 & 1.6 \(\pm\) 0.3 & **2.1** \(\pm\) 0.4 & 1.4 \(\pm\) 0.3 & 5.5 \(\pm\) 0.6 & 2.5 \(\pm\) 0.4 \\ GAT+DEA+CovM & 82.8 \(\pm\) 2.3 & 90.9 \(\pm\) 2.0 & 1.1 \(\pm\) 0.6 & 1.5 \(\pm\) 0.9 & 2.9 \(\pm\) 0.6 & 1.4 \(\pm\) 0.8 & 6.4 \(\pm\) 1.2 & 2.7 \(\pm\) 1.2 \\ GAT+DEA+CovG & 82.2 \(\pm\) 2.1 & 89.5 \(\pm\) 2.1 & 1.3 \(\pm\) 0.4 & **1.0** \(\pm\) 0.5 & 2.5 \(\pm\) 0.8 & **1.0** \(\pm\) 0.6 & 5.8 \(\pm\) 1.8 & **2.2** \(\pm\) 1.0 \\ \end{tabular} \end{table} Table 6: Link prediction on FB
\begin{table} \begin{tabular}{l|c c|c c} Dataset & Acc Sampler & AUC Sampler & Acc w/o & AUC w/o \\ \hline Citeseer & **78.3** \(\pm\) 0.5 & **88.6** \(\pm\) 0.3 & 76.7 \(\pm\) 1.3 & 86.7 \(\pm\) 1.3 \\ \hline Cora-ML & **82.1** \(\pm\) 1.1 & **89.8** \(\pm\) 1.0 & 81.0 \(\pm\) 1.1 & 88.0 \(\pm\) 1.0 \\ \hline PubMed & **88.8** \(\pm\) 0.5 & **95.0** \(\pm\) 0.3 & 88.0 \(\pm\) 0.4 & 94.5 \(\pm\) 0.2 \\ \hline DBLP & **83.2** \(\pm\) 1.0 & **86.9** \(\pm\) 0.9 & 82.4 \(\pm\) 0.7 & 86.3 \(\pm\) 1.8 \\ \hline FB & **82.6** \(\pm\) 0.9 & 90.7 \(\pm\) 0.6 & 82.3 \(\pm\) 1.4 & **90.8** \(\pm\) 1.1 \\ \end{tabular} \end{table} Table 7: Results obtained training a GCN on each dataset with and without the Sampler optimizing the adjacency matrix.

## 6 Conclusions

We introduced DEA, a novel approach to improve the fairness of a GNN solving a link prediction task. In our fine-tuning strategy, we learn to modify the graph's topology and adapt the parameters of the network to those modifications. A module called Sampler learns to drop edges from the original adjacency matrix. We exploit a Gumbel-sigmoid to sample a new discrete and fair adjacency matrix, which the GNN then uses during fine-tuning. We guide both optimization processes with an additional regularization term shaped as a covariance-based constraint. We provided two different formulations: the first acts on the inter- and intra-group connections between the groups defined by the sensitive attribute, while the second models each value of the sensitive attribute in a one-vs-rest paradigm. We performed an extensive experimental evaluation demonstrating that our fine-tuning strategy provides state-of-the-art protection against unfairness while improving the model's utility on the original task. Finally, we performed an ablation study on the contribution of each component of our pipeline. In future work, we would like to learn to add new connections instead of just dropping them from the original adjacency matrix.
2308.04749
Enhancing Efficient Continual Learning with Dynamic Structure Development of Spiking Neural Networks
Children possess the ability to learn multiple cognitive tasks sequentially, which is a major challenge toward the long-term goal of artificial general intelligence. Existing continual learning frameworks are usually applicable to Deep Neural Networks (DNNs) and lack exploration of more brain-inspired, energy-efficient Spiking Neural Networks (SNNs). Drawing on continual learning mechanisms during child growth and development, we propose Dynamic Structure Development of Spiking Neural Networks (DSD-SNN) for efficient and adaptive continual learning. When learning a sequence of tasks, the DSD-SNN dynamically assigns and grows new neurons for new tasks and prunes redundant neurons, thereby increasing memory capacity and reducing computational overhead. In addition, the overlapping shared structure helps to quickly leverage all acquired knowledge for new tasks, empowering a single network to support multiple incremental tasks (without a separate sub-network mask for each task). We validate the effectiveness of the proposed model on multiple class incremental learning and task incremental learning benchmarks. Extensive experiments demonstrate that our model significantly improves performance, learning speed and memory capacity, and reduces computational overhead. Besides, our DSD-SNN model achieves comparable performance with DNNs-based methods, and significantly outperforms the state-of-the-art (SOTA) performance of existing SNNs-based continual learning methods.
Bing Han, Feifei Zhao, Yi Zeng, Wenxuan Pan, Guobin Shen
2023-08-09T07:36:40Z
http://arxiv.org/abs/2308.04749v1
Enhancing Efficient Continual Learning with Dynamic Structure Development of Spiking Neural Networks

###### Abstract

Children possess the ability to learn multiple cognitive tasks sequentially, which is a major challenge toward the long-term goal of artificial general intelligence. Existing continual learning frameworks are usually applicable to Deep Neural Networks (DNNs) and lack exploration of more brain-inspired, energy-efficient Spiking Neural Networks (SNNs). Drawing on continual learning mechanisms during child growth and development, we propose Dynamic Structure Development of Spiking Neural Networks (DSD-SNN) for efficient and adaptive continual learning. When learning a sequence of tasks, the DSD-SNN dynamically assigns and grows new neurons for new tasks and prunes redundant neurons, thereby increasing memory capacity and reducing computational overhead. In addition, the overlapping shared structure helps to quickly leverage all acquired knowledge for new tasks, empowering a single network to support multiple incremental tasks (without a separate sub-network mask for each task). We validate the effectiveness of the proposed model on multiple class incremental learning and task incremental learning benchmarks. Extensive experiments demonstrate that our model significantly improves performance, learning speed and memory capacity, and reduces computational overhead. Besides, our DSD-SNN model achieves comparable performance with DNNs-based methods, and significantly outperforms the state-of-the-art (SOTA) performance of existing SNNs-based continual learning methods.

## 1 Introduction

Children are able to incrementally learn new tasks to acquire new knowledge; however, this is a major challenge for Deep Neural Networks (DNNs) and Spiking Neural Networks (SNNs). When learning a series of different tasks sequentially, DNNs and SNNs forget the previously acquired knowledge and fall into catastrophic forgetting [12]. Despite some preliminary solutions that have recently been proposed for DNNs-based continual learning, there is still a lack of in-depth inspiration from brain continual learning mechanisms and of exploration of SNNs-based models. Existing studies attempt to address the continual learning problem of DNNs under task incremental learning (recognition within the classes of a known task) and class incremental learning (recognition within all learned classes) scenarios. Related works can be roughly divided into three categories: **a) Regularization.** Employing maximum a posteriori estimation to minimize changes to important weights [13, 14, 15]. These methods require strong model assumptions, such as EWC [14] supposing that new weights are updated within local regions of the previous task weights, which are highly mathematical abstractions with poor biological plausibility. **b) Replay and retrospection.** Reviewing a portion of the samples of the old tasks while learning the new task [15, 16, 17] is currently considered the superior class incremental learning method. The samples of old tasks are stored in additional memory space or generated by additional generation networks, resulting in extra consumption. **c) Dynamic network structure expansion.** [18, 19] proposed progressive neural networks that extend a new network for each task, causing a linear increase in network scale.
To reduce network consumption, a sub-network of the whole is selected for each task using pruning and growth algorithms [11, 12], evolutionary algorithms [16] or reinforcement learning (RL) algorithms [13, 14]. However, these methods require storing a mask for each sub-network, which to some extent amounts to storing a separate network for each task, rather than a brain-inspired overall network capable of performing multiple sequential tasks simultaneously. To the best of our knowledge, there is little research on SNNs-based continual learning. Spiking neural networks, as third-generation neural networks [15, 16], simulate the information processing mechanisms of the brain, and thus serve well as an appropriate level of abstraction for integrating inspiration from the brain's multi-scale biological plasticity to achieve child-like continual learning. The existing HMN algorithm [15] uses a DNN to decide the SNN sub-network for each task, and is only applicable to two-layer fully connected networks for the N-MNIST dataset. There is still a lack of SNNs-based continual learning methods that incorporate in-depth inspiration from the brain's continual learning mechanisms while achieving comparable performance with DNNs under complex continual learning scenarios. Structural development mechanisms allow the brain's nervous system to dynamically expand and contract, as well as flexibly allocate and invoke neural circuits for efficient continual learning [11]. Motivated by this, we propose Dynamic Structure Development of Spiking Neural Networks (DSD-SNN) for efficient and adaptive continual learning. DSD-SNN is designed as an SNN architecture that can be dynamically expanded and compressed, empowering a single network to learn multiple incremental tasks simultaneously and overcoming the need, faced by DNNs-based continual learning methods, to assign a mask to each task. We validate the effectiveness of our proposed model on multiple class incremental learning (CIL) and task incremental learning (TIL) benchmarks, achieving comparable or better performance on the MNIST, N-MNIST, and CIFAR-100 datasets. In particular, the proposed DSD-SNN model achieves an accuracy of 77.92% \(\pm\) 0.29% on CIFAR100, only using 37.48% of the network parameters. The main contributions of this paper can be summarized as follows:

* DSD-SNN dynamically grows new neurons to learn newly arrived tasks, while heavily compressing the network to increase memory capacity and reduce computational overhead.
* DSD-SNN maximally utilizes the previously learned tasks to help quickly adapt to and infer new tasks, enabling efficient and adaptive continual learning (with no need to identify a separate sub-network mask for each task).
* The experimental results demonstrate the remarkable superiority of the DSD-SNN model in performance, learning speed, memory capacity and computational overhead compared with state-of-the-art (SOTA) SNNs-based continual learning algorithms, and its comparable performance with DNNs-based continual learning algorithms.

## 2 Related Work

This paper mainly focuses on dynamic network structure expansion algorithms based on structural plasticity, which can be divided into progressive neural networks (PNN) and sub-network selection algorithms. In fact, the existing network structure expansion algorithms are mostly designed for DNNs-based continual learning, with little exploration of SNNs.
**Progressive neural networks.** [17] first proposes the progressive neural network and applies it to multiple continual reinforcement learning tasks. The PNN expands a new complete network for each new task and fixes the networks of the old tasks. In addition, lateral connections are introduced between the networks to effectively leverage the knowledge already learned. PIL [13] extends the PNN to large-scale convolutional neural networks for image classification tasks. However, PNN algorithms greatly increase network storage and computational consumption during continual learning. In contrast, as development matures and cognition improves, the number of brain synapses decreases by more than 50% [10], forming a highly sparse brain structure well suited for continual learning. The PNNs blindly expand the structure, causing catastrophic effects in the case of massive sequential tasks.

**Sub-network selection algorithms.** A part of the network nodes is selected to be activated for a given task. PathNet [10] is first proposed to select path nodes (each node contains a set of neurons) for each task using a genetic algorithm. RPS-Net [14] randomly activates multiple input-to-output paths connected by convolutional blocks, and chooses the highest-performing ones as the final path. In addition, RCL [23] employs additional RL networks to learn the number of neurons required for a new task, while CLEAS [1] uses RL to directly determine the activation and death of each neuron. HMN [15] uses a hybrid learning framework in which an ANN modulation network determines the activation of neurons in an SNN prediction network, but it is only applied to small-scale networks for simple scenarios. A sub-network mask learning process based on a pruning strategy is proposed by [1], which is applied to CIL combined with a replay strategy. The above algorithms select sub-networks for each task separately, failing to maximize the reuse of acquired knowledge to support new task learning. To solve this problem, DRE [24] prunes a sparse convolutional feature extractor for each task, and then merges the output of the convolution extractor into the previous tasks. CLNP [18] grows new neurons for a new task based on the old network, and DEN [21] expands the network when the already learned network is insufficient for the new task, while reusing the existing neurons. These works require storing an additional sub-network mask for each task, which both increases storage consumption and is inconsistent with the overall developmental learning process of the brain. Considering the various limitations of the existing works above, the DSD-SNN proposed in this paper, a pioneering algorithm for SNNs-based continual learning, enables a single network to learn multiple sequential tasks simultaneously, while reusing acquired knowledge and significantly increasing memory capacity.

## 3 Method

### Continual Learning Definition

We are expected to learn \(N\) tasks sequentially, \(\Gamma=\{T_{1},...,T_{N}\}\). Each task \(T_{i}\) takes the form of a classification problem with its own dataset: \(D_{T_{i}}=\{(x_{j},y_{j})\}_{j=1}^{N_{T_{i}}}\), where \(x_{j}\in\chi\), \(y_{j}\in\{1,...,C_{T_{i}}\}\), \(\chi\) is the input image space, and \(N_{T_{i}}\) and \(C_{T_{i}}\) are the number of samples and classes of task \(T_{i}\). For the task incremental learning scenario, \(T_{i}\) is known at test time, and the setting requires optimizing: \[\underset{\theta}{\max}\:E_{T_{i}\sim\Gamma}[E_{(x_{j},y_{j})\sim T_{i}}[\log p_{\theta}(y_{j}|x_{j},T_{i})]] \tag{1}\] where \(\theta\) denotes the network parameters. When \(T_{i}\) is unknown at test time, the more complex class incremental learning scenario solves the following problem: \[\underset{\theta}{\max}\:E_{T_{i}\sim\Gamma}[E_{(x_{j},y_{j})\sim T_{i}}[\log p_{\theta}(y_{j}|x_{j})]] \tag{2}\]
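To make the distinction concrete, the following is a minimal sketch of the two inference modes; it assumes a multi-headed output (one logit vector per task, for a single sample) and the task classifier described later in the architecture section, and all helper names are illustrative:

```
import torch

def til_predict(head_logits, task_id):
    # Task incremental (Eq. 1): the task identity is given at test time,
    # so we only look at that task's output head.
    return head_logits[task_id].argmax().item()

def cil_predict(head_logits, task_classifier):
    # Class incremental (Eq. 2): the task is unknown, so a task classifier
    # first infers it from the concatenated class outputs.
    class_out = torch.cat(head_logits)
    t_hat = task_classifier(class_out).argmax().item()
    return t_hat, head_logits[t_hat].argmax().item()
```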
### DSD-SNN Architecture

The design of the DSD-SNN algorithm is inspired by the dynamic allocation, reorganization, growth, and pruning of neurons during efficient continual learning in the brain. As depicted in Fig. 1, the proposed DSD-SNN model includes three modules (random growth, adaptive pruning, and freezing of neurons) to accomplish multi-task incremental learning.

Figure 1: The DSD-SNN model realizes multi-task incremental learning through random growth, adaptive pruning, and freezing of neurons.

**Random growth.** When a new task arrives, the DSD-SNN model first randomly assigns and grows a portion of untrained empty neurons to form a new pathway, and new task-related classification neurons are added to the output layer, as shown in Fig. 1. Newly grown neurons receive the output of all non-empty neurons of the previous layer (both newly grown neurons and neurons already frozen in previous tasks). Therefore, all features learned from previous tasks can be captured and reused by the neural pathways of the new task. The DSD-SNN algorithm can thus take full advantage of the features learned from previous tasks to help the new task converge quickly, while the newly grown neurons can also focus on learning features specific to the new task.

**Adaptive pruning.** During the learning process of the current task, the DSD-SNN algorithm adaptively detects relatively inactive neurons in the current pathway based on synaptic activity and prunes those redundant neurons to save resources. The pruned neurons are re-initialized as empty neurons that can be assigned to play a more important role in future tasks. Pruning only targets those neurons that are newly grown for the current task and does not include neurons that were frozen in previous tasks. Adaptive pruning can substantially expand the memory capacity of the network to learn and memorize more tasks at a fixed scale.

**Freezing neurons.** The contributing neurons that are retained after pruning will be frozen, enabling the DSD-SNN model to learn new tasks without forgetting the old ones. The frozen neurons can be connected to newly grown neurons to provide acquired knowledge. During the training of a new task \(T_{i}\), all input synapses of a frozen neuron are no longer updated; only the newly added output synapses to the new neurons can be updated.

The DSD-SNN model with neuron growth, pruning, and freezing can memorize previous knowledge and reuse the acquired knowledge to learn new tasks for efficient continual learning. A deep SNN with multiple convolutional and fully connected layers is constructed to implement task incremental learning and class incremental learning, as shown in Fig. 2. During the training process, we sequentially input training samples of each task and update only the synapses newly added to the network, as sketched below.
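One way to realize this bookkeeping is to assign neurons at task arrival and mask gradients so that only synapses into newly grown neurons are updated. The sketch below uses illustrative names and a three-state neuron code (0 empty, 1 newly grown, 2 frozen):

```
import torch

def grow_neurons(state, rho):
    # Randomly promote rho (a fraction) of a layer's neurons from
    # empty (0) to newly grown (1) when a new task arrives.
    empty = (state == 0).nonzero().flatten()
    n_grow = int(rho * state.numel())
    chosen = empty[torch.randperm(empty.numel())[:n_grow]]
    state[chosen] = 1
    return state

def freeze_mask(weight_grad, pre_state, post_state):
    # Only synapses into newly grown neurons are trainable; input
    # synapses of frozen neurons stay fixed (their gradient is zeroed).
    trainable_post = (post_state == 1).float().unsqueeze(1)
    active_pre = (pre_state > 0).float().unsqueeze(0)
    return weight_grad * trainable_post * active_pre
```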
In the testing process, test samples of all learned tasks are fed into our overall multi-task continual learning network, so that a single DSD-SNN model can handle all tasks without the need to identify a separate sub-network mask for each task. To address the more complex class incremental learning, we add a two-layer network as the task classifier. The task classifier receives inputs from the class outputs of the continual learning network, and outputs which task the current sample belongs to (as in the red box in Fig. 2). According to the inferred task \(\hat{T_{i}}\) obtained from the task classifier, the DSD-SNN model chooses the maximum output class of the \(\hat{T_{i}}\) task in the continual learning network as the predicted class.

Figure 2: The architecture of the DSD-SNN model.

### DSD-SNN Computational Details

So far in this section, we have described how our model efficiently and adaptively accomplishes continual learning. We now introduce the detailed growth and pruning scheme that we use throughout this paper.

#### 3.3.1 Neuronal Growth and Allocation

During brain development, neurons and synapses are first randomly and excessively grown and then reshaped based on external experience [15, 10]. In the DSD-SNN model, the SNN is first initialized to consist of \(N^{l}\) neurons in each layer \(l\). In the beginning, all neurons in the network are unassigned empty neurons \(N_{empty}\). When a new task \(T_{i}\) arrives, we randomly grow \(\rho\%\times N^{l}\) neurons from the empty neurons for each layer, denoted as \(N_{new}\). After training and pruning for task \(T_{i}\), all retained neurons in \(N_{new}\) are frozen and added to \(N_{frozen}\). To better utilize the acquired knowledge, the newly grown neurons \(N_{new}^{l}\) in each layer not only receive the output of the newly grown neurons \(N_{new}^{l-1}\) in the previous layer, but also receive the output of the frozen neurons \(N_{frozen}^{l-1}\) in the previous layer, as follows: \[\{N_{frozen}^{l-1},N_{new}^{l-1}\}\to N_{new}^{l} \tag{3}\] where \(\rightarrow\) represents the input connections. For the frozen neurons \(N_{frozen}^{l-1}\), growth does not add input connections, to avoid interference with the memory of previous tasks. Note that we do not assign task labels to frozen and newly grown neurons in either the training or the testing phase of continual learning. That is, the DSD-SNN algorithm uses the entire network, containing all neurons that have learned previous tasks, for prediction and inference. Thus, our model is able to learn multiple sequential tasks simultaneously without storing separate sub-network masks.

#### 3.3.2 Neuronal Pruning and Deactivation

Neuroscience research has demonstrated that after the overgrowth in infancy, the brain network undergoes a long pruning process in adolescence, gradually emerging into a delicate and sparse network [12, 13, 14]. Among these processes, input synapses are important factors determining the survival of neurons according to the principle of "use it or lose it" [15, 16, 17]. For SNNs, neurons whose input synapse weights are close to 0 find it harder to accumulate membrane potential beyond the spiking threshold, and thus fire fewer spikes and contribute less to the outputs. Therefore, we use the sum of input synapse weights \(S_{i}^{l}\) to assess the importance of neuron \(i\) in layer \(l\), as in Eq. 4: \[S_{i}^{l}=\sum_{j=1}^{M_{l-1}}W_{ij} \tag{4}\] where \(W_{ij}\) is the synaptic weight from presynaptic neuron \(j\) to postsynaptic neuron \(i\), and \(M_{l-1}\) is the number of presynaptic neurons. During the training of new tasks, we monitor the importance of newly grown neurons \(N_{new}\) and prune redundant neurons whose values of \(S_{i}\) keep getting smaller. Here, we define a pruning function as follows: \[\phi_{P_{i}^{l}}=\alpha\,Norm(S_{i}^{l})-\rho_{p} \tag{5}\] \[P_{i}^{l}=\gamma P_{i}^{l}+e^{-\frac{epoch}{\eta}}\phi_{P_{i}^{l}} \tag{6}\] where \(Norm(S_{i}^{l})\) linearly normalizes \(S_{i}^{l}\) to \([0,1]\), and \(\alpha=2\) and \(\rho_{p}\) control the pruning strength. \(\rho_{p}\) includes \(\rho_{c}\) and \(\rho_{f}\) for the convolutional and fully connected layers, respectively. \(P_{i}^{l}\) is initialized to 5. \(\gamma=0.99\) and \(\eta\) controls the update rate, as in [14]. \(e^{-\frac{epoch}{\eta}}\) decreases exponentially with increasing epoch, which is consistent with the speed of the pruning process in biological neural networks: first fast, then slow, and finally stable [17, 14]. The pruning functions are updated at each epoch, and we then prune neurons with pruning function \(P_{i}^{l}<0\). We structurally prune channels in the convolutional layers and neurons in the fully connected layers, removing their input and output connections.
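The pruning schedule of Eqs. (4)-(6) can be sketched as follows; the helper names are ours, and the value of \(\eta\) is a placeholder since the text only states that it controls the update rate:

```
import math
import torch

def importance(weight):
    # Eq. (4): neuron importance as the sum of its input synaptic weights.
    return weight.sum(dim=1)

def update_pruning_fn(P, weight, epoch, rho_p, alpha=2.0, gamma=0.99, eta=25.0):
    # Eqs. (5)-(6); P is initialized to 5 for newly grown neurons, and
    # eta = 25.0 is a placeholder (not fixed numerically in the text).
    s = importance(weight)
    s_norm = (s - s.min()) / (s.max() - s.min() + 1e-12)  # linear norm to [0, 1]
    phi = alpha * s_norm - rho_p                          # Eq. (5)
    P = gamma * P + math.exp(-epoch / eta) * phi          # Eq. (6)
    return P, P < 0  # updated P and boolean mask of neurons to prune
```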
### SNNs Information Transmission

Different from DNNs, SNNs use spiking neurons with discrete 0/1 outputs, which are able to integrate spatio-temporal information. Specifically, we employ the leaky integrate-and-fire (LIF) neuron model [1] to transmit and memorize information. In the spatial dimension, LIF neurons integrate the output of neurons in the previous layer through input synapses. In the temporal dimension, LIF neurons accumulate membrane potentials from previous time steps via an internal decay constant \(\tau\). Incorporating the spatio-temporal information, the LIF neuron membrane potential \(U_{i}^{t,l}\) at time step \(t\) is updated by the following equation: \[U_{i}^{t,l}=\tau U_{i}^{t-1,l}(1-O_{i}^{t-1,l})+\sum_{j=1}^{M_{l-1}}W_{ij}O_{j}^{t,l-1} \tag{7}\] When the neuronal membrane potential exceeds the firing threshold \(V_{th}\), the neuron fires a spike and its output \(O_{i}^{t,l}\) is equal to 1; otherwise, the neuron outputs 0. The discrete spiking outputs of LIF neurons conserve energy as in the biological brain, but hinder gradient-based backpropagation. To address this problem, [20] first proposed the method of surrogate gradients. In this paper, we use the Qgategrad [15] surrogate gradient method with constant \(\lambda=2\) to approximate the spiking gradient, as follows: \[\frac{\partial O_{i}^{t,l}}{\partial U_{i}^{t,l}}=\begin{cases}0,&|U_{i}^{t,l}|>\frac{1}{\lambda}\\ -\lambda^{2}|U_{i}^{t,l}|+\lambda,&|U_{i}^{t,l}|\leq\frac{1}{\lambda}\end{cases} \tag{8}\]
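A minimal PyTorch sketch of Eqs. (7) and (8) follows. The decay constant \(\tau=0.5\) is a placeholder (the text does not fix its value), and we evaluate the surrogate gradient relative to the threshold:

```
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with the triangular surrogate gradient of Eq. (8)."""
    @staticmethod
    def forward(ctx, v, lam):
        ctx.save_for_backward(v)
        ctx.lam = lam
        return (v >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        lam = ctx.lam
        # lam - lam^2 * |v| inside |v| <= 1/lam, zero outside.
        surrogate = torch.clamp(lam - lam ** 2 * v.abs(), min=0.0)
        return grad_out * surrogate, None

def lif_step(x, u, o, tau=0.5, v_th=1.0, lam=2.0):
    # Eq. (7): decay the membrane potential, reset neurons that spiked
    # at t-1, then integrate the weighted input current x.
    u = tau * u * (1.0 - o) + x
    o = SpikeFn.apply(u - v_th, lam)
    return u, o
```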
Overall, we present the specific procedure of our DSD-SNN algorithm as Algorithm 1.

```
Input: Dataset \(D_{T_{i}}\) for each task \(T_{i}\); Initialize empty network \(Net\); Constant parameters of growth \(\rho\%\) and pruning \(\rho_{c},\rho_{f}\).
Output: Prediction class in task \(T_{i}\) (TIL) or in all tasks (CIL).
for each sequential task \(T_{i}\) do
  Growing new neurons to \(Net\) as Eq. 3;
  for \(epoch=0\); \(epoch<E\); \(epoch++\) do
    SNN forward prediction \(Net\)(\(D_{T_{i}}\)) as Eq. 7;
    SNN backpropagation to update new connections as Eq. 8;
    Assessing importance for newly grown neurons as Eq. 4;
    Calculating the neuronal pruning function as Eq. 5 and Eq. 6;
    Pruning redundant neurons with \(P_{i}^{l}<0\);
  end for
  Freezing retained neurons in \(Net\);
end for
```
**Algorithm 1** The DSD-SNN Continual Learning.

## 4 Experiments

### Datasets and Models

To validate the effectiveness of our DSD-SNN algorithm, we conduct extensive experiments and analyses on the spatial MNIST [11] and CIFAR100 [21] datasets and the neuromorphic temporal N-MNIST dataset [1], based on the brain-inspired cognitive intelligence engine BrainCog [13]. The specific experimental datasets and models are as follows:

* Permuted MNIST: We permute the MNIST handwritten digit dataset into ten tasks via random permutations of the pixels. Each task contains ten classes, divided into 60,000 training samples and 10,000 test samples. As the SNN model, we use an SNN with two convolutional layers, one fully connected layer, and a multi-headed output layer.
* Permuted N-MNIST: We randomly permute N-MNIST (the neuromorphic capture of MNIST) into ten tasks, and employ the same sample division and SNN structure as for MNIST.
* Split CIFAR100: The more complex natural image dataset CIFAR100 is trained in several splits, including 10 steps (10 new classes per step) and 20 steps (5 new classes per step). An SNN model consisting of eight convolutional layers, one fully connected layer, and a multi-headed output layer is used to generate the predicted class.

For the task classifier, we use networks containing a hidden layer with 100 hidden neurons for MNIST and N-MNIST, and 500 hidden neurons for CIFAR100. To recognize tasks better, we replay 2000 samples for each task as in [12, 13, 14]. Our code is available at [https://github.com/BrainCog-X/BrainCog/tree/main/examples/Structural_Development/DSD-SNN](https://github.com/BrainCog-X/BrainCog/tree/main/examples/Structural_Development/DSD-SNN).

### Comparisons of Performance

As shown in Fig. 3(a), our DSD-SNN model maintains high accuracy as the number of learned tasks increases. This demonstrates that the proposed model overcomes catastrophic forgetting on the MNIST, neuromorphic N-MNIST, and more complex CIFAR100 datasets, achieving robustness and generalization capability on both TIL and CIL. To validate the effectiveness of our dynamic structure development module, we compare the learning process of DSD-SNN with other DNNs-based continual learning methods transferred to SNNs, as shown in Fig. 3(b). The experimental results indicate that DSD-SNN realizes superior performance in learning and memorizing more incremental tasks, exhibiting a larger memory capacity than the DNNs-based continual learning baselines. The comparison of average accuracy with existing continual learning algorithms based on DNNs and SNNs is shown in Table 1 and Table 2. In the TIL scenario, our DSD-SNN achieves an accuracy of 97.30% \(\pm\) 0.09% with a network parameter compression rate of 34.38% for MNIST, which outperforms most DNNs-based algorithms such as EWC [17], GEM [14], and RCL [20]. In particular, our algorithm achieves a performance improvement of 0.70% over the DEN [21] model (which is also based on growth and pruning). For the temporal neuromorphic N-MNIST dataset, our DSD-SNN algorithm is superior to the existing HMN algorithm which combines SNN with DNN [15]. Meanwhile, our DSD-SNN model achieves 92.69% \(\pm\) 0.57% and 96.94% \(\pm\) 0.05% accuracy in CIL scenarios for MNIST and N-MNIST, respectively.
From Table 2, our DSD-SNN outperforms PathNet [16], DEN [21], RCL [20] and HNET [21], which are also structural extension methods, in both TIL and CIL scenarios for 10-step CIFAR100. iCaRL [14] and DER++ [22] achieve a higher TIL accuracy of 84.20% than our 77.92%, but they are inferior to ours in CIL scenarios (51.40% and 55.30% vs. 60.47%). Moreover, the DSD-SNN compresses the network to only 37.48% after learning all tasks, further saving energy consumption. For 20-step CIFAR100 with more tasks, our DSD-SNN achieves an even higher accuracy of 81.17% in the TIL scenario, with results consistent with the 10-step setting. To the best of our knowledge, this is the first time that energy-efficient deep SNNs have been used to solve CIFAR100 continual learning and achieve comparable performance with DNNs. In summary, the DSD-SNN model significantly outperforms the SNNs-based continual learning model on the N-MNIST dataset. On the MNIST and CIFAR100 datasets, the proposed model achieves comparable performance with DNNs-based models and performs well on both TIL and CIL.

\begin{table} \begin{tabular}{c c c} \hline \hline Method & Dataset & Acc \\ \hline EWC [17] & MNIST & 81.60\% \\ GEM [14] & MNIST & 92.00\% \\ DEN [21] & MNIST & 96.60\% \\ RCL [20] & MNIST & 96.60\% \\ CLNP [21] & MNIST & 98.42 \(\pm\) 0.04 \% \\ **Our DSD-SNN** & MNIST & **97.30 \(\pm\) 0.09 \%** \\ HMN(SNN+DNN) [15] & N-MNIST & 78.18\% \\ **Our DSD-SNN** & N-MNIST & **97.06 \(\pm\) 0.09 \%** \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy of task incremental learning compared to other works for the MNIST and N-MNIST datasets.

Figure 3: The average accuracy with increasing number of tasks. **(a)** Our DSD-SNN for MNIST, N-MNIST and CIFAR100. **(b)** Comparison of our DSD-SNN with other methods for CIFAR100.

Figure 4: During the continual learning process of each task, the changes of accuracy with epochs.

### Effects of Efficient Continual Learning

Fig. 4 depicts the performance of the DSD-SNN model for task incremental learning on multiple datasets. The experimental results demonstrate that our SNNs-based model improves the convergence speed and performance of new tasks during sequential continual learning, demonstrating forward transfer capability. The newer tasks achieve higher performance from the beginning for the MNIST and CIFAR100 datasets, indicating that the previously learned knowledge is fully utilized to help the new tasks. Also, new tasks converge to higher performance faster, suggesting that the network has a strong memory capacity to continuously learn and remember new tasks. Comparable results are obtained on the N-MNIST dataset.

### Ablation Studies

**Effects of each component.** To verify the effectiveness of the growth and pruning components in our DSD-SNN model, we compare the number of network parameters (Fig. 5(a)) and performance (Fig. 5(b)) of DSD-SNN, DSD-SNN without pruning, and DSD-SNN without reused growth during multi-task continual learning. The experimental results show that the number of parameters in the DSD-SNN model fluctuates upward and finally stabilizes at 37.48% for CIFAR100, achieving superior accuracy on multi-task continual learning. In contrast, the network scale of the model without pruning rises rapidly and quickly fills up the memory capacity, leading to a dramatic drop in performance after learning six tasks.
The above results reveal that the pruning process of DSD-SNN not only reduces the computational overhead but also improves performance and memory capacity. For the growth module of DSD-SNN, we eliminate the addition of connections from frozen neurons to verify the effectiveness of reusing acquired knowledge in improving learning for new tasks. From Fig. 5(a) and 5(b), DSD-SNN without reused growth suffers from catastrophic forgetting when no additional sub-network masks are stored. The scale of the non-reused network is very small, and the failure to reuse acquired knowledge significantly degrades the performance of the model on each task. Therefore, we can conclude that reusing and sharing acquired knowledge in our DSD-SNN model achieves excellent forward transfer capability.

**Effects of different parameters.** We analyze the effects of different growth and pruning parameters (the growth scale \(\rho\) and the pruning intensities \(\rho_{c},\rho_{f}\)). For the growth parameter \(\rho\), the results are very close in the range of 5-15% for MNIST in Fig. 6(a), as well as in the range of 7.5-15% for CIFAR100 in Fig. 6(b). Only for larger values is there a performance degradation on the later tasks (e.g. the 8th task), because a larger growth scale for the earlier tasks leaves insufficient space to learn new knowledge in later tasks. Fig. 6(c) and 6(d) describe the effects of the pruning strengths \(\rho_{c},\rho_{f}\) on performance. The larger \(\rho_{c},\rho_{f}\), the more convolutional channels and fully connected neurons are pruned. We found that the accuracy is very stable below \(\rho_{c}=0.50,\rho_{f}=1.00\) for MNIST and \(\rho_{c}=0.75,\rho_{f}=1.25\) for CIFAR100, but declines at larger \(\rho_{c},\rho_{f}\) due to over-pruning. The DSD-SNN model is more tolerant of the pruning parameters on the CIFAR100 dataset because its SNN has a larger parameter space. These ablation experiments demonstrate that our DSD-SNN is very robust to different growth and pruning parameters across multiple datasets.

## 5 Conclusion

Inspired by brain development mechanisms, we propose a DSD-SNN model based on dynamic growth and pruning to enhance efficient continual learning. Applied to both TIL and CIL scenarios based on a deep SNN, the proposed model can fully reuse acquired knowledge to improve the performance and learning speed of new tasks, and combines with the pruning mechanism to significantly reduce computational overhead and enhance memory capacity. Our DSD-SNN model is among the very few explorations of SNNs-based continual learning. The proposed algorithm surpasses the SOTA performance achieved by SNNs-based continual learning algorithms and achieves comparable performance with DNNs-based continual learning algorithms.
\begin{table} \begin{tabular}{c c c c c} \hline \hline Method & 10-step TIL Acc (\%) & 10-step CIL Acc (\%) & 20-step TIL Acc (\%) & 20-step CIL Acc (\%) \\ \hline EWC [Kirkpatrick _et al._, 2017] & 61.11 \(\pm\) 1.43 & 17.25 \(\pm\) 0.09 & 50.04 \(\pm\) 4.26 & 4.63 \(\pm\) 0.04 \\ MAS [Aljundi _et al._, 2018] & 64.77 \(\pm\) 0.78 & 17.07 \(\pm\) 0.12 & 60.40 \(\pm\) 1.74 & 4.66 \(\pm\) 0.02 \\ PathNet [Fernando _et al._, 2017] & 53.10 & 18.50 & - & - \\ SI [Zenke _et al._, 2017] & 64.81 \(\pm\) 1.00 & 17.26 \(\pm\) 0.11 & 61.10 \(\pm\) 0.82 & 4.63 \(\pm\) 0.04 \\ DEN [Yoon _et al._, 2018] & 58.10 & - & - & - \\ RCL [Xu and Zhu, 2018] & 59.90 & - & - & - \\ iCaRL [Rebuffi _et al._, 2017] & 84.20 \(\pm\) 1.04 & 51.40 \(\pm\) 0.99 & 85.70 \(\pm\) 0.68 & 47.80 \(\pm\) 0.48 \\ HNET [von Oswald _et al._, 2020] & 63.57 \(\pm\) 1.03 & - & 70.48 \(\pm\) 0.25 & - \\ DER++ [Yan _et al._, 2021] & 84.20 \(\pm\) 0.47 & 55.30 \(\pm\) 0.10 & 86.60 \(\pm\) 0.50 & 46.60 \(\pm\) 1.44 \\ FOSTER [Wang _et al._, 2022] & - & 72.90 & - & 70.65 \\ DyTox [Douillard _et al._, 2022] & - & 73.66 \(\pm\) 0.02 & - & 72.27 \(\pm\) 0.18 \\ **Our DSD-SNN** & **77.92 \(\pm\) 0.29** & **60.47 \(\pm\) 0.72** & **81.17 \(\pm\) 0.73** & **57.39 \(\pm\) 1.97** \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy comparisons with DNNs-based algorithms for CIFAR100.

Figure 5: Effects of each component. Number of network parameters (**a**) and accuracy (**b**) of our DSD-SNN, non-pruned model and non-reused model for CIFAR100.

Figure 6: The effect of pruning and growth parameters on accuracy in multi-task continual learning.

## Acknowledgements

This work is supported by the National Key Research and Development Program (Grant No. 2020AAA0107800), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB32070100), the National Natural Science Foundation of China (Grant No. 62106261).

## Contribution Statement

B.H. and F.Z. contributed equally and serve as co-first authors. B.H., F.Z. and Y.Z. designed the study. B.H., F.Z., W.P. and G.S. performed the experiments and the analyses. B.H., F.Z. and Y.Z. wrote the paper.
2306.13799
Coleman-Weinberg dynamics of ultralight scalar dark matter and GeV-scale right-handed neutrinos
We consider an extension of the Standard Model by three singlet fermions and one singlet real scalar field. The scalar is an ultralight dark matter candidate whose abundance is set by dynamically induced misalignment from the Higgs portal. We focus on parameter space where the Coleman-Weinberg potential both fixes the dark matter relic abundance, and predicts the mass scale of right-handed neutrinos. The model prefers scalar masses in the range of $10 ~{\rm {\mu}eV} \lesssim m_{\phi} \lesssim 10 ~{\rm meV}$, and can be tested via direct searches for a light scalar (e.g. fifth force tests), or by searching for right-handed neutrinos in laboratory experiments.
Clara Murgui, Ryan Plestid
2023-06-23T22:20:49Z
http://arxiv.org/abs/2306.13799v3
**Coleman-Weinberg dynamics of ultralight scalar dark matter and GeV-scale right-handed neutrinos**

## Abstract

We consider an extension of the Standard Model by three singlet fermions and one singlet real scalar field. The scalar is an ultralight dark matter candidate whose abundance is set by dynamically induced misalignment from the Higgs portal. We focus on parameter space where the Coleman-Weinberg potential both fixes the dark matter relic abundance, and predicts the mass scale of right-handed neutrinos. The model prefers scalar masses in the range of \(10~{}\mu\text{eV}\lesssim m_{\phi}\lesssim 10~{}\text{meV}\), and can be tested via direct searches for a light scalar (e.g. fifth force tests), or by searching for right-handed neutrinos in laboratory experiments.

###### Contents

* 1 Introduction
* 1.1 Summary of results
* 1.2 Outline of paper
* 2 Model definition and dynamics at zero temperature
* 2.1 Coleman-Weinberg potential
* 2.2 Mass spectrum
* 2.3 Decay rates
* 2.4 Scalar couplings to matter
* 3 Dynamics in the early universe
* 3.1 Thermal misalignment & relic abundance
* 3.2 VEV misalignment
* 3.3 Forced resonances and the electroweak phase transition
* 3.4 Thermal mass from RHNs
* 3.5 Parametric resonances
* 3.6 Initial conditions after inflation
* 3.7 Connections to leptogenesis
* 4 Experimental signatures and constraints
* 4.1 Probes of a light scalar
* 4.2 Right-handed neutrino searches
* 5 Conclusion and Outlook

## 1 Introduction

The origins of dark matter, neutrino masses, and the observed baryon asymmetry are three of the most important unanswered questions in fundamental physics. It is therefore interesting to consider extensions of the Standard Model (SM) that are capable of successfully addressing all three of these mysteries simultaneously. In this work we consider a (nearly) minimal extension of the SM by four singlet fields: one real scalar, and three right-handed neutrinos (RHNs). In the absence of RHNs our model reduces to the model suggested in [1], in which a light scalar field couples via the super-renormalizable portal to the Higgs field \({\cal L}\supset-A\phi|H|^{2}\). One advantage of this model is that the scalar mass is protected from large radiative corrections proportional to the Higgs mass, such that it is technically natural. This remains true provided that the renormalizable (i.e. quartic) Higgs-portal coupling takes on small values, not larger than the squared ratio of the light scalar mass to the Higgs mass. Nevertheless, even with a very soft coupling, i.e. \(A\lesssim 1\ \mu\)eV, the dark matter is not entirely secluded and interesting phenomenology still occurs, such that the model is testable and falsifiable. More recently, it has been noted that if the initial field misalignment after inflation is sufficiently small, then thermal misalignment [2, 3] dominates such that the dark matter dynamics are insensitive to the initial conditions [3, 4]. This supplies a relic density target in close analogy with models of freeze-out dark matter, albeit with a very different microphysical origin. The addition of RHNs, or other physics capable of reproducing the observed neutrino textures [5], can qualitatively change the scenario sketched above. Fermions coupled to scalars can have non-trivial thermal dynamics, see e.g. [6].
Perhaps most strikingly in the case of RHNs and a light scalar field, if one takes the simplest1 model of neutrino masses, the type-I seesaw mechanism [7, 5, 8], then we would generically expect a Yukawa coupling between RHNs and the scalar field, \({\cal L}\supset g\phi N^{c}N^{c}\), with \(N^{c}\) the RHN field written in two-component notation. Radiative corrections to the scalar mass, \(m_{\phi}\), are now proportional to \(gM_{N}\) rather than the soft scale \(A\). Since it is essential that \(\phi\) remains light so as to be a viable dark matter candidate, and \(M_{N}\) may be heavy, this poses a problem for dark matter phenomenology. The problem is further worsened by the fact that the zero-temperature vacuum expectation value (vev) of the scalar field \(\varphi_{0}=\langle\phi\rangle_{T=0}\) is large, Footnote 1: An admittedly subjective statement. \[\varphi_{0}=-\frac{A\mathrm{v}^{2}}{m_{\phi}^{2}}\ \ \ \ \mathrm{where}\ \ \ \mathrm{v}^{2}=\langle|H|^{2}\rangle. \tag{1}\] Thus, even for very small couplings, i.e. \(g\ll 1\), if \(m_{\phi}\) is light then the vev of the scalar will induce a sizeable Majorana mass for the RHNs. This feature is not specific to type-I seesaw models and is generic to any mechanism that generates massive Majorana active neutrinos. Dirac neutrinos would conserve lepton number and forbid a coupling between the right-handed partners and \(\phi\), and therefore represent a neutrino mass mechanism which is "safe" insofar as the scalar dynamics are concerned.2 Footnote 2: One could consider a charged Majoron, however this would necessarily carry two units of lepton number and be forbidden from coupling to the Higgs via the super-renormalizable portal. The above discussion suggests that \(m_{\phi}\) will depend on the scalar vev, \(\varphi_{0}\), but Eq. (1) then tells us that the scalar vev must be determined self-consistently. Indeed, assuming the vev-induced mass of the RHNs dominates, we may take \(m_{N}\sim g\varphi_{0}\) (with, again, a subscript zero denoting zero temperature), such that the quantum correction to the scalar mass would be given by, \[m_{\phi}^{2}\sim\frac{6g^{4}}{16\pi^{2}}\varphi_{0}^{2}\ \times(\log). \tag{2}\] If we again assume this radiative correction dominates over the bare mass, substitution into Eq. (1) then allows one to self-consistently estimate \(\varphi_{0}\) in terms of \(g\) and \(A\). Radiative corrections therefore supply a mechanism by which the scalar potential self-adjusts towards a stable solution. If the scalar is too light, its vev will become large. This will induce a larger RHN mass, which in turn radiatively generates a larger scalar mass. This then reduces the scalar vev to a smaller value. The above is simply a clumsy discussion of symmetry breaking in the presence of radiative corrections. As has been long understood, a systematic and efficient treatment of this effect is provided by the Coleman-Weinberg (CW) potential [9]. Indeed, the above discussion may simply be rephrased as the _stabilization_ of the scalar vev via radiative corrections. This is slightly different from the typical radiatively induced symmetry breaking that one associates with the CW model. A qualitative picture of the relevant effective potential is shown in Fig. 1 both at zero and finite temperature.
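Carrying out this substitution explicitly, dropping the logarithm, signs, and the \(O(1)\) loop factors' detailed structure, gives \[\varphi_{0}\sim\left(\frac{16\pi^{2}}{6}\,\frac{A\mathrm{v}^{2}}{g^{4}}\right)^{1/3},\qquad M_{N}\sim g\,\varphi_{0},\qquad m_{\phi}\sim\sqrt{\frac{6}{16\pi^{2}}}\,g^{2}\,\varphi_{0},\] so that, measured in units of \(\varphi_{0}\), the fermion and scalar masses scale as \(g\) and \(g^{2}\) respectively. This is exactly the hierarchy quantified below.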
This is equivalent to considering the \(O(1)\) "patch" of parameter space in which the bare fermion and scalar masses are sufficiently small, but nevertheless non-zero. This has the advantage of being highly predictive. Ignoring matrix structure, two parameters control the dark matter abundance. We choose to work in terms of the Higgs-portal coupling \(A\), and the physical scalar mass \(m_{\phi}\) at \(T=0\). Fixing the dark matter relic abundance determines \(A\) as a function of \(m_{\phi}\) (assuming initial conditions are erased dynamically). Neutrino masses are generated via a standard seesaw mechanism, and the relevant neutrino textures are accommodated by appropriately choosing the Yukawa couplings to charged leptons, \(\mathcal{L}\supset YLHN\). An investigation of the broader parameter space of the model lies beyond the scope of this paper, and will be pursued separately. Figure 1: The effective potential, \(V_{\rm eff}(\varphi)\), at zero temperature (solid black line), and at finite temperature (solid orange line) in the vicinity of the minimum. We use \(\varphi\) for the homogeneous mode of the scalar field. ### Summary of results Before entering the details of the paper, let us sketch our main conclusions. Focusing on regions of parameter space where the scalar is light naturally pushes us towards the weak coupling limit in which \(g\ll 1\). We take \(A\) and \(g\) as two independent parameters which together radiatively generate the rest of the terms in the Lagrangian. For a fixed \(A\) and small coupling, \(g\ll 1\), the radiative stabilization mechanism described above naturally generates the following hierarchy of scales \[\underbrace{\varphi_{0}}_{O(1)}\gg\underbrace{M_{N}}_{O(g)}\gg\underbrace{m_{\phi}}_{O(g^{2})}\, \tag{3}\] where we measure scales relative to the vev of the field. This hierarchy occurs because the RHN mass arises at tree level, and the scalar mass at one loop from the fermion mass. Noting that \(\varphi_{0}\propto A\mathrm{v}^{2}/m_{\phi}^{2}\) then fixes the formal scaling of \(A\mathrm{v}^{2}\) with respect to \(g\) since we have counted \(\varphi_{0}\) as \(O(1)\). This scaling demands \(A\ll m_{\phi}\) such that the Higgs-portal contribution to the scalar mass is negligible relative to the RHN mediated loop. It is also interesting to interpret Eq. (3) with \(A\) viewed as an input. From this IR perspective \(A\) parameterizes a soft deformation of the secluded theory (with no coupling to the SM) which subsequently induces a tower of increasingly UV scales, each parametrically larger by factors of \(1/g^{n}\). It is the interplay between this soft breaking, the small Yukawa coupling \(g\), and the electroweak scale which induces the hierarchy outlined above. Remarkably, we find that the radiatively generated parameter space of the model is able to accommodate the observed relic abundance of dark matter in regions that are currently untested. We find that the relic energy density in the coherent oscillating \(k=0\) mode of \(\phi\) is given by (up to a slowly varying logarithm) \[\Omega_{\phi}h^{2}=0.2\bigg{(}\frac{A}{1\ \mu\mathrm{eV}}\bigg{)}^{2}\bigg{(}\frac{m_{\phi}}{3\ \mathrm{meV}}\bigg{)}^{-11/4}. \tag{4}\] The parameter space differs substantially from the predictions of [4] because the dark matter evolves via a non-linear differential equation due to the quartic (and higher order) terms in the CW potential. This introduces a number of other interesting dynamical features which we discuss below.
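As a quick numerical illustration (ours, not the authors' code), the scaling in Eq. (4) can be inverted for the portal coupling \(A\) that reproduces the observed abundance; the short Python sketch below does this, dropping the slowly varying logarithm and the \(O(1)\) refinements discussed in Section 3. The function name and benchmark masses are our own choices.

```python
# Minimal numeric sketch of the relic-abundance scaling in Eq. (4).
# Inverts Omega_phi h^2 = 0.2 (A / 1 ueV)^2 (m_phi / 3 meV)^(-11/4)
# for the Higgs-portal coupling A giving the observed abundance.

OMEGA_DM_H2 = 0.12  # observed dark matter abundance [10]

def A_for_relic(m_phi_meV, omega_h2=OMEGA_DM_H2):
    """Return A in ueV reproducing omega_h2, per Eq. (4)."""
    return (omega_h2 / 0.2) ** 0.5 * (m_phi_meV / 3.0) ** (11.0 / 8.0)

for m in (0.1, 1.0, 3.0, 10.0):  # scalar masses in meV
    print(f"m_phi = {m:5.1f} meV  ->  A ~ {A_for_relic(m):.2e} ueV")
```

For meV-scale scalar masses this returns micro-eV-scale couplings, consistent with the relic density line of Fig. 2.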
Finally, perhaps most interesting from the perspective of laboratory tests of the model, we find that \(M_{N}\) _always lies below the weak scale_. More quantitatively, when expressed in terms of the zero temperature mass, \(m_{\phi}\), and the Higgs-portal coupling \(A\), we find \[M_{N}\sim\bigg{(}\frac{A}{m_{\phi}}\bigg{)}^{1/2}\mathrm{v}. \tag{5}\] Benchmarking against the predictions of \(A(m_{\phi})\) from the dark matter relic abundance, we generically predict RHNs in the mass range \(M_{N}\in[1\ \mathrm{GeV},10\ \mathrm{GeV}]\). ### Outline of paper The rest of the paper is organized as follows: In Section 2 we present the Lagrangian, and discuss the model's zero temperature dynamics. Next, in Section 3 we consider the dynamics of the scalar field in the early universe. The bulk of our analysis is contained in Sections 3.1 and 3.2 where we predict the dark matter relic abundance. In Section 4 we discuss experimental tests of the model, including fifth force and RHN searches. Finally, in Section 5 we summarize our findings and comment on future directions. ## 2 Model definition and dynamics at zero temperature The model we consider in this work is fully specified by the following Lagrangian, \[\begin{split}\mathcal{L}=\frac{1}{2}(\partial\phi)^{2}-V(\phi)+\overline{N}^{c}\sigma_{\mu}\partial^{\mu}N^{c}&-\tfrac{1}{2}M_{ij}N_{i}^{c}N_{j}^{c}-\tfrac{1}{2}g_{ij}\phi N_{i}^{c}N_{j}^{c}\\ &-Y_{ij}\tilde{H}LN^{c}-A\phi|H|^{2}-\tfrac{1}{2}B\phi^{2}|H|^{2}\ +\ \text{h.c.},\end{split} \tag{6}\] where \(V(\phi)=\tfrac{1}{2}c_{2}\phi^{2}+\tfrac{1}{6}c_{3}\phi^{3}+\tfrac{1}{24}c_{4}\phi^{4}\); we do not consider higher dimensional operators in this analysis.3 We have written the Lagrangian such that the first line contains all of the terms that are present when the dark sector is fully secluded from the Standard Model. We have included bare masses \(\tfrac{1}{2}M_{ij}N_{i}^{c}N_{j}^{c}\) and \(\tfrac{1}{2}c_{2}\phi^{2}\), but as discussed above, we take these as small, and their physical values will be radiatively generated. Footnote 3: Strictly speaking quantum corrections destabilize the effective potential at large field values, and higher dimensional operators may be required to ensure a bounded Hamiltonian. The dynamics we consider here are insensitive to these choices, and we do not discuss them further. It is interesting to first consider the dark sector in isolation, and to ask which symmetries could be imposed on this now secluded sector that would protect the \(M_{N}=0\) limit while still allowing a \(\phi N^{c}N^{c}\) coupling. We consider a discrete \(\mathbb{Z}_{4}\) lepton number symmetry under which \(\phi\) carries charge 2 and \(N^{c}\) carries charge 1. This has the additional consequence of forbidding a cubic term in the potential \(V(\phi)\) such that it is technically natural to set the cubic coupling to zero. Figure 2: Parameter space in the \(m_{\phi}\)-\(A\) plane. The solid black line corresponds to a relic density of \(\Omega_{\phi}h^{2}=0.12\) [10] derived using Eqs. (36) and (41). For comparison, we show the results in a quadratic potential (teal), taken from [4]. BBN constraints demand \((\Gamma_{N})^{-1}>0.1\ \text{s}\) [11], which we impose by demanding \(M_{N}\geq 1\ \text{GeV}\). Dynamics near the electroweak phase transition do not admit a simple analytic description. In orange we show inverse square law tests (solid orange) [12, 13, 14, 15] and a future projection (dashed orange) from the HUST group [16]. While this is not strictly speaking necessary for the phenomenology
we consider, it simplifies the analysis considerably, and so we choose to impose the \(\mathbb{Z}_{4}\) symmetry on the secluded sector in what follows; this will be softly broken by \(A\phi|H|^{2}\) (as well as by \(YLHN\)). Similarly we will suppress matrix indices for the sake of simplicity, implicitly assuming \(g_{ij}=g\delta_{ij}\). ### Coleman-Weinberg potential Let us now analyze the CW potential of the dark sector in the secluded limit. We will always use \(\varphi\) when referring to the classical dynamics of the scalar field, and \(\phi\) to denote quantum excitations. Following the standard construction, i.e. introducing background-field-dependent masses and appropriate counterterms, we then fix the Coleman-Weinberg potential with the following renormalization conditions (with primes denoting derivatives) [17] \[\left[V_{\text{CW}}\right]_{\varphi=0}=\Lambda^{4}\, \tag{7}\] \[\left[V^{\prime\prime}_{\text{CW}}\right]_{\varphi=0}=0\, \tag{8}\] \[\left[V^{\prime\prime\prime\prime}_{\text{CW}}\right]_{\varphi=\varphi_{R}}=\lambda_{R}. \tag{9}\] Eq. (8) _defines_ the boundary between a broken and unbroken phase in the classical theory. The pole mass, \(m_{\phi}\), is defined as the curvature of the effective potential at _its minimum_, i.e. not at \(\varphi=0\). At the order we are working the renormalization conditions for \(g\) and \(M\) are immaterial. Setting \(M\) to zero, we arrive at the renormalized CW potential \[V_{\text{CW}}(\varphi;A=0)=\frac{1}{4!}\varphi^{4}\left(\frac{75g^{4}}{8\pi^{2}}+\lambda_{R}\right)-\frac{3g^{4}\varphi^{4}}{32\pi^{2}}\log\left(\frac{\varphi^{2}}{\varphi_{R}^{2}}\right)+\Lambda^{4}. \tag{10}\] The RHNs are fermions, and their quantum corrections do not induce symmetry breaking. Let us note that the stability of the theory depends on the choice of \(\lambda_{R}\) and \(\varphi_{R}\), which is unsurprising; a negative quartic coupling yields a potential that is unbounded from below. In fact, these two quantities together form a renormalization group (RG) invariant combination. If \(\varphi_{R}\) is varied, then \(\lambda_{R}\) adjusts itself such that physical predictions are unaffected. Let us now softly break our \(\mathbb{Z}_{4}\) symmetry with the introduction of the Higgs-portal coupling proportional to \(A\). A simple spurion analysis tells us that the coefficient of the cubic correction we have neglected, \(\phi^{3}\), will be proportional to \(A^{3}/m_{H}^{2}\ll A\). Therefore, even in the presence of the Higgs portal, neglecting the cubic interaction of the scalar potential is still an excellent approximation. The parameters we consider here are such that the Higgs field's dynamics are only slightly perturbed by the presence of the secluded sector. We may therefore set the Higgs field to its Standard Model vev at \(T=0\). Since \(A/m_{H}\ll 1\) it is legitimate to neglect loop corrections from the Higgs portal. The CW potential then assumes the form, \[V_{\text{CW}}(\varphi)=A|H|^{2}\varphi+\frac{1}{4!}\varphi^{4}\left(\frac{75g^{4}}{8\pi^{2}}+\lambda_{R}\right)-\frac{3g^{4}\varphi^{4}}{32\pi^{2}}\log\left(\frac{\varphi^{2}}{\varphi_{R}^{2}}\right)+\Lambda^{4}. \tag{11}\] Higher order corrections proportional to \(\lambda_{R}^{2}\) would appear in a more general treatment; however, we take \(\lambda_{R}^{2}\sim O(g^{8})\) in our counting (i.e.
assuming \(\lambda_{R}\) to be radiatively generated) and so we neglect these contributions. Depending on the choice of \(\lambda_{R}\) the potential either has a minimum at \(\varphi_{0}<0\) or a maximum at \(\varphi_{0}>0\). In both cases \(|\varphi_{0}|\sim O(g^{-4/3})\). We demand that \(\lambda_{R}\) be such that the potential has a minimum at a negative value of \(\varphi_{0}\). It is convenient to choose \(\varphi_{R}\sim(A\text{v}^{2}/g^{4})^{1/3}\), such that \(\log(\varphi_{0}/\varphi_{R})\) is \(O(1)\) in the vicinity of the minimum, in which case requiring \(\lambda_{R}\geq-\frac{75g^{4}}{8\pi^{2}}\) is sufficient. The potential minimum in our model will generally be metastable, but with an extremely high potential barrier. ### Mass spectrum In what follows, we make the RG-invariant choice: \(\lambda_{R}=0\) and \(\varphi_{R}=-\sqrt[3]{(8\pi^{2}A\mathrm{v}^{2})/(11g^{4})}\). These parameter choices do not qualitatively affect the discussion in what follows beyond ensuring the potential has a stable minimum.4 The vev of \(\phi\) is then given by Footnote 4: To remain in a “radiatively generated” region of parameter space the scaling \(\lambda_{R}\sim O(g^{4})\) should be respected. \[\varphi_{0}=\varphi_{R}=-\bigg{(}\frac{8\pi^{2}A\mathrm{v}^{2}}{11g^{4}}\bigg{)}^{1/3}\, \tag{12}\] while the mass of the scalar at \(T=0\) (i.e. of the quantum fluctuations around the minimum) is given by \[m_{\phi}^{2}=\big{[}V^{\prime\prime}(\varphi)\big{]}_{\varphi=\varphi_{0}}=\frac{27}{2}\bigg{(}\frac{A\mathrm{v}^{2}g^{2}}{11\pi}\bigg{)}^{2/3}. \tag{13}\] We may then trade \(g\) for \(m_{\phi}\), \[g=\frac{2^{3/4}(11\pi)^{1/2}}{3^{9/4}}\bigg{(}\frac{m_{\phi}^{3}}{A\mathrm{v}^{2}}\bigg{)}^{1/2}. \tag{14}\] The above quantities can then be expressed in terms of the pole mass rather than the Yukawa \(g\); for example Eq. (12) may be re-written as \[\varphi_{0}=\varphi_{R}=-\frac{27}{11}\frac{A\mathrm{v}^{2}}{m_{\phi}^{2}}. \tag{15}\] Notice that this differs by an \(O(1)\) factor from the relationship between \(A\), \(m_{\phi}\), and \(\varphi_{0}\) when \(m_{\phi}\) is dominated by its tree-level value [1]. Finally, the physical mass of the RHNs (again neglecting matrix structure) is given by \[M_{N}=g|\varphi_{0}|=6^{3/4}\sqrt{\frac{\pi}{11}}\bigg{(}\frac{A}{m_{\phi}}\bigg{)}^{1/2}\ \mathrm{v}. \tag{16}\] This is our first major result: the mass of the RHNs is parametrically tied to the weak scale. The ratio \(A/m_{\phi}\sim O(g^{2})\) when the mass of \(\phi\) is radiatively generated. We therefore conclude that the mass of RHNs will always lie below the weak scale and can be searched for using accelerator and fixed-target facilities (see Section 4.2). Neutrino masses are generated via the standard seesaw mechanism. Assuming no special flavor structure in the RHN mass matrix we expect the standard type-I seesaw formula to apply \[m_{\nu}^{ij}=m_{D}^{i\alpha}M_{\alpha\beta}^{-1}m_{D}^{\beta j}\, \tag{17}\] where \(m_{D}=Yv\) is the Dirac mass matrix connecting active and sterile neutrinos, and all indices run from 1 to 3; the raising and lowering of indices is done to de-clutter the notation. As we discuss below, the Yukawa textures play no substantial role in the dark matter dynamics. For instance, a pseudo-Dirac pair of RHNs may allow for an inverse seesaw mechanism to operate, such that \(m_{\nu}\sim m_{D}^{2}M^{-2}\mu\) where \(\mu\) is the mass splitting between the pseudo-Dirac pair [18, 19].
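To make the chain of relations concrete, the following minimal Python sketch (ours, not the authors') evaluates Eqs. (12)-(16) for an illustrative benchmark. It uses the paper's convention \(\mathrm{v}^{2}=\langle|H|^{2}\rangle\), i.e. \(\mathrm{v}\approx 174\) GeV; the input values of \(A\) and \(m_{\phi}\) are assumptions chosen near the relic-density line of Fig. 2.

```python
# Sketch evaluating Eqs. (12)-(16): given the Higgs-portal coupling A
# and the zero-temperature scalar mass m_phi, recover the Yukawa g,
# the vev varphi_0, and the RHN mass M_N. All energies in eV.
import math

V_EW = 174e9  # eV; v^2 = <|H|^2> in the convention of Eq. (1)

def spectrum(A_eV, m_phi_eV):
    g = 2**0.75 * (11 * math.pi)**0.5 / 3**2.25 \
        * (m_phi_eV**3 / (A_eV * V_EW**2))**0.5             # Eq. (14)
    varphi0 = -(27.0 / 11.0) * A_eV * V_EW**2 / m_phi_eV**2  # Eq. (15)
    M_N = g * abs(varphi0)                                   # Eq. (16)
    return g, varphi0, M_N

# Illustrative benchmark: A = 1 ueV, m_phi = 3 meV
g, varphi0, M_N = spectrum(A_eV=1e-6, m_phi_eV=3e-3)
print(f"g       ~ {g:.2e}")
print(f"varphi0 ~ {abs(varphi0)/1e9:.1e} GeV")
print(f"M_N     ~ {M_N/1e9:.2f} GeV")
```

For this benchmark the sketch returns \(g\sim 8\times 10^{-13}\), \(|\varphi_{0}|\sim 10^{13}\) GeV and \(M_{N}\sim 6.5\) GeV, in line with the benchmark coupling in Eq. (21), the vev scale quoted in Footnote 7, and the GeV-scale RHN masses of Section 4.2.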
This scenario gives a technically natural origin of a small mass splitting which can be advantageous for leptogenesis [20, 21]. As an aside, let us also note that the model has an additional radiative neutrino mass mechanism. A diagram with the Higgs running in the loop with a single vev insertion of \(\phi\) yields a neutrino mass that scales as \[m_{\nu}\sim\frac{6gY^{2}}{16\pi^{2}}\varphi_{0}\log\!\big{(}m_{\nu}^{2}/m_{H}^{2}\big{)}. \tag{18}\] Although we generically expect this to be sub-dominant to the standard seesaw mass, it may contribute appreciably when \(A/m_{\phi}\) is not too small, or when there is fine tuning such that \(|g|\;Y_{ij}Y_{ji}\ll Y_{ij}g_{jk}Y_{ki}\) with \(|g|\) the typical value of an entry in \(g_{jk}\). For the parameter space preferred by the dark matter relic abundance, this contribution is negligible and we do not consider it further. ### Decay rates Let us now turn to the properties of the RHNs. Because of their couplings to the scalar field, a new decay pathway is available. For two of the RHNs we will have \(N_{i}\to N_{j}\phi\) with \(j<i\) (assuming labeling is ordered in mass). For the lightest RHN the decay pathway \(N\to\nu\phi\) is available and scales as \[\Gamma_{N\to\nu\phi}=\frac{g^{2}M_{N}}{8\pi}\sum_{\alpha}\theta_{\alpha}^{2}\, \tag{19}\] where \(\theta_{\alpha}\) is the mixing between the state \(N_{1}\) and the three active neutrinos. This rate should be compared against the muon-like decay formula \[\Gamma_{N\to\ell\nu\nu}=\frac{G_{F}^{2}{M_{N}}^{5}}{192\pi^{2}}\sum_{\alpha}\theta_{\alpha}^{2}. \tag{20}\] The same mixing angle appears in both cases, and so we see the relevant comparison is \(g^{2}/8\pi\) vs \(G_{F}^{2}{M_{N}}^{4}/(192\pi^{2})\). In what follows we find that \(g^{2}\ll G_{F}^{2}{M_{N}}^{4}\) such that the RHN's lifetime is not substantially modified by the scalar decay mode. The scalar is itself unstable. The decay rate is given parametrically by \(\Gamma_{\phi}\sim\theta^{4}g^{2}m_{\phi}\). For a scalar mass around 1 meV and assuming a single massless active neutrino such that \(\phi\to\nu\nu\) is allowed,5 the scalar lifetime is given roughly by Footnote 5: If all neutrinos are heavier than \(\phi\), then \(\phi\to\gamma\gamma\) is the leading decay channel. This is even slower than \(\phi\to\nu\nu\) and never threatens dark matter stability on cosmological timescales [1]. \[\tau_{\phi}\sim 3\times 10^{19}\ {\rm Gyr}\left(\frac{5\times 10^{-13}}{g}\right)^{2}\left(\frac{1\ {\rm meV}}{m_{\phi}}\right)\left(\frac{10^{-5}}{\theta}\right)^{4}. \tag{21}\] Comparing to the age of the universe, \(\tau_{U}\sim 13.8\) Gyr, the dark matter is clearly stable on cosmological timescales. ### Scalar couplings to matter Finally, let us briefly review constraints on a light scalar mixing with the Higgs. More detailed discussions can be found in [1, 4]. The \(A\phi|H|^{2}\) coupling induces mixing between the Higgs boson, \(h\), and \(\phi\) after electroweak symmetry breaking. The coupling of the Higgs to matter then induces couplings between \(\phi\) and matter. The largest such coupling for nucleons comes from heavy quarks and is transmitted to hadrons via an anomaly matching condition [22, 23, 24]. In the low energy description the coupling is dominantly to gluons. Electron couplings are simply proportional to the electron Yukawa.
The result is that \(\phi\) couples to nucleons and electrons with a strength given by \[g_{\phi NN}\sim\frac{A\Lambda_{\rm had}}{m_{h}^{2}}\quad,\quad g_{\phi ee}=\frac{Am_{e}}{m_{h}^{2}}\, \tag{22}\] where \(\Lambda_{\rm had}\) is a typical hadronic scale on the order of the nucleon mass. These couplings can induce a Yukawa-like fifth force between test bodies and be used to search for an ultralight scalar [12, 13, 14, 15]. ## 3 Dynamics in the early universe In the original proposal of Piazza and Pospelov [1] the dark matter abundance was set by a choice of initial conditions for the misalignment of the scalar field. Oscillations onset at \(3H=m_{\phi}\) and the dark matter relic density can be estimated straightforwardly given the initial misalignment. Batell, Ghalsasi, and Rai demonstrated in [4] that there exists a regime in which the misalignment that determines the relic abundance is dictated by thermal properties. This then supplies a predictive relic density target, at least so long as the initial misalignment is not too large (see Footnote 7). The dynamics of the scalar field in the CW potential differ qualitatively from the quadratic case considered in [1, 4]. In what follows we describe the evolution of the scalar field. We focus on initial conditions that are sufficiently small such that thermal misalignment dominates (see Section 3.6). Much of our analysis mirrors the setup in [4] and so we do not belabor points that are discussed in detail there. What is new is the shape of the effective potential, being stabilized by radiative corrections rather than by the bare scalar mass, and new degrees of freedom in the form of RHNs. ### Thermal misalignment & relic abundance In this section we consider the relic abundance generated by the thermal misalignment mechanism assuming that initial conditions are sufficiently small such that the dark matter misalignment is dominated by thermal effects. We focus on parameter space where oscillations begin before the electroweak phase transition. We consider the evolution of the field's \(k=0\) mode, \(\varphi:=\phi_{k=0}\), using the equations of motion, \[\ddot{\varphi}+3H\dot{\varphi}+\frac{\partial}{\partial\varphi}V_{\rm eff}(\varphi)=0\, \tag{23}\] where dotted derivatives correspond to \(\mathrm{d}/\mathrm{d}t\). It is convenient to recast this equation in terms of temperature in a radiation dominated epoch, where \(H=\gamma T^{2}/M_{\rm Pl}\), with \(M_{\rm Pl}=2.43\times 10^{18}\) GeV the reduced Planck mass and \(\gamma=\sqrt{\pi^{2}g_{*}(T)/90}\). Using the Jacobian \({\rm d}T/{\rm d}t=-HT\), we can rewrite the time derivatives as temperature derivatives,6 Footnote 6: Using \[\dot{\varphi}=\frac{{\rm d}\varphi}{{\rm d}T}\frac{{\rm d}T}{{\rm d}t}=-\frac{{\rm d}\varphi}{{\rm d}T}HT,\quad\text{and}\quad\ddot{\varphi}=\frac{{\rm d}}{{\rm d}t}\ \left(-\frac{{\rm d}\varphi}{{\rm d}T}HT\right)=\frac{{\rm d}^{2}\varphi}{{\rm d}T^{2}}H^{2}T^{2}+3\frac{{\rm d}\varphi}{{\rm d}T}H^{2}T.\] \[\frac{{\rm d}^{2}\varphi}{{\rm d}T^{2}}=-\frac{1}{H^{2}T^{2}}\frac{\partial V_{\rm eff}}{\partial\varphi}\ =-\frac{M_{\rm Pl}^{2}}{\gamma^{2}T^{6}}\ \frac{\partial V_{\rm eff}}{\partial\varphi}. \tag{24}\] In practice we set \(g_{*}\approx 106.75=(\text{const})\) for numerical estimates above the electroweak scale [25]. The effective potential contains contributions from the zero temperature and finite temperature potentials.
For the finite temperature piece we make use of the thermal functions \(J_{B}\) and \(J_{F}\) for bosons and fermions, respectively [26]. At early epochs the only thermal degree of freedom with coupling to \(\phi\) is the Higgs and so, following [1], we include only the contributions of the Higgs field and the Goldstone bosons, \(\chi\), \[V_{\rm eff}=V_{\rm CW}(\varphi)+\frac{1}{2\pi^{2}}\,T^{4}J_{B}\left[\frac{m_{h}^{2}(\varphi,h,T)}{T^{2}}\right]+\frac{3}{2\pi^{2}}\,T^{4}J_{B}\left[\frac{m_{\chi}^{2}(\varphi,h,T)}{T^{2}}\right]. \tag{25}\] The RHNs can also contribute if they thermalize, but this turns out to happen after the electroweak phase transition; we defer a discussion to Section 3.4. The effective masses are given simply by \(m^{2}=m_{0}^{2}+\Pi(T)\), where \(m_{0}^{2}\) is the zero temperature effective mass and \(\Pi(T)\) is the thermal self-energy. The only dependence on \(\varphi\) enters in the zero temperature effective mass, \[m_{h,0}^{2}(\varphi,h)=-\mu^{2}+3\lambda h^{2}+A\varphi\, \tag{26}\] \[m_{\chi,0}^{2}(\varphi,h)=-\mu^{2}+\lambda h^{2}+A\varphi\, \tag{27}\] where \(h\) is the value of the Higgs field's \(k=0\) mode. The self-energies are proportional to \(T^{2}\), \[\Pi(T)=T^{2}\left(\frac{3}{16}g^{2}+\frac{1}{16}g^{\prime 2}+\frac{1}{4}y_{t}^{2}+\frac{1}{2}\lambda\right), \tag{28}\] where (in this equation only) \(g\), \(g^{\prime}\), \(y_{t}\) and \(\lambda\) are the electroweak gauge coupling, the hypercharge gauge coupling, the Yukawa coupling of the top quark and the quartic coupling of the Standard Model Higgs potential, respectively. At high temperatures \(\Pi(T)\) dominates the argument of the bosonic thermal function, and \(J^{\prime}_{B}[\Pi(T)/T^{2}]\approx 0.5\) (numerically) [4]. The derivative of the effective potential is then given by \[\begin{split}\frac{\partial V_{\rm eff}}{\partial\varphi}&=\frac{g^{4}}{8\pi^{2}}(11-3\mathsf{L})\varphi^{3}+\frac{A}{2\pi^{2}}T^{2}\left(J^{\prime}_{B}\left[\frac{m_{h}^{2}(\varphi,h,T)}{T^{2}}\right]+3J^{\prime}_{B}\left[\frac{m_{\chi}^{2}(\varphi,h,T)}{T^{2}}\right]\right)\\ &\simeq\frac{g^{4}}{8\pi^{2}}(11-3\mathsf{L})\varphi^{3}+\frac{A}{\pi^{2}}T^{2}+\ldots\quad\text{ for}\quad T\gg\mathrm{v}\ \,\end{split} \tag{29}\] where \(\mathsf{L}=\log\bigl{(}\varphi^{2}/\varphi_{R}^{2}\bigr{)}\). This is the thermal misalignment mechanism introduced in [2, 3]. The linear piece tilts the potential at the origin, resulting in a temperature dependent minimum \[\varphi_{\rm min}(T)\simeq-\frac{27}{(11\pi)^{2/3}}\left(\frac{1}{11-3\mathsf{L}_{\rm min}}\right)^{1/3}\frac{A\mathrm{v}^{2}}{m_{\phi}^{2}}\left(\frac{T}{\mathrm{v}}\right)^{2/3}\, \tag{30}\] where \(\mathsf{L}_{\rm min}=\log\bigl{(}\varphi_{\rm min}^{2}/\varphi_{R}^{2}\bigr{)}\) can be determined self-consistently via an iterative procedure. The thermally induced tilt drives the field to negative values even in the presence of Hubble friction and provides a mechanism for erasing initial conditions.7 Footnote 7: Thermal misalignment can generically erase initial conditions on the order of the zero temperature vev \(\varphi_{0}\sim A\mathrm{v}^{2}/m_{\phi}^{2}\) which is typically on the order of \(10^{13}\) GeV for the parameters we consider. At high temperatures, Hubble friction is large and the field's dynamics are dominated by the thermal misalignment.
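The high-temperature dynamics can be made dimensionless: writing \(t=T/T_{\rm osc}\) and \(y=\varphi/|\varphi_{\rm pre}(T_{\rm osc})|\) (with \(\varphi_{\rm pre}\) and \(T_{\rm osc}\) defined in Eqs. (31) and (32) below), setting \(\mathsf{L}=0\), one can check that Eqs. (24) and (29) collapse to \(y''=-9y^{3}/t^{6}-6/t^{4}\), independent of \(A\) and \(m_{\phi}\). The Python sketch below (our own rescaling, not the authors' code) integrates this equation down to the onset of oscillations and illustrates that small initial misalignments simply ride along the thermal drift solution \(y_{\rm pre}(t)=-1/t^{2}\).

```python
# Dimensionless sketch of Eq. (24) above the electroweak scale,
# with t = T/T_osc and y = phi/|phi_pre(T_osc)|, L = 0:
#     y'' = -9 y^3 / t^6  -  6 / t^4 .
# The drift (attractor) solution is y_pre(t) = -1/t^2, reaching -1
# at the onset of oscillations t = 1.
from scipy.integrate import solve_ivp

def rhs(t, state):
    y, dy = state
    return [dy, -9.0 * y**3 / t**6 - 6.0 / t**4]

for y_I in (0.0, +0.05, -0.05):  # small initial misalignments
    sol = solve_ivp(rhs, (30.0, 1.0), [y_I, 0.0],  # integrate T downwards
                    rtol=1e-10, atol=1e-12)
    print(f"y_I = {y_I:+.2f}:  y(t=1) = {sol.y[0, -1]:+.4f}"
          "   (drift solution: -1)")
```

For \(|y_{I}|\ll 1\) the endpoint is insensitive to the initial condition, consistent with the erasure mechanism described in Footnote 7.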
The initial phase of thermal misalignment is then given by, \[\varphi_{\rm pre}(T)=-\frac{AM_{\rm Pl}^{2}}{6\pi^{2}\gamma^{2}T^{2}}+\varphi_{I}\quad\text{ for}\quad T\gg T_{\rm osc}\, \tag{31}\] where \(T_{\rm osc}\) is the temperature at which \(\varphi\) starts oscillating, and will be defined below. The initial condition, \(\varphi_{\rm I}\), is set at some large temperature where \(\dot{\varphi}_{I}=0\). As \(T\) decreases, the field value drifts towards increasingly negative values. The solution in Eq. (31) is valid at high temperatures while the Hubble friction is still effective. Examining the equations of motion,8 one sees that \(m_{\rm eff}^{2}(\varphi)=\partial_{\varphi}V_{\rm CW}(\varphi)/\varphi\) plays the same role as a fixed mass in a harmonic oscillator. The condition for the onset of oscillations is \([3H]^{2}=[m_{\rm eff}(\varphi_{\rm pre})]^{2}\) evaluated at \(T=T_{\rm osc}\), where we use Eq. (31) to compute \(m_{\rm eff}^{2}\). Using \(m_{\rm eff}^{2}=g^{4}\varphi_{\rm pre}^{2}(11-3\mathsf{L}_{\rm osc})/(8\pi^{2})\) where \(\mathsf{L}_{\rm osc}=\log\bigl{(}\varphi^{2}/\varphi_{R}^{2}\bigr{)}\) we find, Footnote 8: In a harmonic potential one would have \(\ddot{\varphi}+3H\dot{\varphi}+m^{2}\varphi=F\), with \(F\) an “external force”. Whereas we have \(\ddot{\varphi}+3H\dot{\varphi}+\bigl(\partial_{\varphi}V_{\rm CW}/\varphi\bigr)\varphi=F\). \[T_{\rm osc}=\frac{\mathrm{v}}{3}\left(\frac{M_{\rm Pl}m_{\phi}}{\gamma\mathrm{v}^{2}}\right)^{3/4}\left(\frac{11^{1/4}(11-3\mathsf{L}_{\rm osc})^{1/8}}{3^{5/8}\ 2^{1/4}\pi^{1/2}}\right)\, \tag{32}\] where \(\mathsf{L}_{\rm osc}\) can be estimated via the same iterative procedure described above, \[\mathsf{L}_{\rm osc}\approx\log\left(\frac{11m_{\phi}M_{\rm Pl}}{6\pi^{2}\sqrt{3}\gamma\sqrt{11-3\mathsf{L}_{\rm osc}}{\rm v}^{2}}\right). \tag{33}\] This iterative procedure converges provided \(\varphi^{2}/\varphi_{R}^{2}\lesssim 39\), which is equivalent to demanding \(\mathsf{L}_{\rm osc}\lesssim 11/3\). This can be re-written in terms of the scalar mass as \(m_{\phi}\lesssim m_{\phi}^{(c)}\) with \(m_{\phi}^{(c)}\sim 10~{}{\rm meV}\). The critical mass, \(m_{\phi}^{(c)}\), depends on \(\lambda_{R}\), and our numerics use \(\lambda_{R}=0\). Above the critical mass, the tilt of the potential is too strong at the onset of oscillations and there is no local minimum. For \(m_{\phi}\gtrsim 10~{}{\rm meV}\), the dynamics demand a tree-level quartic coupling \(\lambda_{R}>0\) to stabilize the potential at the relevant epoch. The low temperature dynamics can still be dominated by the CW potential in certain regions of parameter space, however it is clear that if \(m_{\phi}\gg m_{\phi}^{(c)}\), then the viable parameter space for dark matter could not be radiatively generated from \(A\) and \(g\). Let us note that \(\varphi_{\rm pre}(T_{\rm osc})\) is rather close to the minimum \(\varphi_{\rm min}(T_{\rm osc})\), such that a harmonic approximation is valid almost immediately after the onset of oscillations. The mass of \(\phi\), which we define as \(\mu_{\phi}^{2}(T)=[V^{\prime\prime}(\varphi)]_{\varphi=\varphi_{\rm min}}\), will be temperature dependent due to the drifting minimum \[\mu_{\phi}^{2}(T)=m_{\phi}^{2}\left(1-\frac{\mathsf{L}_{\rm min}}{3}\right)\left(\frac{T}{{\rm v}}\right)^{4/3}\left(\frac{11/\pi^{2}}{11-3\mathsf{L}_{\rm min}}\right)^{2/3}.
\tag{34}\] The slow variation of the minimum, \(\varphi_{\rm min}\propto T^{2/3}\), relative to Hubble, \(H\propto T^{2}\), and by proxy the oscillation frequency, means that the shifting of the potential minimum is effectively adiabatic. The scalar oscillates around the minimum and behaves like cold dark matter almost immediately. With this picture in mind it is clear that the amplitude of the scalar oscillations _relative to the minimum_ determines the relic abundance. It turns out that this is given by \[\varphi_{\rm osc}=|\varphi_{\rm pre}(T_{\rm osc})-\varphi_{\rm min}(T_{\rm osc})|=\left(\sqrt[3]{3/2}-1\right)\times|\varphi_{\rm min}(T_{\rm osc})|\approx 0.145\times|\varphi_{\rm min}(T_{\rm osc})|. \tag{35}\] Numerical solutions of Eq. (24) show that the inclusion of the temperature dependent Higgs vev, v, simply adiabatically transfers the oscillating solution to the final zero temperature minimum, as is shown in Fig. 3. The relic scalar abundance \(\Omega_{\phi}\) can be expressed in terms of model parameters. The dynamics are adiabatic, such that the number of particles per comoving volume is conserved and \(n_{\varphi}/s={\rm const}\), where \(s=\frac{2\pi^{2}}{45}g_{*}(T)T^{3}\) [28]. The number density at \(T=T_{\rm osc}\) is given by \(n(T_{\rm osc})=\rho(T_{\rm osc})/\mu_{\phi}(T_{\rm osc})\), where \(\rho(T_{\rm osc})\simeq\frac{1}{2}\mu_{\phi}^{2}(T_{\rm osc})(\varphi_{\rm pre}(T_{\rm osc})-\varphi_{\rm min}(T_{\rm osc}))^{2}\) is measured relative to the instantaneous minimum of the potential. The thermal misalignment therefore generates a dark matter abundance of [28, 29, 30] \[\Omega_{\phi}h^{2}=\frac{\rho(T_{\rm osc})}{\rho_{c}}\bigg{(}\frac{m_{\phi}}{\mu_{\phi}(T_{\rm osc})}\bigg{)}\bigg{(}\frac{T_{0}}{T_{\rm osc}}\bigg{)}^{3}\frac{g_{*}(T_{0})}{g_{*}(T_{\rm osc})}=0.15\bigg{(}\frac{A}{1~{}\mu{\rm eV}}\bigg{)}^{2}\bigg{(}\frac{3~{}{\rm meV}}{m_{\phi}}\bigg{)}^{11/4}. \tag{36}\] Here, \(\rho_{c}\approx 10^{-5}~{}{\rm GeV}/{\rm cm}^{3}\) [10], and \(g_{*}(T_{0})=3.91\) while we take \(g_{*}(T_{\rm osc})=106.75\) [25], and \(T_{0}=2.7~{}{\rm K}\) is the temperature of the universe today [31]. For thermal misalignment to be operational, we require that \(T_{\rm osc}\gtrsim T_{\rm EW}\). Otherwise the scalar field will be stuck by Hubble friction until after the electroweak phase transition. Demanding \(T_{\rm osc}\geq 170~{}{\rm GeV}\) [27, 32] requires \(m_{\phi}\geq 0.35~{}{\rm meV}\). We now turn to lower masses where oscillations begin after the electroweak phase transition. ### VEV misalignment Next consider the case where \(T_{\rm osc}\ll T_{\rm EW}\). In this limit the electroweak symmetry is broken, the Higgs' contribution to the thermal functions is Boltzmann suppressed, and the dynamics are governed by \[\ddot{\varphi}+3H\dot{\varphi}+m_{\rm eff}^{2}\varphi=-A{\rm v}^{2}. \tag{37}\] In the regime \(T_{\rm osc}<T\ll T_{\rm EW}\), the zero-mode amplitude of the scalar field is given by9 Footnote 9: Initial conditions from \(\varphi_{\rm pre}\) have a small effect for \(T\ll T_{\rm EW}\) and so we omit them for simplicity. \[\varphi_{\rm post}(T)\simeq-\frac{AM_{\rm Pl}^{2}{\rm v}^{2}}{20\gamma^{2}T^{4}},\ \ \ \ {\rm for}\ \ \ T_{\rm osc}<T\ll T_{\rm EW}. \tag{38}\] The amplitude at which the field starts oscillating is given by the relative amplitude of the field at \(T_{\rm osc}\) with respect to the minimum.
To estimate \(T_{\rm osc}\) in this regime we require \([m_{\rm eff}(\varphi_{\rm post})]^{2}=(3H)^{2}\) at \(T=T_{\rm osc}\), which leads to \[T_{\rm osc}=\frac{11^{1/4}}{3^{11/12}20^{1/6}}\sqrt{\frac{m_{\phi}M_{\rm Pl}}{\gamma}}. \tag{39}\] We note that the dependence on \(m_{\phi}\) is different from \(T_{\rm osc}\) above the electroweak scale (see Eq. (32)). At the onset of oscillations, the energy density is \(\rho(T_{\rm osc})\simeq\frac{1}{2}m_{\phi}^{2}(\varphi_{\rm post}(T_{\rm osc})-\varphi_{0})^{2}\), where we have approximated the mass \(\mu_{\phi}(T)\) and the minimum \(\varphi_{\rm min}(T)\) of the scalar field by their zero temperature values. Note that the definition of \(m_{\rm eff}=\partial_{\varphi}V_{\rm CW}/\varphi\) does not change with respect to the previous case. However, in this regime \([\partial_{\varphi}V_{\rm CW}]_{\varphi=\varphi_{\rm min}}\sim A{\rm v}^{2}\) and we recover \(m_{\rm eff}(T\ll T_{\rm EW})\sim O(1)\times m_{\phi}\). Figure 3: Dynamics of \(\varphi(T)\) for \(m_{\phi}=3\) meV (after re-scaling to \(\varphi_{0}\) the results are independent of \(A\)). The solid black line shows the numerical evolution of \(\varphi\) with temperature. The gray dashed line shows the thermal evolution of the scalar minimum, and \(\varphi_{\rm pre}\) is plotted in turquoise. We show the predicted onset of the oscillations with a vertical orange dashed line. The EW phase transition has been modeled by a hyperbolic tangent with width \(15\ {\rm GeV}\) [27]. We take \({\sf L}=0\) for simplicity. Analogously to Eq. (35), \[\varphi_{\rm osc}=|\varphi(T_{\rm osc})-\varphi_{0}|=\left(1-\frac{1}{5^{1/3}}\left(\frac{3}{2}\right)^{2/3}\right)\times|\varphi_{0}|\approx 0.234\times|\varphi_{0}|\ \, \tag{40}\] which guarantees that oscillations begin, and remain, close to the minimum of the potential where a harmonic approximation is reliable. Knowing \(\varphi_{\rm osc}\) we can estimate the post-electroweak contribution of the scalar oscillations to the relic density today, \[\Omega_{\phi}h^{2}\simeq\frac{\rho(T_{\rm osc})}{\rho_{c}}\bigg{(}\frac{T_{0}}{T_{\rm osc}}\bigg{)}^{3}\frac{g_{*}(T_{0})}{g_{*}(T_{\rm osc})}\simeq 0.1\bigg{(}\frac{A}{1~{}{\rm neV}}\bigg{)}^{2}\bigg{(}\frac{90~{}\mu{\rm eV}}{m_{\phi}}\bigg{)}^{7/2}\, \tag{41}\] for \(T_{\rm osc}\ll T_{\rm EW}\) and \(g_{*}(T_{\rm osc})\simeq 90\) [25].10 For \(T\ll T_{\rm EW}\) the dynamics match closely with those in Ref. [4] for a quadratic potential. This is because the effective potential is well approximated by its zero temperature form, and the oscillations take place close to the minimum. Since we have defined \(m_{\phi}^{2}\) as the curvature about the minimum, the dynamics are equivalent to a quadratic potential with a mass \(m_{\phi}\) as studied in [4]. In the above expression we have taken \(\mathsf{L}_{\rm osc}\approx 0\) as \(\varphi_{\rm post}(T_{\rm osc})\sim\varphi_{R}\). In Fig. 2 we show the predicted relic abundance in the \(m_{\phi}\)-\(A\) plane vs existing constraints. Footnote 10: Within the parameter space plotted in Fig. 2, oscillations onset well before the QCD phase transition, and the relativistic degrees of freedom only change within the range \(80\lesssim g_{*}\lesssim 100\). We take \(g_{*}\simeq 90\) for simplicity.
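For orientation, the two closed-form limits, Eqs. (36) and (41), can be inverted for the coupling \(A(m_{\phi})\) that yields \(\Omega_{\phi}h^{2}=0.12\), tracing the relic density line of Fig. 2 away from the electroweak crossover. The Python sketch below is ours; the switch at \(m_{\phi}\simeq 0.35\) meV follows the \(T_{\rm osc}\gtrsim 170\) GeV criterion quoted above, and the region near the electroweak phase transition is deliberately not modeled.

```python
# Sketch of the relic-density line of Fig. 2: invert Eq. (36)
# (thermal misalignment, T_osc >> T_EW) and Eq. (41) (vev
# misalignment, T_osc << T_EW) for A(m_phi) at Omega h^2 = 0.12.
# O(1) logarithms and the crossover region are neglected.

def A_relic_eV(m_phi_eV, omega_h2=0.12):
    if m_phi_eV >= 0.35e-3:                 # thermal regime, Eq. (36)
        return 1e-6 * (omega_h2 / 0.15) ** 0.5 \
               * (m_phi_eV / 3e-3) ** (11.0 / 8.0)
    else:                                   # vev regime, Eq. (41)
        return 1e-9 * (omega_h2 / 0.10) ** 0.5 \
               * (m_phi_eV / 90e-6) ** (7.0 / 4.0)

for m in (30e-6, 90e-6, 1e-3, 3e-3, 10e-3):  # scalar masses in eV
    print(f"m_phi = {m:8.1e} eV  ->  A ~ {A_relic_eV(m):.2e} eV")
```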
We summarize the scalings of the relic density and \(T_{\rm osc}\) with \(A\) and \(m_{\phi}\) for the thermal and VEV misalignment regimes in the diagram below, \[\begin{array}{lcc} & T_{\rm osc}\ll T_{\rm EW}\ (\text{vev misalignment}) & T_{\rm osc}\gg T_{\rm EW}\ (\text{thermal misalignment})\\ T_{\rm osc} & \sim m_{\phi}^{1/2} & \sim m_{\phi}^{3/4}\\ \Omega_{\phi}h^{2} & \propto A^{2}\,m_{\phi}^{-7/2} & \propto A^{2}\,m_{\phi}^{-11/4}\end{array}\] ### Thermal mass from RHNs When the RHNs thermalize, they contribute to the effective potential for \(\phi\) by supplying a thermal mass. Using the high temperature expansion of the fermionic thermal function \(J_{F}\), the thermal mass is proportional to \(6{M_{N}}^{2}(\varphi)T^{2}/48=g^{2}T^{2}\varphi^{2}/8\). The masses of the RHNs we consider here are sufficiently low that if a type-I seesaw mechanism is responsible for neutrino masses then RHN thermalization takes place an order of magnitude below the electroweak scale. At these temperatures the Higgs vev has already saturated to its zero temperature value. We may check whether the thermal mass from the RHNs substantially modifies the minimum of the effective potential by asking whether the following ratio12 is \(O(1)\) or larger, Footnote 12: This can be seen by solving the equation \(a+2bx+4cx^{3}=0\) perturbatively in \(b\). One immediately finds that the relevant dimensionless ratio is \(b/(a^{2}c)^{1/3}\). \[\frac{\frac{1}{8}g^{2}T^{2}}{\sqrt[3]{A^{2}v^{4}\frac{11-3\mathsf{L}}{32\pi^{2}}g^{4}}}\approx 0.34\ \frac{m_{\phi}T^{2}}{Av^{2}}\, \tag{42}\] for \(T\lesssim\mathrm{v}\), and where we have set \(\mathsf{L}=0\) for simplicity and used \(\varphi_{0}\) as an estimate for \(\varphi\). Depending on the region of parameter space we consider this may either substantially, or only slightly, shift the position of the minimum. Notice, however, that since this all occurs at times after the electroweak phase transition, and \(T_{\rm osc}\gg T_{\rm EW}\) (for thermal misalignment13), the field is already completing many oscillations each Hubble time.
The onset of the thermal population of RHNs therefore appears adiabatic to the oscillating scalar field, in the same way that the shifting minimum at higher temperatures and the onset of the Higgs vev both appear adiabatic. The effect of the RHNs therefore only serves to shift the vev of the scalar field, but does not substantially affect the amplitude of oscillations and therefore the dark matter relic abundance. This will lead to a slight time dependence in the mass of the RHNs, but we do not expect this to have any sizeable cosmological consequences. Although we do not pursue this issue further here, it may be interesting to consider this effect in more detail. Footnote 13: For vev misalignment where \(T_{\mathrm{osc}}\sim 10\) GeV the thermal contribution from \(N\) could modify the misalignment, but we do not consider this here. ### Parametric resonances Scalar fields oscillating in a non-linear potential can produce scalar quanta. At sufficiently large occupation numbers, Bose enhancements can lead to exponential growth and this can destabilize the dynamics of the zero mode [33]. For a scalar field coupled to fermions as we consider here, fermion production is also possible [34]. Since the dynamics here involve \(|\varphi(t)-\varphi_{\mathrm{min}}|\ll|\varphi_{\mathrm{min}}|\) the fermion mass is approximately constant, and much heavier than the effective scalar mass at a given temperature, \(M_{N}\gg\mu_{\phi}\). We therefore focus on parametric resonances involving the scalar field itself. All oscillations occur close to the temperature dependent minimum of the potential \(\varphi_{\mathrm{min}}(T)\), such that anharmonic effects are suppressed. After the onset of oscillations we will have \(\mu_{\phi}(T)\gg H\), and since \(\varphi_{\mathrm{min}}(T)\propto T^{2/3}\) varies adiabatically with respect to the dynamics of the oscillations, it is therefore legitimate to treat \(\mu_{\phi}\) and \(H\) as constant. This is equivalent to a multi-scale separation of the field into its slow and fast modes. We will be interested in the parametric resonances of the fast modes, as a function of the wavenumber \(\mathbf{k}\), the temperature \(T\), and the effective mass \(\mu_{\phi}(T)\). With this adiabatic approximation in mind we will treat the temperature \(T\) as a label, such that we may consider the dynamics of \(\phi_{T}(x,t)\). We will separate the field into its temperature dependent minimum (which does not depend on \(t\) in the adiabatic approximation) and fluctuations about said minimum, \(\phi_{T}(x,t)=\varphi_{\rm min}(T)+\delta\phi_{T}(x,t)\). When expanded about the minimum, the potential will have a Taylor series given by \[V(\varphi_{\rm min}+\delta\phi_{T})=V(\varphi_{\rm min})+\frac{1}{2}\mu_{\phi}^{2}\delta\phi_{T}^{2}+\frac{1}{3!}\kappa_{3}\delta\phi_{T}^{3}+\frac{1}{4!}\kappa_{4}\delta\phi_{T}^{4}+... \tag{43}\] where \(\kappa_{n}\equiv\partial^{n}V/\partial\phi_{T}^{n}\big{|}_{\phi_{T}=\varphi_{\rm min}}\). The equations of motion for the fluctuating field are given by \[\Box\delta\phi_{T}(x,t)+3H\partial_{t}\delta\phi_{T}(x,t)+V^{\prime}=0\, \tag{44}\] where the prime denotes differentiation with respect to \(\delta\phi_{T}\). Let us further split the field \(\delta\phi_{T}\) into its \({\bf k}=0\) mode, and its \({\bf k}\neq 0\) modes (labeled with comoving momenta) \[\delta\phi_{T}(x,t)=\delta\varphi_{T}(t)+\sum_{k\neq 0}{\rm e}^{{\rm i}{\bf k}\cdot{\bf x}}\delta\phi_{\bf k}(t).
\tag{45}\] We will study the linearized equations of motion for \(\delta\phi_{\bf k}(t)\), which are given by \[\frac{{\rm d}^{2}}{{\rm d}t^{2}}\delta\phi_{\bf k}+3H\frac{{\rm d}}{{\rm d}t}\delta\phi_{\bf k}+\biggl{[}\frac{{\bf k}^{2}}{a^{2}}+\mu_{\phi}^{2}\biggr{]}\delta\phi_{\bf k}+\biggl{[}\sum_{n\geq 3}\frac{\kappa_{n}}{(n-2)!}(\delta\varphi_{T})^{n-2}\biggr{]}\delta\phi_{\bf k}=0\, \tag{46}\] where \(a\) is the scale factor. This may be re-written as \[\frac{{\rm d}^{2}}{{\rm d}t^{2}}\delta\phi_{\bf k}+3H(T)\frac{{\rm d}}{{\rm d}t}\delta\phi_{\bf k}+\Omega_{T}^{2}({\bf k})\delta\phi_{\bf k}+f_{T}(t)\delta\phi_{\bf k}=0\, \tag{47}\] where the definition of each variable is clear by comparison with the equation above. Close to \(T=T_{\rm osc}\) the damping due to Hubble friction occurs on time scales comparable to the oscillations. We do not, therefore, expect any effective periodic drive in this regime. Although it is possible that some fraction of the energy in the \(k=0\) mode may leak into other modes, no exponential growth of perturbations will occur. At low temperatures \(T\ll T_{\rm osc}\) we have \(\mu_{\phi}\gg 3H\) and Hubble friction may be treated as a small perturbation. In this limit the field will have red-shifted and its amplitude correspondingly decreased such that \[f_{T}(t)\approx\kappa_{3}\delta\varphi_{T}(t)\approx\kappa_{3}{\cal A}_{T}\cos(\mu_{\phi}t)\, \tag{48}\] where \(\kappa_{3}=\frac{2}{3}\mu_{\phi}^{2}/\varphi_{\rm min}(T)\) for \(\mathsf{L}_{\rm osc}\sim 1.5\). We have also introduced the temperature dependent amplitude \[{\cal A}_{T}=0.145\times\biggl{(}\frac{\mu_{\phi}(T_{\rm osc})}{\mu_{\phi}(T)}\biggr{)}^{1/2}\left(\frac{g_{*}(T)}{g_{*}(T_{\rm osc})}\right)^{1/2}\varphi_{\rm min}(T_{\rm osc})\biggl{(}\frac{T}{T_{\rm osc}}\biggr{)}^{3/2}. \tag{49}\] In this limit we can map onto the Mathieu equation [35], \[\ddot{y}+\nu^{2}\bigl{(}1+h\cos\omega t\bigr{)}y=0. \tag{50}\] The mapping between our problem and the parameters of the Mathieu equation is given by \(\nu^{2}=\mu_{\phi}^{2}+(k/a)^{2}\) and \(h=\kappa_{3}\,{\cal A}_{T}/\nu^{2}\). In the limit of \(h\to 0\) the condition for a parametric resonance may be written as [36] \[|2\nu-\omega|<\tfrac{1}{2}|\nu h| \Longrightarrow 2\sqrt{\mu_{\phi}^{2}+\frac{{\bf k}^{2}}{a^{2}}}-\mu_{\phi}<\frac{\mu_{\phi}^{2}{\cal A}_{T}}{3|\varphi_{\rm min}(T)|\sqrt{\mu_{\phi}^{2}+\frac{{\bf k}^{2}}{a^{2}}}}\, \tag{51}\] which may be re-expressed as \[\biggl{|}\frac{{\cal A}_{T}}{3\varphi_{\rm min}(T)}\biggr{|}>\left(2\sqrt{\frac{{\bf k}^{2}}{a^{2}\,\mu_{\phi}^{2}}+1}-1\right)\sqrt{1+\frac{{\bf k}^{2}}{a^{2}\,\mu_{\phi}^{2}}}\gtrsim 1. \tag{52}\] Therefore, provided \(\mathcal{A}_{T}<3|\varphi_{\rm min}(T)|\) no parametric resonance occurs at this order in the linearized equations of motion; this inequality is always satisfied in practice. We therefore find that parametric resonances are unimportant for the dynamics of the ultralight dark matter. Higher order terms from the potential can induce parametric resonances via higher harmonics of the field. For example the quartic term, proportional to \(\cos^{2}(\mu_{\phi}t)=\frac{1}{2}(1+\cos 2\mu_{\phi}t)\), both provides a parametric oscillation _and_ a detuning of the natural oscillator's frequency. This contribution is further suppressed by \(\mathcal{A}_{T}/\varphi_{\rm min}(T)\) (in addition to a typically small logarithm) and is negligible relative to the term proportional to \(\cos(\mu_{\phi}t)\).
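A quick numeric check of Eq. (52) supports this. Using the scalings \(\mu_{\phi}\propto T^{2/3}\) and \(\varphi_{\rm min}\propto T^{2/3}\) from Eqs. (30) and (34), the drive strength in Eq. (49) reduces to \(|\mathcal{A}_{T}/\varphi_{\rm min}(T)|\simeq 0.145\,(T/T_{\rm osc})^{1/2}\) up to the slowly varying \(g_{*}\) factor; the sketch below (ours) compares this with the right-hand side of Eq. (52):

```python
# Numeric check of the resonance condition, Eq. (52). The LHS uses
# |A_T / 3 varphi_min| = (0.145/3) (T/T_osc)^(1/2), implied by
# Eq. (49) with mu_phi, varphi_min ~ T^(2/3); the RHS is minimized
# at k = 0 where it equals 1, so no resonance band is ever reached.
import numpy as np

def drive(T_over_Tosc):                       # LHS of Eq. (52)
    return 0.145 / 3.0 * np.sqrt(T_over_Tosc)

def threshold(kappa):                         # RHS, kappa = (k/a)^2/mu^2
    return (2.0 * np.sqrt(kappa + 1.0) - 1.0) * np.sqrt(1.0 + kappa)

kappas = np.linspace(0.0, 10.0, 101)
print("min RHS over k:      ", threshold(kappas).min())  # = 1 at k = 0
print("max LHS (T = T_osc): ", drive(1.0))               # ~ 0.05
```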
We therefore conclude that parametric resonances can be neglected when estimating the dark matter relic abundance. ### Initial conditions after inflation Thermal misalignment erases initial conditions much smaller than \(\varphi_{\rm pre}(T_{\rm osc})\sim\varphi_{0}\), which for the parameter space we consider is typically of order \(\sim 10^{13}~{}{\rm GeV}\). It remains possible that certain inflationary scenarios may result in a much larger initial misalignment, in which case the dark matter relic abundance cannot be predicted given only \(A\) and \(m_{\phi}\). Although we remain agnostic about the details of inflation, it is interesting to speculate on what kinds of initial conditions are generic in the model we consider here. In contrast to the quadratic potential considered in [4], the nonlinearities in the CW potential, and the dependence of \(M_{N}\) on \(\varphi(t)\), mean that large field velocities can result in non-perturbative particle production [37, 38, 39, 34]. This offers an efficient damping mechanism, sometimes referred to as instant preheating [38]. If we consider inflationary scenarios with a low Hubble scale, \(H_{I}\ll T_{\rm EW}\), then the analysis sketched in [4] applies. Electroweak symmetry is broken during inflation, and field values diffuse towards \(\varphi_{0}\) [40, 41]. They remain "pinned" there by Hubble friction, and large values of \(A\) are required to realize a phenomenologically viable dark matter relic abundance. It is arguably more natural to consider the opposite limit where \(H_{I}\gg T_{\rm EW}\). In this limit one expects larger field values of order \(\varphi_{*}\sim H_{I}/g\) [40, 41]. This will lead to a large effective mass \(m_{\rm eff}^{2}\sim g^{2}H_{I}^{2}\). After the end of inflation, assuming radiation domination, Hubble will drop like \(H\sim T^{2}\), and the field will begin oscillating at a temperature \(T_{\rm osc}^{*}\sim\sqrt{gM_{\rm Pl}H_{I}}\). This is much larger than the \(T_{\rm osc}\) relevant for thermal misalignment for \(H_{I}\gg T_{\rm EW}\). The field will lose energy via particle production, and its amplitude will decrease. We may characterize its motion by the field value at each successive turning point \(\varphi_{\rm turn}^{(i)}\) where \(\dot{\varphi}=0\); energy loss implies \(|\varphi_{\rm turn}^{(i)}|<|\varphi_{\rm turn}^{(i-1)}|\). Since \(m_{\rm eff}^{2}\) is field-dependent, and smallest near \(\varphi=0\), it is generic that the field will get stuck by Hubble friction close to the origin in field space. This then suggests \(\varphi_{I}\approx 0\) as a "natural" initial condition for inflationary scenarios satisfying \(H_{I}\gg T_{\rm EW}\). In practice an equivalent condition is \(\varphi_{I}\ll 10^{13}~{}{\rm GeV}\) due to the erasure of initial conditions by thermal misalignment. It is interesting to note that the ability to accommodate large field values during inflation naturally suppresses isocurvature fluctuations [1]. The relaxation mechanism sketched above may therefore allow the model we consider to evade isocurvature constraints while simultaneously having small enough initial conditions to allow for erasure via thermal misalignment. ### Connections to leptogenesis Before moving on to phenomenological signatures, let us comment on the possibility of explaining the baryon asymmetry within the model. We have so far remained entirely agnostic as to the Yukawa couplings with charged leptons that generate neutrino masses.
Scalar masses above 30 \(\mu\)eV are consistent with constraints from BBN and predict RHNs in the mass range of a few GeV. It is well known that leptogenesis by oscillations (or ARS leptogenesis, for Akhmedov, Rubakov, and Smirnov [20]) is operational in this mass range. It is therefore tempting to ask if the mechanism may operate within the model at hand, thereby supplying an explanation of dark matter, neutrino masses, and the observed baryon asymmetry. Indeed we expect, given the extremely small Yukawa coupling between \(N\) and \(\phi\), that the lifetime of the RHNs will be largely unaffected by the scalar field. We note, however, that close to \(T_{\rm osc}\) the RHN mass will fluctuate by an \(O(1)\) amount, albeit very slowly, with frequency \(\mu_{\phi}(T_{\rm osc})\). It may be possible for this to lead to the production of RHNs, modifying the non-equilibrium number densities of \(N\) with respect to a vanilla ARS leptogenesis mechanism. This may allow for efficient leptogenesis at smaller mixing angles. A detailed investigation of leptogenesis within this model lies beyond the scope of this paper. We note that there is a large volume of parameter space where the mechanism is viable [42]. ## 4 Experimental signatures and constraints One interesting consequence of the radiatively generated parameter space we consider here is the correlation between RHN phenomenology and direct searches for a light scalar. At higher masses, where thermal misalignment is operational, inverse square law tests may offer the most competitive search channel with which to probe the parameter space that predicts the correct relic abundance. At lower masses, where \(A\) may be very small and still produce acceptable misalignment to supply the correct dark matter abundance, direct searches for the Higgs portal coupling may be unrealistic. In this limit searches for the RHNs which radiatively generate the light scalar's mass may offer better experimental prospects. ### Probes of a light scalar Direct searches for the light mediator \(\phi\) have been discussed previously in the literature, and we refer the interested reader to Refs. [1, 4] for a more detailed discussion. For masses between 3 \(\mu\)eV and 15 meV the strongest constraints come from short-distance tests of the inverse square law (on the scale of millimeters). The scalar discussed here will result in a Yukawa potential with a range of \(\lambda=1/m_{\phi}=1\ {\rm mm}\times\left(\frac{0.197\ {\rm meV}}{m_{\phi}}\right)\). The strength of the coupling is set by the nucleon-scalar coupling discussed in Eq. (22). Inverse square law tests are often quoted in terms of \(\alpha^{2}=\alpha_{1}\alpha_{2}\), where a potential of the form \(V=-Gm_{\rm nuc}^{2}/r\times(1+\alpha^{2}{\rm e}^{-m_{\phi}r})\) is assumed, with \(m_{\rm nuc}\) the nucleon mass. The Yukawa potential's strength, \(\alpha^{2}\), is related to the Higgs portal coupling, \(A\), via \[\alpha=\frac{\Lambda_{\rm had}}{m_{h}^{2}}\frac{\sqrt{2}\ M_{\rm Pl}}{m_{\rm nuc}}\times A\, \tag{53}\] where \(\Lambda_{\rm had}\) is a hadronic scale of order \(\sim 200\ {\rm MeV}-600\ {\rm MeV}\) [22, 23, 24] and \(m_{h}=125\ {\rm GeV}\) is the mass of the Higgs; for numerical estimates we take \(\Lambda_{\rm had}=530\ {\rm MeV}\) following [23, 24]. A number of groups have obtained limits on a Yukawa potential at millimeter scales. Relevant experimental tests include the Irvine [15], Eot-Wash [13, 14], and HUST experiments [12].
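To connect with these limits, the conversion from the portal coupling to the fifth-force parameters can be scripted directly from Eq. (53); the sketch below is ours, with \(m_{\rm nuc}\simeq 0.94\) GeV an assumed nucleon mass and \(\Lambda_{\rm had}=530\) MeV as quoted above.

```python
# Sketch converting the Higgs-portal coupling A into the Yukawa
# strength alpha^2 probed by inverse-square-law tests, Eq. (53),
# together with the force range lambda = 1/m_phi.
import math

M_PL = 2.43e18       # reduced Planck mass [GeV]
LAMBDA_HAD = 0.530   # hadronic scale [GeV], following [23, 24]
M_H = 125.0          # Higgs mass [GeV]
M_NUC = 0.94         # nucleon mass [GeV] (assumed value)

def alpha2(A_eV):
    A_GeV = A_eV * 1e-9
    alpha = LAMBDA_HAD / M_H**2 * math.sqrt(2.0) * M_PL / M_NUC * A_GeV
    return alpha**2

def range_mm(m_phi_meV):
    return 0.197 / m_phi_meV   # lambda = 1 mm x (0.197 meV / m_phi)

print(f"A = 1 ueV      -> alpha^2 ~ {alpha2(1e-6):.2e}")
print(f"m_phi = 1 meV  -> range   ~ {range_mm(1.0):.2f} mm")
```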
Projections from the HUST group predict improved sensitivity in the vicinity of \(m_{\phi}\sim{\rm meV}\), and will test part of the relic-abundance parameter space for \(m_{\phi}\sim 1\ {\rm meV}\) [16]. ### Right-handed neutrino searches Fixing the relic density of \(\phi\) to be all the dark matter predicts the relationship between the RHN mass and \(m_{\phi}\). For \(m_{\phi}\gtrsim 0.1\) meV, where \(T_{\rm osc}\gtrsim T_{\rm EW}\), we find \[M_{N}=6.1\ {\rm GeV}\ \Big{(}\frac{m_{\phi}}{3\ {\rm meV}}\Big{)}^{3/16}\, \tag{54}\] which is relatively insensitive to the dark matter mass \(m_{\phi}\). More generally \(M_{N}\sim(A/m_{\phi})^{1/2}\,\mathrm{v}\) as given by Eq. (16). If \(T_{\rm osc}\ll T_{\rm EW}\) instead, or in terms of the scalar mass, \(m_{\phi}\ll 0.1\) meV, the RHN mass is instead given by \[M_{N}=1.25\ {\rm GeV}\ \Big{(}\frac{m_{\phi}}{90\ {\rm\mu eV}}\Big{)}^{3/8}. \tag{55}\] Constraints from BBN suggest a minimal mass for RHNs. This constraint is set by the lifetime of the RHN, and so depends also on the Yukawa couplings. If we fix \(Y\sim\sqrt{m_{\nu}M_{N}}/{\rm v}\) to the seesaw line and demand \(\tau_{N}\gtrsim 0.1\) s [11] then one has \(M_{N}\gtrsim 1\ {\rm GeV}\). This then demands that \(m_{\phi}\gtrsim 30\ {\rm\mu eV}\) to avoid standard constraints on RHNs from BBN (see Fig. 2). The radiative scenario presented above can be studied either by direct probes of a light scalar (e.g. fifth force searches) or by searching for the RHNs that are predicted in the spectrum. As is well appreciated in the literature surrounding RHNs, mixing angles can be much larger than a naive type-I seesaw estimate would suggest (e.g. if the inverse seesaw mechanism is operational [18]). Therefore, one may view searches for RHNs below a few GeV as generic _discovery_ opportunities for the dynamics discussed herein. Searches for RHNs can also exclude certain regions of viable dark matter parameter space if the seesaw line can be probed. No near-term experiment projects sensitivity down to the seesaw line for masses that are not excluded by BBN; however, SHiP projects sensitivity to \(\theta_{\mu}^{2}\) and \(\theta_{e}^{2}\) that is within an order of magnitude of the seesaw line [43]. If one is willing to entertain non-standard cosmologies, or alternative decay paths for the RHNs, then the bound on the scalar mass discussed above is relaxed. A few near-term experiments then offer promising detection prospects. For \(140\ {\rm MeV}\lesssim M_{N}\lesssim 460\ {\rm MeV}\) NA62 has probed \(\theta_{\mu}^{2}\) to within an order of magnitude of the seesaw line [44], and may reach it with further data. For \(60\ {\rm MeV}\lesssim M_{N}\lesssim 130\ {\rm MeV}\) the upcoming PIONEER experiment projects to probe the seesaw line for \(\theta_{e}^{2}\) [45]. For \(M_{N}\lesssim 1\ {\rm MeV}\) the RHNs can be searched for in nuclear beta decays. BeEST [46], HUNTER [47], and KATRIN [48] project sensitivity below the seesaw line [49], and may offer a complementary discovery avenue. ## 5 Conclusion and Outlook We have considered ultralight Higgs-portal dark matter in the presence of a neutrino mass mechanism. Since the scalar must be a gauge singlet in order to couple via the super-renormalizable Higgs portal, it is generic to consider interactions between scalar dark matter and singlet fields responsible for neutrino mass generation. In our case, we focus on RHNs and a type-I seesaw mechanism.
Generically, RHNs much heavier than the scalar dark matter, \(M_{N}\gg m_{\phi}\), induce a large radiative mass for \(\phi\). Motivated by this observation, we have focused on a region of parameter space in which the scalar's mass is generated _entirely_ by radiative effects. In this CW-dominated regime the model has correlated and testable predictions, with phenomenology controlled by two parameters (plus Yukawa couplings to fix neutrino masses). From a microphysical perspective, the two input parameters are the RHN-scalar Yukawa coupling \(g\), and the soft Higgs portal coupling \(A\). We choose to trade \(g\) for the physical scalar mass at zero temperature, \(m_{\phi}\). The mass scale of RHNs is predicted by \(A\) and \(m_{\phi}\), and there exists a preferred range for dark matter between a few \(\mu\)eV and a few \(\mathrm{meV}\). RHN searches offer a complementary discovery avenue in addition to direct probes of an ultralight scalar via inverse square law tests. While the CW-dominated parameter space we consider here is highly predictive, and represents some \(O(1)\) fraction of the available parameter space,14 it is by no means generic. The parameter space of ultralight singlet scalar dark matter in the presence of a neutrino mass mechanism is worth exploring more broadly. The scalar will generically couple to whatever UV degrees of freedom generate neutrino masses, and so the dynamics of dark matter will be tied to the physics of neutrino masses in a manner similar to what is presented above. We plan to pursue this connection in future work. Footnote 14: Quantifying volumes in parameter space with a log-measure. ## Acknowledgments We thank Akshay Ghalsasi and especially Brian Batell for useful discussions about thermal misalignment in the context of the Higgs portal. Mark Wise supplied helpful comments on the possibility of parametric resonances. We acknowledge useful discussions with Junwu Huang, Maxim Pospelov, Marilena Loverde, Michael Ratz, and Yohei Ema about initial conditions after inflation. We thank Brian Batell, Akshay Ghalsasi, Peizhi Du, Michael Ratz, and Mark Wise for feedback on the manuscript. We acknowledge the hospitality of the Simons Center for Geometry and Physics and the Aspen Center for Physics (which is supported by National Science Foundation grant PHY-2210452) while part of this work was being completed. Funding information. This work is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and by the Walter Burke Institute for Theoretical Physics. RP acknowledges support from the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632, the Neutrino Theory Network Program Grant under Award Number DE-AC02-07CH11359, and the US DOE under Award Number DE-SC0020250.
2304.00822
Musical creativity enabled by nonlinear oscillations of a bubble in water
Producing original musical outcomes and arranging existing ones is an art that takes years of learning and practice to master. Yet, despite the constant advances in the field of AI-powered musical creativity, the production of quality musical outcomes remains a prerogative of humans. Here we demonstrate that a single bubble in water can be used to produce creative musical outcomes when it nonlinearly oscillates under an acoustic pressure signal that encodes a piece of classical music. The audio signal of the response of the bubble resembles an electric guitar version of the original composition. We suggest, and provide plausible supporting theoretical arguments, that this property of the bubble can be used to create physics-inspired AI systems capable of simulating human creativity in the arrangement and composition of music.
Ivan S. Maksymov
2023-04-03T09:08:52Z
http://arxiv.org/abs/2304.00822v1
# Musical creativity enabled by nonlinear oscillations of a bubble in water ###### Abstract Producing original musical outcomes and arranging existing ones is an art that takes years of learning and practice to master. Yet, despite the constant advances in the field of AI-powered musical creativity, the production of quality musical outcomes remains a prerogative of humans. Here we demonstrate that a single bubble in water can be used to produce creative musical outcomes when it nonlinearly oscillates under an acoustic pressure signal that encodes a piece of classical music. The audio signal of the response of the bubble resembles an electric guitar version of the original composition. We suggest, and provide plausible supporting theoretical arguments, that this property of the bubble can be used to create physics-inspired AI systems capable of simulating human creativity in the arrangement and composition of music. ## I Introduction Bubbles in liquids underpin many important natural phenomena [1; 2; 3; 4], including cavitation [1] and the sound of running water [5; 6]. Oscillations of bubbles driven by an acoustic pressure wave are also similar to the behaviour of a biological brain, since both the brain [7] and the bubble [2] are nonlinear dynamical systems [8]. Consequently, studies of oscillating bubbles may help us understand certain brain functions that are responsible for the perception of sounds and music. The following experimental evidence speaks in favour of this proposition. Firstly, it has been demonstrated that the electric charge impulses that underpin nerve signalling are accompanied by acousto-mechanical (sound) waves that are intrinsically nonlinear [9; 10; 11; 12]. Secondly, it has been shown that the human experience of music is mediated by nonlinear-acoustical processes [13] and that the same processes underpin the auditory processing abilities of some animals [14]. In particular, in an experiment involving owls exposed to a piece of classical music made up of tones with the fundamental frequency harmonics deliberately removed, the owl's brain restored the missing fundamental harmonics [15]. While such behaviour is of considerable interest in the field of nonlinear physics [16], effectively the owl's brain transferred a musical idea from its original position to a lower frequency, which is a common example of octave transposition [14]. Significantly, while transposition in music is a nontrivial task that is mostly accessible to individuals with relevant formal education, a biological brain can do this type of audio processing naturally. Thirdly, it is also well known that nonlinear-acoustical processes [13] underpin the operation of many musical instruments [17] and that musicians, as well as many people who love music but do not have a formal background in it, understand nonlinear effects naturally without knowing much about nonlinear physics [14]. This fact also indicates that a biological brain can naturally process nonlinear acoustic signals. Recently, we suggested that a cluster of oscillating bubbles in water can operate as an artificial neural network that exhibits complex nonlinear behaviour and that can be trained to predict highly nonlinear and chaotic time series that arise in many practical situations [18], such as the analysis of financial markets, weather forecasting and the control of autonomous vehicles [19; 20; 21; 22].
Thus, since the particular kinds of artificial neural networks that oscillating bubbles can efficiently emulate--the Echo State Network (ESN) [19] and the Liquid State Machine (LSM) [23]--can also reproduce some functions of a biological brain, it is conceivable that oscillating bubbles may also reproduce some of the brain's functions, including those associated with the perception of music. Hence, in this work we suggest that the highly nonlinear behaviour of an oscillating bubble could be used to complete some tasks that require musical creativity. Musical creativity can be defined as a process of employing existing musical knowledge to produce novel musical outcomes that may take, for example, the form of improvisations, compositions and arrangements. The production of quality musical outcomes remains one of the most challenging tasks for machine learning systems [24] despite the recent significant progress in AI-powered musical creativity (see [25; 26; 27; 28] to cite a few works). Consequently, the idea that a simple bubble could produce a creative musical output is not only fundamentally intriguing but can also lead to new knowledge in the field of AI. As a representative example, we synthesise a simple version of "In the Hall of the Mountain King", a piece of music composed by Edvard Grieg, and use it as the acoustic signal that drives nonlinear oscillations of a single bubble in water. By means of rigorous numerical simulations, we demonstrate that the output signal produced by the bubble is perceived as a "heavy metal" cover of the original composition decorated with warm and gritty tones typical of an electric guitar [29]. This paper is organised as follows. Our main findings are presented in Sect. II and followed by the discussion in Sect. III. The discussion is supported by a comprehensive theoretical analysis and numerical modelling of the physical properties of an acoustically driven oscillating bubble, the results of which are presented in Sect. A and Sect. B. Since memory is one of the prerequisites of creativity [30; 31], we employ the ESN algorithm described in Sect. C to demonstrate in Sect. D that an oscillating bubble possesses a memory capacity suitable for applications in the field of AI. ## II Results We choose a simple piano version of "In the Hall of the Mountain King" by Edvard Grieg to be the acoustic signal driving oscillations of the bubble (Fig. 1). This composition is well-known to the general public and is also often used by musicians to produce their own recordings. Relying on the principles of 8-bit computer music arrangement [32], we encode each note of the melody as a sequence of square pulses repeated at the frequency of the \(n\)th key of an idealised acoustic piano: \[f(n)=2^{\frac{n-49}{12}}\times 440\,\mathrm{Hz}\,. \tag{1}\] The four-beats-per-bar time signature of the melody and its \(120\,\mathrm{bpm}\) tempo also enable us to calculate the duration of each bar in seconds. Although this approach cannot be used to reproduce the exact sound of a piano, it suffices to create an easily recognisable version of the composition (see the supplementary audio file out.in) in a format suitable for processing by the numerical model employed in this work. In the numerical model, we consider a single mm-sized bubble. The fundamental nonlinear physics underlying the interaction of a mm-sized bubble with a single square acoustic pressure pulse is discussed in Sect. B.
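As an illustration of this encoding scheme, the following minimal Python sketch generates square-pulse notes at the frequencies given by Eq. (1). The short note sequence is a hypothetical placeholder for demonstration purposes, not the actual score used in the paper.

```python
import numpy as np

def key_freq(n):
    """Frequency of the n-th key of an idealised piano, Eq. (1)."""
    return 2.0 ** ((n - 49) / 12.0) * 440.0

def square_note(key, dur, fs=44100):
    """A note encoded as a train of square pulses at the key's frequency."""
    t = np.arange(int(dur * fs)) / fs
    return np.sign(np.sin(2.0 * np.pi * key_freq(key) * t))

# At 120 bpm in 4/4, one beat lasts 0.5 s. The (key, duration) pairs below are
# placeholder notes for illustration, not the melody of the composition.
melody = [(49, 0.25), (51, 0.25), (52, 0.25), (54, 0.25), (56, 0.5)]
signal = np.concatenate([square_note(k, d) for k, d in melody])
```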
Here we analyse the response of the bubble to a sequence of identical square pulses that encode the melody. We also record the response of the bubble as an audio file (see the supplementary file out.wav), and we discuss the aesthetic characteristics of the resulting audio recording. While the results presented in this section were obtained using positive square pressure pulses that increase the ambient pressure of the bubble in a stepwise manner, similar physical behaviour of the bubble was also observed using negative square pulses. Pulses of different peak amplitude and temporal duration were also considered. We established that the choice of pulse type, duration and peak amplitude influences the aesthetic characteristics of the output produced by the bubble. Therefore, instead of being guided by the physics of interaction between the bubble and acoustic pressure pulses, our choice of the model parameters is dictated by the goal of achieving appealing aesthetic characteristics. In Fig. 2(a) we plot several input square pulses corresponding to the notes of the melody (the black curve) and compare them with the respective acoustic response of the bubble (the red curve). The nonlinear response of oscillating bubbles has been the subject of many theoretical and experimental works (see [1; 2; 33; 34; 35] to cite a few), and we establish that the response of the bubble to the acoustic signature of the melody is also highly nonlinear. Moreover, we can see that the bubble continues oscillating during the periods of time between the individual pulses associated with the musical notes. This result speaks in favour of the ability of the oscillating bubble to have memory, which is an essential property of a nonlinear processing unit suitable for application in an ESN, as well as a prerequisite for artificial creativity (see Sect. D). In particular, we argue that a prolonged oscillation of the bubble excited by the sound of the composition is equivalent to the sustain effect used in musical instruments such as the electric guitar and piano, where the length of time a note audibly resonates is deliberately prolonged [29]. This behaviour can also correspond to the so-called nonlinearity with memory employed in advanced digital implementations of electric guitar distortion effects [36; 37]. There, an idealised nonlinear system with memory can be represented analytically as a Volterra series, where the output of the nonlinear system depends on the input to the system at all other times, thereby providing the ability to create fading memory [38]. Indeed, as shown in Fig. 2(b), in the frequency domain the nonlinearity of the bubble manifests itself as the enrichment of the spectrum with higher-order harmonics--compare the black curve corresponding to the spectrum of the acoustic signature of the melody with the red curve corresponding to the response of the bubble. In electric guitar performances, the appearance of higher-order harmonics is associated with fuzzy and gritty tones [29; 36; 39]. Consequently, the recorded output signal (see the supplementary file out.wav) is aesthetically perceived as an electric guitar cover of the original piece of music.

Figure 1: Notes of the piece of music used as the acoustic signal that drives the oscillations of the bubble.

To test our perception, we asked people with formal music education to listen to the output signal, and they confirmed that the melody indeed closely resembles an electric guitar version of the original melody.
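The paper's full bubble model is detailed in its appendices (Sects. A and B), which are not reproduced here. As a rough stand-in for readers who wish to experiment, the sketch below integrates the standard Rayleigh-Plesset equation for a mm-sized bubble in water subjected to a positive square pressure pulse; all parameter values are illustrative assumptions rather than the paper's actual settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Water at room temperature; R0 is a mm-sized bubble, as in the text (SI units)
rho, mu, sigma = 998.0, 1.0e-3, 0.0725   # density, viscosity, surface tension
p0, kappa = 101325.0, 1.4                # ambient pressure, polytropic exponent
R0 = 1.0e-3                              # equilibrium radius, m
pg0 = p0 + 2.0 * sigma / R0              # gas pressure at equilibrium

def pulse(t, amp=10e3, t_on=0.2e-3, t_off=1.2e-3):
    """A positive square pressure pulse added to the ambient pressure (assumed shape)."""
    return amp if t_on <= t < t_off else 0.0

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = pg0 * (R0 / R) ** (3.0 * kappa)
    p_ext = p0 + pulse(t)
    Rddot = ((p_gas - p_ext - 4.0 * mu * Rdot / R - 2.0 * sigma / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 3e-3), [R0, 0.0], max_step=1e-7, rtol=1e-8)
# Inspect the radial oscillation R(t): the bubble keeps ringing after the step,
# which is the memory-like sustain behaviour discussed above.
print(sol.y[0].min(), sol.y[0].max())
```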
We also used Audacity audio processing software, where we applied standard static nonlinear audio processing effects to the original melody to simulate the effect of an electric guitar [36; 39]. The goal of this procedure was to reproduce the lineshape and spectrum of the bubble response as well as to recreate the electric guitar effect of the bubble on the original composition. The result of the application of the digital effects is shown in Fig. 3, where we applied reverberation, which is a process relevant both to memory in artificial intelligence [40] and to cognitive memory [41], and a guitar distortion effect that enriches the spectrum with the higher-order harmonics of the fundamental frequency [29; 39] (the nonlinearity of this distortion effect is memoryless [36; 37], which justifies the addition of the reverberation effect to our model). The lineshape and the spectral composition of the digitally produced signal (the blue curve) resemble those of the response of the bubble (the red curve). Since the use of distortion is characteristic of the heavy metal style in music [39], the fact that the bubble reproduces this effect serves as an objective confirmation of its ability to creatively process music.
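For readers who wish to reproduce a comparable effect chain programmatically, here is a minimal sketch of a memoryless soft-clipping distortion followed by a simple feedback-delay reverberation. This is a generic stand-in for the Audacity effects described above, not the exact filters used in the paper.

```python
import numpy as np

fs = 44100
t = np.arange(int(1.0 * fs)) / fs
dry = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 440.0 * t)   # a decaying 440 Hz tone

def distort(x, drive=8.0):
    """Memoryless soft clipping: enriches the spectrum with odd higher-order harmonics."""
    return np.tanh(drive * x) / np.tanh(drive)

def reverb(x, fs, delay_s=0.05, feedback=0.4):
    """Feedback comb filter: a crude reverberation that supplies fading memory."""
    d = int(delay_s * fs)
    y = x.copy()
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]
    return y

wet = reverb(distort(dry), fs)
# The distorted spectrum shows peaks at 3x, 5x, ... the fundamental frequency:
spectrum = np.abs(np.fft.rfft(wet))
```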
## III Discussion Thus, we show that the ability of a single oscillating bubble to perform complex nonlinear tasks enables it to arrange existing music pieces similarly to a human. Creative music composition using the properties of liquids is an established style of music [42], where it is highly likely that nonlinear acoustic effects associated with bubbles trapped in liquids [5; 6] have already been exploited in some form. However, we look at the nonlinear properties of the bubble from a different angle. In our previous work we demonstrated the ability of oscillating bubbles to forecast chaotic time series similarly to an artificial neural network [18], which is a task that requires not only nonlinearity but also memory. Most importantly, unlike a bubble in water that produces some sounds that are then used by an artist to compose music, a bubble employed in an artificial neural network plays the active role of an analog data-processing unit that mimics the operation of a biological neuron. Although the memory capacity and the speed of data processing of such a unit are low compared with those of a typical digital computer, it has been demonstrated that analog computer systems can be more efficient than digital ones in solving certain classes of problems pertinent to the field of AI [18; 22; 43; 44]. Given this, we suggest that a single bubble can be employed as a building block of an analog AI system that can produce musical outputs with no or little human input. Unlike music transcription, which is an exact note-for-note rendition of a piece of music written for one instrument and played on another (i.e. piano to guitar), arrangement is a more creative process, where the style of the music is changed and new complex tones are added. Consequently, this creative task is especially challenging for AI systems because it requires a machine to have some of the key features of human intelligence, such as the ability to associate ideas, perceive, think, search for answers and criticise the results of its own work [45]. Yet, creativity relies on cognitive memory [30; 31] and is closely linked to cultural context and personality, also being influenced by the motivation and emotions of the artist [45]. Interestingly enough, the ability to appreciate, arrange and compose heavy metal music has also been associated with high intellectual abilities [46], which means that the production of heavy-metal-style music should be a particularly challenging task for AI. Thinking in terms of machine learning systems, researchers have attempted to create models where training inputs for achieving artificial creativity are represented by poorly defined data sets affected by perturbations and noise [47; 48]. The studies of such models have revealed that achieving artificial creativity may contradict the standard approach to training an artificial neural network, since perturbations associated with creativity interfere with the operation of the network, for instance, by altering the values of its connection weights [47].

Figure 2: (a) Waveforms corresponding to three notes of the input melody (the black curve) and the respective acoustic response of the bubble (the red curve). The green rectangle highlights the time range of a magnified portion of the signals plotted in the inset located below the main panel. (b) Fourier power spectra of the waveforms shown in the inset to panel (a).

Consequently, it has been suggested that the organisational principles of conventional neural networks should be changed to enable AI-powered creativity [48]. However, this assessment of the applicability of conventional artificial neural networks in the field of AI creativity does not take into account the recent advances in the development of analog (non-digital) counterparts of neural networks, where hardware and real-life physical systems that exhibit nonlinear dynamical behaviour are used as artificial neurons. While such physical computation systems have thus far mimicked the operation of some digital neural network architectures [22], it has been demonstrated that they hold the potential to surpass the abilities of a computer program in operations intended to simulate the functions of a biological brain [18; 22; 49]. The findings presented in this work contribute to the endeavour to demonstrate this potential. Finally, assuming that an AI system has been able to creatively generate a musical output comparable with that produced by a human, researchers face yet another problem: the quality of the AI-generated musical output is difficult to assess, since this process would rely on the appreciation of trained listeners [25], who may, in turn, hold a cultural bias [50] or a bias against computer-composed music [51]. Since it is challenging to critically judge the aesthetic quality of the electric guitar arrangement produced by a bubble, we found several covers of "In the Hall of the Mountain King" produced by professional electric guitar players [52; 53]. Even though those compositions sound more appealing than the version produced by the bubble, we have been able to distinguish the same warm and gritty tones created by the professional musicians in their performances. Thus, we leave it up to the readers to listen to the cited compositions and to comparatively judge the quality of the output produced by the bubble. However, we note that while it takes years of practice for a human to master an electric guitar, a single bubble in water appears to have an intrinsic ability to reproduce the sound of this musical instrument.
## IV Conclusions Using a rigorous numerical model of nonlinear oscillations of an acoustically driven single bubble in water, we have demonstrated that a bubble can produce audio outputs that aesthetically sound like creative musical outcomes produced by humans. Since past research demonstrated that oscillating bubbles can form a physics-based artificial neural network that can simulate certain functions of a biological brain, we suggest that either a single oscillating bubble or a network (cluster) of such bubbles could be used as an apparatus capable of reproducing some forms of artificial creativity. Achieving creativity in music has thus far been a challenging problem for modern AI systems to resolve. Therefore, we believe that our findings may contribute to further developments in this vital field of fundamental and applied research, and may also be of interest to artists who experiment with the acoustic properties of liquids. ###### Acknowledgements. ISM thanks Professor Sergey Suslov and Dr Andrey Pototsky (Swinburne University of Technology) for valuable discussions, and Professor Mikhail Kostylev (The University of Western Australia) for help with the calculation of the memory capacity.
2308.07240
The Cactus Group Property for Ordinal Sums of Disjoint Unions of Chains
We study the action of Bender-Knuth involutions on linear extensions of posets and identify LE-cactus posets, i.e. those for which the cactus relations hold. It was conjectured in [1] that d-complete posets are LE-cactus. Among the non-d-complete posets that are LE-cactus, one notable family is ordinal sums of antichains. In this paper, we characterize the LE-cactus posets in a more general family, namely ordinal sums of disjoint unions of chains.
Son Nguyen
2023-08-14T16:22:12Z
http://arxiv.org/abs/2308.07240v1
# The Cactus Group Property for Ordinal Sums of Disjoint Unions of Chains ###### Abstract We study the action of Bender-Knuth involutions on linear extensions of posets and identify LE-cactus posets, i.e. those for which the cactus relations hold. It was conjectured in [1] that d-complete posets are LE-cactus. Among the non-d-complete posets that are LE-cactus, one notable family is ordinal sums of antichains. In this paper, we characterize the LE-cactus posets in a more general family, namely ordinal sums of disjoint unions of chains. ###### Contents * 1 Introduction * 2 Definitions and Results * 2.1 Poset operations * 2.2 Bender-Knuth involutions * 2.3 Cactus relations * 2.4 Promotion and Evacuation * 2.5 Unions of Antichains * 3 Promotion and Evacuation * 4 Proof of main theorem * 5 Discussion ## 1 Introduction First introduced by Bender and Knuth in their study of enumerations of plane partitions and Schur polynomials [1], the _Bender-Knuth (BK) involutions_, a certain family of involutions on the set of column-strict (semi-standard) tableaux, have seen a wide range of applications across different areas of combinatorics. A classic application of BK involutions is on column-strict tableaux, where they are used to prove that Schur polynomials are symmetric. Informally, the BK involutions \(t_{i}\) act on a column-strict tableau by fixing an \(i\) (resp. \(i+1\)) when there is an \(i+1\) below (resp. \(i\) above), and then swapping the contents of the remaining numbers \(i\) and \(i+1\) in each row. A _linear extension_ of a poset \(P\) is a linear order \(f\) that is compatible with \(P\), that is, a bijective labeling \(f:P\to\{1,2,\ldots,|P|\}\) such that if \(a<_{P}b\) then \(f(a)<f(b)\). In [12], Stanley introduced an analog of the BK involutions \(t_{i}\) on linear extensions of a poset \(P\), which swaps two adjacent labels \(i\) and \(i+1\) when they label incomparable elements of \(P\) and fixes them otherwise. In this paper, we study a family of relations among the BK involutions called _cactus relations_, which present the cactus group. For any poset \(P\) with \(|P|=n\), it is easy to see that the BK involutions \(t_{1},\ldots,t_{n-1}\) acting on the linear extensions of \(P\) satisfy the relations \(t_{i}^{2}=1\) and \(t_{i}t_{j}=t_{j}t_{i}\) for \(|i-j|\geq 2\). On the other hand, for some posets \(P\) (discussed extensively in [1]), they _fail_ to satisfy the extra family of relations that define the _cactus group_, namely \((t_{i}q_{jk})^{2}=1\) for \(i+1<j<k\) where \(q_{jk}:=q_{k-1}q_{k-j}q_{k-1}\) and \(q_{i}:=t_{1}(t_{2}t_{1})\ldots(t_{i}t_{i-1}\ldots t_{1})\). See Section 2.3 for a fuller discussion of these relations. In [1], a poset \(P\) was called _LE-cactus_ whenever the cactus relations hold for the action of \(\{t_{i}\}\) on its linear extensions. Our main concern is the following question: for which posets do the BK involutions satisfy the cactus relations (defined in Definition 2.3)? The authors of [1] showed that Ferrers posets are LE-cactus. In [1], several other families of LE-cactus posets were found, such as shifted Ferrers posets, rooted trees, and other minuscule posets. The paper also made the following conjecture about _d-complete_ posets. We refer the readers to [13] and [14] for the precise definition of d-complete posets. **Conjecture 1.1** ([1, Conjecture 3.23]).: _d-complete posets are LE-cactus._ However, a large number of LE-cactus posets remained uncategorized.
For example, a larger family of posets that includes all d-complete posets is the _jeu-de-taquin_ posets. _Most_ (but not all) jeu-de-taquin posets are LE-cactus: among all jeu-de-taquin posets of size up to 9, only 1 is not LE-cactus. Furthermore, there are many other posets that are not jeu-de-taquin but are also LE-cactus; one notable family involves ordinal sums of antichains. In this paper, we characterize the LE-cactus posets in a more general family, namely ordinal sums of disjoint unions of chains. Since most posets in this family are not LE-cactus, our result complements Conjecture 1.1. Furthermore, in [1], the following result about disjoint unions of LE-cactus posets was proved. **Theorem 1.2** ([1, Theorem 3.17]).: _If \(P\) and \(Q\) are LE-cactus, then their disjoint union, \(P+Q\), is LE-cactus._ On the other hand, little is known about ordinal sums of LE-cactus posets, i.e. if \(P\) and \(Q\) are LE-cactus, when is \(P\oplus Q\) LE-cactus? Some progress was made in [1]. **Proposition 1.3** ([1, Proposition 3.18, 3.19, 3.20]).: _Let \(A_{m}\) be the antichain of size \(m\). If \(P\) is LE-cactus, then \(A_{1}\oplus P\) and \(A_{2}\oplus P\) are LE-cactus. However, for any non-empty finite poset \(P\), \(A_{m}\oplus P\) is not LE-cactus for \(m\geq 3\)._ Since disjoint unions of chains are LE-cactus, our main theorem about their ordinal sums is another step towards understanding the ordinal sum operation. Let us summarize our main result. Let \(\mathfrak{D}_{n}\), with \(n>1\), be the set of all posets with \(n\) elements that are disjoint unions of at least two chains. For completeness, we define \(\mathfrak{D}_{1}\) to include the poset \(C_{1}\) containing one element. Let \(\lambda=(\lambda_{1},\ldots,\lambda_{\ell})\) be a partition of \(n\) with \(\ell>1\). We define \(D_{\lambda}\) to be a disjoint union of \(\ell\) chains such that the \(i\)th chain has \(\lambda_{i}\) elements. **Definition 1.4**.: We say a triple \((p,n,q)\) is **cactus-compatible** if it satisfies the following condition: Let \(P\) and \(Q\) be any poset with \(|P|=p\) and \(|Q|=q\); let \(D_{\lambda}\) be any poset in \(\mathfrak{D}_{n}\). Let \(R=P\ \oplus\ D_{\lambda}\ \oplus\ Q\), and let \(f\) be any linear extension of \(R\). Then for all \(i+1<j<k\), the element \((t_{i}q_{jk})^{2}\) fixes the labels of the elements in \(D_{\lambda}\) when acting on \(f\). Our main theorem, Theorem 4.14, follows from the following three propositions. **Proposition 4.2**.: _For \(n>3\), \((p,n,q)\) is cactus-compatible if and only if_ 1. \(p>q+n-4\)_, or_ 2. \(p=q+n-4\) _and_ \(q\) _mod_ \(n\neq 1,3\)_, or_ 3. \(p=q+n-r\) _for_ \(r>4\) _and_ \(q\) _mod_ \(n>r-1\)_._ _In particular, if \(p\leq q-1\), \((p,n,q)\) is not cactus-compatible._ **Proposition 4.12**.: \((p,3,q)\) _is cactus-compatible if and only if_ 1. \(p>q-1\)_, or_ 2. \(p=q-1\) _and_ \(q\) _mod_ \(3\neq 1,3\)_._ _In particular, if \(p<q-1\), \((p,3,q)\) is not cactus-compatible._ **Proposition 4.13**.: _For all \(p,q\), \((p,1,q)\) and \((p,2,q)\) are cactus-compatible._ Our main theorem is the following characterization. **Theorem 4.14**.: _Consider a sequence of positive integers \(a_{0}=0,a_{1},\ldots,a_{\ell},a_{\ell+1}=0\) and a sequence of posets \(D_{\mu_{1}},\ldots,D_{\mu_{\ell}}\) where \(D_{\mu_{i}}\in\mathfrak{D}_{a_{i}}\).
The poset \(P=D_{\mu_{1}}\oplus\ldots\oplus D_{\mu_{\ell}}\) is LE-cactus if and only if for all \(i=1,2,\ldots,\ell\), the triples_ \[\left(\sum_{r=0}^{i-1}a_{r},\ a_{i},\ \sum_{r=i+1}^{\ell+1}a_{r}\right)\] _are cactus-compatible._ The paper is outlined as follows: in Section 2, we will review the key definitions and results on Bender-Knuth involutions, promotion and evacuation, and cactus relations. Then in Section 3, we will study the actions of promotion and evacuation on linear extensions of \(D_{\lambda}\). We will also introduce a way to think about these actions as permuting numbers on ordered set partitions. Finally, in Section 4, we will prove our main theorem through a few results. _Remark_.: We can think of a linear extension of a poset \(P\) as a bijection from the elements of \(P\) to the set \(\{1,\ldots,p\}\) where \(p=|P|\). Hence, in this paper, when we use the operation mod \(p\), we mean the result is in \(\{1,\ldots,p\}\) instead of \(\{0,\ldots,p-1\}\). **Acknowledgments** I would like to thank Vic Reiner for introducing the subject to me, and for his extremely valuable guidance and support. I would like to thank the 2022 University of Minnesota Combinatorics and Algebra REU, supported by RTG grant NSF/DMS-1745638, for organizing the program in which this project was initiated. I thank Judy Chiang, Anh Hoang, Matthew Kendall, Ryan Lynch, Benjamin Przybocki, Janabel Xia, Pasha Pylyavskyy, and Sylvester Zhang for their helpful discussions and comments. I thank Connor McCausland for their help with proofreading. ## 2 Definitions and Results ### Poset operations Let \(P\) and \(Q\) be any finite posets with the partial orders \(\leq_{P}\) and \(\leq_{Q}\) respectively. The _ordinal sum_ of \(P\) and \(Q\) is the poset \(R\) whose elements are those in \(P\cup Q\), and \(a\leq_{R}b\) if and only if * \(a,b\in P\) and \(a\leq_{P}b\), or * \(a,b\in Q\) and \(a\leq_{Q}b\), or * \(a\in P\) and \(b\in Q\). We denote the ordinal sum of \(P\) and \(Q\) as \(P\ \oplus\ Q\). On the other hand, the _disjoint union_ of \(P\) and \(Q\) is the poset \(R\) whose elements are those in \(P\cup Q\), and \(a\leq_{R}b\) if and only if * \(a,b\in P\) and \(a\leq_{P}b\), or * \(a,b\in Q\) and \(a\leq_{Q}b\). We denote the disjoint union of \(P\) and \(Q\) as \(P+Q\). Finally, a special family of posets that we will consider is the chain posets, in which the partial order is a total order. We denote the chain poset with \(n\) elements as \(C_{n}\). ### Bender-Knuth involutions A _linear extension_ of a finite poset \(P\) is a linear order \(f\) that is compatible with \(P\), that is, a bijective labeling \(f:P\to\{1,2,\ldots,|P|\}\) such that if \(a<_{P}b\) then \(f(a)<f(b)\). Let \(\mathcal{L}(P)\) be the set of linear extensions of a poset \(P\). The BK involutions act on linear extensions of any poset \(P\) as follows: each \(t_{i}:\mathcal{L}(P)\to\mathcal{L}(P)\) is a bijection that swaps two adjacent labels \(i\) and \(i+1\) when they label incomparable elements of \(P\) and fixes them otherwise.
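To make this action concrete, here is a minimal Python sketch of \(t_{i}\) acting on a linear extension; the encoding of the poset by a transitively closed relation is our own illustrative choice, not notation from the paper.

```python
def bk(i, ext, lt):
    """Bender-Knuth involution t_i on a linear extension.

    ext: tuple of poset elements listed in label order (ext[0] carries label 1);
    lt:  transitively closed set of pairs (a, b) meaning a < b in the poset.
    Swaps the labels i and i+1 when they sit on incomparable elements.
    """
    a, b = ext[i - 1], ext[i]
    if (a, b) in lt or (b, a) in lt:
        return ext                       # comparable elements: t_i fixes f
    e = list(ext)
    e[i - 1], e[i] = b, a                # incomparable: swap labels i and i+1
    return tuple(e)

# Example: the disjoint union of two 2-chains, a < b and c < d
lt = {("a", "b"), ("c", "d")}
ext = ("a", "b", "c", "d")               # a, b, c, d carry labels 1, 2, 3, 4
assert bk(1, ext, lt) == ext                     # a < b are comparable: fixed
assert bk(2, ext, lt) == ("a", "c", "b", "d")    # b, c incomparable: swapped
```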
Let us briefly point out that the BK involutions defined here are motivated by the classical BK involutions on _Young tableaux_. Recall that given a partition \(\lambda=(\lambda_{1},\ldots,\lambda_{\ell})\) of \(n\), the _Young diagram_ of \(\lambda\) is a collection of \(n\) left-justified boxes in \(\ell\) rows such that row \(i\) has \(\lambda_{i}\) boxes. A _standard Young tableau_ of shape \(\lambda\) is a filling of the Young diagram of \(\lambda\) with the numbers \(1,2,\ldots,n\) such that the numbers strictly increase from left-to-right in each row and from top-to-bottom in each column. The _Ferrers_ poset \(F_{\lambda}\) of \(\lambda\) is the set \(\{(i,j)\mid 1\leq j\leq\lambda_{i}\}\) with the partial order \((i,j)<(i,j+1)\) and \((i,j)<(i+1,j)\). Observe that every linear extension of a Ferrers poset can be viewed as a standard Young tableau, as shown in Figure 1. Thus, in the special case of Ferrers posets, the \(t_{i}\) defined above can be identified with a special case of the classical BK involutions on standard Young tableaux. We refer the readers to [CHK\({}^{+}\)23] for a more thorough discussion of these classical BK involutions.

Figure 1: Standard Young tableau and Ferrers poset

### Cactus relations The terminology _cactus group_ was introduced in work of Henriques and Kamnitzer [13], as a name for the fundamental group of the moduli space \(\overline{M}_{0,n+1}(\mathbb{R})\) of real genus zero stable curves with \(n+1\) marked points, appearing in work of Devadoss [4] and Davis, Januszkiewicz and Scott [1]. **Definition 2.1** (Cactus group).: The cactus group \(\mathcal{C}_{n}\) is generated by \(q_{[i,j]},1\leq i<j\leq n\), satisfying the relations 1. \(q_{[i,j]}^{2}=1\), 2. \(q_{[i,j]}q_{[k,l]}=q_{[k,l]}q_{[i,j]}\) if \(j<k\), 3. \(q_{[i,j]}q_{[k,l]}q_{[i,j]}=q_{[i+j-l,i+j-k]}\) if \(i\leq k<l\leq j\). We note that there is a well-defined group homomorphism from the cactus group \(\mathcal{C}_{n}\) to the symmetric group \(\mathfrak{S}_{n}\), sending \(q_{[i,j]}\) to \(\left(\begin{smallmatrix}i&i+1&\cdots&j\\ j&j-1&\cdots&i\end{smallmatrix}\right)\). For example, letting \((i,j,k,l)=(2,7,3,5)\), the third relation in Definition 2.1 becomes \(q_{[2,7]}q_{[3,5]}q_{[2,7]}=q_{[4,6]}\). Indeed, \[12345678\xrightarrow{q_{[2,7]}}17654328\xrightarrow{q_{[3,5]}}17456328 \xrightarrow{q_{[2,7]}}12365478\] and \[12345678\xrightarrow{q_{[4,6]}}12365478.\] We note this example because we will see a similar idea in the proof of our main theorem. In [1], Chmutov, Glick and Pylyavskyy proved that the action of BK involutions on column-strict tableaux satisfies the cactus relation by introducing an isomorphic presentation of the cactus group that will be used in this paper. **Theorem 2.2** ([1, Theorem 1.8]).: _The relations in Definition 2.1 for \(\mathcal{C}_{n}\) are equivalent to the following relations on the generators \(t_{i},i=1,\ldots,n-1\):_ 1. \(t_{i}^{2}=1\)_,_ 2. \((t_{i}t_{j})^{2}=1\) _if_ \(|i-j|>1\)_,_ 3. \((t_{i}q_{k-1}q_{k-j}q_{k-1})^{2}=1\) _if_ \(i+1<j<k\)_,_ _where we define_ \[q_{i}=t_{1}(t_{2}t_{1})\ldots(t_{i}t_{i-1}\ldots t_{1}). \tag{1}\] _For our convenience, we also define_ \[q_{jk}=q_{k-1}q_{k-j}q_{k-1} \tag{2}\] _so that the third relation becomes \((t_{i}q_{jk})^{2}=1\) if \(i+1<j<k\)._ On linear extensions of posets, the first two relations in Theorem 2.2 always hold. The last relation, \((t_{i}q_{jk})^{2}=1\) if \(i+1<j<k\), does not. For example, one can check that the relation \((t_{1}q_{34})^{2}=1\) does not hold for the linear extension in Figure 2. This motivates the following definition. **Definition 2.3** ([1, Definition 3.10]).: Call the relation \((t_{i}q_{jk})^{2}=1\) if \(i+1<j<k\) the **cactus relation**, and call posets on which this relation holds **LE-cactus** posets.
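The worked example in Section 2.3 is easy to check mechanically; the following sketch verifies it using the image of \(q_{[i,j]}\) in \(\mathfrak{S}_{n}\), namely reversal of the window \(i,\ldots,j\).

```python
def q(i, j, s):
    """Image of q_[i,j] in the symmetric group: reverse positions i..j (1-indexed)."""
    return s[:i - 1] + s[i - 1:j][::-1] + s[j:]

s = "12345678"
# Relation (3) of Definition 2.1 with (i, j, k, l) = (2, 7, 3, 5):
lhs = q(2, 7, q(3, 5, q(2, 7, s)))   # q_[2,7] q_[3,5] q_[2,7], rightmost applied first
rhs = q(4, 6, s)                      # q_[2+7-5, 2+7-3] = q_[4,6]
assert lhs == rhs == "12365478"
```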
### Promotion and Evacuation In [14], Stanley gave two operations on linear extensions: promotion and evacuation. These will be extremely useful for understanding our main result. First, we introduce _promotion_ \(\partial_{i}\). **Definition 2.4** (Promotion).: Let \(1\leq i\leq|P|\), promotion \(\partial_{i}:\mathcal{L}(P)\rightarrow\mathcal{L}(P)\) is a bijection that sends a linear extension \(f\) of \(P\) to \(f^{\prime}=\partial_{i}(f)\) by the following procedure: 1. Let \(t_{1}\in P\) satisfy \(f(t_{1})=1\) and remove the label \(1\) from \(t_{1}\). 2. Among the elements of \(P\) covering \(t_{1}\), let \(t_{2}\) be the one with the smallest label \(f(t_{2})\), "slide" this label down to \(t_{1}\), i.e. remove this label from \(t_{2}\) and place it at \(t_{1}\). 3. Repeat the procedure until reaching an element \(t_{k}\) such that either \(t_{k}\) is a maximal element or no element covering \(t_{k}\) has label less than or equal to \(i\). 4. Label \(t_{k}\) with \(i+1\) and decrease every label from \(2\) to \(i+1\) by one. Note that at this point there might be two labels \(i+1\); we only decrease the one that labels \(t_{k}\). The (saturated) chain \(t_{1}<t_{2}<\ldots<t_{k}\) above is called the _promotion chain_. Figure 3 shows an example of the promotion \(\partial_{5}\) of the linear extension \(f\) shown in Figure 3(a). The red elements in Figure 3(f) form the promotion chain. Note that before the final step (Figure 3(e)), there are two labels \(6\). In the final step, we only decrease the one labeling the top element of the promotion chain (the label \(6\) in red). While it is not clear from the definition of promotions that they are related to Bender-Knuth moves \(t_{i}\), Stanley showed that they are indeed the combinatorial interpretation for \(t_{i-1}t_{i-2}\ldots t_{1}\).

Figure 3: Promotion \(\partial_{5}\)

Figure 2: A non-LE-cactus poset

**Theorem 2.5** ([14]).: _For any poset \(P\), \(\partial_{i}=t_{i-1}t_{i-2}\ldots t_{1}\) for \(1<i\leq|P|\)._ For example, Figure 4 shows the action of \(t_{4}t_{3}t_{2}t_{1}\) on the same linear extension as in Figure 3(a), and the result in Figure 4(f) is the same as in Figure 3(f). Thus, we can define \(\partial_{i}=t_{i-1}t_{i-2}\ldots t_{1}\) for \(i>1\) and \(\partial_{1}=1\), and we can express the operator appearing in (1) as \(q_{k-1}=\partial_{1}\partial_{2}\ldots\partial_{k}\). With the definition of promotion, the combinatorial interpretation of \(q_{i}\), called _evacuation_, can be described easily. **Definition 2.6** (Evacuation).: Let \(1\leq i\leq|P|\), evacuation \(q_{i}:\mathcal{L}(P)\to\mathcal{L}(P)\) is a bijection that sends a linear extension \(f\) of \(P\) to \(f^{\prime}=q_{i}(f)\) by the following procedure: 1. Apply \(\partial_{i+1}\) and "freeze" the label \(i+1\). 2. Apply \(\partial_{i}\) and "freeze" the label \(i\). 3. Continue applying promotion until every label from \(1\) to \(i+1\) is frozen. We will occasionally refer to \(\partial_{i+1}\) as the first round of promotion in \(q_{i}\). Similarly, \(\partial_{i}\) is the second round of promotion and so on. Thus, \(q_{i}\) has \(i+1\) rounds of promotion. Figure 5 shows an example of evacuation. We want to point out that there is an offset by \(1\) in the definition of evacuation, as \(q_{i}\) includes \(i+1\) promotions \(\partial_{i+1},\partial_{i},\ldots,\partial_{1}\). This is, unfortunately, unavoidable because the action of \(t_{i-1}t_{i-2}\ldots t_{1}\) affects the labels from \(1\) up to \(i\).
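Theorem 2.5 and the procedure of Definition 2.6 translate directly into code. The sketch below builds on the `bk` helper from the earlier sketch and checks the involutivity of evacuation (stated as Proposition 2.7 below) on a small example.

```python
def promote(i, ext, lt):
    """Promotion via Theorem 2.5: the composite t_{i-1} ... t_2 t_1, t_1 applied first."""
    for j in range(1, i):
        ext = bk(j, ext, lt)
    return ext

def evacuate(i, ext, lt):
    """Evacuation q_i: the rounds of promotion d_{i+1}, d_i, ..., d_2 applied in turn."""
    for top in range(i + 1, 1, -1):
        ext = promote(top, ext, lt)
    return ext

# On the two-chain poset from the previous sketch, each q_i is an involution:
lt = {("a", "b"), ("c", "d")}
for ext in [("a", "c", "b", "d"), ("a", "b", "c", "d"), ("c", "a", "d", "b")]:
    for i in range(1, 4):
        assert evacuate(i, evacuate(i, ext, lt), lt) == ext   # q_i^2 = 1
```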
We refer the reader to [14] for further details about promotion and evacuation. What we want to emphasize is the following property. **Proposition 2.7** ([14, Theorem 2.1]).: _Evacuation is an involution, that is \(q_{i}^{2}=1\)._ For example, we invite the readers to apply \(q_{4}\) to the linear extension in Figure 5(f). By Proposition 2.7, one should expect the result to be the linear extension in Figure 5(a). **Corollary 2.8**.: _The operator \(q_{jk}\) defined in (2) is also an involution, that is \(q_{jk}^{2}=1\)._ Thus, the cactus relation \((t_{i}q_{jk})^{2}=1\) is equivalent to \(t_{i}\) and \(q_{jk}\) commuting.

Figure 4: The action of \(t_{4}t_{3}t_{2}t_{1}\)

### Unions of Antichains Now we introduce our main concern of this paper. **Definition 2.9**.: For \(n>1\), we define \(\mathfrak{D}_{n}\) to be the set of all posets with \(n\) elements that are unions of at least two chains. For example, Figure 6 shows the two posets in \(\mathfrak{D}_{3}\). For completeness, we define \(\mathfrak{D}_{1}\) to include the poset \(C_{1}\) containing one element. _Remark_.: If a poset is a single chain, all Bender-Knuth moves become the identity, which is not interesting. Thus, we want at least two chains in our definition. **Definition 2.10**.: Let \(\lambda=(\lambda_{1},\ldots,\lambda_{\ell})\) be a partition of \(n\) with \(\ell>1\). We define \(D_{\lambda}\) to be a union of \(\ell\) chains such that the \(i\)th chain has \(\lambda_{i}\) elements, that is, \(D_{\lambda}=C_{\lambda_{1}}+C_{\lambda_{2}}+\ldots+C_{\lambda_{\ell}}\). _Remark_.: The order of the chains does not matter, so we can assume that the lengths of the chains are decreasing and associate each poset in \(\mathfrak{D}_{n}\) with a partition of \(n\). In other words, \(\mathfrak{D}_{n}=\{D_{\lambda}\ |\ \lambda\vdash n,\ \ell(\lambda)>1\}\) for \(n>1\), and \(\mathfrak{D}_{1}=\{D_{(1)}\}\). Observe that a linear extension \(f\) of \(D_{\lambda}=C_{\lambda_{1}}+\cdots+C_{\lambda_{\ell}}\) is completely determined by the _ordered set partition_ \((L_{1},\ldots,L_{\ell})\) of the set \(\{1,2,\ldots,n\}\) having blocks \(L_{i}:=f^{-1}(C_{i})\). ## 3 Promotion and Evacuation In this section, we will analyze the behavior of linear extensions of disjoint unions of chains under promotion and evacuation. First, we study the effect of promotion on linear extensions of \(D_{\lambda}\) through the corresponding ordered set partitions. **Lemma 3.1**.: _Let \(f\) be a linear extension of \(D_{\lambda}\in\mathfrak{D}_{n}\). If a chain in \(D_{\lambda}\) contains the label \(i\), then after \(\partial_{k}\), this chain contains the label \(i\) if \(i>k\), and it contains the label \(i-1\pmod{k}\) if \(i\leq k\). In particular, this chain contains the label \(k\) if \(i=1\)._ Proof.: This is apparent from the definition of promotion. Since \(\partial_{k}\) does not affect any label greater than \(k\), if \(i>k\), this label stays at the same element after \(\partial_{k}\). If \(1<i\leq k\), then this label stays in the same chain during promotion because the chains are disjoint, and it is decreased by one in the final step of promotion. Hence, the chain that originally contained \(i\) will contain \(i-1\) after promotion \(\partial_{k}\). Finally, if \(i=1\), then this label is removed at the beginning of promotion. However, because the chains are disjoint, the final element of the promotion chain is in the same chain. This element is eventually labeled \(k+1\) and then decreased to \(k\).
Thus, the chain that originally contains \(1\) will contain \(k\) after promotion \(\partial_{k}\). In terms of ordered set partitions, the action of \(\partial_{k}\) can be described as follows: keep the numbers from \(k+1\) to \(n\) in the same blocks. Then, send \(1\) to the block containing \(2\), send \(2\) to the block containing \(3\), \(\ldots\), and send \(k\) to the block containing \(1\). In other words, promotion is left multiplication by \((k\ldots 21)\). Figure 7 shows an example of the correspondence between applying \(\partial_{5}\) to the linear extension and applying \((54321)\) to the ordered set partition.

Figure 7: Promotion \(\partial_{5}\) on a linear extension and the corresponding ordered set partition

Next, we will look at the effect of evacuation. **Lemma 3.2**.: _Let \(f\) be a linear extension of \(D_{\lambda}\in\mathfrak{D}_{n}\). If a chain in \(D_{\lambda}\) contains the label \(i\), then after \(q_{k-1}\), this chain contains the label \(i\) if \(i>k\) and \(k+1-i\) if \(i\leq k\)._ Proof.: Recall that \(q_{k-1}\) actually includes \(k\) rounds of promotion: \(\partial_{k},\partial_{k-1},\ldots,\partial_{1}\). Since \(q_{k-1}\) does not affect any label greater than \(k\), if \(i>k\), this label stays at the same element after \(q_{k-1}\). If \(1\leq i\leq k\), let \(P_{j}\) be the chain in \(D_{\lambda}\) that originally contained the label \(i\). By Lemma 3.1, after the first \(i-1\) rounds of promotion \(\partial_{k},\partial_{k-1},\ldots,\partial_{k-i+2}\), \(P_{j}\) contains the label \(1\). Thus, after applying \(\partial_{k-i+1}\) in the \(i\)th round of promotion, \(P_{j}\) contains the label \(k-i+1\). None of the subsequent rounds of promotion affect the label \(k-i+1\), so this label stays in \(P_{j}\). This completes the proof. Similar to promotion, in terms of ordered set partitions, the action of \(q_{k-1}\) can be described as follows: keep the numbers from \(k+1\) to \(n\) in the same blocks. Then, send \(1\) to the block containing \(k\), send \(2\) to the block containing \(k-1\), \(\ldots\), and send \(k\) to the block containing \(1\). In other words, evacuation is left multiplication by \(\left(\begin{smallmatrix}1&2&\cdots&k\\ k&k-1&\cdots&1\end{smallmatrix}\right)\). Figure 8 shows an example of the correspondence between applying \(q_{4}\) to the linear extension and applying \(\left(\begin{smallmatrix}1&2&3&4&5\\ 5&4&3&2&1\end{smallmatrix}\right)\) to the ordered set partition.
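Lemmas 3.1 and 3.2 say that, on ordered set partitions, promotion and evacuation act by explicit label permutations. Here is a minimal sketch of that action (the block encoding is our own illustrative choice):

```python
def promotion_perm(k, n):
    """d_k on labels (Lemma 3.1): 1 -> k, i -> i-1 for 1 < i <= k, labels > k fixed."""
    return {i: (k if i == 1 else i - 1) if i <= k else i for i in range(1, n + 1)}

def evacuation_perm(k, n):
    """q_{k-1} on labels (Lemma 3.2): i -> k+1-i for i <= k, labels > k fixed."""
    return {i: k + 1 - i if i <= k else i for i in range(1, n + 1)}

def act(perm, blocks):
    """Left action on an ordered set partition (one block of labels per chain)."""
    return tuple(frozenset(perm[x] for x in B) for B in blocks)

# D_(2,2) with chains carrying labels {1, 3} and {2, 4}; apply d_3 = (321):
blocks = (frozenset({1, 3}), frozenset({2, 4}))
assert act(promotion_perm(3, 4), blocks) == (frozenset({3, 2}), frozenset({1, 4}))
```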
## 4 Proof of main theorem In this section, we will prove the main theorem. First of all, our main concern will be posets \(R=P\ \oplus\ D_{\lambda}\ \oplus\ Q\) where \(D_{\lambda}\in\mathfrak{D}_{n}\), and \(P\) and \(Q\) are any finite posets with \(|P|=p\) and \(|Q|=q\). Since \(R\) is an ordinal sum, for any linear extension \(f\) of \(R\), \(f^{-1}(P)=\{1,\ldots,p\}\), \(f^{-1}(D_{\lambda})=\{p+1,\ldots,p+n\}\), and \(f^{-1}(Q)=\{p+n+1,\ldots,p+n+q\}\). Thus, a linear extension \(f\) of \(R\) induces a linear extension \(g\) of \(D_{\lambda}\) defined by \(g(x)=f(x)-p\) for \(x\in D_{\lambda}\). **Definition 4.1**.: We call the induced map defined above \(I\). Figure 9 gives an example of this map.

Figure 9: An example of the induced map \(I\)

The goal of the main theorem is to characterize the conditions on \(p\) and \(q\) such that \((t_{i}q_{jk})^{2}\) fixes the labels of the elements in \(D_{\lambda}\) for all \(i+1<j<k\), that is, \(I((t_{i}q_{jk})^{2}(f))=I(f)\) for all \(i+1<j<k\). Recall from Section 1 the following definition. **Definition 1.4**.: We say a triple \((p,n,q)\) is **cactus-compatible** if it satisfies the following condition: Let \(P\) and \(Q\) be any poset with \(|P|=p\) and \(|Q|=q\); let \(D_{\lambda}\) be any poset in \(\mathfrak{D}_{n}\). Let \(R=P\ \oplus\ D_{\lambda}\ \oplus\ Q\), and let \(f\) be any linear extension of \(R\). Then for all \(i+1<j<k\), the element \((t_{i}q_{jk})^{2}\) fixes the labels of the elements in \(D_{\lambda}\) when acting on \(f\). First, we prove the following proposition. **Proposition 4.2**.: _For \(n>3\), \((p,n,q)\) is cactus-compatible if and only if_ 1. \(p>q+n-4\)_, or_ 2. \(p=q+n-4\) _and_ \(q\) _mod_ \(n\neq 1,3\)_, or_ 3. \(p=q+n-r\) _for_ \(r>4\) _and_ \(q\) _mod_ \(n>r-1\)_._ _In particular, if \(p\leq q-1\), \((p,n,q)\) is not cactus-compatible._ We will need a few lemmas to prove this proposition. **Lemma 4.3**.: _The only moves that affect the labels in \(D_{\lambda}\) are \(t_{p+1},\ldots,t_{p+n-1}\)._ Proof.: Since \(R\) is an ordinal sum, for any linear extension \(f\) of \(R\), \(f^{-1}(P)=\{1,\ldots,p\}\), \(f^{-1}(D_{\lambda})=\{p+1,\ldots,p+n\}\), and \(f^{-1}(Q)=\{p+n+1,\ldots,p+n+q\}\). Thus, \(t_{i}\) does not affect the labels of the elements in \(D_{\lambda}\) for \(i<p\) and \(i>p+n\). Furthermore, since \(f^{-1}(p)\in P\) and \(f^{-1}(p+1)\in D_{\lambda}\), \(p\) and \(p+1\) always label comparable elements, so \(t_{p}\) does nothing on \(f\). Similarly, \(t_{p+n}\) also does nothing. Hence, the only moves that affect the labels in \(D_{\lambda}\) are \(t_{p+1},\ldots,t_{p+n-1}\). **Lemma 4.4**.: _Evacuation \(q_{k-1}\) affects the labels in \(D_{\lambda}\) as follows._ 1. _If_ \(k-1\leq p\)_,_ \(I(q_{k-1}(f))=I(f)\)_._ 2. _If_ \(p<k-1<p+n\)_,_ \(I(q_{k-1}(f))=q_{k-1-p}(I(f))\)_._ 3. _If_ \(p+n\leq k-1\)_,_ \(I(q_{k-1}(f))=q_{n-1}\partial_{n}^{k-p-n}(I(f))\)_._ Proof.: 1) is clear since if \(k-1\leq p\), \(q_{k-1}\) only consists of \(t_{1},\ldots,t_{p}\), which do not affect the labels in \(D_{\lambda}\). To prove 2), note that in this case, \(q_{k-1}=\partial_{1}\ldots\partial_{p}\partial_{p+1}\ldots\partial_{k}\). Again, \(\partial_{1}\ldots\partial_{p}\) only consist of \(t_{1},\ldots,t_{p}\), so they do not affect the labels in \(D_{\lambda}\). On the other hand, in \(\partial_{p+v}=t_{p+v-1}\ldots t_{p+1}t_{p}\ldots t_{1}\), the first \(p\) terms \(t_{p}\ldots t_{1}\) do not affect the labels in \(D_{\lambda}\) either. Thus, the only terms in \(\partial_{p+v}\) that affect the labels in \(D_{\lambda}\) are \(t_{p+v-1}\ldots t_{p+1}\), so the terms in \(q_{k-1}\) that actually affect the labels in \(D_{\lambda}\) are \(t_{p+1}(t_{p+2}t_{p+1})\ldots(t_{k-1}\ldots t_{p+1})\). On \(I(f)\), this is equivalent to \(t_{1}(t_{2}t_{1})\ldots(t_{k-1-p}\ldots t_{1})\), which is \(q_{k-1-p}\). Proving 3) is similar. Note that in this case, \(q_{k-1}=\partial_{1}\ldots\partial_{p}\partial_{p+1}\ldots\partial_{p+n} \partial_{p+n+1}\ldots\partial_{k}\). Removing the terms that do not affect the labels in \(D_{\lambda}\), we obtain \[t_{p+1}(t_{p+2}t_{p+1})\ldots(t_{p+n-1}\ldots t_{p+1})(t_{p+n-1}\ldots t_{p+1} )^{k-p-n}\] where the first part \(t_{p+1}(t_{p+2}t_{p+1})\ldots(t_{p+n-1}\ldots t_{p+1})\) comes from \(\partial_{p+1}\ldots\partial_{p+n}\) and the second part \((t_{p+n-1}\ldots t_{p+1})^{k-p-n}\) comes from \(\partial_{p+n+1}\ldots\partial_{k}\).
On \(I(f)\), the first part is equivalent to \(t_{1}(t_{2}t_{1})\ldots(t_{n-1}\ldots t_{1})\), which is \(q_{n-1}\), while the second part is equivalent to \((t_{n-1}\ldots t_{1})^{k-p-n}\), which is \(\partial_{n}^{k-p-n}\). This gives the desired product. Recall from Section 3 that we can associate each linear extension of \(D_{\lambda}\) with an ordered set partition of \(n\) numbers. Furthermore, \(\partial_{k}\) is equivalent to left multiplication by \((k\ldots 21)\) while \(q_{k-1}\) is equivalent to left multiplication by \(\left(\begin{smallmatrix}1&2&\cdots&k\\ k&k-1&\cdots&1\end{smallmatrix}\right)\). We claim that studying the action of promotion and evacuation on the ordered set partitions is enough to prove Proposition 4.2. **Lemma 4.5**.: \((t_{i}q_{jk})^{2}\) _fixes the labels in \(D_{\lambda}\) for any linear extension \(f\) of \(R\) if and only if it is equivalent to the identity permutation on ordered set partitions._ Proof.: If \((t_{i}q_{jk})^{2}\) is equivalent to the identity permutation on ordered set partitions, then for any linear extension \(f\), let \(L_{m}\) be the set of numbers labeling the \(m\)th chain of \(D_{\lambda}\) in \(I(f)\). Since \((t_{i}q_{jk})^{2}\) is the identity permutation, after \((t_{i}q_{jk})^{2}\), \(L_{m}\) is the same set. \(L_{m}\) uniquely determines the labeling of the \(m\)th chain, so \((t_{i}q_{jk})^{2}\) fixes \(I(f)\). Thus, it fixes the labels of \(D_{\lambda}\) in \(f\). Conversely, if \((t_{i}q_{jk})^{2}\) is equivalent to \(w\neq\mathrm{id}\), then there exists a number \(\ell\) such that \(\ell\neq w(\ell)\). Since \(D_{\lambda}\) has at least two chains, the ordered set partitions have at least two blocks. Thus, we can put \(\ell\) and \(w(\ell)\) in two different blocks. This gives a linear extension that is not fixed by \((t_{i}q_{jk})^{2}\) because the label \(w(\ell)\) is in different chains before and after \((t_{i}q_{jk})^{2}\). From Lemma 4.4, we can easily deduce the effect of \(q_{k-1}\) on the ordered set partition corresponding to \(I(f)\). **Lemma 4.6**.: _Evacuation \(q_{k-1}\) affects the ordered set partition corresponding to \(I(f)\) as follows._ 1. _If_ \(k-1\leq p\)_,_ \(q_{k-1}\) _does nothing to the ordered set partition._ 2. _If_ \(p<k-1<p+n\)_,_ \(q_{k-1}\) _is left multiplication by_ \(\left(\begin{smallmatrix}1&2&\cdots&k-p\\ k-p&k-p-1&\cdots&1\end{smallmatrix}\right)\)_._ 3. _If_ \(p+n\leq k-1\)_,_ \(q_{k-1}\) _is left multiplication by_ \(\left(\begin{smallmatrix}1&2&\cdots&n\\ n&n-1&\cdots&1\end{smallmatrix}\right)(n\ldots 21)^{k-p-n}\)_._ Thus, we have a nice description of the action of \(q_{jk}\) for some values of \(j<k\). **Corollary 4.7**.: _Suppose \(k=p+n+r\) for some \(r\geq 0\) and \(p<k-j<p+n\). Let \(m=r\) mod \(n\) and \(\ell=k-j-p\). Then \(q_{jk}\) is left multiplication by_ \[\left(\begin{smallmatrix}m-\ell&m-\ell+1&\cdots&m-1&m\\ m&m-1&\cdots&m-\ell+1&m-\ell\end{smallmatrix}\right),\] _where all values in the matrix are taken mod \(n\)._ Proof.: By Lemma 4.4, \(q_{jk}=q_{k-1}q_{k-j}q_{k-1}\) is equivalent to \(q_{n-1}\partial_{n}^{r}\,q_{\ell}\,q_{n-1}\partial_{n}^{r}\) on \(I(f)\).
By Lemma 4.6, this is equivalent to left multiplication by \[\left(\begin{smallmatrix}1&2&\cdots&n\\ n&n-1&\cdots&1\end{smallmatrix}\right)(n\ldots 21)^{r}\left(\begin{smallmatrix}1&2&\cdots&\ell+1\\ \ell+1&\ell&\cdots&1\end{smallmatrix}\right)\left(\begin{smallmatrix}1&2&\cdots&n\\ n&n-1&\cdots&1\end{smallmatrix}\right)(n\ldots 21)^{r}.\] Since \((n\ldots 21)^{n}=1\), we can simplify this to \[\left(\begin{smallmatrix}1&2&\cdots&n\\ n&n-1&\cdots&1\end{smallmatrix}\right)(n\ldots 21)^{m}\left(\begin{smallmatrix}1&2&\cdots&\ell+1\\ \ell+1&\ell&\cdots&1\end{smallmatrix}\right)\left(\begin{smallmatrix}1&2&\cdots&n\\ n&n-1&\cdots&1\end{smallmatrix}\right)(n\ldots 21)^{m}.\] Showing that this is equal to \(\left(\begin{smallmatrix}m-\ell&m-\ell+1&\cdots&m-1&m\\ m&m-1&\cdots&m-\ell+1&m-\ell\end{smallmatrix}\right)\) is just a matter of keeping track of the permutation: \[1,\ldots,m-\ell,\ldots,m,m+1,\ldots,n\xrightarrow{(n\ldots 21)^{m}}n-m+1,\ldots,n-\ell,\ldots,n,1,\ldots,n-m\] \[\xrightarrow{\left(\begin{smallmatrix}1&2&\cdots&n\\ n&n-1&\cdots&1\end{smallmatrix}\right)}m,\ldots,\ell+1,\ldots,1,n,\ldots,m+1\xrightarrow{\left(\begin{smallmatrix}1&2&\cdots&\ell+1\\ \ell+1&\ell&\cdots&1\end{smallmatrix}\right)}m,\ldots,1,\ldots,\ell+1,n,\ldots,m+1\] \[\xrightarrow{(n\ldots 21)^{m}}n,\ldots,n+1-m,\ldots,n+\ell+1-m,n-m,\ldots,1\xrightarrow{\left(\begin{smallmatrix}1&2&\cdots&n\\ n&n-1&\cdots&1\end{smallmatrix}\right)}1,\ldots,m,\ldots,m-\ell,m+1,\ldots,n.\] For example, the action of \(q_{jk}\) on \(D_{(4,2,2)}\) where \(m=2\) and \(\ell=5\) is left multiplication by \(\left(\begin{smallmatrix}5&6&7&8&1&2\\ 2&1&8&7&6&5\end{smallmatrix}\right)\). We can think of the action of \(q_{jk}\) as "flipping" the interval between \(m-\ell\) and \(m\). On the other hand, \(t_{i}\) is simply flipping \(i-p\) and \(i-p+1\). Recall that the condition \((t_{i}q_{jk})^{2}=1\) is equivalent to \(t_{i}\) and \(q_{jk}\) commuting. Since \(t_{i}\) and \(q_{jk}\) both flip intervals, if they flip disjoint intervals, they do commute. However, if the intervals they flip are not disjoint, most of the time they do not commute. This allows us to prove Proposition 4.2.
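The interval-reversal formula of Corollary 4.7 can also be confirmed by brute-force composition of the permutations from Lemma 4.6; here is a sketch for the paper's \(D_{(4,2,2)}\) example (\(n=8\), \(m=2\), \(\ell=5\)).

```python
def reverse_first(k, n):
    """i -> k+1-i for i <= k, identity above k (the reversal maps of Lemma 4.6)."""
    return {i: k + 1 - i if i <= k else i for i in range(1, n + 1)}

def cycle_pow(n, m):
    """(n ... 2 1)^m: i -> i - m mod n, with values taken in {1, ..., n}."""
    return {i: (i - m - 1) % n + 1 for i in range(1, n + 1)}

def compose(f, g):
    """f after g: apply g first, then f."""
    return {x: f[g[x]] for x in g}

n, m, ell = 8, 2, 5
R, Rl, s = reverse_first(n, n), reverse_first(ell + 1, n), cycle_pow(n, m)
# The simplified product R s R_{l+1} R s, rightmost factor applied first:
q_jk = compose(R, compose(s, compose(Rl, compose(R, s))))
# Corollary 4.7: reversal of the cyclic interval m-ell, ..., m (taken mod n):
interval = [((m - ell + t - 1) % n) + 1 for t in range(ell + 1)]
expected = {i: i for i in range(1, n + 1)}
for a, b in zip(interval, reversed(interval)):
    expected[a] = b
assert q_jk == expected   # here: 5 6 7 8 1 2 maps to 2 1 8 7 6 5
```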
**Lemma 4.8**.: _If \(i\leq p\) or \(k-j\leq p\), then \((t_{i}q_{jk})^{2}\) fixes the labels in \(D_{\lambda}\)._ Proof.: If \(i\leq p\), then \(t_{i}\) does not affect the labels in \(D_{\lambda}\), so the only action that affects these labels is \(q_{jk}^{2}\), which is the identity by Corollary 2.8. If \(k-j\leq p\), then we claim that \(q_{jk}\) fixes the labels in \(D_{\lambda}\). Since \(k-j\leq p\), only \(q_{k-1}^{2}\) affects the labels in \(D_{\lambda}\). Fortunately, \(q_{k-1}^{2}\) is also the identity by Proposition 2.7. Lemma 4.8 implies condition 1 of Proposition 4.2. **Corollary 4.9**.: _If \(p>q+n-4\), then \((t_{i}q_{jk})^{2}\) fixes the labels in \(D_{\lambda}\) for all \(i+1<j<k\)._ Proof.: If \(p>q+n-4\), then \(2p+4>p+n+q\). By Lemma 4.8, if \(i\leq p\), then the statement is true. If \(i\geq p+1\), then \(j\geq p+3\), and since \(k\leq|R|=p+n+q<2p+4\), \(k-j<p+1\). Hence, the statement is also true by Lemma 4.8. Now we prove condition 2 of Proposition 4.2. **Lemma 4.10**.: _If \(p=q+n-4\), then \((t_{i}q_{jk})^{2}\) fixes the labels in \(D_{\lambda}\) for all \(i+1<j<k\) if and only if \(q\) mod \(n\neq 1,3\)._ Proof.: The only tuple \((i,j,k)\) such that \(i+1<j<k\) and \(i,k-j>p\) is \((p+1,p+3,2p+4)\). For this tuple, \(k-j=p+1\) and \((k-p-n)\) mod \(n=q\) mod \(n\). Thus, \(t_{i}\) is equivalent to left multiplication by \((12)\) on the numbers of the ordered set partition, and \(q_{jk}\) is equivalent to left multiplication by \((m-1,m)\) where \(m=q\) mod \(n\) by Corollary 4.7. Clearly, \((12)\) and \((m-1,m)\) commute if and only if \(m\neq 1,3\). Finally, we prove condition 3 of Proposition 4.2. **Lemma 4.11**.: _If \(p=q+n-r\) for \(r>4\), then \((t_{i}q_{jk})^{2}\) fixes the labels in \(D_{\lambda}\) for all \(i+1<j<k\) if and only if \(q\) mod \(n>r-1\)._ Proof.: Let \(m=q\) mod \(n\). First, we consider \(r=5\). In this case, if \(m=1,3\), then for \((i,j,k)=(p+1,p+4,2p+5)\), \(t_{i}\) and \(q_{jk}\) do not commute by the same argument as in the proof of Lemma 4.10. If \(m=2\), then for \((i,j,k)=(p+1,p+3,2p+5)\), \(t_{i}\) is equivalent to left multiplication by \((12)\) on the ordered set partition, and \(q_{jk}\) is equivalent to left multiplication by \(\left(\begin{smallmatrix}n&1&2\\ 2&1&n\end{smallmatrix}\right)\) by Corollary 4.7. Clearly, these two do not commute. The same tuple and argument apply for the case \(m=4\) since \((12)\) and \(\left(\begin{smallmatrix}2&3&4\\ 4&3&2\end{smallmatrix}\right)\) also do not commute. Conversely, when \(m>4\), if \(i=p+1\), the options for \((j,k)\) are \((p+3,2p+5)\), \((p+3,2p+4)\), and \((p+4,2p+5)\). In any case, \(q_{jk}\) only affects \(m,m-1,m-2>4-2=2\) while \(t_{i}\) only affects \(1\) and \(2\). Thus, \(t_{i}\) and \(q_{jk}\) commute. If \(i=p+2\), then the only option for \((j,k)\) is \((p+4,2p+5)\), so \(q_{jk}\) only affects \(m,m-1>4-1=3\) while \(t_{i}\) only affects \(2\) and \(3\). Thus, \(t_{i}\) and \(q_{jk}\) also commute. This completes the argument for the case \(r=5\). The analogous argument applies when \(r>5\). If \(2<m\leq r-1\), then for \((i,j,k)=(p+1,p+r-m+2,2p+r)\), \(t_{i}\) and \(q_{jk}\) do not commute since \((12)\) and \(\left(\begin{smallmatrix}2&3&\cdots&m\\ m&m-1&\cdots&2\end{smallmatrix}\right)\) do not commute. Similarly, if \(m=2\), for \((i,j,k)=(p+1,p+r-2,2p+r)\), \(t_{i}\) and \(q_{jk}\) do not commute since \((12)\) and \(\left(\begin{smallmatrix}n&1&2\\ 2&1&n\end{smallmatrix}\right)\) do not commute. If \(m=1\), for \((i,j,k)=(p+1,p+r-1,2p+r)\), \(t_{i}\) and \(q_{jk}\) do not commute since \((12)\) and \((1n)\) do not commute. Conversely, when \(m>r-1\), if \(i=p+\ell\), then \(p+\ell+2\leq j<k\leq 2p+r\), so \(k-j\leq p+r-\ell-2\). This means that \(q_{jk}\) only affects \(m,m-1,\ldots,m-r+\ell+2>(r-1)-r+\ell+2=\ell+1\) while \(t_{i}\) only affects \(\ell\) and \(\ell+1\). Thus, \(t_{i}\) and \(q_{jk}\) commute. This completes the proof. Corollary 4.9, Lemma 4.10, and Lemma 4.11 together prove Proposition 4.2. Now, in order to complete the characterization of LE-cactus posets in the family of ordinal sums of disjoint unions of chains, we need the conditions for \(\mathfrak{D}_{1},\mathfrak{D}_{2}\), and \(\mathfrak{D}_{3}\). The conditions for \(\mathfrak{D}_{3}\) are quite similar to Proposition 4.2. **Proposition 4.12**.: \((p,3,q)\) _is cactus-compatible if and only if_ 1. \(p>q-1\)_, or_ 2. \(p=q-1\) _and_ \(q\) _mod_ \(3\neq 1,3\)_._ _In particular, if \(p<q-1\), \((p,3,q)\) is not cactus-compatible._ The proof for Proposition 4.12 is essentially the same as the proofs of Corollary 4.9 and Lemma 4.10. The conditions for \(\mathfrak{D}_{1}\) and \(\mathfrak{D}_{2}\) are even simpler: there is no condition!
**Proposition 4.13**.: _For all \(p,q\), \((p,1,q)\) and \((p,2,q)\) are cactus-compatible._ Proof.: The case when \(D_{\lambda}\in\mathfrak{D}_{1}\) is trivial since its element can only be labeled by \(p+1\). When \(D_{\lambda}\in\mathfrak{D}_{2}\), the only move that affects the labels in \(D_{\lambda}\) is \(t_{p+1}\). Fortunately, \(t_{p+1}\) appears an even number of times in \((t_{i}q_{jk})^{2}\), so it fixes the labels in \(D_{\lambda}\). Combining Proposition 4.2, Proposition 4.12, and Proposition 4.13, we achieve our desired characterization. **Theorem 4.14**.: _Consider positive integers \(a_{1},\ldots,a_{\ell}\), with the convention \(a_{0}=a_{\ell+1}=0\), and a sequence of posets \(D_{\mu_{1}},\ldots,D_{\mu_{\ell}}\) where \(D_{\mu_{i}}\in\mathfrak{D}_{a_{i}}\). The poset \(P=D_{\mu_{1}}\oplus\ldots\oplus D_{\mu_{\ell}}\) is LE-cactus if and only if for all \(i=1,2,\ldots,\ell\), the triples_ \[\left(\sum_{r=0}^{i-1}a_{r},\ a_{i},\ \sum_{r=i+1}^{\ell+1}a_{r}\right)\] _are cactus-compatible._ The following corollary is immediate. **Corollary 4.15**.: _For all \(i\), all \(D_{\mu}\in\mathfrak{D}_{i}\) are LE-cactus._ _Remark_.: Corollary 4.15 can also be proved using Theorem 1.2 from [13] by observing that every chain is LE-cactus, and hence their disjoint unions are also LE-cactus. ## 5 Discussion Unfortunately, Theorem 4.14 does not apply to other posets since, for an arbitrary poset, it is not easy to understand the action of promotion and evacuation. Specifically, if a connected component is not a chain, then the set of labels in that component does not uniquely determine the labeling. However, we do have a _necessary_ condition for a poset to be LE-cactus. **Proposition 5.1**.: _Let \(D\) be a disconnected poset with \(|D|=n\), and let \(R=P\ \oplus\ D\ \oplus\ Q\) for posets \(P\) and \(Q\) with \(|P|=p\) and \(|Q|=q\). Then \(R\) is LE-cactus only if \((p,n,q)\) satisfies the conditions in Propositions 4.2 and 4.12._ Proof.: Since \(D\) is disconnected, we can write \(D\) as a disjoint union \(D_{1}\sqcup\ldots\sqcup D_{\ell}\). Since \(R\) is LE-cactus, all elements \((t_{i}q_{jk})^{2}\) fix the labels in \(D\). In particular, the labels in \(D_{i}\) stay in \(D_{i}\) under the action of these elements. By the same ordered set partition argument as in Section 4, this happens only if \((p,n,q)\) satisfies the conditions in Propositions 4.2 and 4.12. Furthermore, in [13, Propositions 3.18, 3.19, 3.20], it was proved that if \(P\) is LE-cactus then \(A_{1}\oplus P\) and \(A_{2}\oplus P\) are LE-cactus, but \(A_{i}\oplus P\) is not for any \(i\geq 3\), where \(A_{i}\) is an antichain of \(i\) elements. This can be seen from our main theorem: since \(A_{i}\in\mathfrak{D}_{i}\) and \(A_{i}\oplus P\) does not satisfy the conditions in Propositions 4.2 and 4.12, the labels in \(A_{i}\) are not fixed by all elements \((t_{i}q_{jk})^{2}\). However, these elements \((t_{i}q_{jk})^{2}\) do fix the labels in \(P\). In general, we have the following proposition. **Proposition 5.2**.: _Let \(P\) be an LE-cactus poset. Consider positive integers \(a_{1},\ldots,a_{\ell}\), with the convention \(a_{0}=0\) and \(a_{\ell+1}=|P|\), and a sequence of posets \(D_{\mu_{1}},\ldots,D_{\mu_{\ell}}\) where \(D_{\mu_{i}}\in\mathfrak{D}_{a_{i}}\). 
The poset \(R=D_{\mu_{1}}\oplus\ldots\oplus D_{\mu_{\ell}}\oplus P\) is LE-cactus if and only if for all \(i=1,2,\ldots,\ell\), the triples_ \[\left(\sum_{r=0}^{i-1}a_{r},\ a_{i},\ \sum_{r=i+1}^{\ell+1}a_{r}\right)\] _are cactus-compatible._ Proof.: We already know that for \(r=1,2,\ldots,\ell\), the elements \((t_{i}q_{jk})^{2}\) fix the labels in \(D_{\mu_{r}}\) if and only if \(a_{r}\) satisfies the conditions in Propositions 4.2 and 4.12. Thus, it suffices to prove that the labels in \(P\) are always fixed by the elements \((t_{i}q_{jk})^{2}\). Let \(m=a_{1}+\ldots+a_{\ell}\). By reasoning similar to the arguments above, if \(i\leq m\) or \(k-j\leq m\), then \((t_{i}q_{jk})^{2}\) fixes the labels in \(P\). If \(i,k-j>m\), then on the labels in \(P\), \(t_{i}\) is equivalent to \(t_{i-m}\) and \(q_{jk}\) is equivalent to \(q_{1,k-m}\,q_{1,k-j-m+1}\,q_{1,k-m}\), which is \(q_{j,k-m}\). Thus, \((t_{i}q_{jk})^{2}\) is equivalent to \((t_{i-m}q_{j,k-m})^{2}\). Since \(i+1<j\), \(i-m+1<j\), and since \(k-j>m\), \(k-m>j\). Thus we do have the condition \(i-m+1<j<k-m\), so this element \((t_{i-m}q_{j,k-m})^{2}\) is actually acting on linear extensions of \(P\). Since \(P\) is LE-cactus, this element fixes the labels in \(P\). Therefore, this gives hope for an analogue of Proposition 4.2 for LE-cactus posets: Let \(D\) be an LE-cactus poset, and \(R=P\oplus D\oplus Q\) for some finite posets \(P\) and \(Q\). Are there any conditions on \(|P|,|D|,|Q|\) that characterize when the labels in \(D\) are fixed by all elements \((t_{i}q_{jk})^{2}\)? Corollary 4.15 says that all disjoint unions of chains are LE-cactus, so we expect an answer to this question to be a generalization of Proposition 4.2. This would give a clearer insight into the behavior of LE-cactus posets under the ordinal sum operation.
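Since Theorem 4.14 reduces the LE-cactus property for these families to an arithmetic check on triples, the case analysis can be packaged into a few lines of code. The sketch below is our consolidation of Corollary 4.9, Lemmas 4.10 and 4.11, and Propositions 4.12 and 4.13 into a single predicate; in particular, the substitution \(r=q+n-p\) and the choice of representative of \(q\) mod \(n\) in \(\{1,\ldots,n\}\) (which makes the predicate reproduce Corollary 4.15) are our reconstruction rather than a verbatim restatement of Proposition 4.2.

```
def cactus_compatible(p: int, n: int, q: int) -> bool:
    """Check whether the triple (p, n, q) is cactus-compatible."""
    if n <= 2:
        return True  # Proposition 4.13: no condition at all
    m = q % n or n   # representative of q mod n in {1, ..., n}
    if n == 3:       # Proposition 4.12
        return p > q - 1 or (p == q - 1 and m not in (1, 3))
    r = q + n - p    # rewrite p = q + n - r
    if r < 4:
        return True  # Corollary 4.9: p > q + n - 4
    if r == 4:
        return m not in (1, 3)  # Lemma 4.10
    return m > r - 1            # Lemma 4.11: p = q + n - r with r > 4

def le_cactus(sizes: list[int]) -> bool:
    """Theorem 4.14 for an ordinal sum of posets with D_{mu_i} in D_{a_i}."""
    total, below = sum(sizes), 0
    for a in sizes:
        if not cactus_compatible(below, a, total - below - a):
            return False
        below += a
    return True
```

For instance, `le_cactus([n])` returns True for every \(n\), matching Corollary 4.15, while `le_cactus([3, 1])` returns False, matching the failure of \(A_{3}\oplus P\) noted above.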
2308.08173
Expressivity of Graph Neural Networks Through the Lens of Adversarial Robustness
We perform the first adversarial robustness study into Graph Neural Networks (GNNs) that are provably more powerful than traditional Message Passing Neural Networks (MPNNs). In particular, we use adversarial robustness as a tool to uncover a significant gap between their theoretically possible and empirically achieved expressive power. To do so, we focus on the ability of GNNs to count specific subgraph patterns, which is an established measure of expressivity, and extend the concept of adversarial robustness to this task. Based on this, we develop efficient adversarial attacks for subgraph counting and show that more powerful GNNs fail to generalize even to small perturbations to the graph's structure. Expanding on this, we show that such architectures also fail to count substructures on out-of-distribution graphs.
Francesco Campi, Lukas Gosch, Tom Wollschläger, Yan Scholten, Stephan Günnemann
2023-08-16T07:05:41Z
http://arxiv.org/abs/2308.08173v2
# Expressivity of Graph Neural Networks Through the Lens of Adversarial Robustness ###### Abstract We perform the first adversarial robustness study into Graph Neural Networks (GNNs) that are provably more powerful than traditional Message Passing Neural Networks (MPNNs). In particular, we use adversarial robustness as a tool to uncover a significant gap between their theoretically possible and empirically achieved expressive power. To do so, we focus on the ability of GNNs to count specific subgraph patterns, which is an established measure of expressivity, and extend the concept of adversarial robustness to this task. Based on this, we develop efficient adversarial attacks for subgraph counting and show that more powerful GNNs fail to generalize even to small perturbations to the graph's structure. Expanding on this, we show that such architectures also fail to count substructures on out-of-distribution graphs. ## 1 Introduction In recent years, significant efforts have been made to develop Graph Neural Networks (GNNs) for several graph-related tasks, such as molecule property predictions (Gasteiger et al., 2020), social network analysis (Fan et al., 2019), or combinatorial problems (Gasse et al., 2019), to name a few. The most commonly used architectures are based on message passing, which iteratively updates the embedding of each node based on the embeddings of its neighbors (Gilmer et al., 2017). Despite their broad success and wide adoption, different works have pointed out that so-called Message Passing Neural Networks (MPNNs) are at most as powerful as the 1-Weisfeiler-Lehman (WL) algorithm (Morris et al., 2019; Xu et al., 2019) and thus have important limitations in their expressive power (Chen et al., 2020). This encouraged the development of provably more powerful architectures. However, there is no guarantee that the training process also yields models that are as powerful as theoretically guaranteed. Thus, this work investigates if and to what extent the empirically achieved expressivity of such GNNs lags behind their theoretical possibilities by taking a novel look from the perspective of adversarial robustness. In particular, we focus on the task of counting different subgraphs, which is provably impossible for MPNNs (Chen et al., 2020) (except for very limited cases), but important for many downstream tasks (Huang et al., 2023; Liu et al., 2019; Monti et al., 2018). Using our new adversarial framework for subgraph counting, we find that the counting ability of theoretically more powerful GNNs fails to generalize even to small perturbations to the graph's structure (see Figure 1). This result is even more interesting given that subgraph counting is polynomially solvable for fixed subgraph sizes (Shervashidze et al., 2009).1 We expand on these results and show that these architectures also fail to count substructures on out-of-distribution (OOD) graphs. Furthermore, retraining the last MLP layers responsible for the prediction based on the graph embedding does not entirely resolve this issue. Footnote 1: In general, this problem is NP-complete (Ribeiro et al., 2021) **Contributions.** (i) We perform the first study into the adversarial robustness of GNNs provably more powerful than the 1-WL and use it as an effective tool to uncover a significant gap between the theoretically possible and empirically achieved expressivity for substructure counting (see Section 6). Figure 1: GNNs more powerful than 1-WL are not adversarially robust for subgraph-counting tasks. 
(ii) We extend the concept of an adversarial example from classification to (integer) regression tasks and develop multiple perturbation spaces interesting for the task of subgraph counting (see Section 4). (iii) We develop efficient and effective adversarial attacks for subgraph counting, operating in these perturbation spaces and creating _sound_ perturbations, i.e., where we know the ground truth (see Section 5). (iv) In Section 6.2 we show that subgraph-counting GNNs also fail to generalize to out-of-distribution graphs, providing additional evidence that these GNNs fail to reach their theoretically possible expressivity. Our code implementation can be found at [https://github.com/francesco-campi/Rob-Subgraphs](https://github.com/francesco-campi/Rob-Subgraphs). ## 2 Background We consider undirected, unattributed graphs \(G\)=\((V,E)\) with nodes \(V\)=\(\{1,\ldots,n\}\) and edges \(E\subseteq\{\{i,j\}\mid i,j\in V,i\neq j\}\), represented by adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\). A graph \(G_{S}=(V_{S},E_{S})\) is a _subgraph_ of \(G\) if \(V_{S}\subseteq V\) and \(E_{S}\subseteq E\). We say \(G_{S}\) is an _induced_ subgraph if \(E_{S}\) contains all edges in \(E\) that connect pairs of nodes in \(V_{S}\). An egonet \(\mathrm{ego}_{l}(i)\) is the induced subgraph containing all nodes with a distance of at most \(l\) from root node \(i\). Furthermore, two graphs \(G,G^{\prime}\) are isomorphic (\(\simeq\)) if there exists a bijection \(f:V\to V^{\prime}\) such that \(\{i,j\}\in E\) if and only if \(\{f(i),f(j)\}\in E^{\prime}\). Lastly, the diameter \(\mathrm{diam}(G)\) denotes the length of the largest shortest path in graph \(G\). ### Subgraph-Counting Consider a fixed graph \(H\) which we call a pattern (Figure 2). A classic graph-related problem is the _(induced-) subgraph-counting_ of the pattern \(H\) (Ribeiro et al., 2021), which consists of enumerating the (induced) subgraphs of \(G\) isomorphic to \(H\). The subgraph-count of \(H\) is denoted by \(\mathcal{C}(G,H)\), and by \(\mathcal{C}_{I}(G,H)\) in the induced case. To simplify the notation we will also refer to it as \(\mathcal{C}(G)\) if \(H\) is clear from the context. Several algorithms have been developed to solve the task of subgraph-counting. In this work we specifically consider the (exact) algorithm of Shervashidze et al. (2009) (presented in Appendix B) due to its low computational cost. ### Expressivity of Graph Neural Networks The expressivity of machine learning models is about which functions they can and cannot approximate. There are different ways of studying the expressive power of GNNs. In this work we specifically consider their ability to count subgraphs (Chen et al., 2020) because it is closely related to different real-world tasks such as computational chemistry (Jin et al., 2020) and social network analysis (Jiang et al., 2010). We define the ability to count subgraphs as follows: **Definition 2.1**.: A family of functions \(\mathcal{F}\) can perform _subgraph-counting_ of a target pattern \(H\) on a graph class \(\mathcal{G}\) if for any two graphs \(G_{1},G_{2}\in\mathcal{G}\) with \(\mathcal{C}(G_{1},H)\neq\mathcal{C}(G_{2},H)\) there exists a function \(f\in\mathcal{F}\) such that \(f(G_{1})\neq f(G_{2})\). Surprisingly, MPNNs have considerable limitations in subgraph-counting. In fact, Chen et al. (2020) show that MPNNs are not able to count induced patterns with three or more nodes, leaving out only the ability to count edges. 
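To make these counting targets concrete, the following is a minimal brute-force reference implementation of the induced count \(\mathcal{C}_{I}(G,H)\); it is a sketch assuming networkx, it is exponential in \(|H|\), and it is unrelated to the efficient algorithm of Shervashidze et al. (2009) used in this paper.

```
from itertools import combinations
import networkx as nx

def induced_subgraph_count(G: nx.Graph, H: nx.Graph) -> int:
    """C_I(G, H): node subsets of G inducing a subgraph isomorphic to H."""
    k = H.number_of_nodes()
    return sum(
        1
        for nodes in combinations(G.nodes, k)
        if nx.is_isomorphic(G.subgraph(nodes), H)
    )

# e.g., counting induced triangles in a small random graph
print(induced_subgraph_count(nx.erdos_renyi_graph(20, 0.2, seed=0),
                             nx.complete_graph(3)))
```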
For example, Figure 3 shows two graphs that, despite having different triangle counts, will always be mapped to identical outputs by the same MPNN. A different perspective from which to measure the expressive power is graph isomorphism. In this regard, Xu et al. (2019); Morris et al. (2019) demonstrated that an MPNN is at most as powerful as the 1-WL isomorphism test at distinguishing pairs of non-isomorphic graphs. Moreover, since the WL algorithms are designed to extract representation vectors from graphs, they can also be used to perform subgraph-counting. In particular, Chen et al. (2020) showed that \(k\)-WL, and equivalently powerful architectures, can perform substructure-counting for patterns with at most \(k\) nodes, creating a connection between the two approaches. ### More Expressive Graph Neural Networks In this work, we analyze two state-of-the-art architectures for the task of subgraph counting: PPGN (Maron et al., 2019) and I\({}^{2}\)-GNN (Huang et al., 2023). PPGN represents the graph structure in a tensor and exploits tensor multiplications to enhance the expressivity. It reaches the same expressive power as 3-WL, which makes it capable of counting patterns of size three. I\({}^{2}\)-GNN, following the approach of subgraph GNNs (Frasca et al., 2022), decomposes the whole graph into different subgraphs and processes them independently with an MPNN. It has been explicitly developed to be expressive enough for counting different substructures and, most importantly for this work, can count arbitrary patterns of size four. Both PPGN and I\({}^{2}\)-GNN are effective architectures for downstream tasks such as molecular property predictions. Figure 3: Pair of indistinguishable graphs for MPNNs with different triangle counts. Figure 2: Examples of graph patterns used for subgraph-counting. ## 3 Related Work Chen et al. (2020) were the first to study the expressivity of GNNs w.r.t. their ability to count substructures. They, and later Tahmasebi et al. (2021), proposed architectures for counting substructures. However, these suffer from high computational complexity. Yu et al. (2023) proposed an architecture purely focusing on subgraph counting. However, subgraph counting alone can be solved by efficient randomized algorithms (Bressan et al., 2021). Thus, in this work, we focus on efficient architectures, which leverage their subgraph counting ability to improve generalization for other downstream tasks. In particular, we focus on PPGN (Maron et al., 2019) and I\({}^{2}\)-GNN (Huang et al., 2023). Both achieve state-of-the-art results for substructure counting while having formal expressivity guarantees. Different works have studied the adversarial robustness of GNNs for graph-level classification (Dai et al., 2018) and node-level classification (Zugner et al., 2018). Regarding the latter, Gosch et al. (2023) exactly define (semantic-preserving) adversarial examples. Moreover, Geisler et al. (2022) use adversarial attacks with sound perturbation models, i.e., where the ground truth change is known, to investigate the generalization of neural combinatorial solvers. Conversely, adversarial robustness for regression tasks has currently received very little attention (Deng et al., 2020). ## 4 Robustness in Subgraph-Counting The field of adversarial robustness addresses the problem that machine learning models are vulnerable to small changes to their inputs (Goodfellow et al., 2015). 
In particular, for the subgraph-counting problem, we want to analyze whether the error of the models increases when they are tested on perturbed input graphs \(\tilde{G}\) from a set of perturbed graphs \(\mathcal{P}(G)\) constructed from a clean graph \(G\). To evaluate the performance of a model \(f\) on perturbed graphs \(\tilde{G}\in\mathcal{P}(G)\) we use the following adversarial loss: \[\ell_{adv}(\tilde{G}):=|f(\tilde{G})-\mathcal{C}(\tilde{G},H)|.\] ### Subgraph-Counting Adversarial Examples To empirically evaluate the expressivity of machine learning models for subgraph-counting via adversarial robustness, we have to introduce a notion of adversarial example. In classification tasks adversarial examples are simply perturbations that change the predicted class. In general regression tasks one can define a threshold on \(\ell_{adv}\) for which we call a perturbed graph an adversarial example (Deng et al., 2020). However, this definition is application-dependent and, in our work, we define a specific threshold exploiting the fact that subgraph-counting is an _integer_ regression task. **Definition 4.1**.: Given a model \(f\) and clean graph \(G\), we say that \(\tilde{G}\in\mathcal{P}(G)\) is an _adversarial example_ for \(f\) if: 1. \(\lfloor f(G)+0.5\rfloor=\mathcal{C}(G)\) 2. \(\lfloor f(\tilde{G})+0.5\rfloor\neq\mathcal{C}(\tilde{G})\) 3. \(\frac{\ell_{adv}(\tilde{G})-\ell_{adv}(G)}{\ell_{adv}(G)}>\delta\). The conditions \((i)\) and \((ii)\) guarantee that the model prediction, when rounded to the nearest integer, is correct for \(G\) and wrong for \(\tilde{G}\). Here, having a correct initial prediction is essential to clearly distinguish the performance on the original graph from that on the perturbed graph. In addition, condition \((iii)\) ensures that a margin exists between the errors on the original data instance and the perturbed one, where the size of the margin depends on the value of \(\delta\). This requirement prevents easily generating adversarial examples from graphs that are almost wrongly predicted, i.e. \(\ell_{adv}(G)\approx 0.5\). ### Perturbation Spaces We define different perturbation spaces for a graph \(G\) as constrained sets of structurally perturbed graphs constructed from \(G\). In particular, we consider different combinations of edge deletions and additions; for example, \(E^{\prime}=E\cup\{\{i,j\}\}\) with \(\{i,j\}\notin E\) represents an edge addition. We always consider sound perturbation models, i.e., where we know the ground truth change. These are efficiently implemented as described in Section 5. It is meaningful to limit the number of perturbations in order to control how shifted the distribution of the perturbed subgraph-counts is compared to the distribution of the original ones. Then, we define the _constrained_ perturbation space with maximal budget \(\Delta\) as: \[\mathcal{P}_{\Delta}(G):=\{\tilde{G}\mid\frac{1}{2}\|\mathbf{A}-\tilde{\mathbf{A}}\|_{0}\leq\Delta\}, \tag{1}\] where \(\tilde{\mathbf{A}}\) is the adjacency matrix of \(\tilde{G}\) and \(\|\cdot\|_{0}\) represents the number of non-zero elements, i.e. the number of perturbed edges. **Semantic-Preserving Perturbations.** Additionally, we conduct a robustness analysis more closely in line with adversarial examples for classification tasks, by incorporating a further constraint to guarantee the preservation of a specific level of semantic meaning. In particular, we define the _count-preserving_ perturbation space as: \[\mathcal{P}_{\Delta}^{c}(G)\coloneqq\{\tilde{G}\mid\tilde{G}\in\mathcal{P}_{\Delta}(G)\ \wedge\ \mathcal{C}(\tilde{G})=\mathcal{C}(G)\}. 
\tag{2}\] Additionally, when considering induced subgraphs, keeping the count constant does not guarantee that the subgraphs isomorphic to the pattern remain the same. In fact, perturbations can simultaneously delete a subgraph isomorphic to the pattern and generate a new one (see Figure 4). We will denote the _subgraph-preserving_ perturbation space by \[\mathcal{P}_{\Delta}^{s}(G)\coloneqq\{\tilde{G}\mid\tilde{G}\in \mathcal{P}_{\Delta}(G)\wedge \tag{3}\] \[G_{S}\subseteq G,G_{S}\simeq H\iff G_{S}\subseteq\tilde{G},G_{S }\simeq H\}.\] ## 5 Subgraph-Counting Adversarial Attacks For a subgraph-counting model \(f\), the goal of an adversarial attack is to find the perturbed graph \(G^{*}\in\mathcal{P}(G)\) that causes the maximal error increase. This problem can be formulated as an optimization problem: \[G^{*}=\operatorname*{argmax}_{\tilde{G}\in\mathcal{P}(G)}\ell_{adv}(\tilde{G}). \tag{4}\] Attacking subgraph-counting GNNs to study their empirical expressivity is particularly challenging. In fact, (i) the subgraph-count can vary significantly even for slight structural changes, and (ii) finding \(G^{*}\) of Equation (4) requires solving a discrete optimization problem. ### Sound Perturbations for Subgraph-Counting To tackle the sensitivity of the counts to structural changes, we exploit the exact algorithm to update the ground-truth count after every perturbation. In this way, we generate sound perturbations since the exact ground-truth value is known. In order to prevent this step from becoming computationally prohibitive, we develop an efficient count updating scheme that uses only a small portion of the graph. **Proposition 5.1**.: _Consider a graph \(G\) and a pattern \(H\) with \(\operatorname{diam}(H)=d\). Then, for every edge \(\{i,j\}\) we have that \(\operatorname{ego}_{d}(i)\) and \(\operatorname{ego}_{d}(j)\) contain all the subgraphs \(G_{S}\subset G\) such that \(G_{S}\simeq H\) and \(i,j\in V_{S}\)._ Proof in Appendix A. When an edge \(\{i,j\}\) is perturbed, only the subgraphs containing both of its end nodes can be affected and potentially change their isomorphism relation with \(H\). Therefore, according to Proposition 5.1, it is sufficient to verify potential count changes only in \(\operatorname{ego}_{d}(i)\) (or equivalently \(\operatorname{ego}_{d}(j)\)). Specifically, the proposition assumes that \(\{i,j\}\) is contained in the graph; hence we extract the egonet from the graph including \(\{i,j\}\) (the original graph for an edge deletion and the perturbed one for an addition). Next, from the nodes of \(\operatorname{ego}_{d}(i)\) we generate the induced subgraphs \(G_{S}\) and \(\tilde{G}_{S}\) from the original and perturbed graphs respectively. Since the possible alterations of the subgraph-count are enclosed in \(G_{S}\) and \(\tilde{G}_{S}\), we have the following count update rule. **Proposition 5.2**.: _Let \(\tilde{G}\) be a perturbation of a single edge of a graph \(G\), then there holds:_ \[\mathcal{C}(\tilde{G})=\mathcal{C}(G)+\mathcal{C}(\tilde{G}_{S})-\mathcal{C}(G_{S}).\] Following Proposition 5.2, we need to run the subgraph-counting algorithm only on the smaller subgraphs \(G_{S}\) and \(\tilde{G}_{S}\), rather than on the whole graph \(\tilde{G}\). Additionally, Proposition 5.1 guarantees that potential changes in the subgraphs isomorphic to the pattern are also constrained to the egonet, thus it can also be used to identify perturbations belonging to the subgraph-preserving perturbation space \(\mathcal{P}_{\Delta}^{s}\). 
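A minimal sketch of this update scheme, assuming networkx and any exact counting oracle `count(G, H)` (for instance, the brute-force counter sketched in Section 2.1); the pattern \(H\) is assumed connected so that its diameter is defined.

```
import networkx as nx

def updated_count(G, G_pert, H, i, j, old_count, count):
    """Apply Proposition 5.2 after flipping the single edge {i, j}."""
    d = nx.diameter(H)
    # Proposition 5.1: the egonet is taken from the graph containing {i, j}
    # (the original graph for a deletion, the perturbed one for an addition).
    host = G if G.has_edge(i, j) else G_pert
    nodes = nx.ego_graph(host, i, radius=d).nodes
    G_S, G_S_pert = G.subgraph(nodes), G_pert.subgraph(nodes)
    # C(G~) = C(G) + C(G~_S) - C(G_S)
    return old_count + count(G_S_pert, H) - count(G_S, H)
```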
### Construction of Adversarial Examples To create adversarial examples we need to solve the discrete optimization problem in Equation (4). To do so we develop algorithms that generate more powerful perturbations one change at a time; in this way, we keep track of the exact count with the update rule (Proposition 5.2). **Greedy Search.** We develop an efficient and effective greedy search algorithm (Algorithm 1). At each step we select the most effective perturbation of the current perturbed graph \(\tilde{G}\) in \(\mathcal{P}_{1}(\tilde{G})\) (or in \(\mathcal{P}_{1}^{c}(\tilde{G}),\mathcal{P}_{1}^{s}(\tilde{G})\)) until the budget limit is reached. The new subgraph-counts of perturbations in \(\mathcal{P}_{1}(\tilde{G})\) are computed with Proposition 5.2, whereas the preserving perturbation spaces are generated with Algorithm 2. ```
Input: \(G,\Delta,k\)
\(\mathcal{G}^{(0)}=\{G\}\)
for \(i=0\) to \(\Delta-1\) do
    \(\mathcal{P}^{(i)}=\{\}\)
    for \(\tilde{G}\) in \(\mathcal{G}^{(i)}\) do
        \(\mathcal{P}^{(i)}=\mathcal{P}^{(i)}\cup\mathcal{P}_{1}(\tilde{G})\)   \(\{\text{or}\ \mathcal{P}_{1}^{c}(\tilde{G}),\mathcal{P}_{1}^{s}(\tilde{G})\}\)
    end for
    \(\mathcal{G}^{(i+1)}=\) the \(k\) graphs with greatest values in \(\{\ell_{adv}(\tilde{G})\mid\tilde{G}\in\mathcal{P}^{(i)}\}\)
end for
Return: \(G^{*}=\operatorname*{argmax}_{\tilde{G}\in\mathcal{G}^{(\Delta)}}\{\ell_{adv}(\tilde{G})\}\)
``` **Algorithm 1** Beam search (greedy search for \(k=1\)) **Beam search.** A more advanced algorithm that does not increase the computational complexity is beam search. Concretely, it simultaneously follows \(k\) different paths to explore the perturbation space more extensively (see Algorithm 1). To improve the computational efficiency, the perturbations in \(\mathcal{P}_{1}\) can be randomly selected according to the degrees of the end nodes of the perturbed edge. Concretely, the probability of picking the perturbation where the edge \(\{i,j\}\) has been added (or deleted) is proportional to \(d(i)^{2}+d(j)^{2}\), since intuitively these are the most relevant edges. Figure 4: Examples demonstrating that not all count-preserving perturbations are also subgraph-preserving ones. On the left, a subgraph- and count-preserving perturbation for 4-cycles, where the red edge has been deleted. On the right, a perturbation that leaves the count of 2-paths unchanged but deletes the induced substructure \(\{2,3,4\}\) to generate \(\{1,2,3\}\). ## 6 Experiments In Section 6.1, we analyze the empirical expressivity of GNNs using our subgraph-counting adversarial attacks and using generalization as a (proxy) measure. Extending on this, in Section 6.2 we investigate if the same GNNs can count subgraph patterns for out-of-distribution graphs. Here we present the results of the induced subgraph-counting of triangles, 4-cycles and chordal cycles; for other patterns, refer to Appendix C. ### Adversarial Robustness Here, we analyze the empirical expressivity of GNNs using our subgraph-counting adversarial attacks. Dataset and models.We generate a synthetic dataset of 5,000 Stochastic-Block-Model graphs with 30 nodes divided into 3 different communities. The probabilities of generating edges connecting nodes within the same community are \([0.2,0.3,0.4]\), while the probability of generating edges between nodes of different communities is 0.1. We randomly split the dataset into training, validation, and test sets with percentages \(30\%,20\%,50\%\). We then train PPGN (Maron et al., 2019) and I\({}^{2}\)-GNN (Huang et al., 2023). 
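As a minimal sketch of this dataset generation, assuming networkx's stochastic block model; the even 10/10/10 community split is our assumption, since the text only fixes 30 nodes and 3 communities.

```
import networkx as nx

sizes = [10, 10, 10]        # assumed even split of the 30 nodes
probs = [[0.2, 0.1, 0.1],   # intra-community probabilities 0.2/0.3/0.4
         [0.1, 0.3, 0.1],   # on the diagonal, 0.1 between communities
         [0.1, 0.1, 0.4]]
dataset = [nx.stochastic_block_model(sizes, probs, seed=s) for s in range(5000)]
```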
Experimental Settings.We train each model 5 times using different initialization seeds to prevent bad weight initialization from influencing the final results. Then, for each of the trained models \(f_{i}\) with seed \(i\), we use our adversarial attacks (see Section 5) to generate adversarial examples from 100 correctly predicted test graphs and average the percentage of successful attacks over all seeds. Furthermore, we investigate if the adversarial graphs for a model \(f_{i}\) transfer to the other models \(f_{j}\) trained with a different initialization seed \(j\neq i\). We inspect all three different perturbation spaces with budgets \(\Delta\) of \(1\%,5\%,10\%\) and \(25\%\) with respect to the average number of edges of the graphs in the dataset and use \(\delta=1\) as margin. In detail, we use beam search with beam width \(k=10\) to explore \(\mathcal{P}^{c}_{\Delta}\) and \(\mathcal{P}^{s}_{\Delta}\), while we rely on greedy search for \(\mathcal{P}_{\Delta}\). Results.The plots in Figure 5 show the percentage of perturbations found by the optimization algorithms that represent a successful adversarial example according to Definition 4.1. To condense the results into a numerical value, we report in Table 1 the area under the curve (AUC) of the functions Non-Robust and Non-Robust (Transfer) in Figure 5. The results are reported as the proportion with respect to the area under the unity function \(f(x)=1\), which represents the worst case where all perturbations generate an adversarial example already at \(\Delta=1\%\). Interestingly, the results show that we can find several adversarial examples for both architectures. In particular, PPGN is highly non-robust in the subgraph-counting of patterns with four nodes. However, several adversarial examples can be found also for the triangle count, even though PPGN is theoretically expressive enough to count patterns of size three in the sense of Definition 2.1. Similarly, the more expressive model I\({}^{2}\)-GNN is fooled on patterns of size four, in spite of being sufficiently powerful to count them. This indicates that the empirically achieved expressivity does not match the theoretical expressivity since the models are not able to generalize to subgraph-counting tasks that they should in theory be able to solve. Additionally, in Appendix C.2 we investigate some structural properties of the adversarial examples. **Experimental Settings.** Firstly, we train the PPGN and I\({}^{2}\)-GNN architectures on the dataset d\({}_{1}\) and test them on both d\({}_{1}\) and d\({}_{2}\) to investigate the OOD generalization performance of the architectures. Additionally, we train the models directly on d\({}_{2}\) to have a comparison with the best performance achievable on this dataset. The errors are expressed using the mean absolute error (\(\ell_{1}\)) and an extension of it, which is obtained by normalizing by the ground-truth count (\(\ell_{c}\)). **Results.** Table 2 shows the test errors of the aforementioned settings averaged over five different initialization seeds. Here we observe that the models achieve very poor performance on general OOD graphs compared to their ideal performance (OOD and d\({}_{2}\) rows). However, if the models were able to perform subgraph-counting, as theoretically claimed, they should be able to perform this task regardless of the graph distribution. 
This result matches Section 6.1 and shows that the models do not learn to detect the patterns but rather overfit to the training distribution. However, this behavior could be intrinsic to the models' architecture. The models are designed to extract a vector representation from each input graph, which is then mapped to the prediction through an MLP. Then, the fact that different graph distributions might generate different graph representations leads us to investigate whether the problem is a poor generalization of the map between the graph embedding and the count. To test this possibility, we retrain on d\({}_{2}\) only the final MLP of the models previously trained on d\({}_{1}\) (row MLP in Table 2). While this adjustment is helpful, the errors are consistently one order of magnitude higher than the ones obtained training directly on d\({}_{2}\). This indicates that the graph representations do not achieve their theoretic separation power and that the problem does _not_ only lie in the last MLP prediction layers. ## 7 Conclusion We proposed a novel approach to assess the empirical expressivity achieved by subgraph-counting GNNs via adversarial robustness. We show that despite being theoretically capable of counting certain patterns, the models lack generalization as they struggle to correctly predict adversarially perturbed and OOD graphs. Therefore, the training algorithms are not able to find weights corresponding to a maximally expressive solution. Extending our study to other related GNNs such as KP-GNN (Feng et al., 2022), or including adversarial training (Gosch et al., 2023; Geisler et al., 2022) to steer towards more robust and expressive solutions, are interesting directions for future work. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Arch.} & Exp. & \multicolumn{2}{c}{Triangle} & \multicolumn{2}{c}{4-Cycle} & \multicolumn{2}{c}{Chord. C.} \\ \cline{3-8} & Setting & \(\ell_{1}\) & \(\ell_{c}\) & \(\ell_{1}\) & \(\ell_{c}\) & \(\ell_{1}\) & \(\ell_{c}\) \\ \hline \hline \multirow{4}{*}{PPGN} & d\({}_{1}\) & 0.0058 & 7.8e-4 & 0.059 & 0.010 & 0.10 & 0.011 \\ & OOD & 2.98 & 0.041 & 5.40 & 1.17 & 20.0 & 0.25 \\ & d\({}_{2}\) & 0.0091 & 1.7e-4 & 0.040 & 0.0050 & 0.12 & 0.0017 \\ & MLP & 0.059 & 9.8e-4 & 0.29 & 0.043 & 1.083 & 0.014 \\ \hline \multirow{4}{*}{I\({}^{2}\)-GNN} & d\({}_{1}\) & 0.0027 & 2.8e-4 & 0.035 & 0.0062 & 0.020 & 0.0023 \\ & OOD & 3.25 & 0.044 & 2.16 & 0.45 & 6.75 & 0.084 \\ \cline{1-1} & d\({}_{2}\) & 0.032 & 6.2e-4 & 0.028 & 0.0031 & 0.30 & 0.0042 \\ \cline{1-1} & MLP & 0.20 & 0.0031 & 0.19 & 0.025 & 1.56 & 0.020 \\ \hline \hline \end{tabular} \end{table} Table 2: Test errors of the OOD experiments that investigate the generalization abilities of the architectures. Specifically, d\({}_{i}\) represents models trained and tested on the same dataset d\({}_{i}\), OOD represents models trained on d\({}_{1}\) and tested on d\({}_{2}\), and in MLP we additionally retrain the final layers on d\({}_{2}\). Figure 5: The plots illustrate in blue the success rate of our subgraph-counting adversarial attacks at finding perturbations that represent adversarial examples according to Definition 4.1 in the constrained and subgraph-preserving perturbation spaces. In orange, we represent how effective the adversarial examples are when transferred to the models trained with a different initialization seed. The values are the average of the results obtained with 5 different initialization seeds with the relative standard errors.
2301.04242
Anionic nickel and nitrogen effects in the chiral antiferromagnetic antiperovskite Mn$_3$NiN
Magnetic antiperovskites, holding chiral noncollinear antiferromagnetic ordering, have shown remarkable properties that cover from negative thermal expansion to anomalous Hall effect. Nevertheless, details on the electronic structure related to the oxidation states and the octahedral center's site effect are still scarce. Here, we show a theoretical study, based on first-principles calculations in the framework of the density-functional theory, DFT, on the electronic details associated with the nitrogen site effect into the structural, electronic, magnetic, and topological degrees of freedom. Thus, we show that the nitrogen-vacancy increases the values of the anomalous Hall conductivity and retains the chiral $\Gamma_{4g}$ antiferromagnetic ordering. Moreover, we reveal, based on the Bader charges and the electronic structure analysis, the negative and positive oxidation states in the Ni and Mn sites, respectively. The latter is in agreement with the expected $A_3^{\alpha+}B^{\beta-}X^{\delta-}$ oxidation states to satisfy the charge neutrality in the antiperovskites, but rare for transition metals. Finally, we extrapolate our findings on the oxidation states to several Mn$_3B$N compounds showing that the antiperovskite structure is an ideal platform to encounter negative oxidation states in metals sitting at the corner $B$-site.
E. Triana-Ramírez, W. Ibarra-Hernandez, A. C. Garcia-Castro
2023-01-10T23:30:51Z
http://arxiv.org/abs/2301.04242v1
# Anionic nickel and nitrogen effects in the chiral antiferromagnetic antiperovskite Mn\({}_{3}\)NiN ###### Abstract Magnetic antiperovskites, holding chiral noncollinear antiferromagnetic ordering, have shown remarkable properties that cover from negative thermal expansion to anomalous Hall effect. Nevertheless, details on the electronic structure related to the oxidation states and the octahedral center's site effect are still scarce. Here, we show a theoretical study, based on first-principles calculations in the framework of the density-functional theory, DFT, on the electronic details associated with the nitrogen site effect into the structural, electronic, magnetic, and topological degrees of freedom. Thus, we show that the nitrogen vacancy increases the values of the anomalous Hall conductivity and retains the chiral \(\Gamma_{4g}\) antiferromagnetic ordering. Moreover, we reveal, based on the Bader charges and the electronic structure analysis, the negative and positive oxidation states in the Ni and Mn sites, respectively. The latter is in agreement with the expected \(A_{3}^{\alpha+}B^{\beta-}X^{\delta-}\) oxidation states to satisfy the charge neutrality in the antiperovskites, but rare for transition metals. Finally, we extrapolate our findings on the oxidation states to several Mn\({}_{3}\)_B_N compounds showing that the antiperovskite structure is an ideal platform to encounter negative oxidation states in metals sitting at the corner _B_-site. ## 1 Introduction: In the modern quest for novel and exciting phenomena in new materials, antiperovskites, or inverse perovskites, with the formula \(A_{3}BX\), have stood out as an astonishing class of materials showing large anomalous Hall conductivity [1, 2, 3, 4], superconductivity [5, 6, 7, 8], good performance for new batteries [9, 10], tunable hybrid-improper ferroelectricity [11], tangible magnetocaloric effects [12], and large spin-lattice coupling [13, 14, 15, 16, 17]. All the latter are just a few examples among the vast functionalities offered by these materials. Interestingly, in antiperovskites [18, 19] the electrostatic balance and the oxidation site occupation are apparently reversed with respect to the known perovskites, \(ABX_{3}\) [20]. Most of the mentioned phenomena in the antiperovskites can be traced back to the reversed occupation of their anionic and cationic sites, forming the reversed \(XA_{6}\) octahedra. This subtle change in the coordination and atomic occupations gives rise to, for example, the triangular geometric coordination of the magnetic sites that, in turn, induces a strong magnetic frustration converging into chiral noncollinear antiferromagnetic orderings [21]. Another example is the topology-related properties in Sr\({}_{3}\)SnO (Sr\({}_{3}\)PbO), with bands crossing at the Fermi level between the Sr:\(4d\) and Sn:\(5p\) (Pb:\(6p\)) electronic states [22, 23]. Here, the Sn and Pb atomic sites hold formal negative oxidation states confirmed by Mossbauer [22] and X-ray photoemission spectroscopy, XPS [23]. Interestingly, negative oxidation states in metals have attracted considerable attention due to the possible new physics that they may unveil [24, 25]. Despite these reports, few metallic species with negative oxidation states are known in the literature. Some examples of negative oxidation states in metals have been demonstrated in compounds such as CsAu [26] and Na-Au binary compounds [27, 28]. 
In the latter compounds, gold's oxidation state is Au\({}^{1-}\), achieving a full \(6s^{2}5d^{10}\) electronic occupation in the outer shell. Other examples are Pt's negative oxidation state in Ba-Pt systems [29, 30] and at the dimethylformamide's surface [31, 32], as well as the negative oxidation state of Zn in octahedrally coordinated Zn\({}_{2}\)\(M_{4}\) (\(M=\) Li and Na) [33]. Moreover, multiple molecular compounds have shown metals in their structures with formal negative oxidation states [24, 25]. As such, antiperovskites appear to be potential candidates to explore possible negative oxidation states in metals, and their induced properties, due to the expected stoichiometric relationship \(A_{3}^{\alpha+}B^{\beta-}X^{\delta-}\), in contrast to \(A^{\alpha+}B^{\beta+}X_{3}^{\delta-}\), to maintain the charge neutrality. Then, when going from the perovskite to the antiperovskite, the _B_-site switches from \(B^{\beta+}\) to \(B^{\beta-}\). This is the case of the SrSnO\({}_{3}\) and Sr\({}_{3}\)SnO where the Sn oxidation state goes from 4+ to 4\(-\) [22] to maintain the charge neutrality in both compounds, while the Sr and O sites keep the 2+ and 2\(-\) oxidation, respectively. These findings are also in agreement with other gold-based antiperovskite oxides Cs\({}_{3}\)AuO and Rb\({}_{3}\)AuO [34]. Interestingly, as observed in perovskites, anionic vacancies, such as oxygen deficiency in perovskite oxides [35, 36], can alter the electronic structure by inducing an electronic reconstruction that directly affects the oxidation states of the atomic species [37]. Despite the advances in growth techniques, such vacancies can be present; in particular, nitrogen vacancies are expected to form, as in the Mn\({}_{3}\)PtN\({}_{x}\) case [38]. Therefore, exploring the effect of the nitrogen deficiency in the Mn\({}_{3}\)_B_N antiperovskites, and particularly in the Mn\({}_{3}\)NiN prototype, is essential in order to unveil its influence on the structural and electronic degrees of freedom that may affect the magnetic response and anomalous Hall conductivity. Additionally, the N-site is strongly correlated with the oxidation states at the Mn- and Ni-sites in the Mn\({}_{3}\)NiN antiperovskite. In this paper, we study, from first-principles calculations, the electronic structure of the chiral antiferromagnetic antiperovskite Mn\({}_{3}\)NiN. Thus, we performed a detailed study of this antiperovskite's electronic structure and explored the formal charges of the atomic species, together with the N-site effect on the physics underlying the electronic structure of this compound. This antiperovskite stands as a prototype in this family of materials, and we pay special attention to the influence of the nitrogen vacancy on the topological features, such as the anomalous Hall conductivity. In Section 2 we present all the computational details and the theoretical approaches used for the development of this work. This section is followed by the presentation of the obtained results and the consequent analysis, in Section 3. Finally, we draw our conclusions, in Section 4, and highlight some perspectives associated with our findings around the oxidation states and the nitrogen's effect in the Mn\({}_{3}\)NiN antiperovskite. Furthermore, we extrapolate these analyses and include our results on several Mn\({}_{3}\)_B_N antiperovskites. 
## 2 Theoretical and computational details: We performed first-principles calculations within the density-functional theory (DFT) [39, 40] approach by using the vasp code (version 5.4.4) [41, 42]. The projected-augmented waves, PAW [43, 44], scheme was used to represent the valence and core electrons. The electronic configurations considered in the pseudo-potentials as valence electrons are Mn: (3\(p^{6}\)3\(d^{5}\)4\(s^{2}\), version 02Aug2007), Ni: (3\(p^{6}\)3\(d^{8}\)4\(s^{2}\), version 06Sep2000), and N: (2\(s^{2}\)2\(p^{3}\), version 08Apr2002). The exchange-correlation energy, \(E_{xc}\), was represented within the generalized gradient approximation, GGA, in the PBEsol parametrization [45], and the \(E_{xc}\) of the \(d\)-electrons was corrected through the DFT+\(U\) approximation within the Liechtenstein formalism [46]. We used an on-site Coulomb parameter of \(U\) = 2.0 eV, optimized to reproduce the experimentally observed lattice parameter. Also, a meta-GGA formalism [47], within the SCAN implementation [48], and the hybrid functional HSE06 [49] were adopted to cross-check the Hubbard-corrected PBEsol+\(U\) calculations. The periodic solution of the crystal was represented by using Bloch states with a Monkhorst-Pack [50] \(k\)-point mesh of 13\(\times\)13\(\times\)13 and a 600 eV energy cut-off, giving forces converged to less than 0.001 eV\(\cdot\)Å\({}^{-1}\) and an error in the energy of less than 0.5 meV. The spin-orbit coupling (SOC) was included to consider noncollinear magnetic configurations [51]. The phonon calculations were performed within the finite-differences methodology [52, 53] and analyzed through the phonopy interface [54, 55]. The latter calculations were performed in a 2\(\times\)2\(\times\)2 supercell to properly map the lattice dynamics at the zone boundary. For these calculations in the supercell, the \(k\)-mesh was set to 6\(\times\)6\(\times\)6 and the noncollinear magnetic orderings were also considered. To evaluate the anomalous Hall conductivity, and the changes in the Berry curvature, we used the Wannier functions methodology, for which the wannierization was performed with the Wannier90 code [56, 57] and post-processed with the WannierBerri package [58]. Here, \(s\), \(p\), and \(d\) orbitals were considered in the Mn and Ni cases, while \(s\) and \(p\) were considered at the N site. Additionally, a 3.0 eV window was used around the Fermi level for the wannierization. Bader charges were evaluated by the methodology developed by G. Henkelman _et al._ [59]. Finally, the atomic structure figures were elaborated with the vesta code [60]. ## 3 Results and discussion: In what follows, we start by describing the electronic properties related to the N-site effect in the Mn\({}_{3}\)NiN antiperovskite. In Fig. 1 are shown the Mn\({}_{3}\)Ni and Mn\({}_{3}\)NiN cubic \(Pm\bar{3}m\) (SG 221) antiperovskites, as well as the symmetry-allowed noncollinear chiral antiferromagnetic \(\Gamma_{\mathrm{4g}}\) and \(\Gamma_{\mathrm{5g}}\) orderings. Thus, in the nitrogen-deficient antiperovskite, the \(\Gamma_{\mathrm{4g}}\) ordering is more stable than the \(\Gamma_{\mathrm{5g}}\) one, as quantified by the magnetic anisotropy energy \(\Delta E=E_{\Gamma_{\mathrm{4g}}}-E_{\Gamma_{\mathrm{5g}}}=-0.58\) meV\(\cdot\)f.u.\({}^{-1}\). 
Thus, as in the Mn\({}_{3}\)NiN case, the magnetic ground state in the Mn\({}_{3}\)Ni is the chiral \(\Gamma_{\mathrm{4g}}\) antiferromagnetic ordering, which allows the anomalous Hall effect [61], as will be discussed further. Therefore, it is worth recalling that all the calculations contained in this work were performed considering the spin-orbit coupling and the noncollinear antiferromagnetic states for the Mn\({}_{3}\)NiN, as well as for the Mn\({}_{3}\)Ni antiperovskite. After fully relaxing the Mn\({}_{3}\)NiN and Mn\({}_{3}\)Ni, the obtained lattice parameters are \(a=3.889\) Å and \(a=3.707\) Å, respectively. It can be noted that, in the Mn\({}_{3}\)NiN case, the lattice parameter is in good agreement with the experimentally reported value of \(a=3.886\) Å [62]. In the Mn\({}_{3}\)Ni case, although there is no experimentally reported parameter, a value close to the one reported here can be expected, as the exchange-correlation correction was also considered in the Mn:3\(d\) orbitals. When comparing the lattice parameters, it can be observed that the inclusion of the N-site at the octahedral center induces a tangible lattice expansion, equivalent to 0.182 Å. However, the space groups remain the same, being \(Pm\bar{3}m\) (SG 221) without considering the magnetic ground state and \(R\bar{3}m^{\prime}\) (MSG 166.101) once the chiral noncollinear antiferromagnetic ground state is accounted for in the symmetry operations. Then, only changes in the volume and electronic structure were found. Moreover, both the Mn\({}_{3}\)NiN and Mn\({}_{3}\)Ni antiperovskites are fully dynamically stable in the cubic configuration under the noncollinear \(\Gamma_{\mathrm{4g}}\) antiferromagnetic ordering, see Fig. 1. As can be observed, the high-frequency modes are absent in Mn\({}_{3}\)Ni, in which case these modes are nitrogen-driven. For instance, the antiperovskite Mn\({}_{3}\)_B_N structure can be viewed as magnetic Mn-based kagome lattices with _B_-sites embedded into them and separated by nitrogen layers. In Fig. 2 we present the entire electronic characterization for the Mn\({}_{3}\)NiN and Mn\({}_{3}\)Ni antiperovskites. Here, the full orbitally-projected electronic band structure is presented, in Fig. 2(a), as well as the local density of states, in Fig. 2(b), and the computed anomalous Hall conductivity for the \(\sigma_{xy}\) and \(\sigma_{111}\) terms, in Fig. 2(c). At first glance, we can observe from Fig. 2(a) that there is a substantial reduction of the available states close to the Fermi energy, here located at \(E_{F}=0.0\) eV by notation, once the N-site is introduced in the antiperovskite. It can be appreciated that the major contribution at and above the Fermi level is associated with the Mn:3\(d\) orbitals in both cases. As for the Ni states, those appear to be located well below \(E=-0.5\) eV and are quite localized around \(-1.5\) eV, as in an insulator. Nevertheless, a small contribution from the Ni states can be observed above the Fermi level. The latter is expected because the antiperovskite structure can be understood, as commented before, as (111) Mn-based kagome planes with Ni sites embedded and separated by the N-sites. Importantly, as the N-site is located at the octahedral center, the Mn\({}_{3}\)NiN and the Mn\({}_{3}\)Ni hold the same crystallographic and magnetic symmetry. Thus, only modifications in the electronic structure are observed, and the form of the AHC tensor is kept fixed. 
In this case, the anomalous Hall conductivity component, \(\sigma_{xy}\), has been computed by the relationship: \[\sigma_{xy}=-\frac{2\pi e^{2}}{h}\sum_{n}^{occ}\int_{BZ}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}f_{n}(\mathbf{k})\Omega_{n,xy}(\mathbf{k}), \tag{1}\] where \(\Omega_{xy}(\mathbf{k})=\sum_{n}^{occ}f_{n}(\mathbf{k})\Omega_{n,xy}(\mathbf{k})\) is the sum over all the occupied \(n\)-bands and \(f_{n}(\mathbf{k})\) represents the Fermi distribution. Moreover, the symmetry-allowed AHC components within the \(\Gamma_{4g}\) ordering in the \(R\bar{3}m^{\prime}\) magnetic symmetry group are: \[\sigma_{R\bar{3}m^{\prime}}=\begin{pmatrix}0&\sigma_{xy}&-\sigma_{xy}\\ -\sigma_{xy}&0&\sigma_{xy}\\ \sigma_{xy}&-\sigma_{xy}&0\end{pmatrix} \tag{2}\] The charge at the Ni-site is expected to be the same, and only changes in the allowed electronic states in the proximity of the Fermi level might influence the anomalous Hall conductivity. Additionally, as the AHC is strongly dependent on the spin-orbit coupling strength [63], the absence or presence of the nitrogen octahedral-center site has a negligible effect on \(\sigma_{xy}\) through the SOC channel. In Fig. 2(c) we show the computed anomalous Hall conductivity \(\sigma_{xy}\) in both compounds, as well as the \(\sigma_{111}\) component in the magnetic kagome lattice at the (\(111\)) lattice plane. The \(\sigma_{111}\) component is computed as \(\sigma_{111}\equiv\frac{1}{\sqrt{3}}\left(\sigma_{xy}+\sigma_{yz}+\sigma_{xz}\right)\) and corresponds to the conductivity on the (\(111\)) kagome lattice. We then found that, in the absence of the N-site, \(\sigma_{xy}=139\) S\(\cdot\)cm\({}^{-1}\) (\(\sigma_{111}=241\) S\(\cdot\)cm\({}^{-1}\)), whereas in the Mn\({}_{3}\)NiN it is \(\sigma_{xy}=78\) S\(\cdot\)cm\({}^{-1}\) (\(\sigma_{111}=135\) S\(\cdot\)cm\({}^{-1}\)), both at the \(E_{F}\) level. Our findings show a considerable increase of the \(\sigma_{xy}\) in the nitrogen-deficient antiperovskite that can be correlated to the increase of the available electronic states close to the Fermi level. The latter enhancement of the \(\sigma_{xy}\) component is in agreement with the electronic band structure, shown and discussed before. Therefore, the N-site directly influences the \(f_{n}(\mathbf{k})\) function in Eq. 1, modifying the \(\sigma_{xy}\) value but keeping the symmetry operations. Regarding the electronic structure: formally, the oxidation state, according to the IUPAC [64, 65, 66], quantifies the oxidation degree of an atom, defined based on the electron counting of the atomic species after bonding. Therefore, the oxidation number can be obtained by following a set of rules, as exposed by A. Walsh _et al._ [67, 68], which, as mentioned before, can be ascribed to electron counting. Then, aiming to estimate the potential oxidation states held by each atomic component in the antiperovskite, we proceeded by obtaining the charges around each site. Fig. 1: (Color online) (a) Mn\({}_{3}\)Ni and Mn\({}_{3}\)NiN \(Pm\bar{3}m\) structures where the N-site octahedral center is shown in the latter and absent in the former. In (b) are shown the chiral antiferromagnetic noncollinear \(\Gamma_{4g}\) and \(\Gamma_{5g}\) orderings. Here, the magnetic moments per Mn atom are shown as grey arrows by notation. In (c) are presented the full phonon-dispersion curves obtained for the Mn\({}_{3}\)NiN, as well as the nitrogen-deficient Mn\({}_{3}\)Ni antiperovskites. The latter were computed with \(U=2.0\) eV. In both cases, we consider the \(\Gamma_{4g}\) chiral antiferromagnetic ordering ground state. As the electronic structure in the Mn\({}_{3}\)NiN and Mn\({}_{3}\)Ni antiperovskites is metallic, the Born effective charges, \(Z^{*}\), are 
not accessible due to the ill-defined polarization in metals, and therefore, the Bader charges offer an alternative route to estimate the charges of the atomic species. In Table 1 are condensed the results related to the Bader charges computed in the Mn\({}_{3}\)NiN and Mn\({}_{3}\)Ni antiperovskites. These values were obtained for the PBEsol+\(U\), SCAN, and HSE06 exchange-correlation approaches. We can observe that, independently of the exchange-correlation considerations, following Jacob's ladder from the GGA+\(U\) to the hybrid-functional approach [69], the computed charges are close to \(+0.9e^{-}\), \(-0.7e^{-}\), and \(-2.0e^{-}\) for the Mn, Ni, and N sites, respectively, in the Mn\({}_{3}\)NiN case. The previous charges are in contrast with the computed charges of \(+0.3e^{-}\) and \(-0.7e^{-}\) for Mn and Ni in the Mn\({}_{3}\)Ni. These results suggest a good representation of the charges with the PBEsol+\(U\) approach. Thus, the PBEsol+\(U\) exchange-correlation is used for further analysis. As expected, the Mn-sites hold, in both antiperovskite cases, a positive charge associated with a Mn\({}^{\alpha+}\) oxidation state. Meanwhile, the corner Ni-site shows a negative charge leading to a Ni\({}^{\beta-}\) oxidation state. In the nitrogen case, the Bader charge is negative, as expected, and is related to the N\({}^{\delta-}\) (\(\delta=3\)) oxidation state of this anionic site. Interestingly, the Mn's Bader charge is \(+0.259e^{-}\) in the Mn\({}_{3}\)Ni whereas it is \(+0.907e^{-}\) in the Mn\({}_{3}\)NiN. This can be explained by the charge that localizes at the nitrogen site, transferred from the manganese sites, when nitrogen is incorporated into the antiperovskite. Moreover, this result is in agreement with the larger number of electronic states close to the Fermi level available in the Mn\({}_{3}\)Ni in comparison to Mn\({}_{3}\)NiN, also explaining the AHC results. Aiming to compare with other insulating antiperovskites, such as Ca\({}_{3}\)SnO and Ca\({}_{3}\)BiN, we have computed the Bader charges and found that \(Z_{Ca}=+1.308e^{-}\), \(Z_{Sn}=-2.364e^{-}\), and \(Z_{O}=-1.558e^{-}\) for the Ca\({}_{3}\)SnO oxide, and \(Z_{Ca}=+1.333e^{-}\), \(Z_{Bi}=-1.955e^{-}\), and \(Z_{N}=-2.043e^{-}\) for the Ca\({}_{3}\)BiN nitride antiperovskite case. As the Born effective charges, \(Z^{*}\), are accessible in these insulating compounds, we observed that the diagonal terms are \(Z^{*}_{Ca}=+2.388e^{-}\), \(Z^{*}_{Sn}=-3.023e^{-}\), and \(Z^{*}_{O}=-3.381e^{-}\) in the Ca\({}_{3}\)SnO oxide, and \(Z^{*}_{Ca}=+2.380e^{-}\), \(Z^{*}_{Bi}=-2.899e^{-}\), and \(Z^{*}_{N}=-4.397e^{-}\) for the Ca\({}_{3}\)BiN nitride. The deviation of the Born effective charges with respect to the nominal values (\(Z_{Ca}=+2e^{-}\), \(Z_{Sn}=-4e^{-}\), \(Z_{Bi}=-3e^{-}\), \(Z_{O}=-2e^{-}\), and \(Z_{N}=-3e^{-}\)) can be explained in terms of the large polarizability of the Sn-O and Bi-N bonds, widely observed and reported in ferroelectric perovskite oxides [70, 71]. 
Despite the charge underestimation shown by the Bader analysis, and the overestimation obtained with the Born effective charges, the latter results are in fair agreement with the expected oxidation states of \(A_{3}^{2+}B^{4-}O^{2-}\) and \(A_{3}^{2+}B^{3-}N^{3-}\) compounds, respectively. As such, these findings are consistent with the experimentally measured, by Mossbauer spectroscopy and X-ray photoemission spectroscopy, XPS, oxidation states of the atomic constituents in the Sr\({}_{3}\)SnO and Sr\({}_{3}\)PbO antiperovskites [22, 23]. In such compounds, the oxidation state was associated with Sn\({}^{4-}\) and Pb\({}^{4-}\) states based on the experimental results. Additionally, these results on the oxidation states are also in agreement with the calculations in other antiperovskite insulators such as Ba\({}_{3}\)SiO and Ba\({}_{3}\)SiO/Ba\({}_{3}\)GeO ferroelectric superlattices, in which the \(Z^{*}\) values are \(+2.396e^{-}\), \(-4.720e^{-}\), \(-4.594e^{-}\), and \(-2.801e^{-}\) for the Ba, Si, Ge, and O sites, respectively [72, 11]. \begin{table} \begin{tabular}{c c c c|c} \hline \hline XC\({}_{PBEsol}\) & \(Z_{Mn}\) & \(Z_{Ni}\) & \(Z_{N}\) & \(m\) (\(\mu_{B}\cdot\)Mn\({}^{-1}\)) \\ \hline Mn\({}_{3}\)NiN & \(+0.907\) & \(-0.723\) & \(-2.006\) & \(3.560\) \\ Mn\({}_{3}\)Ni & \(+0.259\) & \(-0.704\) & — & \(3.583\) \\ \hline XC\({}_{SCAN}\) & \(Z_{Mn}\) & \(Z_{Ni}\) & \(Z_{N}\) & \(m\) (\(\mu_{B}\cdot\)Mn\({}^{-1}\)) \\ \hline Mn\({}_{3}\)NiN & \(+0.806\) & \(-0.745\) & \(-1.957\) & \(3.418\) \\ Mn\({}_{3}\)Ni & \(+0.262\) & \(-0.788\) & — & \(3.470\) \\ \hline XC\({}_{HSE06}\) & \(Z_{Mn}\) & \(Z_{Ni}\) & \(Z_{N}\) & \(m\) (\(\mu_{B}\cdot\)Mn\({}^{-1}\)) \\ \hline Mn\({}_{3}\)NiN & \(+0.973\) & \(-0.790\) & \(-2.130\) & \(3.854\) \\ Mn\({}_{3}\)Ni & \(+0.324\) & \(-0.718\) & — & \(3.883\) \\ \hline \hline \end{tabular} \end{table} Table 1: Bader charges, in \(e^{-}\) units, computed for the Mn, Ni, and N sites in the Mn\({}_{3}\)NiN and Mn\({}_{3}\)Ni considering the chiral antiferromagnetic \(\Gamma_{4g}\) ordering. The latter values were extracted under several exchange-correlation representations. Additionally, we present the magnetic moment, per Mn atom, in each case. Figure 2: (Color online) (a) Atomically projected band structure, and (b) atomically projected density of states, DOS. Here, the Mn, Ni, and N states are denoted in violet, blue, and green colors, respectively. Additionally, in (b) the total DOS is denoted in grey color. (c) Anomalous Hall conductivity, \(\sigma_{xy}\) and \(\sigma_{111}\) components, computed for the \(\Gamma_{4g}\) ordering in the Mn\({}_{3}\)NiN and Mn\({}_{3}\)Ni. To contrast the obtained oxidation states in the antiperovskite Mn\({}_{3}\)NiN, we defined a hypothetical perovskite compound, NiMnN\({}_{3}\), by inverting the Mn and N sites. Thus, the Mn occupies the octahedral center whereas the N sites form the octahedra, _i.e._, MnN\({}_{6}\). Here, the Ni sites remain at the cell's corner site. In such a compound, we have fully relaxed the structural and electronic degrees of freedom and found a metallic behavior with a tangible magnetic response, in which the Mn holds \(m=2.501\)\(\mu_{B}\cdot\)Mn\({}^{-1}\) and the Ni \(m=-1.080\)\(\mu_{B}\cdot\)Ni\({}^{-1}\). After extracting the Bader charges, we obtained values of \(Z_{Mn}=+1.981e^{-}\), \(Z_{Ni}=+0.888e^{-}\), and \(Z_{N}=-0.957e^{-}\). As expected, the \(A^{\alpha+}B^{\beta+}X_{3}^{\delta-}\) balance is kept as Ni\({}^{\alpha+}\)Mn\({}^{\beta+}\)N\({}_{3}^{\delta-}\). 
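As a quick, self-contained sanity check on the PBEsol+\(U\) Bader charges of Table 1, the per-formula-unit charges should approximately cancel; the short Python sketch below verifies this with the tabulated values.

```
# Charge-neutrality check on the PBEsol+U Bader charges of Table 1.
charges = {
    "Mn3NiN": {"Mn": +0.907, "Ni": -0.723, "N": -2.006},
    "Mn3Ni":  {"Mn": +0.259, "Ni": -0.704},
}
for formula, Z in charges.items():
    total = 3 * Z["Mn"] + Z["Ni"] + Z.get("N", 0.0)
    print(f"{formula}: sum of Bader charges = {total:+.3f} e-")
# Mn3NiN sums to -0.008 e- and Mn3Ni to +0.073 e-, i.e., both are
# neutral within the error of the Bader partitioning.
```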
It is worth mentioning that the Bader charges seem to underestimate the computed charge at the atomic sites, as observed, for example, in Ca\({}_{3}\)BiN, possibly due to the partitioning methodology and the exchange-correlation approach [73]. Nevertheless, it can be concluded that, in going from the perovskite to the antiperovskite structure, the Ni-site oxidation state is reversed from positive, in the former, to negative in the latter. Moving forward, we applied our analysis to several members of the Mn\({}_{3}\)_BN_ family in order to extrapolate our findings. In Table 2 we present our calculations of the Bader charges across several reported Mn-based antiperovskites. In all the cases, the chiral antiferromagnetic \(\Gamma_{4g}\) ordering was considered as the magnetic ground state in the calculations. As expected, the N sites remain negative, with values between \(-2.0e^{-}\) and \(-1.6e^{-}\), whereas the Mn sites vary from \(+0.9e^{-}\) to \(+0.6e^{-}\). Thus, we followed the trend vertically along the periodic table, from \(3d\) in Ni to \(5d\) in Pt, and horizontally from Ni to Ga. We found an increase of the charge at the \(B\)-site from Ni to Pt, suggesting an increase of the negative oxidation state, for example, from \(\beta\sim 1-\) in Ni to \(\beta\sim 2-\) in Pt. In the Mn\({}_{3}\)IrN case, the Bader charge is close to the value observed for the Pt site. Interestingly, the received charge is possibly located in the open \(s\)- and \(d\)-orbitals. On the contrary, the charge at the \(B\)-site decreases from Ni to Ga; this can be explained in terms of the tendency, in this case, to keep the outer electronic shells closed, which limits the space for acquired charge and therefore diminishes the possible negative oxidation state. The magnetic moment per Mn site remains between 3.5 \(\mu_{B}\cdot\)Mn\({}^{-1}\) and 3.8 \(\mu_{B}\cdot\)Mn\({}^{-1}\). Although we are aware that the charge obtained through the Bader approach is underestimated, and fluctuations in the values of the charges could be expected due to the partitioning method employed [68, 73], our findings consistently suggest negative oxidation states in metals when located at the antiperovskite's corner \(B\)-site. Finally, it is worth noticing that, as the spin-orbit coupling, SOC, increases with the oxidation state [74], the \(B\)-site's oxidation state is paramount to understanding its contribution to the anomalous Hall conductivity. Thus, in the case of Mn\({}_{3}\)_BN_ compounds, the SOC possibly decreases due to the negative oxidation state at the \(B\)-site when compared with perovskite compounds. \begin{table} \begin{tabular}{l c c c|c} \hline \hline Mn\({}_{3}\)\(B\)N & \(Z_{Mn}\) & \(Z_{B}\) & \(Z_{N}\) & \(B\)-site \\ \hline Mn\({}_{3}\)NiN & \(+0.907\) & \(-0.723\) & \(-2.006\) & Ni:[Ar]4\(s^{2}3d^{8}\) \\ Mn\({}_{3}\)PdN & \(+0.884\) & \(-1.023\) & \(-1.629\) & Pd:[Kr]5\(s^{0}4d^{10}\) \\ Mn\({}_{3}\)PtN & \(+0.982\) & \(-1.351\) & \(-1.596\) & Pt:[Xe]6\(s^{1}5d^{9}\) \\ Mn\({}_{3}\)IrN & \(+0.946\) & \(-1.273\) & \(-1.566\) & Ir:[Xe]6\(s^{2}5d^{7}\) \\ \hline \hline Mn\({}_{3}\)NiN & \(+0.907\) & \(-0.723\) & \(-2.006\) & Ni:[Ar]4\(s^{2}3d^{8}\) \\ Mn\({}_{3}\)CuN & \(+0.754\) & \(-0.601\) & \(-1.661\) & Cu:[Ar]4\(s^{1}3d^{10}\) \\ Mn\({}_{3}\)ZnN & \(+0.682\) & \(-0.391\) & \(-1.656\) & Zn:[Ar]4\(s^{2}3d^{10}\) \\ Mn\({}_{3}\)GaN & \(+0.593\) & \(-0.136\) & \(-1.643\) & Ga:[Ar]4\(s^{2}3d^{10}4p^{1}\) \\ Mn\({}_{3}\)SnN & \(+0.661\) & \(-0.355\) & \(-1.630\) & Sn:[Kr]5\(s^{2}4d^{10}5p^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Computed Bader charges, in \(e^{-}\) units, for the Mn, \(B\), and N sites in the Mn\({}_{3}\)\(B\)N family within the chiral antiferromagnetic \(\Gamma_{4g}\) ordering. We also present the electronic configuration of each \(B\)-site element, with the outer valence electrons in the neutral state. In the Mn and N cases, the outer electrons are [Ar]4\(s^{2}3d^{5}\) and [He]2\(s^{2}2p^{3}\), respectively. ## 4 Conclusions: In this paper, we have studied the electronic structure of the Mn\({}_{3}\)NiN and Mn\({}_{3}\)Ni magnetically chiral noncollinear antiferromagnetic antiperovskites by utilizing first-principles calculations. We found that the N site expands the cell when it is located at the center of the octahedron. Nonetheless, due to the centered position, the symmetry operations and expected properties are conserved. 
We observed a tangible increase of the available electronic states close to the Fermi level that favors the conductivity, as in the case of the anomalous Hall effect, in which \(\sigma_{xy}=139\) S\(\cdot\)cm\({}^{-1}\) (\(\sigma_{111}=241\) S\(\cdot\)cm\({}^{-1}\)) in the absence of the N site, in contrast to \(\sigma_{xy}=78\) S\(\cdot\)cm\({}^{-1}\) (\(\sigma_{111}=135\) S\(\cdot\)cm\({}^{-1}\)) in the Mn\({}_{3}\)NiN counterpart. Our findings suggest that the nitrogen inclusion in the Mn\({}_{3}\)NiN system enhances a positive oxidation state, possibly \(\sim\)1+, in the Mn whereas, more interestingly, the Ni sites hold a negative oxidation state, potentially \(\sim\)1-. This behavior is observed although the overall electronic structure remains metallic. Finally, our findings also suggest that several transition metals may exhibit negative oxidation states when located at the \(B\)-site in the Mn\({}_{3}\)_BN_ antiperovskites. We thus hope that our results will motivate further studies of antiperovskite structures, which might be ideal candidates for further investigating negative oxidation states in metals. ## Author contributions All of the authors were involved in the preparation and development of the manuscript. Moreover, all of the authors read and approved the final manuscript. ## Conflict of interest The authors declare no personal or financial conflict of interest with any person or organization. ## Acknowledgments: The calculations presented in this paper were carried out using the Grid UIS-2 experimental testbed, being developed under the Universidad Industrial de Santander (SC3-UIS) High Performance and Scientific Computing Centre, a development action with support from UIS Vicerrectoria de Investigacion y Extension (VIE-UIS) and several UIS research groups, as well as other funding resources. Additionally, we acknowledge the computational support extended to us by Laboratorio de Supercomputo del Sureste de Mexico (LNS), Benemerita Universidad Autonoma de Puebla, BUAP, for performing heavy theoretical calculations. A. C. Garcia-Castro acknowledges the grant No. 
2677 entitled "Quiralidad y Ordenanimento Magnetico en Sistemas Cristalinos: Estudio Teorico desde Primeros Principios" supported by the VIE - UIS.
2310.04503
${\rm H{\scriptsize ALO}F{\scriptsize LOW}}$ I: Neural Inference of Halo Mass from Galaxy Photometry and Morphology
We present ${\rm H{\scriptsize ALO}F{\scriptsize LOW}}$, a new machine learning approach for inferring the mass of host dark matter halos, $M_h$, from the photometry and morphology of galaxies. ${\rm H{\scriptsize ALO}F{\scriptsize LOW}}$ uses simulation-based inference with normalizing flows to conduct rigorous Bayesian inference. It is trained on state-of-the-art synthetic galaxy images from Bottrell et al. (2023; arXiv:2308.14793) that are constructed from the IllustrisTNG hydrodynamic simulation and include realistic effects of the Hyper Suprime-Cam Subaru Strategy Program (HSC-SSP) observations. We design ${\rm H{\scriptsize ALO}F{\scriptsize LOW}}$ to infer $M_h$ and stellar mass, $M_*$, using $grizy$ band magnitudes, morphological properties quantifying characteristic size, concentration, and asymmetry, total measured satellite luminosity, and number of satellites. We demonstrate that ${\rm H{\scriptsize ALO}F{\scriptsize LOW}}$ infers accurate and unbiased posteriors of $M_h$. Furthermore, we quantify the full information content in the photometric observations of galaxies in constraining $M_h$. With magnitudes alone, we infer $M_h$ with $\sigma_{\log M_h} \sim 0.115$ and 0.182 dex for field and group galaxies. Including morphological properties significantly improves the precision of $M_h$ constraints, as does total satellite luminosity: $\sigma_{\log M_h} \sim 0.095$ and 0.132 dex. Compared to the standard approach using the stellar-to-halo mass relation, we improve $M_h$ constraints by $\sim$40\%. In subsequent papers, we will validate and calibrate ${\rm H{\scriptsize ALO}F{\scriptsize LOW}}$ with galaxy-galaxy lensing measurements on real observational data.
ChangHoon Hahn, Connor Bottrell, Khee-Gan Lee
2023-10-06T18:00:13Z
http://arxiv.org/abs/2310.04503v1
# HaloFlow I: Neural Inference of Halo Mass from Galaxy Photometry and Morphology ###### Abstract We present HaloFlow, a new machine learning approach for inferring the mass of host dark matter halos, \(M_{h}\), from the photometry and morphology of galaxies. HaloFlow uses simulation-based inference with normalizing flows to conduct rigorous Bayesian inference. It is trained on state-of-the-art synthetic galaxy images from Bottrell et al. (2023) that are constructed from the IllustrisTNG hydrodynamic simulation and include realistic effects of the Hyper Suprime-Cam Subaru Strategy Program (HSC-SSP) observations. We design HaloFlow to infer \(M_{h}\) and stellar mass, \(M_{*}\), using \(grizy\) band magnitudes, morphological properties quantifying characteristic size, concentration, and asymmetry, total measured satellite luminosity, and number of satellites. We demonstrate that HaloFlow infers accurate and unbiased posteriors of \(M_{h}\). Furthermore, we quantify the full information content in the photometric observations of galaxies in constraining \(M_{h}\). With magnitudes alone, we infer \(M_{h}\) with \(\sigma_{\log M_{h}}\sim 0.115\) and \(0.182\) dex for field and group galaxies. Including morphological properties significantly improves the precision of \(M_{h}\) constraints, as does total satellite luminosity: \(\sigma_{\log M_{h}}\sim 0.095\) and \(0.132\) dex. Compared to the standard approach using the stellar-to-halo mass relation, we improve \(M_{h}\) constraints by \(\sim\)40%. In subsequent papers, we will validate and calibrate HaloFlow with galaxy-galaxy lensing measurements on real observational data. large-scale structure of the Universe -- galaxy clusters -- galaxy groups -- Machine learning ## 1 Introduction Inferring the masses of host dark matter halos of galaxies has significant implications for cosmology and galaxy formation. The abundance of most massive halos that host galaxy clusters, for instance, is sensitive to both the expansion history of the Universe and the growth rate of structure (_e.g._ Voit 2005; Allen et al., 2011; Kravtsov & Borgani, 2012; Weinberg et al., 2013; Mantz et al., 2015; Dodelson et al., 2016). It was identified as one of the most promising dark energy probes by the Dark Energy Task Force (Albrecht et al., 2006). With upcoming wide-field surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST; Ivezic et al., 2019), galaxy cluster studies are expected to significantly improve current constraints on dark energy. Dark matter halos also profoundly influence the evolution and properties of galaxies that they host (see Wechsler & Tinker, 2018, for a recent review). Galaxy properties, such as color, stellar mass, star formation rates, and morphology, have long been shown to depend significantly on local environment, which is primarily defined by the halo (_e.g._Oemler, 1974; Davis & Geller, 1976; Dressler, 1980; Hogg et al., 2004; Kauffmann et al., 2004; Blanton et al., 2005; Baldry et al., 2006; Blanton & Moustakas, 2009; Hahn et al., 2015). Observational studies of this galaxy-halo connection have now firmly established that halo mass plays the most dominant role (_e.g._Tinker et al., 2011; Moster et al., 2018; Behroozi et al., 2019). Halos can also be used to investigate the cosmic baryon distribution. They harbor a significant fraction of cosmic baryons, both in their stellar and interstellar media (ISM) components as well as in their extended circum-galactic media (CGM). 
The CGM, notably, exists in a warm ionized state that is not easily amenable to direct observations. This leads to large uncertainties regarding its overall contribution to the cosmic baryon budget, _i.e._ part of the so-called "missing baryon problem" (_e.g._ Fukugita & Peebles, 2004; Cen & Ostriker, 2006; Bregman, 2007). Fast radio bursts (FRBs, see Cordes & Chatterjee, 2019 for a review) are a new probe that can constrain the integrated free electron column density along each line-of-sight through their observed frequency dispersion (e.g., McQuinn, 2014; Macquart et al., 2020). By combining localized FRBs with detailed observations of intervening foreground galaxies, ongoing observations promise to constrain the cosmic baryon partition between the CGM and the intergalactic medium (IGM) (Lee et al., 2022, 2023). However, since the amount and extent of CGM gas is expected to scale with the underlying halo mass (Prochaska & Zheng, 2019; Khrykin et al., 2023), uncertainties in the halo mass of the intervening galaxies constitute a major source of uncertainty in efforts to study the cosmic baryon distribution. Despite the pivotal role that halos play across cosmology and galaxy evolution, inferring the properties of halos remains a major observational challenge. A number of different methods are currently used to infer halo mass. For example, the gravitational potential of halos can be directly probed using gravitational lensing (_e.g._ Mandelbaum et al., 2006; Cacciato et al., 2009, 2013; Mandelbaum et al., 2016; Huang et al., 2020). Lensing mass measurements, however, require deep and high-resolution imaging, especially for lower mass halos. This makes it difficult to extend the approach to large galaxy samples. Satellite kinematics has also been used to infer halo mass (_e.g._ Norberg et al., 2008; More et al., 2009, 2011; Lange et al., 2019). These studies, however, rely on the assumption of virial equilibrium, on the velocity bias between the distribution of matter and satellite galaxies, and on accurate identification of satellite galaxies. Galaxy-based halo mass estimation methods that use the phase space or richness information of galaxies, more broadly, have been shown to be significantly susceptible to systematics (see Old et al., 2014, 2015, 2018; Wojtak et al., 2018, for an overview). There are also more indirect methods for inferring halo mass. Abundance matching methods assume a monotonic relation between halo mass and galaxy stellar mass or luminosity. Halo masses are assigned to galaxies by matching the cumulative number densities of halos and galaxies (Kravtsov et al., 2004; Tasitsiomi et al., 2004; Vale and Ostriker, 2004; Hearin et al., 2013). Such an approach has also been used in conjunction with halo-based group finding algorithms: _e.g._ Yang et al. (2009); Tinker et al. (2011); Tinker (2022). These methods ultimately rely on the well-studied stellar-to-halo-mass relation (SHMR; see Wechsler and Tinker, 2018, and references therein). Consequently, they do not exploit additional galaxy properties beyond stellar mass: _e.g._ color, morphology. In this work, we present HaloFlow, a new machine learning (ML) based approach that utilizes the full photometric and morphological information of galaxies for inferring their host halo masses. HaloFlow goes beyond previous ML-based halo mass estimation methods (_e.g._ Ntampaka et al., 2015, 2016; Calderon and Berlind, 2019; Villanueva-Domingo et al., 2022) in two key ways. 
First, it uses simulation-based inference (SBI) based on neural density estimation to conduct rigorous Bayesian inference. We produce full posterior distributions of halo masses that accurately quantify the statistical uncertainties. Second, HaloFlow is designed to be applied directly to observations. To do this, we use state-of-the-art synthetic galaxy images made with a dust radiative transfer forward model (Bottrell et al., 2023). The images are constructed from the IllustrisTNG cosmological magneto-hydrodynamical simulations (hereafter "TNG"; Weinberger et al., 2018; Pillepich et al., 2018; Nelson et al., 2018) and include the full observational realism of Subaru Hyper Suprime-Cam (HSC) imaging data obtained through the HSC Subaru Strategy Program (HSC-SSP; Aihara et al., 2022). This work is the first of a series of papers, in which we present HaloFlow and validate its performance and accuracy. Furthermore, we quantify the information content of photometric and morphological properties of galaxies for constraining halo mass. We begin in Section 2 with a brief explanation of our forward-modeled synthetic images. We present HaloFlow in Section 3. Afterwards, we present the results of applying HaloFlow in Section 4 and discuss their implications in Section 5. ## 2 Data One of the main components of SBI is a forward-model of the observable. In this work, we use forward-modeled images and corresponding photometric and morphological measurements of galaxies from TNG. Below, we briefly describe the forward-model. We refer readers to Bottrell et al. (2023) for full details. ### TNG simulations The TNG simulations1 are a suite of publicly available cosmological magneto-hydrodynamical simulations (Weinberger et al., 2018; Pillepich et al., 2018; Springel et al., 2018; Marinacci et al., 2018; Naiman et al., 2018; Nelson et al., 2018; Nelson et al., 2019) that use Arepo2 (Springel, 2010) to track the co-evolution of gas, stars, dark matter, super-massive black holes, and magnetic fields from \(z=127\) to \(z=0\). The model includes subgrid treatments for the formation and evolution of stellar populations, black hole growth, radiative cooling, stellar and black hole feedback, and magnetic fields. TNG includes simulations run at three sets of volumes and resolutions. This work makes use of data derived from the highest-resolution TNG50 simulation, which is run in a \((35\,\mathrm{cMpc}/h)^{3}\) box with baryonic mass resolution of \(M_{b}\approx 8.5\times 10^{4}\mathrm{M}_{\odot}\). TNG50 galaxies with stellar masses of \(M_{*}\geq 10^{9}M_{\odot}\) (i.e. \(\gtrsim 10^{4}\) star particles) have been shown to be reasonably resolved with robust stellar structures (Ludlow et al., 2021, 2023). ### Synthetic images and measurements TNG50 galaxies spanning \(0\leq z\leq 0.7\) and \(M_{*}\geq 10^{9}M_{\odot}\) were forward-modeled into synthetic images from the HSC-SSP by Bottrell et al. (2023). The forward-modeling procedure first uses the SKIRT dust radiative transfer code3 (Baes et al., 2011; Camps & Baes, 2015, 2020) to make noise/background-free, high-resolution (idealized) images in the HSC \(grizy\) bands4 (Kawanomoto et al., 2018) and several supplementary filters spanning \(0.3-5\) microns. The radiative transfer model uses the Bruzual & Charlot (2003) stellar population synthesis (SPS) library to model light from stellar populations older than 10 Myr and assumes a Chabrier (2003) initial mass function. 
Continuum and nebular line emission from young stellar populations embedded in birth clouds (star particle ages \(<10\) Myr) are modeled with the MAPPINGS III library (Groves et al., 2008). Dust is not explicitly modeled in TNG; therefore, we carry out post-processing to model the absorption/scattering of light by dust. We ascribe dust densities to gas cells using the method described by Popping et al. (2022), in which the dust-to-gas mass ratio scales with gas metallicity (Remy-Ruyer et al., 2014). The dust model further assumes a Weingartner & Draine (2001) Milky Way dust grain composition and size distribution. Each galaxy is 'observed' along four different orientations in order to increase the overall statistical sample. Footnote 3: [https://skirt.ugent.be](https://skirt.ugent.be) Footnote 4: [https://www.tng-project.org/explore/gallery/bottrell23i](https://www.tng-project.org/explore/gallery/bottrell23i) Next, the survey effects of the HSC-SSP Public Data Release 3 (PDR3; Aihara et al., 2022) are applied to the output images from SKIRT. An adapted version of RealSim5 (Bottrell et al., 2019) is used to assign insertion locations, perform flux calibration, spatially rebin to the HSC pixel scale, convolve the images with reconstructed HSC point-spread functions, and insert the galaxies into realistic HSC-like images6. The images have sufficiently large fields-of-view that they include extended structure, satellites, and nearby group/cluster members. Eisert et al. (in prep.) make a detailed comparison of these TNG50 mocks to real HSC galaxy images and show that their morphologies broadly agree. Footnote 5: [https://github.com/cbottrell/RealSim](https://github.com/cbottrell/RealSim) Footnote 6: [https://www.tng-project.org/explore/gallery/bottrell23](https://www.tng-project.org/explore/gallery/bottrell23) After the TNG50 mock images were generated, we carried out photometric and morphological measurements using the GALIGHT7 surface-brightness decomposition software (Ding et al., 2020). Specifically, we measure magnitudes and effective radii, \(R_{\mathrm{eff}}\), from parametric Sersic fits, and morphologies quantified by the Concentration, Asymmetry, and Smoothness/Clumpiness (_CAS_) parameters (Abraham et al., 1994, 1996; Bershady et al., 2000; Conselice, 2003). Since these quantities are measured from the synthetic HSC images, they include realistic measurement uncertainties. In Figure 1, we show forward-modeled \(gri\) composite images of 12 randomly selected central galaxies from the data set. For this paper, we focus on central galaxies at \(z=0.1\), classified using the TNG Friends-of-Friends (FoF; Davis et al., 1985) group catalog. For each central galaxy, we compile its true stellar mass (\(M_{*}\)), true host halo mass (\(M_{h}\)), the 'observed' \(grizy\)-band Sersic magnitudes \(\mathbf{X}_{\rm mag}\), and morphological properties \(\mathbf{X}_{\rm morph}=\{R_{\rm eff,X},c_{X},A_{X}\}\). The variables \(R_{\rm eff,X},c_{X},A_{X}\) correspond to the characteristic size, concentration, and asymmetry in the \(X=g,r,i,z,y\) bands. We also compile the total measured luminosity, \(L_{\rm sat.,X}\), and number, \(N_{\rm sat}\), of satellites brighter than \(M_{r}<-18\) within each individual group. In total, we use 7,468 photometric measurements from 1,867 simulated central galaxies. We reserve a subset of 125 random central galaxies with \(M_{*}>10^{9.5}M_{\odot}\) for testing HaloFlow and use the rest for training. 
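To make the observable sets concrete, below is a minimal sketch of how such input vectors could be assembled; the catalog column names (`mag_g`, `reff_g`, etc.) are hypothetical placeholders rather than the actual schema used in this work:

```python
import numpy as np

bands = ["g", "r", "i", "z", "y"]

def build_features(cat, include=("mag", "morph", "lsat", "nsat")):
    """Stack the nested observable sets into one input vector per galaxy.

    `cat` is assumed to map column names to 1D arrays over galaxies."""
    cols = []
    if "mag" in include:                  # X_mag: grizy Sersic magnitudes
        cols += [cat[f"mag_{b}"] for b in bands]
    if "morph" in include:                # X_morph: R_eff, C, A per band
        for b in bands:
            cols += [cat[f"reff_{b}"], cat[f"conc_{b}"], cat[f"asym_{b}"]]
    if "lsat" in include:                 # total satellite luminosity
        cols.append(cat["L_sat"])
    if "nsat" in include:                 # satellite richness (M_r < -18)
        cols.append(cat["N_sat"])
    return np.stack(cols, axis=-1)        # shape (n_galaxies, n_features)

# The four nested sets then have 5, 20, 21, and 22 features, respectively.
```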
Figure 1: Synthetic forward-modeled \(gri\) composite images of galaxies tailored to the HSC-SSP observations. The images are constructed from TNG50 using SKIRT and stellar population synthesis. They include the realistic survey effects as found in the HSC-SSP. We show 12 randomly selected central galaxies from our data set. The magnitudes, \(R_{\rm eff}\), and \(CAS\) morphology parameters used in this work are measured from these forward-modeled images. ## 3 HaloFlow To infer the posterior, \(p(\boldsymbol{\theta}\,|\,\boldsymbol{x})\), of \(\boldsymbol{\theta}=\{M_{*},M_{h}\}\) given observational measurements, \(\boldsymbol{x}\), we use the HaloFlow SBI framework. SBI8 enables inference using only a generative model of mock observations. While various SBI approaches have been applied to inference problems in astronomy (_e.g._ Cameron and Pettitt, 2012; Weyant et al., 2013; Hahn et al., 2017; Alsing et al., 2018; Wong et al., 2020; Zhang et al., 2021), we specifically use an approach based on neural density estimation. In particular, we use "normalizing flow" models (Tabak and Vanden-Eijnden, 2010; Tabak and Turner, 2013), following the SBI approach in SEDFlow (Hahn and Melchior, 2022). Footnote 8: also known as “likelihood-free” or “implicit-likelihood” inference Flows use neural networks to learn an extremely flexible, bijective transformation, \(f:x\mapsto z\), that maps a complex target distribution to a simple base distribution, \(\pi(\boldsymbol{z})\). The target distribution, in our case, is the posterior \(p(\boldsymbol{\theta}\,|\,\boldsymbol{x})\), and \(f\) is designed to be invertible and have a tractable Jacobian. This is so that the posterior can be evaluated from \(\pi(\boldsymbol{z})\) by a change of variables. We choose a multivariate Gaussian for \(\pi(\boldsymbol{z})\), which makes the posterior easy to sample and evaluate. Out of the different flow architectures, we use Masked Autoregressive Flow (MAF; Papamakarios et al., 2017) models as implemented in the sbi Python package (Greenberg et al., 2019; Tejero-Cantero et al., 2020). We train a flow, \(q_{\boldsymbol{\phi}}\), with hyperparameters \(\boldsymbol{\phi}\), to best approximate the posterior, \(q_{\boldsymbol{\phi}}(\boldsymbol{\theta}\,|\,\boldsymbol{x})\approx p(\boldsymbol{\theta}\,|\,\boldsymbol{x})\). We split the simulated galaxies into a training and validation set with a 90/10 split. Then, for galaxies \(\{(\boldsymbol{\theta}_{i},\boldsymbol{x}_{i})\}\) in the training set, we maximize the total log-likelihood \(\sum_{i}\log q_{\boldsymbol{\phi}}(\boldsymbol{\theta}_{i}\,|\,\boldsymbol{x}_{i})\). This is equivalent to minimizing the KL divergence between \(p(\boldsymbol{\theta},\boldsymbol{x})=p(\boldsymbol{\theta}\,|\,\boldsymbol{x})p(\boldsymbol{x})\) and \(q_{\boldsymbol{\phi}}(\boldsymbol{\theta}\,|\,\boldsymbol{x})p(\boldsymbol{x})\). We use the Adam optimizer (Kingma and Ba, 2017) with a learning rate of \(5\times 10^{-4}\). To prevent overfitting, we stop training when the log-likelihood evaluated on the validation set fails to increase after 20 consecutive epochs. To determine our final normalizing flow, we train a large number (\(\sim\)1000) of flows with architectures determined using the Optuna hyperparameter optimization framework (Akiba et al., 2019). 
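As a sketch of this setup using the sbi package's SNPE interface with a MAF density estimator (the training arrays `train_theta` and `train_x`, the observed vector `x_obs`, and the prior bounds are assumed/illustrative, and the architecture values stand in for what the Optuna search would tune):

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform, posterior_nn

# theta = (log M*, log Mh); x = one of the observable sets defined above.
theta = torch.as_tensor(train_theta, dtype=torch.float32)   # (N, 2)
x = torch.as_tensor(train_x, dtype=torch.float32)           # (N, n_features)

# Illustrative flat prior bounds on (log M*, log Mh).
prior = BoxUniform(low=torch.tensor([9.0, 10.5]),
                   high=torch.tensor([12.0, 15.0]))

# One MAF flow; hidden_features / num_transforms are Optuna-tunable knobs.
maf = posterior_nn(model="maf", hidden_features=64, num_transforms=5)
inference = SNPE(prior=prior, density_estimator=maf)
density_estimator = inference.append_simulations(theta, x).train(
    learning_rate=5e-4,        # Adam, as in the text
    validation_fraction=0.1,   # the 90/10 train/validation split
    stop_after_epochs=20,      # early-stopping patience of 20 epochs
)
posterior = inference.build_posterior(density_estimator)
samples = posterior.sample((10_000,), x=x_obs)   # (log M*, log Mh) draws
```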
We then select five flows, \(q_{\boldsymbol{\phi}}^{j}\), with the lowest validation losses and construct our final flow by ensembling them: \(q_{\boldsymbol{\phi}}(\boldsymbol{\theta}\,|\,\boldsymbol{x})=\sum_{j=1}^{5}q_{\boldsymbol{\phi}}^{j}(\boldsymbol{\theta}\,|\,\boldsymbol{x})/5\). Ensembling flows with different initializations and architectures improves the accuracy of our normalizing flow (Lakshminarayanan et al., 2016; Alsing et al., 2019). \(q_{\boldsymbol{\phi}}\) implicitly includes a prior, \(p(\boldsymbol{\theta})\), which is set by the \(M_{*}\) and \(M_{h}\) distribution of our training data set. Without any corrections, this prior reflects the stellar and halo mass functions. Posteriors with this prior would favor low \(M_{*}\) and \(M_{h}\) values since there are more galaxies with lower \(M_{*}\) and \(M_{h}\). We correct for this implicit prior and impose uniform priors on \(\log M_{*}\) and \(\log M_{h}\) using the Handley and Millea (2019) maximum entropy prior method. In practice, for a sample drawn from our posterior, \(\boldsymbol{\theta}^{\prime}\sim q_{\boldsymbol{\phi}}(\boldsymbol{\theta}\,|\,\boldsymbol{x})\), we impose an importance weight of \(1/p(\boldsymbol{\theta}^{\prime})\). This ensures that we assume uniform priors on \(M_{*}\) and \(M_{h}\). In this work, we make use of different sets of photometric measurements: photometry; photometry and morphology; and so on. For each set of observables, we repeat the entire procedure above and train a separate ensembled flow. ## 4 Results In Figure 2, we present the posteriors of \(M_{*}\) and \(M_{h}\) for an arbitrarily selected field galaxy (left) and group galaxy (right) inferred using HaloFlow with different sets of observables: \(\{X_{\rm mag}\}\) (black), \(\{X_{\rm mag},X_{\rm morph}\}\) (blue), \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat}\}\) (orange), and \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat},N_{\rm sat}\}\) (red). We mark the 68\({}^{\rm th}\) and 95\({}^{\rm th}\) percentiles of the posteriors as contours, as well as the true \(M_{*}\) and \(M_{h}\) values of the galaxy (black x). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline HaloFlow Input & \multicolumn{2}{|c|}{Field Galaxies} & \multicolumn{2}{|c|}{Group Centrals} \\ & \(\sigma_{\log M_{*}}\) & \(\sigma_{\log M_{h}}\) & \(\sigma_{\log M_{*}}\) & \(\sigma_{\log M_{h}}\) \\ \hline \(\{X_{\rm mag}\}\) & 0.096 & 0.115 & 0.151 & 0.182 \\ \(\{X_{\rm mag},X_{\rm morph}\}\) & 0.078 & 0.105 & 0.109 & 0.149 \\ \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat}\}\) & 0.078 & 0.095 & 0.118 & 0.138 \\ \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat},N_{\rm sat}\}\) & 0.073 & 0.095 & 0.108 & 0.132 \\ \hline Standard SHMR Method & & 0.175 & & 0.208 \\ \hline \end{tabular} \end{table} Table 1: HaloFlow median posterior standard deviation (dex) in stellar mass, \(\sigma_{\log M_{*}}\), and halo mass, \(\sigma_{\log M_{h}}\), for field and group galaxies relative to ground truth. Figure 2: \(M_{*}\) and \(M_{h}\) posteriors inferred from different sets of photometric measurements: \(\{X_{\rm mag}\}\) (black), \(\{X_{\rm mag},X_{\rm morph}\}\) (blue), \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat}\}\) (orange), and \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat},N_{\rm sat}\}\) (red). The contours mark the 68\({}^{\rm th}\) and 95\({}^{\rm th}\) percentiles. The left and right sets of panels show the posterior for a field and group galaxy, respectively. We also mark the true \(M_{*}\) and \(M_{h}\) value of the arbitrarily selected simulated galaxy (black \(\times\)). All of the HaloFlow posteriors are consistent with the true \(M_{*}\) and \(M_{h}\) values. Furthermore, the posteriors demonstrate that morphology, satellite luminosity, and richness contribute significant additional constraining power for \(M_{*}\) and \(M_{h}\). 
The field galaxy resides in a \(M_{h}=10^{11.27}M_{\odot}\) halo and has no satellite galaxies brighter than \(M_{r}<-18\); the group galaxy resides in a \(M_{h}=10^{12.41}M_{\odot}\) halo and has 7 satellites brighter than \(M_{r}<-18\). All of the HaloFlow posteriors are consistent with each other and in excellent agreement with the true \(M_{*}\) and \(M_{h}\). For the field galaxy, there is a significant improvement from including \(X_{\rm morph}\). However, there is, expectedly, little improvement from including \(L_{\rm sat}\) or \(N_{\rm sat}\), since it has no satellite galaxies. The central column of Table 1 summarizes the median standard deviation of the HaloFlow posteriors for all field central galaxies in the test sample. With the inclusion of \(X_{\rm morph}\), \(L_{\rm sat}\), and \(N_{\rm sat}\), we improve the precision of the \(M_{*}\) and \(M_{h}\) constraints for the field galaxy sample by \(\sim\)0.023 and 0.020 dex -- a \(\sim\)20% improvement. Meanwhile, for the group central galaxy in Figure 2, including each additional photometric measurement significantly improves the precision of the \(M_{*}\) and \(M_{h}\) constraints. The median standard deviations of the HaloFlow posteriors for all group centrals in the test sample are shown in the right column of Table 1 for each set of observables. Photometric measurements beyond magnitudes improve the precision of the \(M_{*}\) and \(M_{h}\) constraints by \(\sim\)0.043 and 0.050 dex -- a \(\sim\)30% improvement. For both field and group centrals, the HaloFlow posteriors firmly demonstrate that galaxy morphology encodes significant information on both \(M_{h}\) and \(M_{*}\). This confirms previous works that found connections between morphology and local environment (_e.g._ Dressler, 1980; Wilman & Erwin, 2012; Perez-Millan et al., 2023). Our results also show that the photometric measurements of satellite galaxies are informative of \(M_{h}\) and \(M_{*}\). This is consistent with Tinker et al. (2021), who found that total satellite luminosity is an excellent proxy for \(M_{h}\). Beyond confirming previous works, with HaloFlow we precisely quantify the information content of these observables for constraining \(M_{*}\) and \(M_{h}\). Furthermore, HaloFlow provides a rigorous Bayesian inference framework for actually leveraging these photometric measurements. Figure 3: The HaloFlow \(M_{h}\) posterior using \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat},N_{\rm sat}\}\) (red) compared to the \(M_{h}\) constraints from the standard approach using the SHMR (black dashed) for the same field (left) and group galaxy (right) as in Figure 2. The \(M_{h}\) constraint for the standard method is derived by measuring \(M_{*}\) from photometry using SED modeling and then translating it to \(M_{h}\) using the SHMR. With HaloFlow, we can exploit the constraining power of \(X_{\rm morph}\), \(L_{\rm sat}\), and \(N_{\rm sat}\) and improve \(M_{h}\) constraints by \(\sim\)0.08 dex (\(\sim\)40%) over the standard approach. We further compare the HaloFlow \(M_{h}\) posterior (red) to constraints derived from the standard approach (black dashed) for a field (left) and group galaxy (right) in Figure 3. We use the same galaxies as in Figure 2. 
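For intuition, a toy numpy sketch (ours) of this two-step baseline: draw from an \(M_{*}\) posterior, then push the draws through an SHMR with log-normal scatter. The SHMR form below is an illustrative stand-in, not the relation measured from TNG50 in this work; the 0.15 dex scatter follows the value adopted in the text:

```python
import numpy as np

rng = np.random.default_rng(42)

def shmr_mean_logMh(logMs):
    """Mean SHMR, log Mh as a function of log M*.

    A hypothetical linear stand-in; the paper instead measures the
    mean relation directly from TNG50."""
    return 11.0 + 1.6 * (logMs - 9.5)

# Step 1: draws from p(M* | X_mag), e.g. from SED modeling of photometry.
logMs = rng.normal(loc=10.2, scale=0.10, size=10_000)

# Step 2: push each M* draw through the SHMR with 0.15 dex scatter.
logMh = rng.normal(loc=shmr_mean_logMh(logMs), scale=0.15)

# The resulting spread is the halo-mass uncertainty of the baseline;
# it floors at the SHMR scatter no matter how well M* is measured.
print(f"sigma_logMh (standard approach) = {logMh.std():.3f} dex")
```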
For the standard approach, we derive the \(M_{h}\) constraint by first measuring \(M_{*}\) from photometry using SED modeling and then converting \(M_{*}\) to \(M_{h}\) using the SHMR. We first draw samples from the \(M_{*}\) posterior, \(M_{*}^{\prime}\sim p(M_{*}\,|\,X_{\rm mag})\), then sample \(M_{h}^{\prime}\sim p(M_{h}\,|\,M_{*}^{\prime})\) given by the SHMR. We estimate \(p(M_{*}\,|\,X_{\rm mag})\) using HaloFlow and estimate \(p(M_{h}\,|\,M_{*}^{\prime})\) from the TNG simulations9. This ensures that we do not introduce any biases from discrepant assumptions in the SED modeling (_e.g._ stellar library, initial mass function, dust modeling) and provides the most "apples-to-apples" comparison with HaloFlow. For the HaloFlow posterior, we use the posterior from \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat},N_{\rm sat}\}\). With the standard approach, we derive \(\sigma_{\log M_{h}}=0.175\) and \(0.208\) dex for field and group central galaxies, respectively (Table 1). Our HaloFlow \(M_{h}\) constraints are significantly tighter than the standard approach, by \(\gtrsim 0.080\) and \(0.076\) dex, respectively. This corresponds to \(\sim 40\%\) tighter constraints on \(M_{h}\). Footnote 9: We derive the mean SHMR directly from TNG50 (Section 2.1) and use \(\sigma_{\log M_{h}}\sim 0.15\) dex based on Wechsler and Tinker (2018), due to the limited number of galaxies in TNG50. In addition to the tighter \(M_{h}\) constraints, HaloFlow provides a fully consistent framework for deriving \(M_{h}\) directly from observations. In our comparison, as mentioned above, we implement an idealized version of the standard approach where the SED modeling and the SHMR are consistent by construction. However, in practice, the \(M_{*}\) derived from SED modeling is not consistent with the \(M_{*}\) values used in the SHMR from simulations. The \(M_{*}\) from SED modeling is a measurement that depends on the specific assumptions of stellar population synthesis. Depending on modeling choices, the inferred \(M_{*}\) can vary by \(\sim 0.1\) dex (Pacifici et al., 2023). Observational effects, such as background subtraction (Bernardi et al., 2017), can also significantly impact the \(M_{*}\) inferred from SED modeling. Meanwhile, the \(M_{*}\) in the SHMR is a theoretical quantity, typically derived by summing up the masses of all the star particles in a subhalo. Any discrepancies between the \(M_{*}\) measured from SED modeling and that of the simulation will bias the inferred \(M_{h}\). In HaloFlow, all of these effects are consistently accounted for in the forward model. Next, we validate the posteriors from HaloFlow. In Figure 4, we compare the HaloFlow \(M_{*}\) and \(M_{h}\) inferred from \(\{X_{\rm mag}\}\) (black), \(\{X_{\rm mag},X_{\rm morph}\}\) (blue), \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat}\}\) (orange), and \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat},N_{\rm sat}\}\) (red) to the true values (top panels). We show the residuals in the bottom panels. The error bars represent the \(68^{\rm th}\) percentiles of the HaloFlow posteriors. For clarity, we only include \(60\) of the \(125\) galaxies from the test sample in Figure 4. The comparison illustrates that, overall, HaloFlow accurately infers the true \(M_{*}\) and \(M_{h}\). Furthermore, it shows that incorporating \(X_{\rm morph}\), \(L_{\rm sat}\), and \(N_{\rm sat}\) significantly tightens the \(M_{*}\) and \(M_{h}\) posteriors. As additional validation, we use the Lemos et al. (2023a) "data to random point" (DRP) coverage test. For each test sample, we evaluate distances between samples drawn from the HaloFlow posteriors and a random point in parameter space. We compare these distances to the distance between the true \(M_{*}\), \(M_{h}\) and the random point to derive an estimate of the expected coverage probability. This approach is necessary and sufficient to show that a posterior estimator is optimal. 
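As an illustration, here is a self-contained numpy sketch of a DRP-style coverage check under simplifying choices of ours (Euclidean distances, uniformly drawn reference points):

```python
import numpy as np

def drp_coverage(post_samples, truths, rng):
    """DRP-style coverage check (after Lemos et al. 2023a).

    post_samples: (n_test, n_draws, d) posterior draws per test galaxy
    truths:       (n_test, d) true (log M*, log Mh) values
    Returns sorted coverage values f; a calibrated posterior gives
    f ~ Uniform(0, 1), i.e. an ECDF lying on the diagonal."""
    lo, hi = truths.min(axis=0), truths.max(axis=0)
    f = np.empty(len(truths))
    for i, (draws, truth) in enumerate(zip(post_samples, truths)):
        ref = rng.uniform(lo, hi)                    # random reference point
        d_draws = np.linalg.norm(draws - ref, axis=1)
        d_truth = np.linalg.norm(truth - ref)
        f[i] = np.mean(d_draws < d_truth)            # fraction "inside"
    return np.sort(f)

# Usage: ecp = drp_coverage(samples, truths, np.random.default_rng(0)),
# then plot ecp against np.linspace(0, 1, len(ecp)) and compare to the
# diagonal, as in Figure 5.
```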
In Figure 5, we present the DRP coverage test of the HaloFlow posteriors for each of the photometric measurements. Significant discrepancies from the true posterior (black-dashed) can reveal whether the posterior estimates are underconfident, overconfident, or biased. In our case, we find no significant discrepancies. Hence, HaloFlow provides near optimal estimates of the true posteriors for all of the adopted combinations of observables. ## 5 Discussion HaloFlow leverages state-of-the-art forward-modeled galaxy images to exploit additional photometric information that significantly improves constraints on \(M_{*}\) and \(M_{h}\). Hence, a primary determining factor for the fidelity of HaloFlow is the quality of the forward-model. Below, we discuss the caveats and limitations of our forward-model. First, our forward model is based on a particular galaxy formation model: that adopted in the TNG cosmological magneto-hydrodynamical simulation. However, previous works have revealed significant discrepancies among the properties of galaxy populations predicted by different state-of-the-art galaxy formation models. Hahn et al. (2019), for instance, found significant discrepancies among the \(M_{*}\)-star formation rate relations of the Illustris, EAGLE, and MUFASA hydrodynamical simulations. Nevertheless, a number of works have demonstrated the success of TNG at reproducing a wide range of observations: _e.g._ galaxy color bimodality (Nelson et al., 2018), sizes (Genel et al., 2018), optical morphologies (Rodriguez-Gomez et al., 2019; Zanisi et al., 2021), the mass-metallicity relation (Torrey et al., 2019), and the low-redshift quasar luminosity function and black hole mass - stellar bulge mass relation (Weinberger et al., 2018). Figure 4: Inferred HaloFlow \(M_{*}\) and \(M_{h}\) versus the true values for galaxies in our test sample (top panels). The bottom panels show the residuals. We include the posteriors from the different sets of photometric measurements (black, blue, orange, red) and represent their \(68^{\text{th}}\) percentiles with the error bars. We include only a subset of the test sample for clarity. Overall, HaloFlow accurately infers the true \(M_{*}\) and \(M_{h}\). The comparison further confirms that \(X_{\text{morph}}\), \(L_{\text{sat}}\), and \(N_{\text{sat}}\) each significantly tighten the \(M_{*}\) and \(M_{h}\) posteriors for certain galaxies. Furthermore, one of the main relations that HaloFlow exploits from TNG is the SHMR. The SHMR in galaxy formation models is typically calibrated against constraints from observations and is, thus, consistent across different models (Wechsler and Tinker, 2018). In TNG, the SHMR is not explicitly calibrated against observational constraints; however, it is indirectly calibrated since the subgrid physics is optimized to match the \(z=0\) stellar mass function (Pillepich et al., 2018). Our forward model also relies on a specific mass-to-light conversion framework. SKIRT is responsible for generating our synthetic observables from the galaxy physical properties predicted by TNG. 
The radiative transfer model for the HSC-SSP synthetic images uses the MAPPINGS III SED library to model emission from stellar populations forming in birth-clouds and Bruzual and Charlot (2003) templates for older stellar populations, assuming a Chabrier (2003) initial mass function. These libraries are standard choices in the literature, both for SED modeling (_e.g._ MAGPHYS, da Cunha et al., 2015; BEAGLE, Chevallard and Charlot, 2016; BAGPIPES, Carnall et al., 2018) and for radiative transfer modeling (_e.g._ Torrey et al., 2015; Trayford et al., 2017; Cochrane et al., 2023; Guzman-Ortega et al., 2023). However, for inferring galaxy properties (_i.e._ the inverse problem of forward modeling), different SED modeling choices can produce significant discrepancies in galaxy properties, _e.g._ \(M_{*}\) and SFR (Pacifici et al., 2023). The second component of the mass-to-light conversion framework is the dust model, which handles the scattering and absorption of stellar light by dust. The synthetic images from Bottrell et al. (2023) use a dust model calibrated to gas mass, gas metallicity, and dust mass estimates for galaxies in the local Universe (Remy-Ruyer et al., 2014; Popping et al., 2022). While empirical, the model is physically intuitive -- the gas-to-dust mass ratio should scale with the metal content of the gas at temperatures where dust grains can form. However, there is considerable scatter about the empirical relation for observed galaxies. Figure 5: Coverage test validating the accuracy of our HaloFlow posterior estimate using \(\{X_{\rm mag}\}\) (black), \(\{X_{\rm mag},X_{\rm morph}\}\) (blue), \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat}\}\) (orange), and \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat},N_{\rm sat}\}\) (red). The test is calculated using the test sample. The black-dashed line represents an optimal estimate of the true posterior. HaloFlow provides a near optimal estimate of the true posterior for all sets of photometric measurements. An in-depth exploration of different SED/dust modeling choices is beyond the scope of this work. But given that significant differences in the inferred physical properties of galaxies arise from the choice of SED and dust modeling, this is an area that warrants investigation. In future works, we will investigate the choices made in our forward-model. For instance, we will train HaloFlow using additional simulations based on other galaxy formation models (_e.g._ SIMBA, Dave et al., 2019; EAGLE, Crain et al., 2015; Schaye et al., 2015). We will also utilize alternative SPS and dust models. With different forward-models, we can improve the robustness of HaloFlow by ensuring that it does not learn relationships among galaxy and halo properties specific to a single galaxy formation model. We will also be able to extensively cross-validate HaloFlow and confirm the robustness of the inferred host halo properties. These additional simulations will also improve the accuracy of HaloFlow, especially at high \(M_{*}\) and \(M_{h}\) where there are currently a limited number of simulated galaxies. A larger set of simulated galaxies will also enable more systematic exploration of additional photometric observables that can inform \(M_{*}\) and \(M_{h}\). Furthermore, with HaloFlow trained on different galaxy formation models, we can precisely quantify the information content of the galaxy-halo connection in each model. 
The information content will not only serve as an informative statistic of the models; by comparing it across the models, we will also be able to inform galaxy formation. In subsequent papers, we will also test and calibrate the HaloFlow \(M_{h}\) constraints of observed galaxies and groups in HSC-SSP against constraints from galaxy-galaxy weak lensing measurements (_e.g._ Rana et al., 2022). We will also compare them to dynamical mass estimates of groups identified in GAMA (Driver et al., 2022). Once validated, we will apply HaloFlow to intervening halos observed by the FLIMFLAM spectroscopic survey (Lee et al., 2022), which targets FRB foreground fields, to enhance their constraints on the CGM baryonic fraction. Lastly, we limited the \(L_{\rm sat}\) and \(N_{\rm sat}\) measurements to satellite galaxies with \(M_{r}<-18\) in this work. Including fainter galaxies in \(L_{\rm sat}\) and \(N_{\rm sat}\) would further improve the precision of HaloFlow. Future observations from DESI and Rubin, which will probe significantly fainter galaxies, will be able to take advantage of the additional gains from HaloFlow. ## 6 Summary We present HaloFlow, a framework for inferring host halo masses from the photometry and morphology of galaxies using simulation-based inference (SBI) with normalizing flows. HaloFlow is specifically tailored to galaxies in the Hyper Suprime-Cam Subaru Strategy Program (HSC-SSP) and, thus, leverages state-of-the-art synthetic galaxy images that model the realistic effects of the HSC-SSP observations (Bottrell et al., 2023). These images are constructed from the TNG hydrodynamic simulations using the SKIRT dust radiative transfer code and an adapted version of RealSim. We train HaloFlow using 7,468 photometric measurements of 1,867 central galaxies made on the synthetic HSC images using GALIGHT. The measurements include \(grizy\)-band magnitudes (\(X_{\rm mag}\)), \(CAS\) morphological parameters (\(X_{\rm morph}\)), total measured satellite luminosities (\(L_{\rm sat}\)), and the number of satellites (\(N_{\rm sat}\)). HaloFlow uses normalizing flows to perform neural density estimation of the posterior, \(p(\boldsymbol{\theta}\,|\,\boldsymbol{x})\), of \(\boldsymbol{\theta}=\{M_{*},M_{h}\}\) given the photometric measurements. We follow the SBI approach of Hahn and Melchior (2022) with two additional steps. First, our final flow is derived by ensembling the five flows with the lowest validation losses. Second, we correct for the implicit prior on \(M_{*}\) and \(M_{h}\) set by the stellar and halo mass functions using the Handley and Millea (2019) maximum entropy prior method. We train separate flows, \(q_{\phi}(\boldsymbol{\theta}\,|\,\boldsymbol{x})\approx p(\boldsymbol{\theta}\,|\,\boldsymbol{x})\), for different sets of photometric measurements: \(\boldsymbol{x}=\{X_{\rm mag}\}\), \(\{X_{\rm mag},X_{\rm morph}\}\), \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat}\}\), and \(\{X_{\rm mag},X_{\rm morph},L_{\rm sat},N_{\rm sat}\}\). When we apply HaloFlow to a subset of 125 random central galaxies with \(M_{*}>10^{9.5}M_{\odot}\) excluded from the training, we find: * HaloFlow successfully infers posteriors that are consistent with the true \(M_{*}\) and \(M_{h}\) for every set of photometric measurements. We further validate the accuracy of the posteriors using the Lemos et al. (2023a) DRP coverage test and confirm that HaloFlow provides near optimal estimates of the true posteriors. 
* A comparison of the HaloFlow posteriors firmly demonstrates that galaxy morphology encodes significant information on both \(M_{*}\) and \(M_{h}\). This confirms and quantifies the known connection between morphology and local environment. The HaloFlow posteriors also show that satellite measurements can further improve \(M_{h}\) constraints. With these additional observables, we can improve \(M_{h}\) constraints by \(\sim\)20 and 30% for field and group galaxies, respectively. * With all of our photometric measurements, we can constrain \(M_{*}\) and \(M_{h}\) with precision levels of \(\sigma_{\log M_{*}}\sim 0.073\) and \(\sigma_{\log M_{h}}\sim 0.095\) dex for field galaxies and \(\sigma_{\log M_{*}}\sim 0.108\) and \(\sigma_{\log M_{h}}\sim 0.132\) dex for group galaxies. Our HaloFlow \(M_{h}\) constraints are \(\sim\)40% tighter than standard \(M_{h}\) methods based on the SHMR. HaloFlow uses SBI to leverage state-of-the-art synthetic galaxy images. Its fidelity is, therefore, determined by the quality of the forward model used to simulate the images. Our forward model relies on the TNG galaxy formation model, which has been shown to successfully reproduce a wide range of observations. It also relies on SKIRT, which uses standard modeling choices from the literature. Nevertheless, in future work we will go beyond these choices and use additional galaxy formation models and alternative SPS and dust models to improve the robustness of HaloFlow. We will also further validate HaloFlow using alternative methods for inferring \(M_{h}\) based on galaxy-galaxy weak lensing. Afterwards, we will apply HaloFlow to HSC-SSP and infer \(M_{h}\) for a variety of applications: _e.g._ constraining the \(M_{h}\) of halos in the foreground of FRBs to place stringent constraints on the CGM baryonic fraction. ## Acknowledgements It's a pleasure to thank Marc Huertas-Company, Mike Walmsley, Ken Wong, and John Wu for useful discussions. This work was supported by the AI Accelerator program of the Schmidt Futures Foundation. CB gratefully acknowledges support from the Natural Sciences and Engineering Research Council of Canada and the Forrest Research Foundation. This work was substantially performed using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's Research Computing. Kavli IPMU is supported by the World Premier International Research Center Initiative (WPI), MEXT, Japan. This work made use of premier images captured by Subaru Telescope on the summit of Maunakea, Hawaii. We acknowledge the cultural, historical, and natural significance and reverence that Maunakea has for the indigenous Hawaiian community. We are deeply fortunate and grateful to share in the opportunity to explore the Universe from this mountain. The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. 
Funding was contributed by the FIRST program from Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
2303.13147
Estimates for the Kolmogorov widths of an intersection of two balls in a mixed norm
Order estimates for the Kolmogorov widths of an intersection of two finite-dimensional balls in a mixed norm under some conditions on the parameters are obtained.
A. A. Vasil'eva
2023-03-23T09:58:21Z
http://arxiv.org/abs/2303.13147v1
# Estimates for the Kolmogorov widths of an intersection of two balls in a mixed norm ###### Abstract Order estimates for the Kolmogorov widths of an intersection of two finite-dimensional balls in a mixed norm under some conditions on the parameters are obtained. ## 1 Introduction In this paper, the problem of order estimates for the Kolmogorov widths of an intersection of two finite-dimensional balls in a mixed norm is studied. First we give the necessary definitions and notation. Let \(m\), \(k\in\mathbb{N}\), \(1\leqslant p<\infty\), \(1\leqslant\theta<\infty\). By \(l_{p,\theta}^{m,k}\) we denote the space \(\mathbb{R}^{mk}\) with the norm \[\|(x_{i,j})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\|_{l_{p,\theta }^{m,k}}=\left(\sum_{j=1}^{k}\left(\sum_{i=1}^{m}|x_{i,j}|^{p}\right)^{\theta/ p}\right)^{1/\theta}.\] For \(p=\infty\) or \(\theta=\infty\), the definition is modified naturally. By \(B_{p,\theta}^{m,k}\) we denote the unit ball of the space \(l_{p,\theta}^{m,k}\). If \(k=1\), we write \(l_{p}^{m}:=l_{p,\theta}^{m,1}\) and \(B_{p}^{m}:=B_{p,\theta}^{m,1}\). Let \(X\) be a normed space, and let \(M\subset X\), \(n\in\mathbb{Z}_{+}\). The Kolmogorov \(n\)-width of the set \(M\) in the space \(X\) is defined as follows: \[d_{n}(M,\,X)=\inf_{L\in\mathcal{L}_{n}(X)}\sup_{x\in M}\inf_{y\in L}\|x-y\|;\] here \(\mathcal{L}_{n}(X)\) is the family of all subspaces in \(X\) of dimension at most \(n\). For details, see, e.g., [1, 2, 3].
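As a small numerical illustration of the mixed norm just defined (an editorial sketch, not part of the original paper); for \(p=\theta\) it reduces to the plain \(\ell_{p}\) norm of the flattened array, which the check below confirms:

```python
import numpy as np

def mixed_norm(x, p, theta):
    """Mixed (l_p, l_theta) norm of x with shape (m, k): an l_p norm over
    the index i within each column j, then an l_theta norm over j."""
    inner = np.sum(np.abs(x) ** p, axis=0) ** (1.0 / p)   # length-k vector
    return np.sum(inner ** theta) ** (1.0 / theta)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))   # m = 4, k = 3

# For p = theta, the mixed norm equals the l_p norm of all mk entries.
assert np.isclose(mixed_norm(x, 2.0, 2.0), np.linalg.norm(x.ravel()))
print(mixed_norm(x, 2.0, 4.0))    # a value of the l_{2,4}^{4,3} norm
```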
Exact values of the widths \(d_{n}(B_{p}^{m},\,l_{q}^{m})\) were obtained in [4, 5] (for \(p\geqslant q\)) and in [6, 7] (for \(p=1\), \(q=2\)). For \(p\leqslant q<\infty\), order estimates were obtained in [8, 9]. The problem of estimating the widths \(d_{n}(B_{p}^{m},\,l_{\infty}^{m})\) was studied in [10, 11, 12]; for \(p\geqslant 2\), order estimates were obtained; for \(1\leqslant p<2\), the values are known up to a factor which is a power of \(\log\left(\frac{em}{n}\right)\). Approximative properties of the balls \(B_{p,\theta}^{m,k}\) in \(l_{q,\sigma}^{m,k}\) are interesting in relation to Besov classes with dominating mixed smoothness [13, 14, 15] and weighted Besov classes [16]. In [14, 16, 17, 18, 19, 20, 21], the problem of estimating the Kolmogorov widths \(d_{n}(B_{p,\theta}^{m,k},\,l_{q,\sigma}^{m,k})\) for \(n\leqslant\frac{mk}{2}\) was studied (more precisely, in [14] the Gelfand widths were considered; for \(p\), \(\theta\), \(q\), \(\sigma\geqslant 1\), the problem can be formulated in terms of the Kolmogorov widths [22]). The order estimates were obtained for the following parameters: 1. E. M. Galeev [17]: \(p=1\), \(\theta=\infty\), \(q=2\), \(1<\sigma<\infty\); 2. E. M. Galeev [18]: \(p=1\) or \(p=\infty\); \(\theta=\infty\); here one of the following conditions holds: a) \(q=2\), \(1<\sigma\leqslant\infty\) or b) \(1<q\leqslant\min\{2\), \(\sigma\}\); 3. A. D. Izaak [20]: \(p=\theta\), \(q=2\), \(\sigma=1\), where \(p=1\) or \(2\leqslant p\leqslant\infty\); 4. in [16] the case \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\), \(1\leqslant p\leqslant q\), \(1\leqslant\theta\leqslant\sigma\), \(n\leqslant a(q,\,\sigma)mk\) was considered (here \(a(q,\,\sigma)\) is a positive number); 5. Yu. V. Malykhin, K. S. Rjutin [21]: \(p=1\), \(\theta=\infty\), \(q=2\), \(\sigma=1\) (earlier in [19] the estimates were obtained up to a logarithmic factor), as well as \(p\leqslant q\leqslant 2\), \(\theta\geqslant\sigma\); 6. S. Dirksen, T. Ullrich [14]: a) \(p=q=2\), \(\theta\geqslant 2\), \(\sigma=\infty\); b) \(p=\theta=\sigma\geqslant 2\), \(q=\infty\). In addition, E. M. Galeev [23] obtained a lower estimate of the Kolmogorov widths for \(1\leqslant p\leqslant\infty\), \(\theta=\infty\), \(2\leqslant q<\infty\), \(\sigma=q\), \(n\leqslant c(q)mk\) (here \(c(q)\) is a positive number). The problem of estimating the Kolmogorov widths of an intersection of a family of Sobolev or Besov classes [13, 17, 24] can be reduced, by the discretization method, to estimating the widths \(d_{n}(\cap_{\alpha\in\varLambda}\nu_{\alpha}B_{p_{\alpha}}^{m}\), \(l_{q}^{m})\). E. M. Galeev [24] obtained order estimates of these values for \(n=\frac{m}{2}\); in [25] this result was generalized to \(n\leqslant\frac{m}{2}\). The question naturally arises of estimating the Kolmogorov widths of an intersection of finite-dimensional balls in a mixed norm. The result can be employed in estimating the widths of an intersection of weighted Besov classes or of Besov classes with dominating mixed smoothness. Here we consider the case of two balls \(\nu_{i}B_{p_{i},\theta_{i}}^{m,k}\), \(i=1,\,2\), where \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\), \(1\leqslant p_{i}\leqslant q\), \(1\leqslant\theta_{i}\leqslant\sigma\), \(i=1,\,2\). It turns out that for these parameters the problem can be reduced to estimating the widths of one ball in a mixed norm; order estimates for such widths were already obtained in [16] (see Theorem A below). Given sets \(X\), \(Y\) and functions \(f_{1}\), \(f_{2}:\;X\times Y\rightarrow\mathbb{R}_{+}\), we write \(f_{1}(x,\,y)\underset{y}{\lesssim}f_{2}(x,\,y)\) (or \(f_{2}(x,\,y)\underset{y}{\gtrsim}f_{1}(x,\,y)\)) if for each \(y\in Y\) there exists \(c(y)>0\) such that \(f_{1}(x,\,y)\leqslant c(y)f_{2}(x,\,y)\) for all \(x\in X\); \(f_{1}(x,\,y)\underset{y}{\asymp}f_{2}(x,\,y)\) if \(f_{1}(x,\,y)\underset{y}{\lesssim}f_{2}(x,\,y)\) and \(f_{2}(x,\,y)\underset{y}{\lesssim}f_{1}(x,\,y)\). Let \(q>2\), \(1\leqslant p\leqslant q\). We set \(\lambda_{p,q}=\min\left\{\frac{1/p-1/q}{1/2-1/q},\,1\right\}\). For \(q=2\), \(1\leqslant p\leqslant 2\), we set \(\lambda_{p,2}=1\). **Theorem A**.: _(see [16]). Let \(m\), \(k\in\mathbb{N}\), \(n\in\mathbb{Z}_{+}\), \(n\leqslant\frac{mk}{2}\), \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\), \(1\leqslant p\leqslant q\), \(1\leqslant\theta\leqslant\sigma\). 
Then_ * _if_ \(\max\{p,\,\theta\}\leqslant 2\)_, then_ \[d_{n}(B_{p,\theta}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\min \{1,\,n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}\};\] (1) * _if_ \(\max\{p,\,\theta\}\geqslant 2\)_,_ \(\lambda_{p,q}\leqslant\lambda_{\theta,\sigma}\)_, then_ \[d_{n}(B_{p,\theta}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\begin{cases}1,&n \leqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}},\\ \left(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}\right)^{\lambda_{p,q}},&m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant mk^{\frac{2}{ \sigma}},\\ m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma} })^{\lambda_{\theta,\sigma}},&mk^{\frac{2}{\sigma}}\leqslant n\leqslant\frac{ mk}{2};\end{cases}\] (2) * _if_ \(\max\{p,\,\theta\}\geqslant 2\)_,_ \(\lambda_{p,q}\geqslant\lambda_{\theta,\sigma}\)_, then_ \[d_{n}(B_{p,\theta}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp} \begin{cases}1,&n\leqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}},\\ \left(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}\right)^{\lambda_{ \theta,\sigma}},&m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant km^{ \frac{2}{q}},\\ k^{\frac{1}{\sigma}-\frac{1}{\theta}}(n^{-\frac{1}{2}}k^{\frac{1}{2}}m^{\frac{ 1}{q}})^{\lambda_{p,q}},&km^{\frac{2}{q}}\leqslant n\leqslant\frac{mk}{2}.\end{cases}\] (3) In [16] this theorem was proved for \(n\leqslant a(q,\,\sigma)mk\); in addition, in the statement, the constants in the order equalities depend on \(p\), \(\theta\), \(q\), \(\sigma\), but the proof shows that they are independent of \(p\) and \(\theta\). The upper estimate holds for all \(n\leqslant mk\). For \(a(q,\,\sigma)mk\leqslant n\leqslant\frac{mk}{2}\) the lower estimate will be proved in §2 (see Corollary 1). Notice that if \(2\leqslant p\leqslant q\), \(2\leqslant\theta\leqslant\sigma\), \(\lambda_{p,q}=\lambda_{\theta,\sigma}\), then \[\begin{split}&\left(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{ \sigma}}\right)^{\lambda_{p,q}}=m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^ {\frac{1}{2}}k^{\frac{1}{\sigma}})^{\lambda_{\theta,\sigma}}\\ &=\left(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}} \right)^{\lambda_{\theta,\sigma}}=k^{\frac{1}{\sigma}-\frac{1}{\theta}}(n^{- \frac{1}{2}}k^{\frac{1}{2}}m^{\frac{1}{q}})^{\lambda_{p,q}}.\end{split} \tag{4}\] Now we formulate the main result of the article. **Theorem 1**.: _Let \(m\), \(k\in\mathbb{N}\), \(n\in\mathbb{Z}_{+}\), \(n\leqslant\frac{mk}{2}\), \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\), \(1\leqslant p_{i}\leqslant q\), \(1\leqslant\theta_{i}\leqslant\sigma\), \(\nu_{i}>0\), \(i=1,\,2\). We define the values \(\Phi_{j}(m,\,k,\,n)=\Phi_{j}(m,\,k,\,n;\,p_{1},\,p_{2},\,\theta_{1},\,\theta_{2},\,q,\, \sigma,\,\nu_{1},\,\nu_{2})\) (\(j=1,\,\ldots,\,5\)) as follows:_ 1. \(\Phi_{j}(m,\,k,\,n)=\nu_{j}d_{n}(B_{p_{j},\theta_{j}}^{m,k},\,l_{q,\sigma}^{m,k})\) _for_ \(j=1,\,2\)_;_ 2. _if there exists_ \(\tilde{\lambda}\in[0,\,1]\) _such that_ \(\frac{1}{2}=\frac{1-\tilde{\lambda}}{p_{1}}+\frac{\tilde{\lambda}}{p_{2}}\)_, we define the number_ \(\tilde{\theta}\) _by_ \(\frac{1}{\tilde{\theta}}=\frac{1-\tilde{\lambda}}{\theta_{1}}+\frac{\tilde{ \lambda}}{\theta_{2}}\) _and set_ \(\Phi_{3}(m,\,k,\,n)=\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}d_{n}(B _{2,\tilde{\theta}}^{m,k},\,l_{q,\sigma}^{m,k})\)_; otherwise, we set_ \(\Phi_{3}(m,\,k,\,n)=+\infty\)_;_ 3. 
_if there exists_ \(\tilde{\mu}\in[0,\,1]\) _such that_ \(\frac{1}{2}=\frac{1-\tilde{\mu}}{\theta_{1}}+\frac{\tilde{\mu}}{\theta_{2}}\)_, we define the number_ \(\tilde{p}\) _by_ \(\frac{1}{\tilde{p}}=\frac{1-\tilde{\mu}}{p_{1}}+\frac{\tilde{\mu}}{p_{2}}\) _and set_ \(\Phi_{4}(m,\,k,\,n)=\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}d_{n}(B_{\tilde {p},2}^{m,k},\,l_{q,\sigma}^{m,k})\)_; otherwise, we set_ \(\Phi_{4}(m,\,k,\,n)=+\infty\)_;_ 4. _if there exist_ \(\lambda\in[0,\,1]\)_,_ \(p\in[2,\,q]\)_,_ \(\theta\in[2,\,\sigma]\) _such that_ \(\frac{1}{p}=\frac{1-\lambda}{p_{1}}+\frac{\lambda}{p_{2}}\)_,_ \(\frac{1}{\theta}=\frac{1-\lambda}{\theta_{1}}+\frac{\lambda}{\theta_{2}}\) _and_ \(\lambda_{p,q}=\lambda_{\theta,\sigma}\)_, we set_ \(\Phi_{5}(m,\,k,\,n)=\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}d_{n}(B_{p,\theta}^{m,k},\,l_{q,\sigma}^{m,k})\)_; otherwise, we set_ \(\Phi_{5}(m,\,k,\,n)=+\infty\)_._ _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}, \,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\min_{1\leqslant j\leqslant 5} \Phi_{j}(m,\,k,\,n).\] The result is announced in [26]. ## 2 Auxiliary results Let \(k,\,m,\,r,\,l\in\mathbb{N},\,1\leqslant r\leqslant m,\,1\leqslant l\leqslant k\). We set \[G=\{(\tau_{1},\,\tau_{2},\,\varepsilon_{1},\,\varepsilon_{2}):\;\tau_{1}\in S _{m},\,\tau_{2}\in S_{k},\,\varepsilon_{1}\in\{1,\,-1\}^{m},\,\varepsilon_{2} \in\{1,\,-1\}^{k}\},\] where \(S_{m}\) and \(S_{k}\) are groups of permutations of \(m\) and \(k\) elements, respectively. For \(x=(x_{i,j})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\in\mathbb{R}^ {mk}\), \(\gamma=(\tau_{1},\,\tau_{2},\,\varepsilon_{1},\,\varepsilon_{2})\in G\), \(\varepsilon_{1}=(\varepsilon_{1,i})_{1\leqslant i\leqslant m}\), \(\varepsilon_{2}=(\varepsilon_{2,j})_{1\leqslant j\leqslant k}\), we set \[\gamma(x)=(\varepsilon_{1,i}\varepsilon_{2,j}x_{\tau_{1}(i)\tau_{2}(j)})_{1 \leqslant i\leqslant m,\,1\leqslant j\leqslant k}. \tag{5}\] We write \(e=(e_{i,j}^{m,k,r,l})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\), where \[e_{i,j}^{m,k,r,l}=\left\{\begin{array}{ll}1&\mbox{if }1\leqslant i\leqslant r,\;1\leqslant j\leqslant l,\\ 0&\mbox{otherwise},\end{array}\right. \tag{6}\] \[V_{r,l}^{m,k}=\mbox{conv}\{\gamma(e):\;\gamma\in G\}. \tag{7}\] In [16, formula (34)] the following assertion was obtained: if \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\), \(n\in\mathbb{Z}_{+}\), \(n\leqslant a(q,\,\sigma)m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}l^ {1-\frac{2}{\sigma}}\), then \[d_{n}(V_{r,l}^{m,k},\,l_{q,\sigma}^{m,k})\geqslant b(q,\,\sigma)r^{\frac{1}{q }}l^{\frac{1}{\sigma}}; \tag{8}\] here \(a(q,\,\sigma)>0\), \(b(q,\,\sigma)>0\), \(a(\cdot,\,\cdot)\) is a function nonincreasing in each argument, \(b(\cdot,\,\cdot)\) is a continuous function. Here we obtain the estimate for all \(n\leqslant\frac{mk}{2}\). We apply the method from the paper of Gluskin [8]. **Proposition 1**.: _Let \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\), \(n\in\mathbb{Z}_{+}\), \(n\leqslant\frac{mk}{2}\). 
Then_ \[d_{n}(V_{r,l}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\begin{cases}r^{\frac{1}{q}}l^{\frac{1}{\sigma}}&\mbox{if }n\leqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}l^{1-\frac{2}{\sigma}},\\ n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}r^{\frac{1}{2}}l^{\frac{1}{2}}&\mbox{if }n\geqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}l^{1-\frac{2}{\sigma}}.\end{cases} \tag{9}\] **Proof.** For \(n\leqslant a(q,\,\sigma)m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}l^{1-\frac{2}{\sigma}}\), the estimate follows from (8). Let \(a(q,\,\sigma)m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}l^{1-\frac{2}{\sigma}}\leqslant n\leqslant a(q,\,\sigma)mk\). There exist numbers \(\tilde{q}\in[2,\,q]\) and \(\tilde{\sigma}\in[2,\,\sigma]\) such that \[n=a(q,\,\sigma)m^{\frac{2}{\tilde{q}}}k^{\frac{2}{\tilde{\sigma}}}r^{1-\frac{2}{\tilde{q}}}l^{1-\frac{2}{\tilde{\sigma}}}. \tag{10}\] Since the function \(a(\cdot,\,\cdot)\) is non-increasing in each argument, we have \(n\leqslant a(\tilde{q},\,\tilde{\sigma})m^{\frac{2}{\tilde{q}}}k^{\frac{2}{\tilde{\sigma}}}r^{1-\frac{2}{\tilde{q}}}l^{1-\frac{2}{\tilde{\sigma}}}\). Hence, by (8), \[d_{n}(V_{r,l}^{m,k},\,l_{\tilde{q},\tilde{\sigma}}^{m,k})\geqslant b(\tilde{q},\,\tilde{\sigma})r^{\frac{1}{\tilde{q}}}l^{\frac{1}{\tilde{\sigma}}}\underset{q,\sigma}{\gtrsim}r^{\frac{1}{\tilde{q}}}l^{\frac{1}{\tilde{\sigma}}}\] (here we used the fact that \(b\) is continuous). This implies \[d_{n}(V_{r,l}^{m,k},\,l_{q,\sigma}^{m,k})\geqslant m^{\frac{1}{q}-\frac{1}{\tilde{q}}}k^{\frac{1}{\sigma}-\frac{1}{\tilde{\sigma}}}d_{n}(V_{r,l}^{m,k},\,l_{\tilde{q},\tilde{\sigma}}^{m,k})\underset{q,\sigma}{\gtrsim}\] \[\gtrsim m^{\frac{1}{q}-\frac{1}{\tilde{q}}}k^{\frac{1}{\sigma}-\frac{1}{\tilde{\sigma}}}r^{\frac{1}{\tilde{q}}}l^{\frac{1}{\tilde{\sigma}}}\overset{(10)}{\underset{q,\sigma}{\asymp}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}n^{-\frac{1}{2}}r^{\frac{1}{2}}l^{\frac{1}{2}}.\] It remains to consider \(a(q,\,\sigma)mk\leqslant n\leqslant\frac{mk}{2}\). First we show that \[d_{n}(V_{r,l}^{m,k},\,l_{2,2}^{m,k})\gtrsim r^{\frac{1}{2}}l^{\frac{1}{2}}\quad\text{for}\;n\leqslant\frac{mk}{2}. \tag{11}\] To this end, we argue as in [16, pp. 14-17] for this particular case. Let \(Y\subset l_{2,2}^{m,k}\) be a subspace of dimension at most \(n\), and let \(y^{\gamma}=(y_{i,j}^{\gamma})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\in Y\) be a nearest point from \(Y\) to \(\gamma(e)\) (see (5), (6)). Then \[\|\gamma(e)-y^{\gamma}\|_{l_{2,2}^{m,k}}^{2}=\sum_{j=1}^{k}\sum_{i=1}^{m}|\gamma(e)_{i,j}-y_{i,j}^{\gamma}|^{2}=\] \[=\sum_{j=1}^{k}\sum_{i=1}^{m}|\gamma(e)_{i,j}|^{2}+\sum_{j=1}^{k}\sum_{i=1}^{m}|y_{i,j}^{\gamma}|^{2}-2\sum_{j=1}^{k}\sum_{i=1}^{m}\gamma(e)_{i,j}y_{i,j}^{\gamma}.\] Averaging over \(\gamma\in G\) and taking into account formulas (5), (6), we get \[\sup_{\gamma\in G}\|\gamma(e)-y^{\gamma}\|_{l_{2,2}^{m,k}}^{2}\geqslant rl+\frac{1}{|G|}\sum_{\gamma\in G}\sum_{j=1}^{k}\sum_{i=1}^{m}|y_{i,j}^{\gamma}|^{2}-\] \[-2\frac{1}{|G|}\sum_{\gamma\in G}\sum_{j=1}^{k}\sum_{i=1}^{m}\gamma(e)_{i,j}y_{i,j}^{\gamma}=:S.\] In [16, p. 16] it was proved that \[\left|\frac{1}{|G|}\sum_{\gamma\in G}\sum_{j=1}^{k}\sum_{i=1}^{m}\gamma(e)_{i,j}y_{i,j}^{\gamma}\right|\leqslant\left(\frac{nrl}{mk}\right)^{1/2}\xi,\] where \(\xi=\left(\frac{1}{|G|}\sum\limits_{\gamma\in G}\sum\limits_{j=1}^{k}\sum\limits_{i=1}^{m}|y_{i,j}^{\gamma}|^{2}\right)^{1/2}\). 
Hence \[S\geqslant rl-2\left(\frac{nrl}{mk}\right)^{1/2}\xi+\xi^{2}\geqslant rl\left(1-\frac{n}{mk}\right).\] For \(n\leqslant\frac{mk}{2}\) we get \(S\geqslant\frac{rl}{2}\); this together with (7) implies (11). Let now \(q\in[2,\,\infty)\), \(\sigma\in[2,\infty)\). Then \[d_{n}(V_{r,l}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\overset{(11)}{\gtrsim}}\,m^{\frac{1}{q}-\frac{1}{2}}k^{\frac{1}{\sigma}-\frac{1}{2}}r^{\frac{1}{2}}l^{\frac{1}{2}},\quad n\leqslant\frac{mk}{2}.\] Hence, for \(a(q,\,\sigma)mk\leqslant n\leqslant\frac{mk}{2}\) we have \[d_{n}(V_{r,l}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\,n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}r^{\frac{1}{2}}l^{\frac{1}{2}}.\] This completes the proof. **Corollary 1**.: _Let \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\), \(1\leqslant p\leqslant q\), \(1\leqslant\theta\leqslant\sigma\), \(m\), \(k\), \(n\in\mathbb{N}\), \(a(q,\,\sigma)mk\leqslant n\leqslant\frac{mk}{2}\). Then_ \[d_{n}(B_{p,\theta}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\begin{cases}m^{\frac{1}{q}-\frac{1}{p}}k^{\frac{1}{\sigma}-\frac{1}{\theta}}&\text{if }\min\{p,\,\theta\}\geqslant 2,\\ m^{\frac{1}{q}-\frac{1}{p}+\frac{1}{2}}n^{-\frac{1}{2}}k^{\frac{1}{\sigma}}&\text{if }p\geqslant 2,\,\theta\leqslant 2,\\ k^{\frac{1}{\sigma}-\frac{1}{\theta}+\frac{1}{2}}n^{-\frac{1}{2}}m^{\frac{1}{q}}&\text{if }\theta\geqslant 2,\,p\leqslant 2,\\ n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}&\text{if }\max\{p,\,\theta\}\leqslant 2.\end{cases}\] Proof.: For \(\min\{p,\,\theta\}\geqslant 2\), we use the inclusion \(m^{-\frac{1}{p}}k^{-\frac{1}{\theta}}V_{m,k}^{m,k}\subset B_{p,\theta}^{m,k}\); for \(\theta\leqslant 2\leqslant p\), the inclusion \(m^{-\frac{1}{p}}V_{m,1}^{m,k}\subset B_{p,\theta}^{m,k}\); for \(p\leqslant 2\leqslant\theta\), the inclusion \(k^{-\frac{1}{\theta}}V_{1,k}^{m,k}\subset B_{p,\theta}^{m,k}\); for \(\max\{p,\,\theta\}\leqslant 2\), the inclusion \(V_{1,1}^{m,k}\subset B_{p,\theta}^{m,k}\). In what follows, \(m\), \(k\in\mathbb{N}\), \(n\in\mathbb{Z}_{+}\), \(n\leqslant\frac{mk}{2}\), \(\nu_{i}>0\), \(i=1,\,2\). **Lemma 1**.: _Let \(1\leqslant p_{i}\leqslant\infty\), \(1\leqslant\theta_{i}\leqslant\infty\), \(\lambda\in[0,\,1]\). We define the numbers \(p\), \(\theta\in[1,\,\infty]\) by_ \[\frac{1}{p}=\frac{1-\lambda}{p_{1}}+\frac{\lambda}{p_{2}},\quad\frac{1}{\theta}=\frac{1-\lambda}{\theta_{1}}+\frac{\lambda}{\theta_{2}}. \tag{12}\] _Then_ \[\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}\subset\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}B_{p,\theta}^{m,k}.\] **Proof.** It is sufficient to prove that \[\|(x_{i,j})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\|_{l^{m,k}_{p,\theta}}\leqslant\|(x_{i,j})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\|^{1-\lambda}_{l^{m,k}_{p_{1},\theta_{1}}}\|(x_{i,j})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\|^{\lambda}_{l^{m,k}_{p_{2},\theta_{2}}}.\] We define the number \(\beta\) by the equation \(\frac{\beta}{p}=\frac{\lambda}{p_{2}}\). From (12) it follows that \(\frac{1-\beta}{p}=\frac{1-\lambda}{p_{1}}\); hence \(\beta\in[0,\,1]\). By Hölder's inequality, we get for each \(j\in\{1,\,\ldots,\,k\}\) \[\|(x_{i,j})_{1\leqslant i\leqslant m}\|_{l^{m}_{p}}=\left(\sum_{i=1}^{m}|x_{i,j}|^{p(1-\lambda)}|x_{i,j}|^{p\lambda}\right)^{\frac{1}{p}}\leqslant\|(x_{i,j})_{1\leqslant i\leqslant m}\|^{1-\lambda}_{l^{m}_{p_{1}}}\|(x_{i,j})_{1\leqslant i\leqslant m}\|^{\lambda}_{l^{m}_{p_{2}}}. \tag{13}\] Now we define \(\gamma\) by the equation \(\frac{\gamma}{\theta}=\frac{\lambda}{\theta_{2}}\). 
Then \(\frac{1-\gamma}{\theta}\overset{(12)}{=}\frac{1-\lambda}{\theta_{1}}\); hence \(\gamma\in[0,\,1]\). By Hölder's inequality, we obtain \[\|(x_{i,j})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\|_{l^{m,k}_{p,\theta}}\overset{(13)}{\leqslant}\left(\sum_{j=1}^{k}\|(x_{i,j})_{1\leqslant i\leqslant m}\|^{(1-\lambda)\theta}_{l^{m}_{p_{1}}}\|(x_{i,j})_{1\leqslant i\leqslant m}\|^{\lambda\theta}_{l^{m}_{p_{2}}}\right)^{\frac{1}{\theta}}\leqslant\] \[\leqslant\|(x_{i,j})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\|^{1-\lambda}_{l^{m,k}_{p_{1},\theta_{1}}}\|(x_{i,j})_{1\leqslant i\leqslant m,\,1\leqslant j\leqslant k}\|^{\lambda}_{l^{m,k}_{p_{2},\theta_{2}}}.\] This completes the proof. **Lemma 2**.: _Let \(\lambda\in[0,\,1]\), \(\frac{1}{p}=\frac{1-\lambda}{p_{1}}+\frac{\lambda}{p_{2}}\), \(\frac{1}{\theta}=\frac{1-\lambda}{\theta_{1}}+\frac{\lambda}{\theta_{2}}\), \(\tilde{r}\in[1,\,m]\), \(\tilde{l}\in[1,\,k]\), \(r=\left\lfloor\tilde{r}\right\rfloor\) or \(r=\left\lceil\tilde{r}\right\rceil\), \(l=\left\lfloor\tilde{l}\right\rfloor\) or \(l=\left\lceil\tilde{l}\right\rceil\),_ \[\frac{\nu_{1}}{\nu_{2}}=\tilde{r}^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\tilde{l}^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}. \tag{14}\] _Then_ \[\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}r^{-\frac{1}{p}}l^{-\frac{1}{\theta}}V_{r,l}^{m,k}\subset 4(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}).\] **Proof.** By (5)-(7), it suffices to prove that \[\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}\tilde{r}^{\frac{1}{p_{1}}-\frac{1}{p}}\tilde{l}^{\frac{1}{\theta_{1}}-\frac{1}{\theta}}\leqslant\nu_{1},\,\,\,\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}\tilde{r}^{\frac{1}{p_{2}}-\frac{1}{p}}\tilde{l}^{\frac{1}{\theta_{2}}-\frac{1}{\theta}}\leqslant\nu_{2}.\] This follows from (14). **Lemma 3**.: _Let \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\). Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l^{m,k}_{q,\sigma})\underset{q,\sigma}{\gtrsim}\min\{\nu_{1},\,\nu_{2}\}\min\{1,\,n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}\}.\] **Proof.** From the inclusion \(\min\{\nu_{1},\,\nu_{2}\}V_{1,1}^{m,k}\subset\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}\) and Proposition 1 it follows that \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l^{m,k}_{q,\sigma})\geqslant\] \[\geqslant d_{n}(\min\{\nu_{1},\,\nu_{2}\}V_{1,1}^{m,k},\,l^{m,k}_{q,\sigma})\underset{q,\sigma}{\gtrsim}\min\{\nu_{1},\,\nu_{2}\}\min\{1,\,n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}\}.\] This completes the proof. **Lemma 4**.: _Let \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\)._ 1. _Let_ \(q>2\)_,_ \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant mk^{\frac{2}{\sigma}}\)_,_ \[\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}.\] (15) _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}}.\] (16) 2. 
_Let_ \(\sigma>2\)_,_ \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant m^{\frac{2}{q}}k\)_,_ \[\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}.\] _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\sigma}{1/2-1/\sigma}}.\] **Proof.** We prove assertion 1 (assertion 2 is similar). We set \[r=\left\lceil(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1}{1/2-1/q}}\right\rceil. \tag{17}\] Since \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant mk^{\frac{2}{\sigma}}\), we have \(1\leqslant r\leqslant m\). We claim that \[\nu_{1}r^{-\frac{1}{p_{1}}}V_{r,1}^{m,k}\subset 2(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}). \tag{18}\] It suffices to check that \(\nu_{1}r^{\frac{1}{p_{2}}-\frac{1}{p_{1}}}\leqslant 2\nu_{2}\) (see (6), (7)). This follows from (15) and (17). By (17), we have \(n\leqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}\). Hence \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\overset{(18)}{\gtrsim}\nu_{1}r^{-\frac{1}{p_{1}}}d_{n}(V_{r,1}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\overset{(9)}{\gtrsim}}\nu_{1}r^{\frac{1}{q}-\frac{1}{p_{1}}};\] this together with (17) implies (16). **Lemma 5**.: _Let \(2\leqslant q<\infty\), \(2\leqslant\sigma<\infty\)._ 1. _Let_ \(\sigma>2\)_,_ \(mk^{\frac{2}{\sigma}}\leqslant n\leqslant\frac{mk}{2}\)_,_ \[\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}.\] (19) _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\sigma}{1/2-1/\sigma}}.\] (20) 2. _Let_ \(q>2\)_,_ \(m^{\frac{2}{q}}k\leqslant n\leqslant\frac{mk}{2}\)_,_ \[\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}.\] _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}k^{\frac{1}{\sigma}-\frac{1}{\theta_{1}}}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}})^{\frac{1/p_{1}-1/q}{1/2-1/q}}.\] **Proof.** We prove assertion 1 (assertion 2 is similar). Let \[l=\left\lceil\left(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}}\right)^{\frac{1}{1/2-1/\sigma}}\right\rceil. \tag{21}\] Since \(mk^{2/\sigma}\leqslant n\leqslant\frac{mk}{2}\), we have \(1\leqslant l\leqslant k\). We prove that \[\nu_{1}m^{-\frac{1}{p_{1}}}l^{-\frac{1}{\theta_{1}}}V_{m,l}^{m,k}\subset 2(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}). \tag{22}\] It suffices to check that \[\nu_{1}m^{\frac{1}{p_{2}}-\frac{1}{p_{1}}}l^{\frac{1}{\theta_{2}}-\frac{1}{\theta_{1}}}\leqslant 2\nu_{2}.\] This follows from (19) and (21). By (21), we have \(n\leqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}m^{1-\frac{2}{q}}l^{1-\frac{2}{\sigma}}\). 
Now we obtain \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\overset{(22)}{\gtrsim}\nu_{1}m^{-\frac{1}{p_{1}}}l^{-\frac{1}{\theta_{1}}}d_{n}(V_{m,l}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\overset{(9)}{\gtrsim}}\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}l^{\frac{1}{\sigma}-\frac{1}{\theta_{1}}}.\] This together with (21) implies (20). **Lemma 6**.: _Let \(q>2\), \(\sigma>2\),_ \[m^{2/q}k^{2/\sigma}\leqslant n\leqslant\min\{mk^{2/\sigma},\,m^{2/q}k\}, \tag{23}\] \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}} \tag{24}\] _or_ \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}, \tag{25}\] \(\lambda\in[0,\,1]\)_, \(\frac{1}{p}=\frac{1-\lambda}{p_{1}}+\frac{\lambda}{p_{2}}\), \(\frac{1}{\theta}=\frac{1-\lambda}{\theta_{1}}+\frac{\lambda}{\theta_{2}}\),_ \[\frac{1/p-1/q}{1/2-1/q}=\frac{1/\theta-1/\sigma}{1/2-1/\sigma}. \tag{26}\] _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}}\overset{(26)}{=}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}}. \tag{27}\] **Proof.** We set \[\tilde{r}=(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1-\alpha}{1/2-1/q}},\quad\tilde{l}=(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{\alpha}{1/2-1/\sigma}}, \tag{28}\] where \(\alpha\in[0,\,1]\). By (23), we have \(\tilde{r}\in[1,\,m]\), \(\tilde{l}\in[1,\,k]\). We choose \(\alpha\) such that (14) holds; it exists by (24) or (25). Let \(r=\lceil\tilde{r}\rceil\), \(l=\lceil\tilde{l}\rceil\), \(W=\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}r^{-\frac{1}{p}}l^{-\frac{1}{\theta}}V_{r,l}^{m,k}\). By Lemma 2, \[W\subset 4(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}). \tag{29}\] From (28) it follows that \(n\leqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}l^{1-\frac{2}{\sigma}}\). Hence \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\overset{(29)}{\gtrsim}d_{n}(W,\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\overset{(9)}{\gtrsim}}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}r^{1/q-1/p}l^{1/\sigma-1/\theta}.\] This together with (4), (26), (28) yields (27). **Lemma 7**.: _Let \(q>2\), \(\sigma>2\), \(1\leqslant p_{i}\leqslant\infty\), \(1\leqslant\theta_{i}\leqslant\infty\), \(i=1,\,2\), \(\lambda\in[0,\,1]\), \(\frac{1}{p}=\frac{1-\lambda}{p_{1}}+\frac{\lambda}{p_{2}}\), \(\frac{1}{\theta}=\frac{1-\lambda}{\theta_{1}}+\frac{\lambda}{\theta_{2}}\),_ \[\frac{1/p-1/q}{1/2-1/q}=\frac{1/\theta-1/\sigma}{1/2-1/\sigma}. \tag{30}\] _Let one of the following conditions hold:_ 1. \(km^{\frac{2}{q}}\leqslant n\leqslant mk^{\frac{2}{\sigma}}\)_,_ \[\begin{array}{c}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\quad or\\ k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}};\end{array} \tag{31}\] 2. 
\(mk^{\frac{2}{\sigma}}\leqslant n\leqslant km^{\frac{2}{q}}\)_,_ \[\begin{array}{c}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\quad or\\ m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}};\end{array}\] _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}}\overset{(30)}{=}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}}.\] **Proof.** Let condition 1 hold (the case of condition 2 is similar). We set \(W=\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}r^{-\frac{1}{p}}l^{-\frac{1}{\theta}}V_{r,l}^{m,k}\), where \(r=\lceil\tilde{r}\rceil\), \(l=\lceil\tilde{l}\rceil\), \[\tilde{r}=(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1-\alpha}{1/2-1/q}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{\alpha}{1/2-1/q}},\quad\tilde{l}=k^{\alpha}, \tag{32}\] \(\alpha\in[0,\,1]\). Since \(km^{\frac{2}{q}}\leqslant n\leqslant mk^{\frac{2}{\sigma}}\), we have \(1\leqslant r\leqslant m\), \(1\leqslant l\leqslant k\). From (31) it follows that there exists \(\alpha\in[0,\,1]\) such that (14) holds. By Lemma 2, \[W\subset 4(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}). \tag{33}\] From (32) it follows that \(n\leqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}l^{1-\frac{2}{\sigma}}\); therefore, \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\overset{(33)}{\gtrsim}d_{n}(\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}r^{-\frac{1}{p}}l^{-\frac{1}{\theta}}V_{r,l}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\overset{(9)}{\gtrsim}}\] \[\gtrsim\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}r^{\frac{1}{q}-\frac{1}{p}}l^{\frac{1}{\sigma}-\frac{1}{\theta}};\] this together with (4), (30), (32) implies the desired estimate. **Lemma 8**.: _Let \(q>2\), \(\sigma>2\), \(\max\{mk^{\frac{2}{\sigma}},\,m^{\frac{2}{q}}k\}\leqslant n\leqslant\frac{mk}{2}\), \(\lambda\in[0,\,1]\), \(\frac{1}{p}=\frac{1-\lambda}{p_{1}}+\frac{\lambda}{p_{2}}\), \(\frac{1}{\theta}=\frac{1-\lambda}{\theta_{1}}+\frac{\lambda}{\theta_{2}}\),_ \[\frac{1/p-1/q}{1/2-1/q}=\frac{1/\theta-1/\sigma}{1/2-1/\sigma}. 
\tag{34}\] _Let one of the following conditions hold:_ \[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}} \tag{35}\] _or_ \[k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}. \tag{36}\] _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}k^{\frac{1}{\sigma}-\frac{1}{\theta}}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}})^{\frac{1/p-1/q}{1/2-1/q}}\overset{(34)}{=}\] \[=\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}}.\] **Proof.** We set \(W=\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}r^{-\frac{1}{p}}l^{-\frac{1}{\theta}}V_{r,l}^{m,k}\), where \(r=\lceil\tilde{r}\rceil\), \(l=\lceil\tilde{l}\rceil\), \[\tilde{r}=m^{1-\alpha}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{\alpha}{1/2-1/q}},\quad\tilde{l}=(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1-\alpha}{1/2-1/\sigma}}k^{\alpha}, \tag{37}\] \(\alpha\in[0,\,1]\). Since \(\max\{mk^{\frac{2}{\sigma}},\,m^{\frac{2}{q}}k\}\leqslant n\leqslant\frac{mk}{2}\), we get \(1\leqslant r\leqslant m\), \(1\leqslant l\leqslant k\). By (35) or (36), there is \(\alpha\in[0,\,1]\) such that (14) holds. From Lemma 2 it follows that \[W\subset 4(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}). \tag{38}\] By (37), \(n\leqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}l^{1-\frac{2}{\sigma}}\). Hence \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\overset{(38)}{\gtrsim}d_{n}(\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}r^{-\frac{1}{p}}l^{-\frac{1}{\theta}}V_{r,l}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\overset{(9)}{\gtrsim}}\] \[\gtrsim\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}r^{\frac{1}{q}-\frac{1}{p}}l^{\frac{1}{\sigma}-\frac{1}{\theta}};\] this together with (4), (34), (37) yields the desired estimate. **Lemma 9**.: _Let \(q\geqslant 2\), \(\sigma\geqslant 2\)._ 1. _Let_ \(mk^{2/\sigma}\leqslant n\leqslant\frac{mk}{2}\)_,_ \(\frac{\nu_{1}}{\nu_{2}}\geqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\)_. Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{2}m^{\frac{1}{q}-\frac{1}{p_{2}}+\frac{1}{2}}k^{\frac{1}{\sigma}}n^{-\frac{1}{2}}.\] 2. _Let_ \(m^{2/q}k\leqslant n\leqslant\frac{mk}{2}\)_,_ \(\frac{\nu_{1}}{\nu_{2}}\geqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\)_. Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{2}k^{\frac{1}{\sigma}-\frac{1}{\theta_{2}}+\frac{1}{2}}m^{\frac{1}{q}}n^{-\frac{1}{2}}.\] **Proof.** We consider assertion 1 (assertion 2 is similar). We set \(W=\nu_{2}m^{-\frac{1}{p_{2}}}V_{m,1}^{m,k}\). 
From the inequality \(\nu_{2}m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant\nu_{1}\) it follows that \(W\subset\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k}\). We have \(n\geqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}m^{1-\frac{2}{q}}\). Hence \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\geqslant d_{n}(\nu_{2}m^{-\frac{1}{p_{2}}}V_{m,1}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\overset{(9)}{\gtrsim}}\nu_{2}m^{-\frac{1}{p_{2}}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}m^{\frac{1}{2}}.\] This completes the proof. **Lemma 10**.: _Let \(q\geqslant 2\), \(\sigma\geqslant 2\)._ 1. _Let_ \(mk^{2/\sigma}\leqslant n\leqslant\frac{mk}{2}\)_,_ \(\tilde{\mu}\in[0,\,1]\)_,_ \(\frac{1}{2}=\frac{1-\tilde{\mu}}{\theta_{1}}+\frac{\tilde{\mu}}{\theta_{2}}\)_,_ \(\frac{1}{\tilde{p}}=\frac{1-\tilde{\mu}}{p_{1}}+\frac{\tilde{\mu}}{p_{2}}\)_,_ \[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\] _or_ \[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}.\] _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{\frac{1}{q}-\frac{1}{\tilde{p}}+\frac{1}{2}}k^{\frac{1}{\sigma}}n^{-\frac{1}{2}}.\] 2. _Let_ \(m^{2/q}k\leqslant n\leqslant\frac{mk}{2}\)_,_ \(\tilde{\lambda}\in[0,\,1]\)_,_ \(\frac{1}{2}=\frac{1-\tilde{\lambda}}{p_{1}}+\frac{\tilde{\lambda}}{p_{2}}\)_,_ \(\frac{1}{\tilde{\theta}}=\frac{1-\tilde{\lambda}}{\theta_{1}}+\frac{\tilde{\lambda}}{\theta_{2}}\)_,_ \[k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\] _or_ \[k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}.\] _Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}k^{\frac{1}{\sigma}-\frac{1}{\tilde{\theta}}+\frac{1}{2}}m^{\frac{1}{q}}n^{-\frac{1}{2}}.\] **Proof.** We prove assertion 1 (assertion 2 is similar). We set \(W=\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{-\frac{1}{\tilde{p}}}l^{-\frac{1}{2}}V_{m,l}^{m,k}\), where \(l=\lfloor\tilde{l}\rfloor\), \(\tilde{l}=(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1-\alpha}{1/2-1/\sigma}}\); \(\alpha\in[0,\,1]\) is such that \(\frac{\nu_{1}}{\nu_{2}}=m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\tilde{l}^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\). Since \(mk^{2/\sigma}\leqslant n\leqslant mk\), we have \(1\leqslant l\leqslant k\). By Lemma 2, \(W\subset 4(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k})\). Notice that \(n\geqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}m^{1-\frac{2}{q}}l^{1-\frac{2}{\sigma}}\). 
Hence \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\gtrsim d_{n}(\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{-\frac{1}{\tilde{p}}}l^{-\frac{1}{2}}V_{m,l}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\overset{(9)}{\gtrsim}}\] \[\gtrsim\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{-\frac{1}{\tilde{p}}}l^{-\frac{1}{2}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}m^{\frac{1}{2}}l^{\frac{1}{2}}=\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{\frac{1}{q}-\frac{1}{\tilde{p}}+\frac{1}{2}}n^{-\frac{1}{2}}k^{\frac{1}{\sigma}}.\] This completes the proof. \(\square\) **Lemma 11**.: _Let \(q\geqslant 2\), \(\sigma\geqslant 2\)._ 1. _Let_ \(m^{2/q}k^{2/\sigma}\leqslant n\leqslant mk^{2/\sigma}\)_,_ \(1\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\) _or_ \((n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\)_,_ \(\tilde{\lambda}\in[0,\,1]\)_,_ \(\frac{1}{2}=\frac{1-\tilde{\lambda}}{p_{1}}+\frac{\tilde{\lambda}}{p_{2}}\)_. Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}.\] 2. _Let_ \(m^{2/q}k^{2/\sigma}\leqslant n\leqslant km^{2/q}\)_,_ \(1\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\) _or_ \((n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\)_,_ \(\tilde{\mu}\in[0,\,1]\)_,_ \(\frac{1}{2}=\frac{1-\tilde{\mu}}{\theta_{1}}+\frac{\tilde{\mu}}{\theta_{2}}\)_. Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}.\] **Proof.** We prove assertion 1 (assertion 2 is similar). We set \(r=\lfloor\tilde{r}\rfloor\), where \(\tilde{r}=(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{\alpha}{1/2-1/q}}\), \(\alpha\in[0,\,1]\) is such that \(\frac{\nu_{1}}{\nu_{2}}=\tilde{r}^{1/p_{1}-1/p_{2}}\). Since \(m^{2/q}k^{2/\sigma}\leqslant n\leqslant mk^{2/\sigma}\), we have \(1\leqslant r\leqslant m\). By Lemma 2, \(\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}r^{-1/2}V_{r,1}^{m,k}\subset 4(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k})\). In addition, \(n\geqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}\). Hence \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\gtrsim d_{n}(\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}r^{-\frac{1}{2}}V_{r,1}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\] \[\gtrsim\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}r^{-\frac{1}{2}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}r^{\frac{1}{2}}=\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}.\] This completes the proof. \(\square\) **Lemma 12**.: _Let \(q\geqslant 2\), \(\sigma\geqslant 2\)._ 1. 
_Let_ \(mk^{2/\sigma}\leqslant n\leqslant\frac{mk}{2}\)_,_ \(1\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\) _or_ \(m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\)_,_ \(\tilde{\lambda}\in[0,\,1]\)_,_ \(\frac{1}{2}=\frac{1-\tilde{\lambda}}{p_{1}}+\frac{\tilde{\lambda}}{p_{2}}\)_. Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}.\] 2. _Let_ \(m^{2/q}k\leqslant n\leqslant\frac{mk}{2}\)_,_ \(1\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\) _or_ \(k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\)_,_ \(\tilde{\mu}\in[0,\,1]\)_,_ \(\frac{1}{2}=\frac{1-\tilde{\mu}}{\theta_{1}}+\frac{\tilde{\mu}}{\theta_{2}}\)_. Then_ \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}.\] **Proof.** We prove assertion 1 (assertion 2 is similar). Let \(r=\left\lfloor\tilde{r}\right\rfloor\), \(\tilde{r}=m^{\alpha}\), where \(\alpha\in[0,\,1]\) is such that \(\frac{\nu_{1}}{\nu_{2}}=\tilde{r}^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\). By Lemma 2, \(\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}r^{-\frac{1}{2}}V_{r,1}^{m,k}\subset 4(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k})\). Since \(n\geqslant mk^{2/\sigma}\), we have \(n\geqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}r^{1-\frac{2}{q}}\). Therefore, \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\gtrsim d_{n}(\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}r^{-\frac{1}{2}}V_{r,1}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\] \[\gtrsim\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}r^{-\frac{1}{2}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}r^{\frac{1}{2}}=\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}}.\] This completes the proof. \(\square\) ## 3 Estimates for the widths of an intersection of finite-dimensional balls We prove Theorem 1. The upper estimate follows from Lemma 1. Let us prove the lower estimate. If \(n\leqslant m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\), we use Lemma 3 and (1)-(3). In what follows, \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant\frac{mk}{2}\). Here we consider all cases up to rearrangement of the indices \(1\) and \(2\). **1. Case \(p_{1}\), \(p_{2}\), \(\theta_{1}\), \(\theta_{2}\in[1,\,2]\).** From Lemma 3 and (1) it follows that \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\min_{j=1,2}\Phi_{j}(m,\,k,\,n).\] **2. Case \(p_{1}\), \(p_{2}\in[2,\,q]\), \(\theta_{1}\), \(\theta_{2}\in[2,\,\sigma]\); we suppose that one of the following conditions holds: a) \(q>2\), \(\lambda_{p_{i},q}\leqslant\lambda_{\theta_{i},\sigma}\), \(i=1,\,2\); b) \(\sigma>2\), \(\lambda_{p_{i},q}\geqslant\lambda_{\theta_{i},\sigma}\), \(i=1,\,2\).** We prove that \(d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\min_{j=1,2}\Phi_{j}(m,\,k,\,n)\). Consider case a); case b) is similar. 
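(As a purely illustrative aside before the case-by-case comparisons, and not part of the proof: the arguments below repeatedly evaluate the quantities \(\Phi_{j}\) through the right-hand sides of (1)-(3) and take the minimum. The following Python sketch mirrors this bookkeeping up to constants depending only on \(q\) and \(\sigma\); all function names and concrete parameter values are ours and entirely arbitrary.)

```python
# Illustrative sketch (not part of the proof): evaluates the right-hand
# sides of (1)-(3) in Theorem A up to constants depending only on q, sigma.

def lam(p, q):
    """lambda_{p,q} = min{(1/p - 1/q)/(1/2 - 1/q), 1}; by definition 1 for q = 2."""
    if q == 2:
        return 1.0
    return min((1 / p - 1 / q) / (1 / 2 - 1 / q), 1.0)

def width_order(m, k, n, p, theta, q, sigma):
    """Order of d_n(B^{m,k}_{p,theta}, l^{m,k}_{q,sigma}) according to Theorem A."""
    a = n ** -0.5 * m ** (1 / q) * k ** (1 / sigma)
    if max(p, theta) <= 2:                                   # case (1)
        return min(1.0, a)
    if n <= m ** (2 / q) * k ** (2 / sigma):                 # first regime of (2), (3)
        return 1.0
    if lam(p, q) <= lam(theta, sigma):                       # case (2)
        if n <= m * k ** (2 / sigma):
            return a ** lam(p, q)
        return m ** (1 / q - 1 / p) * (n ** -0.5 * m ** 0.5 * k ** (1 / sigma)) ** lam(theta, sigma)
    if n <= k * m ** (2 / q):                                # case (3)
        return a ** lam(theta, sigma)
    return k ** (1 / sigma - 1 / theta) * (n ** -0.5 * k ** 0.5 * m ** (1 / q)) ** lam(p, q)

# Arbitrary sample parameters: the first two terms Phi_1, Phi_2 of Theorem 1.
m, k, n, q, sigma = 2 ** 12, 2 ** 10, 2 ** 16, 4.0, 6.0
nu, balls = (1.0, 0.1), ((3.0, 4.0), (2.5, 5.0))             # nu_i and (p_i, theta_i)
for i, (p_i, th_i) in enumerate(balls):
    print(f"Phi_{i + 1} ~ {nu[i] * width_order(m, k, n, p_i, th_i, q, sigma):.4g}")
```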
For \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant mk^{\frac{2}{\sigma}}\) we use Lemma 4 and the estimate \[\Phi_{i}(m,\,k,\,n)=\nu_{i}d_{n}(B^{m,k}_{p_{i},\theta_{i}},\,l^{m,k}_{q,\sigma})\underset{q,\sigma}{\overset{(2)}{\asymp}}\nu_{i}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{i}-1/q}{1/2-1/q}},\quad i=1,\,2.\] For \(n\geqslant mk^{\frac{2}{\sigma}}\) we use Lemma 5 and the estimate \[\Phi_{i}(m,\,k,\,n)=\nu_{i}d_{n}(B^{m,k}_{p_{i},\theta_{i}},\,l^{m,k}_{q,\sigma})\underset{q,\sigma}{\overset{(2)}{\asymp}}\nu_{i}m^{\frac{1}{q}-\frac{1}{p_{i}}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{i}-1/\sigma}{1/2-1/\sigma}},\quad i=1,\,2.\] **3. Case \(p_{1}\), \(p_{2}\in[2,\,q]\), \(\theta_{1}\), \(\theta_{2}\in[2,\,\sigma]\), \(\lambda_{p_{1},q}<\lambda_{\theta_{1},\sigma}\), \(\lambda_{p_{2},q}>\lambda_{\theta_{2},\sigma}\).** From the last two inequalities it follows that \[\frac{1/p_{1}-1/p_{2}}{1/2-1/q}<\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}. \tag{39}\] We claim that \[d_{n}(\nu_{1}B^{m,k}_{p_{1},\theta_{1}}\cap\nu_{2}B^{m,k}_{p_{2},\theta_{2}},\,l^{m,k}_{q,\sigma})\underset{q,\sigma}{\gtrsim}\min_{j=1,2,5}\Phi_{j}(m,\,k,\,n).\] **Subcase \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant\min\{mk^{\frac{2}{\sigma}},\,m^{\frac{2}{q}}k\}\)**. We have \[\Phi_{1}(m,\,k,\,n)=\nu_{1}d_{n}(B^{m,k}_{p_{1},\theta_{1}},\,l^{m,k}_{q,\sigma})\underset{q,\sigma}{\overset{(2)}{\asymp}}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}},\] \[\Phi_{2}(m,\,k,\,n)=\nu_{2}d_{n}(B^{m,k}_{p_{2},\theta_{2}},\,l^{m,k}_{q,\sigma})\underset{q,\sigma}{\overset{(3)}{\asymp}}\nu_{2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{2}-1/\sigma}{1/2-1/\sigma}},\] \[\Phi_{5}(m,\,k,\,n)=\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}d_{n}(B^{m,k}_{p,\theta},\,l^{m,k}_{q,\sigma})\underset{q,\sigma}{\overset{(2)}{\asymp}}\] \[\asymp\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}}\overset{(4)}{=}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}}\] (the last equality follows from the choice of \(\lambda\) in the definition of \(\Phi_{5}(m,\,k,\,n)\); see the statement of Theorem 1). Also from the definition of \(\lambda\) it follows that \[\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}}\leqslant\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}}\ \Leftrightarrow\ \frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}, \tag{40}\] \[\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}}\leqslant\nu_{2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{2}-1/\sigma}{1/2-1/\sigma}}\ \Leftrightarrow\ \frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}. 
\tag{41}\] From (39) and the condition \(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}}\geqslant 1\) it follows that \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}.\] By (40), (41), for \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\) we have \[\min_{j=1,2,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}}.\] Hence, in order to estimate \(d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\) from below, it suffices to apply Lemma 4. The case \(\frac{\nu_{1}}{\nu_{2}}\geqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\) can be considered similarly. Let \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}.\] From (40), (41) it follows that \[\min_{j=1,2,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}}.\] It suffices to apply Lemma 6. **Subcase \(km^{\frac{2}{q}}\leqslant n\leqslant mk^{\frac{2}{\sigma}}\).** We prove that \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}.\] It is equivalent to \(1\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}-\frac{1/2-1/\sigma}{1/2-1/q}\big{(}\frac{1}{p_{1}}-\frac{1}{p_{2}}\big{)}}\); this follows from (39). We apply (2), (3) and compare the right-hand sides of the corresponding order equalities, taking into account (4) (as in the previous case). If \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), we have \[\min_{j=1,2,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}};\] it remains to apply Lemma 4. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then \[\min_{j=1,2,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}k^{\frac{1}{\sigma}-\frac{1}{\theta_{2}}}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}})^{\frac{1/p_{2}-1/q}{1/2-1/q}};\] we apply Lemma 5 (rearranging the indices \(1\) and \(2\)). If \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}},\] then \[\min_{j=1,2,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}};\] we apply Lemma 7. **Subcase \(mk^{\frac{2}{\sigma}}\leqslant n\leqslant km^{\frac{2}{q}}\)** is similar. 
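(Again as a purely illustrative aside, not a step of the proof: for the first subcase above one can check numerically that the minimum of \(\Phi_{1}\), \(\Phi_{2}\), \(\Phi_{5}\) switches exactly at the thresholds appearing in (40) and (41). The Python sketch below does this for arbitrary sample parameters satisfying the hypotheses of Case 3; all names and parameter values are ours.)

```python
# Illustrative numerical check of the trichotomy in the first subcase of
# Case 3 (not a proof step); parameters are arbitrary examples with
# lam(p1,q) < lam(th1,sigma), lam(p2,q) > lam(th2,sigma) and
# m^{2/q}k^{2/sigma} <= n <= min{mk^{2/sigma}, m^{2/q}k}.

def lam(p, q):
    return min((1 / p - 1 / q) / (1 / 2 - 1 / q), 1.0)

q, sigma = 4.0, 6.0
p1, th1 = 3.8, 2.2
p2, th2 = 2.2, 5.8
m, k, n = 2 ** 12, 2 ** 12, 2 ** 13

# lambda from the definition of Phi_5: lam(p(lambda),q) = lam(theta(lambda),sigma);
# the difference is linear in lambda, hence solvable in closed form.
g0 = lam(p1, q) - lam(th1, sigma)        # negative by assumption
g1 = lam(p2, q) - lam(th2, sigma)        # positive by assumption
lmb = g0 / (g0 - g1)
p = 1 / ((1 - lmb) / p1 + lmb / p2)

a = n ** -0.5 * m ** (1 / q) * k ** (1 / sigma)                 # here a <= 1
T_p = (1 / a) ** ((1 / p1 - 1 / p2) / (1 / 2 - 1 / q))          # threshold in (40)
T_th = (1 / a) ** ((1 / th1 - 1 / th2) / (1 / 2 - 1 / sigma))   # threshold in (41)

for ratio in (T_p / 4, (T_p * T_th) ** 0.5, 4 * T_th):
    nu1, nu2 = ratio, 1.0
    phi1 = nu1 * a ** lam(p1, q)
    phi2 = nu2 * a ** lam(th2, sigma)
    phi5 = nu1 ** (1 - lmb) * nu2 ** lmb * a ** lam(p, q)
    best = min((phi1, "Phi_1"), (phi2, "Phi_2"), (phi5, "Phi_5"))[1]
    print(f"nu1/nu2 = {ratio:.3g}: minimum attained by {best}")
# Expected: Phi_1 below T_p, Phi_5 between the thresholds, Phi_2 above T_th.
```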
**Subcase \(\max\{mk^{\frac{2}{\sigma}},\,km^{\frac{2}{q}}\}\leqslant n\leqslant\frac{mk}{2}\).** Notice that \[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}.\] Indeed, it is equivalent to \[(mkn^{-1})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\geqslant(mkn^{-1})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}.\] This inequality follows from (39). We apply (2), (3), compare the right-hand sides of these order equalities and take into account (4). The following assertions hold: * if \(\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\), then \[\min_{j=1,2,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\sigma}{1/2-1/\sigma}};\] we use Lemma 5; * if \(\frac{\nu_{1}}{\nu_{2}}\geqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then \[\min_{j=1,2,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}k^{\frac{1}{\sigma}-\frac{1}{\theta_{2}}}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}})^{\frac{1/p_{2}-1/q}{1/2-1/q}};\] we use Lemma 5; * if \(m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then \[\min_{j=1,2,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}};\] we apply Lemma 8. **4a. Case \(p_{1}\), \(p_{2}\in[2,\,q]\), \(\theta_{1}\in(2,\,\sigma]\), \(\theta_{2}\in[1,\,2)\), \(\lambda_{p_{1},q}\leqslant\lambda_{\theta_{1},\sigma}\). 
We claim that** \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\min_{j=1,2,4}\Phi_{j}(m,\,k,\,n).\] **Subcase \(m^{2/q}k^{2/\sigma}\leqslant n\leqslant mk^{2/\sigma}\).** We have \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}}=:a,\] \[d_{n}(\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\nu_{2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{2}-1/q}{1/2-1/q}}=:b,\] \[d_{n}(\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}B_{\tilde{p},2}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\tilde{p}-1/q}{1/2-1/q}}=a^{1-\tilde{\mu}}b^{\tilde{\mu}}.\] It suffices to prove that for \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\) \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}},\] and for \(\frac{\nu_{1}}{\nu_{2}}\geqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\nu_{2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{2}-1/q}{1/2-1/q}}.\] This follows from Lemma 4. **Subcase \(mk^{2/\sigma}\leqslant n\leqslant\frac{mk}{2}\).** Notice that \(m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\). We apply (2) and compare the right-hand sides of the corresponding order equalities. If \(\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\), then \[\min_{j=1,2,4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\sigma}{1/2-1/\sigma}};\] we use Lemma 5. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\), then \[\min_{j=1,2,4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}m^{\frac{1}{q}-\frac{1}{p_{2}}+\frac{1}{2}}k^{\frac{1}{\sigma}}n^{-\frac{1}{2}};\] here we use Lemma 9. If \[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}},\] we have \[\min_{j=1,2,4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{\frac{1}{q}-\frac{1}{\tilde{p}}+\frac{1}{2}}k^{\frac{1}{\sigma}}n^{-\frac{1}{2}};\] now we apply Lemma 10. **4b. Case \(p_{1}\in(2,\,q]\), \(p_{2}\in[1,\,2)\), \(\theta_{1}\), \(\theta_{2}\in[2,\,\sigma]\), \(\lambda_{p_{1},q}\geqslant\lambda_{\theta_{1},\sigma}\) is similar.** **5a. 
Case \(q>2\), \(\sigma>2\), \(p_{1}\), \(p_{2}\in[2,\,q]\), \(\theta_{1}\in[2,\,\sigma]\), \(\theta_{2}\in[1,\,2]\), \(\lambda_{p_{1},q}>\lambda_{\theta_{1},\sigma}\).** We claim that \[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\gtrsim}\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n).\] Since \(\lambda_{p_{1},q}>\lambda_{\theta_{1},\sigma}\), \(\lambda_{\tilde{p},q}\leqslant\lambda_{2,\sigma}\), we have \(\Phi_{5}(m,\,k,\,n)<\infty\) and \(\tilde{\mu}\geqslant\lambda\). In addition, from \(\lambda_{p_{1},q}>\lambda_{\theta_{1},\sigma}\) and \(\theta_{2}\leqslant 2\leqslant p_{2}\) we get \[\frac{1/p_{1}-1/p_{2}}{1/2-1/q}>\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}. \tag{42}\] We use (2), (3) and compare the right-hand sides of these order equalities taking into account (4) and the inequalities \(0\leqslant\lambda\leqslant\tilde{\mu}\leqslant 1\). If \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant\min\{m^{\frac{2}{q}}k\), \(mk^{\frac{2}{\sigma}}\}\), we apply Lemma 4 for \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\) or \(\frac{\nu_{1}}{\nu_{2}}\geqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), and we use Lemma 6 for \((n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\) (we argue as in Case 3). **Subcase \(mk^{\frac{2}{\sigma}}\leqslant n\leqslant m^{\frac{2}{q}}k\).** Notice that \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\overset{(42)}{\leqslant}m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}.\] We use (2), (3) taking into account (4). If \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\), then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\sigma}{1/2-1/\sigma}};\] we use Lemma 4. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\), then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}n^{-\frac{1}{2}}m^{\frac{1}{q}-\frac{1}{p_{2}}+\frac{1}{2}}k^{\frac{1}{\sigma}};\] we use Lemma 9. If \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}},\] then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}}=\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}};\] we apply Lemma 7. 
If \[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}},\] then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{\frac{1}{q}-\frac{1}{\tilde{p}}+\frac{1}{2}}k^{\frac{1}{\sigma}}n^{-\frac{1}{2}};\] we apply Lemma 10. **Subcase \(m^{2/q}k\leqslant n\leqslant mk^{2/\sigma}\).** Notice that \[k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\overset{(42)}{\leqslant}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}.\] If \(\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}k^{\frac{1}{\sigma}-\frac{1}{\theta_{1}}}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}})^{\frac{1/p_{1}-1/q}{1/2-1/q}};\] we use Lemma 5. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{2}-1/q}{1/2-1/q}};\] we apply Lemma 4. If \[k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}},\] then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}};\] we apply Lemma 7. **Subcase \(n\geqslant\max\{mk^{2/\sigma},\,m^{2/q}k\}\).** Notice that \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\overset{(42)}{\leqslant}m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}.\] If \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\), then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}k^{\frac{1}{\sigma}-\frac{1}{\theta_{1}}}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}})^{\frac{1/p_{1}-1/q}{1/2-1/q}};\] we use Lemma 5. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\), then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}m^{\frac{1}{q}-\frac{1}{p_{2}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}};\] we apply Lemma 9. If \(m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\), then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}n^{-\frac{1}{2}}m^{\frac{1}{q}-\frac{1}{\tilde{p}}+\frac{1}{2}}k^{\frac{1}{\sigma}};\] we use Lemma 10. 
If \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}},\] then \[\min_{j=1,2,4,5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}};\] we apply Lemma 8. **5b. Case \(q>2\), \(\sigma>2\), \(p_{1}\in[2,\,q]\), \(p_{2}\in[1,\,2]\), \(\theta_{1},\,\theta_{2}\in[2,\,\sigma]\), \(\lambda_{p_{1},q}<\lambda_{\theta_{1},\sigma}\)** is similar. **6a. Case \(p_{1}\), \(\theta_{1}\), \(\theta_{2}\in[1,\,2]\), \(p_{2}\in[2,\,q]\).** We claim that \[d_{n}(\nu_{1}B^{m,k}_{p_{1},\theta_{1}}\cap\nu_{2}B^{m,k}_{p_{2},\theta_{2}},\,l^{m,k}_{q,\sigma})\underset{q,\sigma}{\gtrsim}\min_{1\leqslant j\leqslant 3}\Phi_{j}(m,\,k,\,n).\] **Subcase \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant mk^{2/\sigma}\).** Notice that \(1\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\). If \(\frac{\nu_{1}}{\nu_{2}}\leqslant 1\), then \[\min_{1\leqslant j\leqslant 3}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\] we use Lemma 3. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then \[\min_{1\leqslant j\leqslant 3}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{2}-1/q}{1/2-1/q}};\] we apply Lemma 4. If \(1\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then \[\min_{1\leqslant j\leqslant 3}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\] we use Lemma 11. **Subcase \(n\geqslant mk^{2/\sigma}\).** Notice that \(1\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\). If \(\frac{\nu_{1}}{\nu_{2}}\leqslant 1\), then \[\min_{1\leqslant j\leqslant 3}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\] we use Lemma 3. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\), then \[\min_{1\leqslant j\leqslant 3}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}n^{-\frac{1}{2}}m^{\frac{1}{q}-\frac{1}{p_{2}}+\frac{1}{2}}k^{\frac{1}{\sigma}};\] we use Lemma 9. If \(1\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\), then \[\min_{1\leqslant j\leqslant 3}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\] we use Lemma 12. **6b. Case \(p_{1}\), \(\theta_{1}\), \(p_{2}\in[1,\,2]\), \(\theta_{2}\in[2,\,\sigma]\)** is similar. **7. 
Case \(2<q<\infty\), \(2<\sigma<\infty\), \(p_{1}\in[2,\,q]\), \(\theta_{1}\in[2,\,\sigma]\), \(p_{2}\), \(\theta_{2}\in[1,\,2]\); here one of the following conditions holds: a) \(\lambda_{p_{1},q}\leqslant\lambda_{\theta_{1},\sigma}\), \(\tilde{\mu}>\tilde{\lambda}\), b) \(\lambda_{p_{1},q}\geqslant\lambda_{\theta_{1},\sigma}\), \(\tilde{\mu}<\tilde{\lambda}\).** We consider the case a); the case b) is similar. First we notice that from \(\tilde{\mu}\geqslant\tilde{\lambda}\) it follows that \(\tilde{\theta}\geqslant 2\) and \(\tilde{p}\leqslant 2\). Indeed, \(\frac{1}{\tilde{p}}-\frac{1}{2}=(\tilde{\mu}-\tilde{\lambda})\left(\frac{1}{p_{2}}-\frac{1}{p_{1}}\right)\geqslant 0\), \(\frac{1}{2}-\frac{1}{\tilde{\theta}}=(\tilde{\mu}-\tilde{\lambda})\left(\frac{1}{\theta_{2}}-\frac{1}{\theta_{1}}\right)\geqslant 0\). From \(\lambda_{p_{1},q}\leqslant\lambda_{\theta_{1},\sigma}\) and \(\lambda_{2,q}\geqslant\lambda_{\tilde{\theta},\sigma}\) it follows that \(\lambda\) is well-defined and \(\lambda\in[0,\,\tilde{\lambda}]\). In addition,
\[\frac{1/p_{1}-1/p_{2}}{1/2-1/q}\leqslant\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}. \tag{43}\]
Indeed, from \(\lambda_{p_{1},q}\leqslant\lambda_{\theta_{1},\sigma}\) we have \(\frac{1/2-1/p_{1}}{1/2-1/q}\geqslant\frac{1/2-1/\theta_{1}}{1/2-1/\sigma}\), or \(\frac{\tilde{\lambda}(1/p_{2}-1/p_{1})}{1/2-1/q}\geqslant\frac{\tilde{\mu}(1/\theta_{2}-1/\theta_{1})}{1/2-1/\sigma}\). Taking into account that \(\tilde{\mu}>\tilde{\lambda}\geqslant 0\), \(p_{2}\leqslant p_{1}\), \(\theta_{2}\leqslant\theta_{1}\), we get \(\frac{\tilde{\lambda}(1/p_{2}-1/p_{1})}{1/2-1/q}\geqslant\frac{\tilde{\lambda}(1/\theta_{2}-1/\theta_{1})}{1/2-1/\sigma}\); dividing by \(\tilde{\lambda}\) (the case \(\tilde{\lambda}=0\) is checked directly) and changing the signs of both sides, we get the desired inequality. **Subcase \(m^{2/q}k^{2/\sigma}\leqslant n\leqslant\min\{mk^{2/\sigma},\,km^{2/q}\}\).** Notice that
\[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\overset{(43)}{\leqslant}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant 1.\]
We apply (1)-(3), compare the right-hand sides of these order inequalities, and take into account (4) and the inequalities \(0\leqslant\lambda\leqslant\tilde{\lambda}\leqslant\tilde{\mu}\leqslant 1\). If \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}};\]
we use Lemma 4. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant 1\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we use Lemma 3. If
\[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1,\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we apply Lemma 11.
If
\[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}}=\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}};\]
we use Lemma 6. **Subcase \(mk^{2/\sigma}\leqslant n\leqslant m^{2/q}k\).** Notice that
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\overset{(43)}{\leqslant}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant 1.\]
If \(\frac{\nu_{1}}{\nu_{2}}\geqslant 1\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we use Lemma 3. If
\[\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\sigma}{1/2-1/\sigma}};\]
we apply Lemma 5. If \((n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we use Lemma 11. If
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}};\]
we apply Lemma 7. **Subcase \(km^{2/q}\leqslant n\leqslant mk^{2/\sigma}\).** Notice that
\[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\overset{(43)}{\leqslant}(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant 1.\]
If \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}};\]
we apply Lemma 4. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant 1\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we use Lemma 3.
If \(k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we use Lemma 12. If
\[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}k^{\frac{1}{\sigma}-\frac{1}{\tilde{\theta}}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}};\]
we use Lemma 10. If
\[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}};\]
we apply Lemma 7. **Subcase \(n\geqslant\max\{mk^{2/\sigma},\,m^{2/q}k\}\).** Notice that
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant 1.\]
If \(\frac{\nu_{1}}{\nu_{2}}\geqslant 1\), we have
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we apply Lemma 3. If \(\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\sigma}{1/2-1/\sigma}};\]
we use Lemma 5. If \(k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we use Lemma 12. If \((n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}k^{\frac{1}{\sigma}-\frac{1}{\tilde{\theta}}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}};\]
we use Lemma 10.
If
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}};\]
we apply Lemma 8. **8. Case \(q>2\), \(\sigma>2\), \(p_{1}\in[2,\,q]\), \(\theta_{1}\in[2,\,\sigma]\), \(p_{2},\,\theta_{2}\in[1,\,2]\); we suppose that one of the following conditions holds: a) \(\lambda_{p_{1},q}\leqslant\lambda_{\theta_{1},\sigma}\), \(\tilde{\mu}\leqslant\tilde{\lambda}\), b) \(\lambda_{p_{1},q}\geqslant\lambda_{\theta_{1},\sigma}\), \(\tilde{\mu}\geqslant\tilde{\lambda}\).** Let condition a) hold (case b) is similar). Since \(\tilde{\mu}\leqslant\tilde{\lambda}\), we have \(\tilde{p}\geqslant 2\), \(\tilde{\theta}\leqslant 2\). We prove that \(d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\). **Subcase \(m^{2/q}k^{2/\sigma}\leqslant n\leqslant mk^{2/\sigma}\).** Notice that \((n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant 1\). If \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\), then
\[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}};\]
we use Lemma 4. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant 1\), then
\[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we apply Lemma 3. If \((n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\), then
\[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we use Lemma 11. **Subcase \(n\geqslant mk^{2/\sigma}\).** Notice that
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant 1.\]
If \(\frac{\nu_{1}}{\nu_{2}}\geqslant 1\), then
\[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we use Lemma 3. If \(\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\), then
\[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\sigma}{1/2-1/\sigma}};\]
we use Lemma 5.
If
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}},\]
then
\[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{\frac{1}{q}-\frac{1}{\tilde{p}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}};\]
we use Lemma 10. If \(m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\), then
\[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}};\]
we use Lemma 12. **9a. Case \(p_{1},\,p_{2}\in[2,\,q]\), \(\theta_{1},\,\theta_{2}\in[1,\,2]\).** We claim that \(d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\min_{j=1,2}\Phi_{j}(m,\,k,\,n)\). If \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant mk^{\frac{2}{\sigma}}\), then
\[d_{n}(\nu_{i}B_{p_{i},\theta_{i}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\nu_{i}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{i}-1/q}{1/2-1/q}};\]
\[\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}}\leqslant\nu_{2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{2}-1/q}{1/2-1/q}}\ \Leftrightarrow\ \frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}.\]
Now we apply Lemma 4. Let \(n\geqslant mk^{\frac{2}{\sigma}}\). Then
\[d_{n}(\nu_{i}B_{p_{i},\theta_{i}}^{m,k},\ l_{q,\sigma}^{m,k})\overset{(2)}{\underset{q,\sigma}{\succ}}\nu_{i}m^{\frac{1}{q}-\frac{1}{p_{i}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}}.\]
We have
\[\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}}\leqslant\nu_{2}m^{\frac{1}{q}-\frac{1}{p_{2}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}}\ \Leftrightarrow\ \frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}.\]
It remains to apply Lemma 9. **9b. Case \(p_{1},\,p_{2}\in[1,\,2]\), \(\theta_{1},\,\theta_{2}\in[2,\,\sigma]\)** is similar. **10. Case \(q>2\), \(\sigma>2\), \(p_{1}\in[2,\,q]\), \(\theta_{1}\in[1,\,2]\), \(p_{2}\in[1,\,2]\), \(\theta_{2}\in[2,\,\sigma]\), \(\tilde{\lambda}\geqslant\tilde{\mu}\).** Since \(\tilde{\lambda}\geqslant\tilde{\mu}\), we have \(\tilde{p}\geqslant 2\), \(\tilde{\theta}\geqslant 2\); this follows from the equations \(\frac{1}{2}-\frac{1}{\tilde{p}}=(\tilde{\lambda}-\tilde{\mu})\left(\frac{1}{p_{2}}-\frac{1}{p_{1}}\right)\), \(\frac{1}{2}-\frac{1}{\tilde{\theta}}=(\tilde{\lambda}-\tilde{\mu})\left(\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}\right)\). In addition, \(\Phi_{5}(m,\,k,\,n)<\infty\) and \(\lambda\in[\tilde{\mu},\,\tilde{\lambda}]\), since \(\frac{1/\tilde{p}-1/q}{1/2-1/q}\leqslant 1=\frac{1/2-1/\sigma}{1/2-1/\sigma}\), \(\frac{1/2-1/q}{1/2-1/q}=1\geqslant\frac{1/\tilde{\theta}-1/\sigma}{1/2-1/\sigma}\).
**Subcase \(m^{\frac{2}{q}}k^{\frac{2}{\sigma}}\leqslant n\leqslant\min\{mk^{\frac{2}{\sigma}},\,m^{\frac{2}{q}}k\}\).** Notice that
\[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}.\]
If \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\) or \(\frac{\nu_{1}}{\nu_{2}}\geqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\), we have, respectively,
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{1/2-1/q}},\]
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{2}-1/\sigma}{1/2-1/\sigma}};\]
now we use Lemma 4. If
\[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p-1/q}{1/2-1/q}};\]
we use Lemma 6. **Subcase \(mk^{2/\sigma}\leqslant n\leqslant m^{2/q}k\).** Notice that
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}.\]
If \(\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}};\]
we use Lemma 9. If \(\frac{\nu_{1}}{\nu_{2}}\geqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\), then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{2}-1/\sigma}{1/2-1/\sigma}};\]
we use Lemma 4. If
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{\frac{1}{q}-\frac{1}{\tilde{p}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}};\]
we use Lemma 10. If
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}};\]
we apply Lemma 7 and (4).
**Subcase \(m^{2/q}k\leqslant n\leqslant mk^{2/\sigma}\)** is similar. **Subcase \(\max\{m^{2/q}k,\,mk^{2/\sigma}\}\leqslant n\leqslant\frac{mk}{2}\).** Notice that
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}.\]
If \(\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\) or \(\frac{\nu_{1}}{\nu_{2}}\geqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\), we get, respectively,
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}},\]
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{2}k^{\frac{1}{\sigma}-\frac{1}{\theta_{2}}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}};\]
then we use Lemma 9. If
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\]
or
\[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}},\]
then we get, respectively,
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}m^{\frac{1}{q}-\frac{1}{\tilde{p}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}},\]
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}k^{\frac{1}{\sigma}-\frac{1}{\tilde{\theta}}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{2}};\]
now we apply Lemma 10. If
\[m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}(n^{\frac{1}{2}}m^{-\frac{1}{2}}k^{-\frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{2}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}k^{\frac{1}{\theta_{1}}-\frac{1}{\theta_{2}}},\]
then
\[\min_{1\leqslant j\leqslant 5}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{1}^{1-\lambda}\nu_{2}^{\lambda}m^{\frac{1}{q}-\frac{1}{p}}(n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac{1}{\sigma}})^{\frac{1/\theta-1/\sigma}{1/2-1/\sigma}};\]
we use Lemma 8. **11. Case \(p_{1}\in[2,\,q]\), \(\theta_{1}\in[1,\,2]\), \(p_{2}\in[1,\,2]\), \(\theta_{2}\in[2,\,\sigma]\), \(\tilde{\lambda}\leqslant\tilde{\mu}\).** We prove that
\[d_{n}(\nu_{1}B_{p_{1},\theta_{1}}^{m,k}\cap\nu_{2}B_{p_{2},\theta_{2}}^{m,k},\,l_{q,\sigma}^{m,k})\underset{q,\sigma}{\asymp}\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n).\]
Since \(\tilde{\lambda}\leqslant\tilde{\mu}\), we have \(\tilde{p}\leqslant 2\), \(\tilde{\theta}\leqslant 2\).
**Subcase \(m^{2/q}k^{2/\sigma}\leqslant n\leqslant\min\{mk^{2/\sigma},\,m^{2/q}k\}\).** Notice that \[(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2 }}{1/2-1/q}}\leqslant 1\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{ \sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}.\] If \(\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{ \sigma}})^{\frac{1/p_{1}-1/p_{2}}{1/2-1/q}}\) or \(\frac{\nu_{1}}{\nu_{2}}\geqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{ \sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\), we have, respectively, \[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{ 1}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/p_{1}-1/q}{ 1/2-1/q}},\] \[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{ 2}(n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{\frac{1}{\sigma}})^{\frac{1/\theta_{2}-1/ \sigma}{1/2-1/\sigma}};\] we use Lemma 4. If \((n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{-\frac{1}{\sigma}})^{\frac{1/p_{1}-1/p_{2 }}{1/2-1/q}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\,\,\text{or}\,\,1 \leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{- \frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}\), we get, respectively, \[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{ 1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k ^{\frac{1}{\sigma}},\] \[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp}\nu_{ 1}^{1-\tilde{\mu}}\nu_{2}^{\tilde{\mu}}n^{-\frac{1}{2}}m^{\frac{1}{q}}k^{ \frac{1}{\sigma}};\] we apply Lemma 11. **Subcase \(mk^{2/\sigma}\leqslant n\leqslant m^{2/q}k\).** Notice that \[m^{1/p_{1}-1/p_{2}}\leqslant 1\leqslant(n^{\frac{1}{2}}m^{-\frac{1}{q}}k^{- \frac{1}{\sigma}})^{\frac{1/\theta_{1}-1/\theta_{2}}{1/2-1/\sigma}}.\] If \(\frac{\nu_{1}}{\nu_{2}}\geqslant 1\), as in the previous subcase, we use Lemmas 4 and 11. Let \(\frac{\nu_{1}}{\nu_{2}}\leqslant 1\). If \(\frac{\nu_{1}}{\nu_{2}}\leqslant m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\), we have \[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp} \nu_{1}m^{\frac{1}{q}-\frac{1}{p_{1}}}n^{-\frac{1}{2}}m^{\frac{1}{2}}k^{\frac {1}{\sigma}};\] we use Lemma 9. If \(m^{\frac{1}{p_{1}}-\frac{1}{p_{2}}}\leqslant\frac{\nu_{1}}{\nu_{2}}\leqslant 1\), then \[\min_{1\leqslant j\leqslant 4}\Phi_{j}(m,\,k,\,n)\underset{q,\sigma}{\asymp} \nu_{1}^{1-\tilde{\lambda}}\nu_{2}^{\tilde{\lambda}}n^{-\frac{1}{2}}m^{\frac{1 }{q}}k^{\frac{1}{\sigma}};\] we use Lemma 12. **Subcases \(m^{2/q}k\leqslant n\leqslant mk^{2/\sigma}\) and \(\max\{m^{2/q}k,\,mk^{2/\sigma}\}\leqslant n\leqslant\frac{mk}{2}\)** are similar.
2308.11422
Recommending Analogical APIs via Knowledge Graph Embedding
Library migration, which re-implements the same software behavior by using a different library instead of using the current one, has been widely observed in software evolution. One essential part of library migration is to find an analogical API that could provide the same functionality as current ones. However, given the large number of libraries/APIs, manually finding an analogical API could be very time-consuming and error-prone. Researchers have developed multiple automated analogical API recommendation techniques. Documentation-based methods have particularly attracted significant interest. Despite their potential, these methods have limitations, such as a lack of comprehensive semantic understanding in documentation and scalability challenges. In this work, we propose KGE4AR, a novel documentation-based approach that leverages knowledge graph (KG) embedding to recommend analogical APIs during library migration. Specifically, KGE4AR proposes a novel unified API KG to comprehensively and structurally represent three types of knowledge in documentation, which can better capture the high-level semantics. Moreover, KGE4AR then proposes to embed the unified API KG into vectors, enabling more effective and scalable similarity calculation. We build KGE4AR's unified API KG for 35,773 Java libraries and assess it in two API recommendation scenarios: with and without target libraries. Our results show that KGE4AR substantially outperforms state-of-the-art documentation-based techniques in both evaluation scenarios in terms of all metrics (e.g., 47.1%-143.0% and 11.7%-80.6% MRR improvements in each scenario). Additionally, we explore KGE4AR's scalability, confirming its effective scaling with the growing number of libraries.
Mingwei Liu, Yanjun Yang, Yiling Lou, Xin Peng, Zhong Zhou, Xueying Du, Tianyong Yang
2023-08-22T13:12:13Z
http://arxiv.org/abs/2308.11422v1
# Recommending Analogical APIs via Knowledge Graph Embedding ###### Abstract. Library migration, which replaces the current library with a different one to retain the same software behavior, is common in software evolution. An essential part of this is finding an analogous API for the desired functionality. However, due to the multitude of libraries/APIs, manually finding such an API is time-consuming and error-prone. Researchers created automated analogical API recommendation techniques, notably documentation-based methods. Despite potential, these methods have limitations, e.g., incomplete semantic understanding in documentation and scalability issues. In this study, we present KGE4AR, a novel documentation-based approach using knowledge graph (KG) embedding for recommending analogical APIs during library migration. KGE4AR introduces a unified API KG to comprehensively represent documentation knowledge, capturing high-level semantics. It further embeds this unified API KG into vectors for efficient, scalable similarity calculation. We assess KGE4AR with 35,773 Java libraries in two scenarios, with and without target libraries. KGE4AR notably outperforms state-of-the-art techniques (e.g., 47.1%-143.0% and 11.7%-80.6% MRR improvements), showcasing scalability with growing library counts. API Migration, Knowledge Graph, Knowledge Graph Embedding
Researchers have developed automated analogical API recommendation techniques based on various sources of information, such as evolution history, online posts, and API documentation [60, 79]. Among these, documentation-based API recommendation has been intensively studied in the literature, since API documentation is prevalent and cheap to collect, while other information can be time-consuming to collect and is not always available. For a given source API, existing documentation-based techniques calculate the textual similarity between each candidate API and the source API (_e.g._, the textual similarity between two API functionality descriptions in the documentation), and then recommend the candidate API with the highest similarity as the target API. While promising, current documentation-based API recommendation techniques face two limitations. First, their way of calculating textual similarity falls short in capturing semantic-level connections in API documentation. These techniques mainly calculate the textual similarity based on overlapping tokens [59] or measure token similarity without contextual consideration [79]. This can lead to mistakenly treating API descriptions that share similar noun phrases but differ in their action verbs as analogical (_e.g._, "set S3 Object content" vs. "get S3 Object content"). Additionally, these techniques seldom consider domain knowledge when calculating textual similarity. For example, JSON arrays, JSON objects, keys, and values are all JSON-related concepts that often occur in APIs related to JSON processing. Concepts, in the context of our work, refer to domain-specific entities or terms, often represented by noun phrases, that capture specific elements or ideas within the API domain. Without considering such conceptual relationships, the semantic similarity/relevance between two analogical APIs might be underestimated. Second, these techniques typically compute similarity pairwise, posing computational challenges with a vast number of candidate APIs. For example, envision a library like TestNG [23], encompassing over 4,000 candidate APIs.
Existing techniques require performing over 4,000 pairwise comparisons to calculate the similarity between a single source API and all the candidate APIs. This exhaustive calculation demands substantial online costs and becomes prohibitively expensive when multiple target libraries are involved. To address this, we propose **KGE4AR**, a novel documentation-based method leveraging **K**nowledge **G**raph **E**mbedding **for** analogical **API** Recommendation _effectively and scalably_. KGE4AR constructs a unified API knowledge graph (KG) for third-party libraries from API documentation, leveraging graph embedding to represent nodes and edges as numeric vectors. It efficiently retrieves the most similar API for a given source API from the embedded KG. Compared to previous approaches, KGE4AR introduces two technical innovations. Firstly, it presents a novel _unified API KG_ that comprehensively represents three types of documentation knowledge across diverse libraries, better capturing overall semantics in API documentation. Secondly, KGE4AR proposes _embedding the unified API KG_, enhancing efficiency and scalability by streamlining analogous API vector retrieval via vector indexing. To implement KGE4AR, we build a unified API KG consisting of 59,155,631 API elements sourced from 35,773 Java libraries. This KG comprises a total of 72,242,099 entities and 289,122,265 relations connecting these entities. We evaluate KGE4AR in two API recommendation scenarios: with and without target libraries. When given the target libraries, KGE4AR achieves 47.1%-143.0% and 41.4%-95.4% improvements over the baselines in terms of MRR and Hit@10, respectively; while without a given target library, KGE4AR substantially outperforms existing analogical API recommendation techniques by achieving 11.7%-80.6%, 26.2%-72.0%, and 33.2%-116.5% improvements in terms of MRR, precision, and recall, respectively. We also evaluate the scalability of KGE4AR and find that it scales well with an increasing number of libraries. Furthermore, we extensively investigate the impact of different design choices in KGE4AR. In summary, this work makes the following contributions: * **Novel Approach:** We introduce KGE4AR, a documentation-based analogical API recommendation method that builds a unified API KG for numerous libraries, offering scalable recommendations via KG embedding. * **Thorough Evaluation:** We thoroughly evaluate KGE4AR through effectiveness comparisons in two API recommendation scenarios, scalability assessment across various library quantities, and analysis of design choice implications. * **Public Benchmark**: We release a benchmark for extensive analogical API evaluations across numerous libraries. ## 2. Background and Related Work In this section, we discuss related work in analogical API recommendation and knowledge graphs in software engineering. ### Analogical API Recommendation Existing analogical API recommendation techniques leverage various sources like evolution history [58, 29], online posts [49], and API documentation [59, 60, 34, 54, 79, 28] to find suitable target APIs. Evolution-history-based methods [65] use evolution history (_e.g._, code changes) to mine frequently co-occurring API pairs, while documentation-based ones [59, 60, 34, 28, 30] calculate textual similarity using API-related text (_e.g._, descriptions). We concentrate on documentation-based recommendation due to its prevalence, low cost of data collection, and recent research emphasis. 
Existing documentation-based analogical API recommendation techniques mainly fall into two categories, _i.e._, supervised-learning-based [28] and unsupervised-learning-based ones [59, 60, 33, 34, 54, 59, 69]. For supervised-learning-based techniques, Alrubaye _et al._ [28] propose to train a machine learning model (_i.e._, a boosted decision tree) for analogical API inference based on features extracted from API documentation (_e.g._, the similarity of their method descriptions, return type descriptions, method names, and class names) and leverage the trained model to predict the probability of an unseen API pair being analogical. Different from supervised techniques that require a large amount of labeled data, unsupervised-learning-based techniques often vectorize APIs in an unsupervised way and then recommend analogical APIs based on vector similarity. For example, Zhang _et al._ [79] leverage the Word2Vec model to vectorize the API functionality description, API parameters, and API return values, and then calculate a joint similarity based on these vectors. Although achieving promising effectiveness, existing documentation-based techniques suffer from two major drawbacks. First, they calculate the textual similarity based on overlapping tokens [59] or measure token similarity without considering the whole context [79], and thus cannot well capture the semantic-level similarity in API documentation. Second, they calculate the pair-wise similarity between all APIs in an exhaustive way, thus suffering from a scalability issue when the number of APIs is large. To address these issues, our work makes the first attempt to comprehensively and structurally represent the knowledge in API documentation with a novel _unified API KG_. In addition, we further leverage the KG embedding to enable more effective and scalable similarity calculation. Our evaluation results also demonstrate our improvements over existing documentation-based techniques. ### Knowledge Graph in Software Engineering In the domain of software engineering, researchers have established knowledge graphs for diverse objectives, encompassing API concepts (Wang et al., 2017; Wang et al., 2017), API caveats (Wang et al., 2017), API comparison (Wang et al., 2017), API documentation (Wang et al., 2017; Wang et al., 2017), domain terminology (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), programming tasks (Wang et al., 2017), ML/DL models (Wang et al., 2017), and bugs (Wang et al., 2017; Wang et al., 2017). Our work applies the API knowledge graph to a task that is distinct from existing work, namely analogical API recommendation. In addition, since it targets a different task, the design and focus of our API knowledge graph are also different from existing ones. For example, the existing API knowledge graph constructed for API misuse detection (Wang et al., 2017) mainly includes the call-order and condition-checking relations between APIs, while our API knowledge graph focuses on three types of knowledge (_i.e._, API structures, API functionality descriptions, and API conceptual relationships) in API documentation which are helpful for analogical API recommendation. Moreover, we also propose a novel knowledge graph embedding to enable more effective and more scalable analogical API recommendation. ### Knowledge Graph Embedding Knowledge graph embedding (KGE) uses low-dimensional vectors to represent entities and relationships in a knowledge graph, capturing semantic relationships between entities (Wang et al., 2017).
KGE models map entities into a vector space, where similar ones are closer. They excel in applications like question answering, recommendations, and knowledge graph completion (Wang et al., 2017; Wang et al., 2017). Common KGE approaches are TransE, TransR, and DistMult (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). These methods encode KG triples (head entity, relation, tail entity) into continuous vector representations. For instance, TransE treats entities and relations as vectors, defining relationships as translations from head to tail entities (Wang et al., 2017). We employ KGE to embed a unified API KG for analogical API recommendation.
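To make the translation intuition concrete, here is a minimal, self-contained sketch of TransE-style scoring (our illustration, not code from any of the cited systems; the entity and relation names are hypothetical toy examples):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # illustrative embedding dimension

# TransE learns one vector per entity and per relation (random stand-ins here).
entity = {name: rng.normal(size=dim) for name in ["JSONArray", "length()", "JSONObject"]}
relation = {"has method": rng.normal(size=dim)}

def transe_score(h: str, r: str, t: str) -> float:
    # TransE models a true triple <h, r, t> as a translation: h + r ≈ t,
    # so a smaller distance (i.e., a higher score here) means a more plausible triple.
    return -float(np.linalg.norm(entity[h] + relation[r] - entity[t]))

# Training (omitted) would push true triples to score higher than corrupted ones:
print(transe_score("JSONArray", "has method", "length()"))
```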
## 3. Approach As shown in Figure 1, KGE4AR includes three phases, _i.e._, API KG construction, API KG embedding, and analogical API method inferring. Given the API documentation from a large number of libraries as input, KGE4AR first constructs a unified API KG (Section 3.1) and then trains an embedding model to embed the constructed KG (Section 3.2). Lastly, for a given source API, KGE4AR returns its analogical API based on the embedded KG (Section 3.3). Note that the first two phases only need to be run once: once the unified KG is constructed and embedded, KGE4AR can recommend analogical APIs for a given API efficiently.
Figure 1. Overview of KGE4AR
In particular, KGE4AR mainly has two technical novelties. **Novelty 1: a unified API KG for a large number of libraries.** We propose constructing a unified API KG for a substantial library count (_e.g._, 35,773 Java libraries in this study). Our API KG comprises three types of knowledge found in documentation that analogical APIs tend to share: (1) _API structures_ (e.g., package structures, class definitions, method declarations), (2) _API functionality descriptions_ (e.g., _"get the number of elements in the JSONArray"_), and (3) _API conceptual relationships_ (i.e., API concepts and their relationships like _"belong to"_). Unlike existing approaches that focus solely on API structures or functionality descriptions presented as token sequences, our unified API KG offers a broader, structural representation encompassing all three knowledge types, including API conceptual relationships, a category not explored in prior work. A graphical structure inherently suits the structured unification of multi-type data, thus effectively capturing the higher-level semantics within API documentation. **Novelty 2: a KG embedding-based similarity calculation.** We propose embedding the unified API KG, representing each KG API as a vector. KG embedding offers two advantages. First, it effectively preserves structural and semantic data in the unified KG. Second, it expedites similarity calculations between APIs in the KG. Retrieving similar API vectors from a database via vector indexing is highly efficient. Unlike existing methods requiring exhaustive similarity calculations for all API pairs, our KG embedding enables a more efficient and effective approach to similarity calculation. ### API Knowledge Graph Construction In this phase, KGE4AR constructs a unified API KG for a large number of libraries based on their API documentation. The API KG construction mainly consists of three steps. (1) Structure knowledge extraction: KGE4AR first extracts all API elements (_e.g._, packages, classes/interfaces, methods, fields, parameters) and their relationships from the documentation to form a basic skeleton of the API KG; (2) Functionality knowledge extraction: KGE4AR then extracts the functionality knowledge of the API libraries, _i.e._, the standardized functionality expressions of the methods (including functionality verbs, functionality categories, and phrase patterns) and the involved concepts, from the names and text descriptions of methods; (3) Conceptual relation completion: KGE4AR completes conceptual relations between API elements and concepts by analyzing the names and text descriptions of API elements and concepts. In this way, API elements from different libraries can be related to each other based on shared type references (_e.g._, types of method parameters and return values), functionality expressions, and concepts. #### 3.1.1. Schema of the Unified API Knowledge Graph Our API KG captures the structural and high-level information present in API documentation. It consists of entities (nodes) and relations (edges) that represent various aspects of APIs. Here, we offer definitions for key entities and relations: * **API Element**. _API elements_ encompass components like libraries, packages, classes/interfaces, fields, methods, return values, parameters, and abstract parameters, forming the fundamental API building blocks. * **Structural Relation**. _Structural relations_ describe the relationships between API elements, including "extend" (inheritance), "implement" (interface implementation), "has field" (fields within classes/interfaces), "has method" (methods within classes/interfaces), and "has parameter" (methods with required parameters), forming the API KG's foundation. * **Functionality Expression Element**. _Functionality expression elements_ pertain to the structural representation of API functionality descriptions, including functionality expressions, functionality verbs, functionality categories, and phrase patterns. They facilitate the standardized representation of API functionalities, as defined by Xie et al. (2019). * **Functionality Expression**. A _functionality expression_ provides a structural representation for the functionality descriptions of methods following the standardized form defined by Xie _et al._ (2019). It is extracted from the description sentence of a method. * **Functionality Verb**. A _functionality verb_ represents the verb that expresses the main action of the functionality, _e.g._, "return", "get", and "obtain". * **Functionality Category**. A _functionality category_ categorizes the functionality expressions based on their semantic meanings; it is abstracted from a set of functionality verbs that have similar meanings, _e.g._, "return", "get", and "obtain" can be classified into the same category. * **Phrase Pattern**. _Phrase patterns_ capture specific syntactic patterns or templates used in functionality expressions, _e.g._, "_V [patient]_" and "_V [patient] in [location]_". In the phrase pattern "_V [patient] in [location]_", the placeholders "patient" and "location" represent noun phrases that fulfill semantic roles: "[patient]" corresponds to the direct object of the functionality verb, signifying the entity or object directly affected by the action, and "[location]" denotes the spatial or temporal context associated with the verb. * **Concept**. Concepts in the API KG are specific semantic units that capture domain-specific knowledge or common themes in API documentation. These concepts are typically represented by noun phrases. For instance, in APIs related to JSON processing, concepts like JSON arrays, JSON objects, keys, and values frequently appear. Concepts may be involved in functionality expressions by playing some semantic roles (_e.g._, patient, location).
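To illustrate the schema, here are a few triples, written as plain tuples, that such a KG could contain for the running org.json example; relation names beyond those defined above (e.g., "has class", "has functionality") are our own illustrative choices:

```python
# Illustrative triples only; the real unified API KG stores millions of typed
# entities and relations following the schema described above.
triples = [
    # Structural relations between API elements
    ("org.json", "has class", "org.json.JSONArray"),
    ("org.json.JSONArray", "has method", "org.json.JSONArray.length()"),
    # A method linked to its standardized functionality expression
    ("org.json.JSONArray.length()", "has functionality", "return | element number | array"),
    ("return | element number | array", "involve", "element number"),
    ("return | element number | array", "involve", "array"),
    # A conceptual relation between concepts
    ("json array", "is", "array"),
]
```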
Figure 2 showcases the schema of our API KG, illustrating the types of entities and relations involved. Furthermore, Figure 3 provides a partial API KG example, highlighting the interconnectedness of these entities and relations. The complete schema, including definitions for all the entity and relation types, is available in our replication package. #### 3.1.2. Structure Knowledge Extraction KGE4AR extracts structure knowledge from the source JAR of each library, given its neat format and prevalence; KGE4AR could also use API documentation from other sources (_e.g._, online official documentation). In particular, KGE4AR extracts all API elements and their structural relations from the API definitions according to the schema shown in Figure 2. Meanwhile, KGE4AR further extracts the textual descriptions of API elements from their Javadoc comments (_i.e._, the comment before the method declaration (Bartos et al., 2017)). The extracted text descriptions can be used for the subsequent functionality knowledge extraction and conceptual knowledge extraction. In our implementation, we utilize JavaParser (Han et al., 2017) to analyze the Java source files contained within JAR files. Through static analysis based on the abstract syntax tree (AST), we extract all the API elements, as well as their structural relations and textual descriptions. #### 3.1.3. Functionality Knowledge Extraction We extract functionality knowledge of API methods by analyzing their names and text descriptions. Xie _et al._ (Xie et al., 2017) provide a dataset for standardized functionality descriptions which is available online (Xie et al., 2017). It includes 10,016 functionality verbs, 89 functionality categories, and 523 phrase patterns. We add all of them into the API KG as the basis of functionality knowledge extraction. Xie _et al._ (Xie et al., 2017) also provide a tool, FuncVerbNet (Chen et al., 2017), which can parse a functionality description into a standardized functionality expression. FuncVerbNet uses a text classifier to classify a functionality description into a functionality category and then identifies the corresponding phrase pattern, functionality verb, and concepts based on dependency tree parsing. For example, it extracts the following functionality expression from the description _"returns the number of elements in the array"_: _Functionality Category_: get; _Functionality Verb_: return; _Phrase Pattern_: V [patient] in [location]; _Concepts_: [element number, array]; _Functionality Expression_: return | element number | array. For each API method, we take the first sentence of its text description as its functionality description (if it exists), following previous work (Zhu et al., 2017; Xie et al., 2017). Next, we utilize FuncVerbNet to extract the associated functionality expressions. The concepts present in the functionality expressions, which correspond to noun phrases that fulfill semantic roles in the phrase pattern, are extracted and refined through stop-word removal and lemmatization (Xie et al., 2017).
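The parsed output can be held in a simple record; a minimal sketch (our own structure, not FuncVerbNet's actual API), using the worked example above:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalityExpression:
    category: str                 # e.g., "get"
    verb: str                     # e.g., "return"
    pattern: str                  # e.g., "V [patient] in [location]"
    concepts: list = field(default_factory=list)

    def key(self) -> str:
        # Standardized form usable as a KG entity name.
        return " | ".join([self.verb] + self.concepts)

fe = FunctionalityExpression("get", "return", "V [patient] in [location]",
                             ["element number", "array"])
print(fe.key())  # return | element number | array
```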
If the extracted functionality expressions and associated concepts do not already exist in the API KG, we add them as entities and establish _"involve"_ relations between them. We also establish relations between the extracted functionality expressions and other existing elements like functionality verbs, phrase patterns, and functionality categories defined by the schema (see Figure 2). If a method has no text description, we extract a functionality expression from its name. We split the name into a sequence of tokens according to camel case and underscores and then use the token sequence as the functionality description of the method, _e.g._, _"get Int"_ can be extracted from the name of the method _getInt()_ as its functionality description. If a verb is missing at the beginning of the method name, we add a default functionality verb according to the following rules. We utilize WordNet (Xie et al., 2017), a lexical database that provides word meanings and classifications, to determine the part of speech (_e.g._, adjective, noun) of words. * Add _"get"_ if the method name is a noun phrase, _e.g._, _"get length"_ for _JSONArray.length()_; * Add _"convert"_ if the method name starts with "to", _e.g._, _"convert to String"_ for _JSONArray.toString()_; * Add _"check"_ if the method name is an adjective, _e.g._, _"check empty"_ for _ArrayList.empty()_. #### 3.1.4. Conceptual Relation Completion Conceptual relation completion establishes the conceptual relations that connect analogical APIs by analyzing the names and text descriptions of API elements and concepts, and then completes conceptual relations for methods. API element name/description analysis creates relations between API elements and concepts and adds new concepts if necessary. Concept name analysis creates relations between concepts. Method conceptual relation completion completes the relations between API methods and concepts based on existing relations. **API Element Name Analysis**. Each API element (except methods) can be regarded as an instance of a corresponding concept; for example, _java.io.File_ represents an instance of the concept _file_. We extract the corresponding concepts in different ways according to the type of API element: * Package, Class, and Interface: the lowercase phrase obtained by splitting the short name (_i.e._, the part after the last dot of the fully qualified name) of the API element by camel case and underscore, _e.g._, _"json array"_ is the concept for _org.json.JSONArray_; * Return Value: the lowercase phrase obtained by splitting the return value type's short name by camel case and underscore; * Parameter and Field: the lowercase phrase obtained by splitting the short name of the parameter/field by camel case and underscore, _e.g._, _"src file"_ is the concept for _File srcFile_. For each concept obtained in this way, we create an "instance of" relation between the API element and the concept, _e.g._, <_org.json.JSONArray_, _instance of_, _json array_>.
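A minimal sketch of the two name-based heuristics above (default-verb completion and name-to-concept "instance of" triples); the helper names and the simplified part-of-speech check are ours, standing in for the WordNet lookup:

```python
import re

def split_name(name: str) -> list[str]:
    # Split by underscores and camel case: "getInt" -> ["get", "int"]
    tokens = re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", name.replace("_", " "))
    return [t.lower() for t in tokens]

def functionality_description(method_name: str, pos_of) -> str:
    # Derive a functionality description from a method name, prepending a
    # default verb when none is present. `pos_of` stands in for the WordNet
    # part-of-speech lookup and, for simplicity, checks only the first token.
    tokens = split_name(method_name)
    if tokens[0] == "to":                    # toString() -> "convert to string"
        tokens = ["convert"] + tokens
    elif pos_of(tokens[0]) == "adjective":   # empty()    -> "check empty"
        tokens = ["check"] + tokens
    elif pos_of(tokens[0]) == "noun":        # length()   -> "get length"
        tokens = ["get"] + tokens
    return " ".join(tokens)

def instance_of_triple(qualified_name: str) -> tuple[str, str, str]:
    # Emit an <element, instance of, concept> triple from the short name.
    short = qualified_name.rsplit(".", 1)[-1]
    return (qualified_name, "instance of", " ".join(split_name(short)))

pos = {"length": "noun", "empty": "adjective"}.get
print(functionality_description("length", pos))   # get length
print(instance_of_triple("org.json.JSONArray"))
# ('org.json.JSONArray', 'instance of', 'json array')
```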
**API Element Description Analysis**. We extract concepts from the descriptions of API elements with the following steps:

* Extract all the noun phrases with spaCy (Sparley, 1995); for example, _"A JSONObject"_ and _"the value"_ are extracted from the description of a return value, _"A JSONObject which is the value"_;
* Lowercase and lemmatize the extracted noun phrases; for example, _"files"_ and _"A JSONObject"_ are converted into _"file"_ and _"a jsonobject"_, respectively;
* Remove stop words at the beginning of a phrase; for example, "a" is removed from _"a jsonobject"_.

All the remaining noun phrases are treated as concepts mentioned in the description of the API element, and the corresponding concept mention relations are created between them, _e.g._, <_jsonobject_, _mentioned in return value description_, _org.json.JSONObject.optJSONObject(java.lang.String)-RV_>.

**Concept Name Analysis**. The name of a concept may imply some conceptual relations between concepts, _e.g._, <_json array_, _is_, _array_>. Such conceptual relations are useful for establishing possible associations between API elements with subtle differences in concept expression. Following previous work (Zhu et al., 2017), we use the following rules to identify possible conceptual relations between two concepts \(C1\) and \(C2\) in the API KG:

* If _C1_'s name is derived from _C2_'s name, add a relation <_C1_, _derived from_, _C2_>, _e.g._, <_builder_, _derived from_, _build_>;
* If _C1_'s name is shorter than and a prefix of _C2_'s name, and there is no other longer concept that satisfies this rule for _C1_, add a relation <_C2_, _facet of_, _C1_>, _e.g._, <_character sequence length_, _facet of_, _character sequence_>;
* If _C1_'s name is shorter than and a suffix of _C2_'s name, and there is no other longer concept that satisfies this rule for _C1_, add a relation <_C2_, _is_, _C1_>, _e.g._, <_json array_, _is_, _array_>;
* If _C1_'s name is the same as _C2_'s name after removing spaces, add bidirectional relations <_C2_, _same as_, _C1_> and <_C1_, _same as_, _C2_>, _e.g._, <_json array_, _same as_, _jsonarray_> and <_jsonarray_, _same as_, _json array_>.

**API Method Conceptual Relation Completion**. To better reflect the conceptual associations between methods in the subsequent API KG embedding, we further create direct relations between methods and concepts that are indirectly connected through multi-hop relations. We follow the rules shown in Table 1 to complete the relations. In this way, we establish direct relations between methods and concepts based on different parts of the methods, _i.e._, object, input value, input type, and output type.

Table 1. Method Conceptual Relation Completion Rules (M: Method; C: Class; P: Parameter; T: Type; Con: Concept)

| Existing Multi-hop Relations | Completed Relation |
|---|---|
| <C, has method, M>; <C, instance class of concept, Con> | <M, operation of, Con> |
| <M, has parameter, P>; <P, instance parameter of concept, Con> | <M, has input value, Con> |
| <M, has parameter, P>; <P, has parameter type, T>; <T, instance class of concept, Con> | <M, has input type, Con> |
| <M, has return value type, T>; <T, instance class of concept, Con> | <M, has output type, Con> |
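Before moving on, a minimal sketch of the concept name analysis rules described above (our own simplified implementation; the "no other longer concept matches" condition is omitted):

```python
def concept_relations(c1: str, c2: str) -> list[tuple[str, str, str]]:
    """Apply the four concept name analysis rules to a pair of concept names.
    c1 is treated as the shorter (or base) concept where relevant."""
    rels = []
    # derived-from: e.g. 'builder' is derived from 'build'
    if c1 != c2 and " " not in c1 and c1.startswith(c2):
        rels.append((c1, "derived from", c2))
    # facet-of: the shorter name is a prefix of the longer one
    if c2.startswith(c1 + " "):
        rels.append((c2, "facet of", c1))
    # is-a: the shorter name is a suffix of the longer one
    if c2.endswith(" " + c1):
        rels.append((c2, "is", c1))
    # same-as: names identical once spaces are removed
    if c1 != c2 and c1.replace(" ", "") == c2.replace(" ", ""):
        rels.extend([(c1, "same as", c2), (c2, "same as", c1)])
    return rels

print(concept_relations("array", "json array"))      # [('json array', 'is', 'array')]
print(concept_relations("builder", "build"))         # [('builder', 'derived from', 'build')]
print(concept_relations("json array", "jsonarray"))  # both 'same as' directions
```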
### API Knowledge Graph Embedding

In this phase, KGE4AR trains a KG embedding model based on all the relation triples of the API KG. The model maps all the entities in the API KG (_e.g._, API elements, functionality expression elements, concepts) to a high-dimensional vector space, where API elements with similar structural, functionality, and conceptual relationships are close. The benefits of KG embedding include: (i) graph embedding can well preserve both structural and semantic information in the graph, and (ii) mapping APIs into vector spaces can accelerate similar API retrieval, since all API vectors are stored in a vector database and the vector index is very efficient. In particular, we use the ComplEx model (Zhou et al., 2017), a tensor decomposition based KG embedding method, to train the API KG embedding model.

A tensor decomposition models the KG as a three-way tensor (_i.e._, a three-dimensional adjacency matrix), which can be decomposed into a combination of low-dimensional vectors (_i.e._, the embeddings of entities and relations (Zhou et al., 2017)). ComplEx calculates a score for each relation triple <\(h\), \(r\), \(t\)> using the equation \(\phi(h,r,t)=E_{h}\times E_{r}\times E_{t}\), where \(h\), \(r\), and \(t\) are the head entity, relation type, and tail entity, respectively, and \(E_{h}\), \(E_{r}\), and \(E_{t}\) are their embeddings. The score indicates the probability that the corresponding relation holds. The model training takes all the relation triples in a KG as input and produces the embeddings of all the entities and relations in the KG as output. The goal of the optimization during training is to assign a higher score to the true triplet \((E_{h},E_{r},E_{t})\) compared to the corrupted false triplets \((E_{h^{\prime}},E_{r},E_{t})\) and \((E_{h},E_{r},E_{t^{\prime}})\). To support antisymmetric relations, the model represents \(E_{h}\), \(E_{r}\), and \(E_{t}\) in complex-valued space instead of real-valued space; _e.g._, \(h\) has a real part \(Re(h)\) and an imaginary part \(Im(h)\), _i.e._, \(h=Re(h)+i\,Im(h)\).

Given the large size of the API KG (_i.e._, more than 72 million entities and more than 289 million relations), we use PyTorch-BigGraph (PBG) (Zhou et al., 2017) and its implementation shared on GitHub (Zhou et al., 2017) to train the ComplEx model. PyTorch-BigGraph is a distributed system implemented by Facebook for training knowledge graph embedding models on large graphs. We also investigate using other KG embedding models (_e.g._, TransE (Zhou et al., 2017) and DistMult (Zhou et al., 2017)) in Section 4.3. To facilitate more efficient similarity calculation based on the KG embeddings, we store all the KG embeddings in a vector database, _i.e._, Milvus (Milvus, 2018). Milvus is an open-source vector database that supports highly efficient vector indexing and similarity search. Based on Milvus, we can efficiently obtain the KG embedding of a given entity in the KG or find the top-\(k\) most similar entity embeddings for a given embedding.

Figure 4 shows the distribution of KG embeddings of some API methods in the vector space, generated after dimension reduction through PCA (Principal Component Analysis) (Zhou et al., 2017). Each point in Figure 4 represents an API method from our benchmark (in Section 4.1.1). Points with the same color and shape (_i.e._, triangle or circle) represent API methods from the same library. The API methods of two analogical libraries have the same color but different shapes. We can observe that API methods in the same library (_e.g._, _org.json_) or analogical libraries (_e.g._, _org.json_ and _gson_) are relatively close in the vector space, while the API methods of libraries with different topics are far apart. For example, the API methods of the libraries related to logging (_slf4j_ (Zhou et al., 2017) and _commons-logging_ (Barbani et al., 2018)) are far apart from those of the libraries related to testing (_e.g._, _junit_ (Zhou et al., 2018) and _testng_ (Zhou et al., 2017)).

Figure 4. Examples of API KG Embeddings Using ComplEx
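For reference, a minimal NumPy sketch of the ComplEx scoring function (following the original ComplEx formulation, i.e., the real part of a trilinear product with the conjugated tail embedding; the random embeddings here are purely for illustration):

```python
import numpy as np

def complex_score(e_h: np.ndarray, e_r: np.ndarray, e_t: np.ndarray) -> float:
    """ComplEx triple score: Re(<e_h, e_r, conj(e_t)>).
    All arguments are complex-valued embedding vectors of equal dimension."""
    return float(np.real(np.sum(e_h * e_r * np.conj(e_t))))

rng = np.random.default_rng(0)
dim = 8
emb = lambda: rng.normal(size=dim) + 1j * rng.normal(size=dim)
e_head, e_rel, e_tail = emb(), emb(), emb()

# Higher scores indicate triples the model believes are more likely to hold.
print(complex_score(e_head, e_rel, e_tail))
```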
### Analogical API Method Inferring

In this phase, KGE4AR returns a list of ranked analogical API methods for a given source API method based on the API KG and the embedding model. First, KGE4AR selects candidate API methods based on their similarities with the given API method (candidate API method retrieval in Section 3.3.1); then, KGE4AR re-ranks the candidate API methods by considering the similarities between the given API method and the neighbors of the candidate API methods (candidate API method re-ranking in Section 3.3.2). The purpose of the candidate API method retrieval in the first step is to narrow down the scope of candidate APIs, so that the second re-ranking step only needs to calculate the similarity between the given API and a small number of candidate APIs.

#### 3.3.1. Candidate API Method Retrieval

For a given source API method \(s\), we first obtain its KG embedding \(E_{s}\) by querying Milvus. Then we calculate the KG similarity \(Sim_{kg}\) between \(s\) and the methods from other libraries (called the method similarity \(Sim_{m}\)) according to Eq. 1, which is the normalized cosine similarity between their KG embeddings. We select the top-\(k\) (_e.g._, 100) API methods as candidates, utilizing the efficient vector indexing in our database, which achieves millisecond-level latency on trillion-vector datasets.

\[Sim_{kg}(E_{1},E_{2})=(Cos(E_{1},E_{2})+1)/2 \tag{1}\]

#### 3.3.2. Candidate API Method Re-ranking

Two API methods with high KG embedding similarity are not necessarily analogical API methods. For example, _org.json.JSONArray.getJSONObject(int)_ and _com.google.gson.JsonArray.remove(int)_ have a high KG embedding similarity since they belong to analogical classes. To address this issue, we further compute the similarity between the same type of neighbor concepts of the given API method \(s\) and each candidate API method \(e\), which reflects the conceptual similarity of API methods in different aspects (_e.g._, functionalities, inputs, and outputs). The neighbor-based similarities we compute include the functionality similarity \(Sim_{func}\), object similarity \(Sim_{obj}\), input type similarity \(Sim_{it}\), input value similarity \(Sim_{iv}\), output type similarity \(Sim_{ot}\), and average neighbor similarity \(Sim_{neigh}\). To get the final analogical score \(Score(s,e)\), we then perform a weighted sum of these neighbor-based similarities and the method similarity \(Sim_{m}\) according to Eq. 2.

\[Score(s,e)=\sum_{t\in\{m,func,obj,it,iv,ot,neigh\}}W_{t}\times Sim_{t}(s,e) \tag{2}\]

All candidates are ranked by their analogical scores. We explain each similarity as follows.

**Method Similarity** \((Sim_{m})\). \(Sim_{m}\) is the KG similarity between two methods, which is already computed in the retrieval step.
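A small sketch of the retrieval-and-scoring arithmetic in Eqs. 1-2, assuming embeddings are available as real-valued NumPy vectors and using the weights reported later in Section 4 (the dictionary layout is ours):

```python
import numpy as np

def sim_kg(e1: np.ndarray, e2: np.ndarray) -> float:
    """Eq. 1: cosine similarity normalized to [0, 1]."""
    cos = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
    return (cos + 1.0) / 2.0

# Eq. 2: weighted sum of the method similarity and neighbor-based similarities.
WEIGHTS = {"m": 0.05, "func": 0.95, "obj": 0.8, "it": 0.25, "iv": 0.05,
           "ot": 0.05, "neigh": 0.95}

def analogical_score(sims: dict[str, float]) -> float:
    """`sims` maps each similarity type to its value for a pair (s, e)."""
    return sum(WEIGHTS[t] * sims[t] for t in WEIGHTS)

example = {"m": 0.8, "func": 0.9, "obj": 0.7, "it": 0.6, "iv": 0.6,
           "ot": 0.5, "neigh": 0.75}
print(analogical_score(example))
```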
**Functionality Similarity** \((Sim_{func})\). The functionality similarity \(Sim_{func}\) captures the similarity of the functionalities provided by two API methods. It relies on the assumption that comparable APIs should have similar functionality expressions. We calculate the maximum similarity between the functionality expressions corresponding to the two methods according to Eq. 3 as their functionality similarity \(Sim_{func}(s,e)\). In Eq. 3, \(Func(s)\) denotes a functionality expression of the method \(s\) (_i.e._, <\(s\), _has functionality_, \(Func(s)\)>), which is extracted from the method name or the functionality description (see Section 3.1.3). This measure allows us to capture the similarity of API methods based on their intended functionality and purpose.

\[Sim_{func}(s,e)=Max(Sim_{kg}(E_{Func(s)},E_{Func(e)})) \tag{3}\]

**Object Similarity** \((Sim_{obj})\). \(Sim_{obj}\) captures the conceptual-level similarity between the classes of two API methods. It is based on the intuition that methods belonging to analogous classes are likely to exhibit similar behavior and usage patterns. \(Sim_{obj}\) is calculated according to Eq. 4, where \(Obj(s)\) represents the concept corresponding to the class of the method \(s\) (_i.e._, <\(Obj(s)\), _has operation_, \(s\)>).

\[Sim_{obj}(s,e)=Sim_{kg}(E_{Obj(s)},E_{Obj(e)}) \tag{4}\]

**Input Type Similarity** \((Sim_{it})\). \(Sim_{it}\) of two methods reflects the conceptual-level similarity of their parameter types. Analogical APIs are expected to operate on similar types of input data. \(Sim_{it}\) is calculated according to Eq. 5, where \(InType(s)\) represents a concept corresponding to one of the parameter types of the method \(s\) (_i.e._, <\(s\), _has input type_, \(InType(s)\)>) and \(\bar{E}_{InType(s)}\) represents the average of the KG embeddings of all \(InType(s)\).

\[Sim_{it}(s,e)=Sim_{kg}(\bar{E}_{InType(s)},\bar{E}_{InType(e)}) \tag{5}\]

**Input Value Similarity** \((Sim_{iv})\). The purpose of \(Sim_{iv}\) is to capture the conceptual-level similarity of parameters between two methods, which contributes to identifying analogical APIs. Analogical APIs often exhibit similarities in the values they accept as input, irrespective of the specific parameter types. \(Sim_{iv}\) is calculated according to Eq. 6, where \(InVal(s)\) represents a concept corresponding to one of the parameters of the method \(s\) (_i.e._, <\(s\), _has input value_, \(InVal(s)\)>) and \(\bar{E}_{InVal(s)}\) represents the average of the KG embeddings of all \(InVal(s)\).

\[Sim_{iv}(s,e)=Sim_{kg}(\bar{E}_{InVal(s)},\bar{E}_{InVal(e)}) \tag{6}\]

**Output Type Similarity** \((Sim_{ot})\). \(Sim_{ot}\) of two methods reflects the conceptual-level similarity of their return value types. Analogical APIs often exhibit similarities in the types of values they return. \(Sim_{ot}\) is calculated according to Eq. 7, where \(OutType(s)\) represents the concept corresponding to the return value type of the method \(s\) (_i.e._, <\(s\), _has output type_, \(OutType(s)\)>).

\[Sim_{ot}(s,e)=Sim_{kg}(E_{OutType(s)},E_{OutType(e)}) \tag{7}\]
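A sketch of how these neighbor-based similarities can be computed from per-concept embeddings (the max-over-pairs for Eq. 3 and the embedding averaging for Eqs. 5-6; `sim_kg` is the normalized cosine from Eq. 1, and the zero-return convention for missing neighbors is our assumption):

```python
import numpy as np

def sim_kg(e1, e2):
    cos = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
    return (cos + 1.0) / 2.0

def sim_func(func_embs_s: list, func_embs_e: list) -> float:
    """Eq. 3: maximum similarity over all pairs of functionality expressions."""
    if not func_embs_s or not func_embs_e:
        return 0.0  # assumption: methods without functionality expressions score zero
    return max(sim_kg(a, b) for a in func_embs_s for b in func_embs_e)

def sim_avg_neighbors(embs_s: list, embs_e: list) -> float:
    """Eqs. 5-6: similarity between the averaged embeddings of a method's
    input-type (or input-value) concepts."""
    if not embs_s or not embs_e:
        return 0.0
    return sim_kg(np.mean(embs_s, axis=0), np.mean(embs_e, axis=0))
```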
**Average Neighbor Similarity** \((Sim_{neigh})\). Analogical APIs often exhibit similarities not only in their individual aspects but also in their overall context and behavior. By calculating \(Sim_{neigh}\) according to Eq. 8 and Eq. 9, where \(E_{Neigh(s)}\) represents the average of the KG embeddings of the method and its neighboring concepts, we can capture the similarity of the overall neighborhoods of two methods. This similarity measure provides a holistic view of the methods' surrounding context, allowing us to identify analogical APIs based on the similarity of their overall behavior.

\[E_{Neigh(s)}=Avg(E_{s}+E_{Obj(s)}+E_{Func(s)}+E_{InVal(s)}+E_{InType(s)}+E_{OutType(s)}) \tag{8}\]

\[Sim_{neigh}(s,e)=Sim_{kg}(E_{Neigh(s)},E_{Neigh(e)}) \tag{9}\]

Note that instead of directly using the similarity of the neighboring API elements of two methods (_e.g._, their return values), we use the concepts related to the neighboring API elements, as API elements are library-specific while concepts are more likely to be shared between libraries. In addition, to ensure the diversity of the returned APIs, we further limit the number (_i.e._, 3) of recommended API methods that come from the same library.

## 4. Evaluation

To implement KGE4AR, we construct a unified API KG from 35,773 Java libraries. Table 2 presents the entity type statistics of the resulting API KG. To collect the Javadoc documentation for those libraries, we first get the metadata (_e.g._, groupId and artifactId) of a list of Java libraries according to the Libraries.io dataset (Krizhevsky et al., 2017) (last updated in January 2020); then, we download the latest version of the JAR files (as of August 11, 2022) from the Maven Central Repository, resulting in 35,773 JAR files; lastly, we leverage zipfile (Krizhevsky et al., 2017) and JavaParser (Han et al., 2017) to extract the API-relevant documentation from the JAR files, including the API definitions and API functionality descriptions. In this way, we construct an API KG with 72,242,099 entities, including 59,155,631 API elements, 5,210,925 functionality elements, and 5,660,553 concepts.

Table 2. Statistics of Resulting API KG

| Type | Number | Type | Number |
|---|---|---|---|
| Library | 35,773 | Return Value | 15,451,223 |
| Package | 229,061 | Abstract Parameter | 1,892,120 |
| Class | 3,905,537 | Functionality Expression | 5,200,297 |
| Interface | 281,854 | Functionality Category | 89 |
| Field | 6,232,643 | Functionality Verb | 10,016 |
| Method | 15,441,057 | Phrase Pattern | 523 |
| Parameter | 16,501,363 | Concept | 5,660,553 |

Further, we train the KG embedding model using ComplEx with a logistic loss. The weight of each similarity in Eq. 2 is determined as \(W_{m}=0.05\), \(W_{func}=0.95\), \(W_{obj}=0.8\), \(W_{it}=0.25\), \(W_{iv}=0.05\), \(W_{ot}=0.05\), and \(W_{neigh}=0.95\) based on our experiments in a separate validation setting (to avoid overfitting); we also investigate the impact of different weights in Section 4.3.

We evaluate KGE4AR by answering the following research questions. RQ1 and RQ2 investigate the effectiveness of KGE4AR in two analogical API recommendation scenarios, _i.e._, one with a given target library and the other without a given target library. To better understand the capabilities and characteristics of KGE4AR, RQ3 analyzes the impact of different components in KGE4AR, and RQ4 further studies the scalability of KGE4AR as the number of libraries increases.

* **RQ1 (Effectiveness with target libraries)**: How does KGE4AR compare to existing documentation-based techniques when recommending analogical API methods _with given target libraries_?
* **RQ2 (Effectiveness without target libraries)**: How does KGE4AR compare to existing documentation-based techniques when recommending analogical API methods _without given target libraries_?
* **RQ3 (Impact Analysis)**: How do different components of KGE4AR (_i.e._, the KG embedding model, knowledge types, and similarity types and weights) impact the effectiveness of KGE4AR?
* **RQ4 (Scalability)**: How scalable is KGE4AR as the number of libraries increases?

### RQ1: Effectiveness with Target Libraries

In this RQ, we evaluate the effectiveness of KGE4AR and state-of-the-art documentation-based analogical API recommendation techniques with given target libraries.

#### 4.1.1. Protocol

In this section, we introduce the benchmark, baselines, and metrics utilized for this research question.

**Benchmark.** There are two existing benchmarks (Zhu et al., 2017; Zhang et al., 2018) of manually validated analogical API pairs; we directly obtain both datasets from their replication packages (Beng et al., 2019; Zhang et al., 2018) and merge them into one benchmark. In this way, we construct a large benchmark, which contains 245 pairs of analogical API methods from 16 pairs of analogical libraries, covering different topics such as JSON processing, testing, logging, and network requests. For each analogical API pair, either API can be used as the source API, resulting in 490 source APIs (245 pairs \(\times\) 2). In each query, the source API and all candidate APIs from the target library are provided as inputs, and the output is the ranked list of candidate APIs.

**Baselines.** We include two state-of-the-art documentation-based analogical API recommendation techniques (_i.e._, RAPIM (Krizhevsky et al., 2017) and D2APIMap (Zhu et al., 2017)) for comparison. We select these two techniques since they are the latest and most effective ones in the supervised learning-based category and the unsupervised learning-based category, respectively.

* RAPIM (Krizhevsky et al., 2017) is a supervised learning-based approach, which trains a machine learning model (_i.e._, a boosted decision tree) and leverages the trained model to predict the probability of an unseen API pair being analogical. In particular, for a given API pair, RAPIM calculates a set of features based on the lexical similarity of the method descriptions, return type descriptions, method names, and class names of the two APIs. We compute these features according to the paper and then directly use RAPIM via its network requests (Krizhevsky et al., 2017).
* D2APIMap (Zhu et al., 2017) is an unsupervised learning-based approach that utilizes a Word2Vec model to compute similarities between the functionality descriptions, return values, and parameters of API pairs. It recommends the API with the highest total similarity. Due to the unavailability of the source code, we re-implement D2APIMap following the original paper.

**Metrics.** Following prior work (Zhu et al., 2017), we adopt common evaluation metrics: MRR (Mean Reciprocal Rank) and Hit@k (\(k=1,3,5,10\)). MRR calculates the average reciprocal rank of the correct analogical API in the generated list, while Hit@k measures the proportion of queries in which the correct analogical API appears within the top-\(k\) positions. Considering the vast number of APIs in each library, we limit our analysis to the top 100 candidates in the ranked list for each query.
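For clarity, the two metrics can be computed as follows (a minimal sketch; `rank` is the 1-based position of the correct analogical API in a query's ranked list, or `None` if it falls outside the top 100):

```python
def mrr(ranks: list) -> float:
    """Mean Reciprocal Rank over all queries."""
    return sum((1.0 / r) if r is not None else 0.0 for r in ranks) / len(ranks)

def hit_at_k(ranks: list, k: int) -> float:
    """Fraction of queries whose correct answer appears in the top-k."""
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

ranks = [1, 3, None, 2, 15]  # example ranks for five queries
print(mrr(ranks), hit_at_k(ranks, 5))
```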
#### 4.1.2. Results

Table 3 presents the evaluation results; the best value of each metric is in boldface. KGE4AR substantially outperforms both baselines on all metrics. In particular, KGE4AR achieves 47.1%-143.0%, 48.3%-225.6%, 61.5%-149.4%, 53.6%-130.1%, and 41.4%-95.4% improvements over the baselines in terms of MRR, Hit@1, Hit@3, Hit@5, and Hit@10, respectively.

Table 3. Effectiveness with Given Target Libraries

| Approach | MRR | Hit@1 | Hit@3 | Hit@5 | Hit@10 |
|---|---|---|---|---|---|
| RAPIM | 0.158 | 0.082 | 0.180 | 0.229 | 0.304 |
| D2APIMap | 0.261 | 0.180 | 0.278 | 0.343 | 0.420 |
| KGE4AR | **0.384** | **0.267** | **0.449** | **0.527** | **0.594** |

We further investigate the results and find that a potential reason why KGE4AR outperforms the baselines is that KGE4AR analyzes API functionality descriptions in a better way. For example, when two APIs share the same noun phrases but different verbs (_e.g._, _StorageObject.getContentLength()_ and _S3ObjectWrapper.setObjectContent(S3ObjectInputStream)_), it is often difficult for RAPIM and D2APIMap to distinguish them. RAPIM incorporates a TF-IDF model to calculate similarity-related features, which often assigns functionality verbs low weights due to their high frequency in names and descriptions; D2APIMap incorporates a Word2Vec model to calculate similarities, which often represents functionality verbs with similar vectors due to their similar contexts. In contrast, KGE4AR extracts the functionality knowledge of methods (_e.g._, functionality category, functionality verb) and considers the functionality similarity of methods in the re-ranking step (see Section 3.3), which can effectively distinguish methods even if they share the same noun phrases. Therefore, in this example, KGE4AR successfully identifies these two APIs as not analogical, while the baselines consider them analogical.

In summary, KGE4AR substantially outperforms state-of-the-art documentation-based techniques when inferring analogical API methods with given target libraries.

### RQ2: Effectiveness without Target Libraries

RQ1 evaluates analogical API recommendation techniques when the target library is known. However, in practice, selecting the correct target library is challenging, and existing automated target library recommendation approaches have limited effectiveness (Top-1 recall \(<\) 20% [(42)]). Therefore, in this RQ, we assess the effectiveness of KGE4AR in the scenario where no target library is available.

#### 4.2.1. Protocol

We then introduce the benchmark, metrics, and baselines used in this RQ.

**Benchmark.** The benchmark in RQ1 only contains analogical API pairs whose candidate APIs are from one given target library, and is thus not suitable for the analogical API recommendation scenario without target libraries. Therefore, in this RQ, we manually construct a new benchmark of analogical API pairs whose candidate APIs come from a wide range of libraries instead of one given target library. In particular, based on previous work [(34)], online resources such as Awesome-Java [(3)], and our expert knowledge, we first manually select 9 pairs of analogical libraries (_i.e._, 18 libraries); then, for each of these 18 libraries, we randomly select 15 API methods in the library as the source APIs for evaluation, leading to 270 source APIs in total. The selected libraries include both popular ones (usage number \(>\) 500 in Maven Central [(13)]), such as _gson_ [(4)], and less popular ones, such as _dsl-json_ [(7)] and _dom4j_ [(6)].
The selected libraries represent diverse domains such as data processing and code analysis, ensuring the evaluation of our approach's effectiveness and generalizability in real-world scenarios.

_Ground-truth labeling._ We manually label whether each API pair in our newly-constructed benchmark is analogical or not. Due to the large number of potential API pairs, we only label the top-10 APIs returned by each technique in each query, resulting in a total of 6,986 labeled API pairs. In particular, six participants, each with more than three years of Java development experience, manually assess whether the returned APIs are analogical to the source API. In each query, two participants are asked to read the API documentation of the source API and the returned APIs to judge whether they are analogical. The returned APIs for each source API are shuffled before assessment, and annotators are unaware of the technique that produced the results. In cases where the assessments of the two annotators are inconsistent, a third annotator is involved to make a judgment, and the final annotation is based on majority agreement. The inter-annotator agreement is substantial, with a Cohen's Kappa coefficient [(55)] of 0.666.

**Metrics.** In addition to the five metrics used in RQ1 (_i.e._, MRR, Hit@1, Hit@3, Hit@5, and Hit@10), we further include precision and recall in this RQ, since in this scenario there could be multiple correct answers corresponding to a source API. In particular, precision is the fraction of analogical API methods among the returned results, while recall is the fraction of analogical API methods that are retrieved. In total, we compare KGE4AR with the baselines on all these metrics based on the manually labeled ground truths.

**Baselines.** The existing baselines (_i.e._, RAPIM and D2APIMap) exhaustively calculate the similarity between the source API and all candidate APIs, and thus it is prohibitively expensive to directly apply these techniques when there is no given target library and the number of candidate APIs is extremely large (_e.g._, there could be over 15 million candidate APIs for each source API when no target library is specified). Therefore, in this RQ, we enhance the baselines by first narrowing the scope of their candidate APIs. In particular, we first leverage the lightweight information retrieval technique BM25 [(63)] to select the top-100 candidate APIs whose documentation is most relevant to the source API; we then apply the baselines to these candidate APIs. We adopt BM25 for its effectiveness and efficiency [(63)]. Additionally, we clean the documentation (_e.g._, removing stop words, splitting camel case, and performing lemmatization) following previous work [(79)] to further enhance the effectiveness of BM25. For distinction, we denote the two baselines (_i.e._, RAPIM and D2APIMap) enhanced with BM25 as RAPIM* and D2APIMap*, respectively. We implement the BM25-based candidate selection with Elasticsearch [(8)].
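A minimal sketch of such BM25-based candidate pre-selection (using the rank_bm25 package rather than Elasticsearch, purely for illustration; the cleaning step is reduced to lowercasing and camel-case splitting, and the toy corpus is ours):

```python
import re
from rank_bm25 import BM25Okapi  # pip install rank-bm25

def tokenize(doc: str) -> list[str]:
    """Lowercase and split on camel case, underscores, and non-alphanumerics."""
    doc = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", doc)
    return [t for t in re.split(r"[^A-Za-z0-9]+", doc.lower()) if t]

api_docs = {
    "org.json.JSONArray.getJSONObject(int)": "Get the JSONObject associated with an index.",
    "com.google.gson.JsonArray.get(int)": "Returns the i-th element of the array.",
    "org.slf4j.Logger.info(String)": "Log a message at the INFO level.",
}
names = list(api_docs)
bm25 = BM25Okapi([tokenize(api_docs[n]) for n in names])

query = "get the JSON object at an index of the array"
scores = bm25.get_scores(tokenize(query))
top = sorted(zip(names, scores), key=lambda p: -p[1])[:2]  # top-k candidates (k=100 in the paper)
print(top)
```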
#### 4.2.2. Results

Table 4 presents the evaluation results. Overall, KGE4AR outperforms both baselines on all metrics, achieving 11.7%-80.6%, 13.7%-108.3%, 11.6%-77.9%, 7.6%-52.0%, 8.3%-32.3%, 26.2%-72.0%, and 33.2%-116.5% improvements in terms of MRR, Hit@1, Hit@3, Hit@5, Hit@10, precision, and recall, respectively.

Table 4. Effectiveness without Given Target Libraries

| Approach | MRR | Hit@1 | Hit@3 | Hit@5 | Hit@10 | Precision | Recall |
|---|---|---|---|---|---|---|---|
| RAPIM* | 0.381 | 0.311 | 0.404 | 0.485 | 0.585 | 0.271 | 0.237 |
| D2APIMap* | 0.616 | 0.570 | 0.644 | 0.685 | 0.715 | 0.369 | 0.385 |
| KGE4AR | **0.688** | **0.648** | **0.719** | **0.737** | **0.774** | **0.513** | **0.480** |

We further investigate how KGE4AR performs on different libraries. Figure 5 shows how KGE4AR and the baselines perform on popular and less popular libraries. We find that KGE4AR consistently outperforms the baselines on both popular and less popular libraries. Interestingly, the improvement of KGE4AR over the baselines is even larger on the less popular libraries. For example, the MRR, precision, and recall of KGE4AR on _dsl-json_ [(7)] (with only 18 usages on Maven Central) are 0.542, 0.327, and 0.562, respectively, while these metrics for D2APIMap* on the same library are only 0.206, 0.080, and 0.171, respectively. One potential reason is that the APIs of less popular libraries may target relatively uncommon functionality, whose descriptions may have a large semantic gap with those of analogical APIs. The existing baselines rely on simple text matching to recommend analogical APIs, which cannot handle less popular APIs well, while KGE4AR combines the structural information and functionality descriptions of APIs through knowledge graph embedding to infer analogical APIs from a large number of candidates.

Figure 5. Effectiveness on Popular and Less Popular Libraries

In summary, KGE4AR outperforms existing techniques for inferring analogical APIs without given target libraries.

### RQ3: Factor Impact

In this RQ, we further analyze the impact of the components of KGE4AR, including the re-ranking component, KG embedding models, knowledge types, similarity types, and weights. Given the large number of comparison experiments in this RQ (_i.e._, 15 runs), we perform the experiments on a small-scale API KG based on the RQ1 benchmark.

#### 4.3.1. Impact of Re-ranking Component

To investigate the contribution of the re-ranking step in KGE4AR, we include a variant (denoted as KGE4AR-Ret) of KGE4AR that removes the re-ranking step when inferring analogical APIs. The results of KGE4AR-Ret in MRR, Hit@1, Hit@3, Hit@5, and Hit@10 are 0.233, 0.133, 0.253, 0.327, and 0.447, respectively, which are much lower than the default KGE4AR (_e.g._, 50.2% lower in Hit@1). These results indicate that the re-ranking step indeed contributes to the effectiveness of KGE4AR.

#### 4.3.2. Impact of KG Embedding Models

We train various KG embedding models on the small-scale API KG to explore their impact. We compare ComplEx with TransE (Zhou et al., 2019) and DistMult (Zhou et al., 2019). We evaluate the KG embedding models using the KGE4AR-Ret baseline on inferring analogical API methods with given target libraries (Section 4.1). KGE4AR-Ret retrieves analogical API methods using KG embedding similarity alone, reflecting how well each model learns method semantics. The comparison is on the top 100 results (Table 5). As shown in the table, ComplEx, the default in KGE4AR, achieves the best performance on all metrics, implying its suitability.

Table 5. Impact of Different KG Embedding Models

| Method | MRR | Hit@1 | Hit@3 | Hit@5 | Hit@10 |
|---|---|---|---|---|---|
| TransE | 0.284 | 0.174 | 0.312 | 0.396 | 0.518 |
| DistMult | 0.288 | 0.180 | **0.331** | 0.400 | 0.494 |
| ComplEx | **0.293** | **0.183** | 0.324 | **0.422** | **0.524** |

#### 4.3.3. Impact of Knowledge Types in the API Knowledge Graph

To evaluate the impact of different types of knowledge in the API KG, we train different KG embedding models based on subsets of the relation triples in the small-scale API KG.
We try three settings: only structural relation triples (denoted as _Structure_), all relation triples except functionality-related relations (denoted as _Functionality*_), and all relation triples except conceptual relations (denoted as _Concept*_). We then evaluate the resulting KG embedding models based on KGE4AR-Ret and the same benchmark. The results are shown in Table 6. Both functionality and conceptual knowledge contribute positively to analogical API method inference, while conceptual knowledge has a greater impact than functionality knowledge.

Table 6. Impact of Different Knowledge Types

#### 4.3.4. Impact of Similarity Types and Similarity Weights

As mentioned in Section 3.3.2, we tune the weights of the similarities (_i.e._, \(W_{m}\), \(W_{func}\), \(W_{obj}\), \(W_{it}\), \(W_{iv}\), \(W_{ot}\), and \(W_{neigh}\)) in the re-ranking step on a small-scale API KG instead of the large-scale API KG to avoid overfitting. In particular, we randomly divide the benchmark into 10 folds and then use different numbers of folds to tune the weights in turn. We use beam search (Zhou et al., 2019) to tune all weights one by one with a step size of 0.05 and a beam width of 4. Figure 6 shows the experimental results for weights tuned with different numbers of folds. We observe only a subtle improvement when more tuning data is used, indicating that tuning with a small set of data is already sufficient to achieve decent effectiveness. Note that our weight tuning is performed on a small-scale API KG, while the previous experiments (RQ1 and RQ2) are based on the large-scale API KG; this further indicates that the tuned weights generalize even across different API KGs.

Figure 6. Impact of the Number of Data for Tuning Weights

In addition, we further remove each similarity (by setting its weight to 0) to investigate its impact on the effectiveness of KGE4AR. Table 7 presents the evaluation results, with \(Sim_{t}\)* representing the KGE4AR variant that excludes the similarity \(Sim_{t}\). We observe a decrease in the performance of KGE4AR when each similarity is removed. In particular, the removal of the functionality similarity \(Sim_{func}\) leads to the largest drop, with a 22.9% decrease in MRR, showing the importance of functionality knowledge for analogical API method inference. Additionally, removing \(Sim_{neigh}\) increases MRR but decreases Hit@10, suggesting that the neighbor similarity brings some noise but improves recall.

Figure 7 presents a heatmap of the correlation matrix, showing the relationships between the different similarity measures (_e.g._, m, func, obj) and the analogical relationship (_i.e._, anal.). We compute the widely-used Pearson correlation coefficient (Zhou et al., 2019) and perform Welch's t-test (Zhou et al., 2019) to assess the statistical significance of the observed correlations. First, based on Welch's t-test, we observe a statistically significant positive correlation of all similarities with the analogical relationship (\(p\ll 0.05\)), implying that all the included similarities are more or less helpful for inferring the analogical relationship. Second, each similarity score exhibits a different correlation coefficient with the analogical relationship, implying a different importance of its role in inferring the analogical relationship.

Figure 7. Similarities and Analogical Relationships Correlation Matrix
Third, most similarity scores exhibit low correlations with the others, and only a few pairs exhibit a high correlation (_e.g._, it vs. iv). Overall, the statistical analysis indicates the potential benefit of the different similarities for analogical relationship inference; at the same time, there could be some redundant information among some similarities, indicating a potential direction for improvement in future work.

In summary, the current design choices (_i.e._, the re-ranking step, KG embedding model, knowledge types, similarity types, and weights) all positively contribute to the effectiveness of KGE4AR.

### RQ4: Scalability

In this RQ, we explore the scalability of KGE4AR.

**Online Cost.** The online inference time of KGE4AR is less than one second per query in RQ1 and RQ2. It consists of two main steps: candidate API method retrieval and re-ranking. The re-ranking step's time is proportional to the number of candidates and remains constant once the candidates are determined. The retrieval step's time depends on the API KG size and the vector database used. To address this, we employ the highly efficient vector index mechanism provided by Milvus, a scalable and highly available vector database. Milvus has been shown to achieve an average latency of milliseconds for vector search and retrieval on trillion-vector datasets (Mikolov et al., 2017). This ensures that the retrieval step of KGE4AR is performed efficiently even as the size of the API KG increases.

**Offline Cost.** We primarily discuss the offline costs of KGE4AR at different KG scales. Table 8 presents the construction costs for three API KGs: large-scale, medium-scale, and small-scale. The costs are measured on a Linux server with a 36-core CPU and 128GB RAM. The columns _Input_, _Construct_, and _Embed._ represent the time for downloading/preparing documentation as input, API KG construction, and API KG embedding, respectively. Although the number of entities increases by 2,019 times from the small-scale API KG to the large-scale API KG, the time required for collecting inputs, API KG construction, and API KG embedding increases by only 386 times, 121 times, and 40 times, respectively. Note that the KG construction and embedding are executed only once, and the KG can be incrementally extended when new libraries appear. In summary, there is evidence to suggest that KGE4AR can scale effectively as the number of libraries increases.

### Threats to Validity

**Internal Validity.** A threat to the internal validity of our studies is the subjectivity of the human annotations in RQ2. To mitigate this threat, we implemented measures such as multiple annotators, conflict resolution, and reporting agreement coefficients. These practices were employed to minimize bias and ensure the reliability of the human annotations.
**External Validity.** A limitation of our study is its exclusive focus on Java libraries, which potentially limits the generalizability of our findings to other programming languages. However, the core concept of our approach, constructing a unified knowledge graph across libraries, remains applicable. While our knowledge graph design is not limited to Java, extending it to libraries from other object-oriented languages would require specific implementation adjustments. For example, supporting languages like Python, which lack strong typing, would necessitate modifying the schema. Future work will explore more programming languages for a comprehensive evaluation of our approach's effectiveness across diverse language environments.

**Construct Validity.** A common threat is that the baselines we used in RQ1 and RQ2 are implemented by ourselves due to publicly unavailable implementations. However, we carefully reproduced and tested the baselines to avoid introducing errors. Another threat is the way the similarity weights are determined. We tuned the weights using the benchmark in RQ1, and the weights may overfit the benchmark. To mitigate this threat, we tuned the weights on a small-scale API KG instead of the large-scale API KG used in RQ1. Figure 6 also shows that our weights do not overfit the benchmark.

## 5. Conclusions

This work proposes KGE4AR, a novel documentation-based approach that leverages knowledge graph (KG) embedding to recommend analogical APIs during library migration. In particular, KGE4AR proposes a novel unified API KG to comprehensively and structurally represent three types of knowledge in documentation, which can better capture high-level semantics. KGE4AR then embeds the unified API KG, which enables more effective and scalable similarity calculation. We implement KGE4AR as a fully automated technique, constructing a unified API KG for 35,773 Java libraries. We further evaluate KGE4AR in two API recommendation scenarios (_i.e._, with and without given target libraries), and our results show that KGE4AR substantially outperforms state-of-the-art documentation-based techniques in both evaluation scenarios in terms of all metrics. In addition, we investigate the scalability of KGE4AR and find that it scales well with an increasing number of libraries.

## 6. Data Availability

All the data and code can be found in our replication package (Kang et al., 2019).

## Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant No. 61972098.
2306.07934
BoardgameQA: A Dataset for Natural Language Reasoning with Contradictory Information
Automated reasoning with unstructured natural text is a key requirement for many potential applications of NLP and for developing robust AI systems. Recently, Language Models (LMs) have demonstrated complex reasoning capacities even without any finetuning. However, existing evaluation for automated reasoning assumes access to a consistent and coherent set of information over which models reason. When reasoning in the real-world, the available information is frequently inconsistent or contradictory, and therefore models need to be equipped with a strategy to resolve such conflicts when they arise. One widely-applicable way of resolving conflicts is to impose preferences over information sources (e.g., based on source credibility or information recency) and adopt the source with higher preference. In this paper, we formulate the problem of reasoning with contradictory information guided by preferences over sources as the classical problem of defeasible reasoning, and develop a dataset called BoardgameQA for measuring the reasoning capacity of LMs in this setting. BoardgameQA also incorporates reasoning with implicit background knowledge, to better reflect reasoning problems in downstream applications. We benchmark various LMs on BoardgameQA and the results reveal a significant gap in the reasoning capacity of state-of-the-art LMs on this problem, showing that reasoning with conflicting information does not surface out-of-the-box in LMs. While performance can be improved with finetuning, it nevertheless remains poor.
Mehran Kazemi, Quan Yuan, Deepti Bhatia, Najoung Kim, Xin Xu, Vaiva Imbrasaite, Deepak Ramachandran
2023-06-13T17:39:20Z
http://arxiv.org/abs/2306.07934v1
# BoardgameQA: A Dataset for Natural Language Reasoning with Contradictory Information

###### Abstract

Automated reasoning with unstructured natural text is a key requirement for many potential applications of NLP and for developing robust AI systems. Recently, Language Models (LMs) have demonstrated complex reasoning capacities even without any finetuning. However, existing evaluation for automated reasoning assumes access to a consistent and coherent set of information over which models reason. When reasoning in the real-world, the available information is frequently inconsistent or contradictory, and therefore models need to be equipped with a strategy to resolve such conflicts when they arise. One widely-applicable way of resolving conflicts is to impose preferences over information sources (e.g., based on source credibility or information recency) and adopt the source with higher preference. In this paper, we formulate the problem of reasoning with contradictory information guided by preferences over sources as the classical problem of _defeasible reasoning_, and develop a dataset called BoardgameQA for measuring the reasoning capacity of LMs in this setting. BoardgameQA also incorporates reasoning with implicit background knowledge, to better reflect reasoning problems in downstream applications. We benchmark various LMs on BoardgameQA and the results reveal a significant gap in the reasoning capacity of state-of-the-art LMs on this problem, showing that reasoning with conflicting information does not surface out-of-the-box in LMs. While performance can be improved with finetuning, it nevertheless remains poor.

## 1 Introduction

A fundamental goal of AI since its early days has been automatically applying logical or deductive reasoning to draw new conclusions from existing knowledge [28; 20]. Since a large amount of knowledge is available in the form of natural language, tremendous effort has been put into developing models that can understand and reason over natural language [22; 40; 52; 30; 12] (see [33] for a survey). Recent years have seen substantial improvements in this direction thanks to advancements in pretrained language models (LMs) [8; 9] that can handle unstructured data more flexibly, combined with advanced prompting techniques [50; 29] and modular reasoning approaches [22; 12].

Existing work in automated reasoning in natural language usually assumes that the provided knowledge is consistent and reliable. But in many applications, the collection of information one has to reason with is inconsistent and contradictory. This is the case, for instance, when reasoning is performed with information found in different online sources or social media (e.g., retrieval-augmented LMs [17; 3]). When input sources are contradictory, one can consider various strategies to resolve the contradictions. One simple and practical formulation, which we adopt in this work, is to resolve the conflicts based on preferences over the information sources: when a conflict arises, the information from the source with a higher preference should be used to solve the reasoning problem. Depending on the application, preferences can be assigned based on different criteria, e.g., based on the credibility of websites or social media users, or based on the recency of the information, with newer information being preferred over older information.
Exceptions to generics can also be expressed as preferences; for example, generic knowledge such as _"birds fly"_ (see also [6]) should be overridden by exceptions such as _"penguins are birds but do not fly"_ (see also [1]) when reasoning about penguins. Figure 1 demonstrates an example of a reasoning problem with conflicting information, where the conflict is resolved based on recency.

Figure 1: A reasoning problem with contradictory information (conflict resolved based on recency).

Reasoning with conflicting information guided by preferences can be formulated as a form of the classical _defeasible reasoning_ problem [31; 19; 27]. In this work, we study the reasoning ability of LMs in this setting. Toward this goal, we create a synthetic dataset where each example contains a defeasible theory (a set of input facts, possibly contradictory rules, and preferences over the rules), and a question about that theory. Answering the questions in the dataset requires multi-hop reasoning and conflict resolution over the input theory. The difficulty level (e.g., the depth, amount and type of conflicts, etc.) of the examples in the dataset can be controlled automatically, enabling targeted comparisons of various aspects of reasoning.

We also note that while a large number of logical reasoning benchmarks provide all the knowledge needed to answer questions [46; 39; 40; 18], such benchmarks do not reflect common real-world scenarios where implicit background knowledge plays an important role in reasoning. Moreover, models that translate the textual examples into logical form and then leverage off-the-shelf solvers may excel on these datasets, which does not reflect the true performance of such models in real-world applications. For these reasons, in BoardgameQA only part of the knowledge required to solve the problem is provided as input to the LM; the missing knowledge has to come from the LM itself.

The problems in our dataset are formulated as scenarios of a board game, hence we name it BoardgameQA1. A board game theme allows us to create synthetic scenarios with complex defeasible rules to reason about, which seem natural when stated in text, and hence allows background commonsense world knowledge to also be used. To the best of our knowledge, BoardgameQA is the first dataset for multi-hop reasoning _with contradictory inputs_. Figure 2 shows a sample example from the dataset where the conflict resolution and missing knowledge have been highlighted.

Footnote 1: Available at: [https://storage.googleapis.com/gresearch/BoardgameQA/BoardgameQA.zip](https://storage.googleapis.com/gresearch/BoardgameQA/BoardgameQA.zip). License: CC BY.

We benchmark various LMs on BoardgameQA and measure their defeasible reasoning capacity. Most notably, our results reveal that LMs perform poorly when reasoning with conflicting sources, especially in the few-shot setting (compared to the finetuning setting), suggesting that preference understanding and defeasible reasoning capacities do not surface out-of-the-box in pretrained LMs. Secondly, we find that smaller LMs perform poorly when not all of the required information is provided as input. These results highlight a critical gap in the reasoning capacity of current LMs, considering that reasoning over contradicting and incomplete sets of information is a common scenario in many applications and is key for developing robust AI systems.

## 2 Related Work

Our work spans three dimensions: 1- text-based logical reasoning, 2- reasoning with conflicting sources, and 3- reasoning with incomplete information.
In the following, we briefly summarize the literature along each of these axes as it relates to our work.

**Text-based logical reasoning approaches:** Earlier works on natural language logical reasoning finetuned LMs to directly provide answers to logical reasoning questions [11; 4; 38; 18]. Later work showed that explicitly generating the entire proof leads to substantial improvements both in the case of finetuning and in the case of few-shot learning [29; 13; 55; 56]. In addition, modular reasoning approaches, where the LM is used as a tool within a reasoning algorithm [22; 12; 49; 23], have been shown to achieve both performance gains and more precise intermediate proof chains. In this paper, we experiment with four types of approaches: 1- finetuning without explicit reasoning steps, 2- finetuning with explicit reasoning steps, 3- prompt-tuning with chain-of-thought (CoT) prompting [50], and 4- few-shot in-context learning with CoT.

**Text-based logical reasoning datasets:** Many datasets have been created to measure the logical reasoning ability of NLP models [46; 40; 58; 18; 43]. In Table 1, we provide a comparison of (a subset of) these datasets along three features desired in this work. All the compared datasets contain only facts and rules that are non-contradicting. The dataset closest to our work is the _ConditionalQA_ dataset [45], where the answer to the questions follows an _"If X then yes, if Y then no"_ format.

**Reasoning with conflicts:** From the early days of AI, reasoning with conflicting information has been an important topic, and many approaches have been developed to handle such conflicts [32; 31; 35]. The problem we study in this paper is an instance of defeasible reasoning [31; 19; 27], which has applications in various domains (especially in legal reasoning) [41; 16; 7] and has been argued to be one of the most important future directions in a recent survey of the LM reasoning literature [54]. In defeasible reasoning, there are preferences over the rules, and in the case of a conflict between two rules, the conclusion from the rule with higher preference is accepted. Previous work on defeasible reasoning with natural language has studied the problem of adjusting the probability of a conclusion based on new (single-hop) evidence [37; 26]. Our work extends this line of work by developing a dataset for multi-hop defeasible reasoning with preferences over sources.

**Reasoning with incomplete information:** Several existing reasoning benchmarks adopt a setup where part of the required information is missing and needs to come from the model itself [44; 5; 2; 48]. Some datasets also employ a setup in which none of the required rules are provided as input [47; 15; 43; 21]. Our work focuses mainly on cases where part of the knowledge needs to come from the model and another part is provided as input.

## 3 Background and Notation

We let \(\mathcal{E}=\{e_{1},\ldots,e_{N}\}\) and \(\mathcal{P}=\{p_{1},\ldots,p_{M}\}\) represent a set of entities and predicates. We represent a fact in logical form using the triple notation \((e_{i},p_{j},e_{k})\), where \(e_{i},e_{k}\in\mathcal{E}\) and \(p_{j}\in\mathcal{P}\), and a rule as \(r:r_{b}\to r_{h}\), where \(r_{b}\) represents the body of the rule and \(r_{h}\) represents the head. We use \(!\) to indicate negation.
A monotonic theory \(\mathcal{T}=(\mathcal{F},\mathcal{R})\) is a tuple containing a set \(\mathcal{F}\) of (positive or negative) facts and a set \(\mathcal{R}=\{r_{1},\ldots,r_{|\mathcal{R}|}\}\) of rules. We let \(\mathcal{T}\models f\) represent that the fact \(f\) can be derived from the theory \(\mathcal{T}\) using the standard inference rules of logic (see [42]). For a monotonic theory \(\mathcal{T}=(\mathcal{F},\mathcal{R})\), if \(\mathcal{T}\models f\), then for any theory \(\mathcal{T}^{\prime}\) such that \(\mathcal{T}^{\prime}=(\mathcal{F}\cup\mathcal{F}^{\prime},\mathcal{R})\), we also have \(\mathcal{T}^{\prime}\models f\) (that is, adding new facts does not change previously derived facts).

**Defeasible Theory:** A defeasible theory \(\mathcal{T}^{(d)}=(\mathcal{F},\mathcal{R},\mathcal{O})\) is a triple containing a set \(\mathcal{F}\) of facts, a set \(\mathcal{R}=\{r_{1},\ldots,r_{|\mathcal{R}|}\}\) of rules, and a set \(\mathcal{O}=\{r_{t_{1}}>r_{t_{2}},\ldots,r_{t_{3}}>r_{t_{4}}\}\) of pair-wise relative priorities/preferences between rules.2 The rules hold _defeasibly_, meaning the conclusion from a rule may be defeated by contrary evidence from a higher priority rule. This happens, for example, when one rule implies something is true but another rule with a higher priority implies it is false; in such cases, we accept the conclusion from the higher priority rule (see Figure 1). We let \(\mathcal{T}^{(d)}\models f\) represent that \(f\) can be derived from a defeasible theory \(\mathcal{T}^{(d)}\) after resolving conflicts. Note that the initial facts \(\mathcal{F}\) are internally consistent and always have priority over the derived facts. We assume the theory is _defeasibly consistent_, meaning whenever a conflict arises, the preferences can be used to resolve it.

Figure 2: A sample example from BoardgameQA that requires one hop of reasoning. The text in violet highlights conflict resolution and the text in blue highlights the missing information.

An example of a defeasible theory \(\mathcal{T}^{(d)}\) is as follows:

**Example 3.1**.: \(\mathcal{F}=\{\textit{Tweedy is a penguin.}\},\mathcal{R}=\{r_{1}:\textit{Penguins are birds. }r_{2}:\textit{Birds fly. }r_{3}:\textit{Penguins do not fly.}\},\mathcal{O}=\{r_{3}>r_{2}\}\)_. From the theory, one can first use \(r_{1}\) to derive that "Tweedy is a bird". Then, one can use \(r_{2}\) to derive that "Tweedy flies". However, one can also use \(r_{3}\) to derive that "Tweedy does not fly", which is in conflict with the previous derivations. Since \(r_{3}>r_{2}\), we accept the derivation that "Tweedy does not fly"._

**Conflict types:** Conflicts can arise for rules whose heads cannot be simultaneously true, e.g., for two rules \(r:r_{b}\to z\) and \(r^{\prime}:r^{\prime}_{b}\rightarrow\ !z\). For a theory \(\mathcal{T}^{(d)}\) with these two rules, \(\mathcal{T}^{(d)}\models z\) in two cases: (a) \(r\) has higher priority than \(r^{\prime}\) and we can prove \(r_{b}\), and (b) \(r\) has lower priority than \(r^{\prime}\) and we can prove \(r_{b}\) but we cannot prove \(r^{\prime}_{b}\). In the first case, one does not need to take \(r^{\prime}_{b}\) into account for conflict resolution, but in the second case it is critical to take \(r^{\prime}_{b}\) into account. We name the first type of conflict _Type1_ and the second type _Type2_.
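To make the semantics concrete, the following is a minimal sketch (our own simplified forward-chaining solver, not the scalable solver of [27] used later) that resolves Example 3.1:

```python
# Facts and rules from Example 3.1; a literal is (sign, statement).
facts = {(True, "Tweedy is a penguin")}
rules = {
    "r1": ({(True, "Tweedy is a penguin")}, (True, "Tweedy is a bird")),
    "r2": ({(True, "Tweedy is a bird")}, (True, "Tweedy flies")),
    "r3": ({(True, "Tweedy is a penguin")}, (False, "Tweedy flies")),
}
prefs = {("r3", "r2")}  # r3 > r2

def solve(facts, rules, prefs, max_iters=10):
    """Naive defeasible forward chaining: fire a rule only if its body holds
    and no applicable higher-preference rule derives the negated head."""
    derived = set(facts)
    for _ in range(max_iters):
        new = set()
        for name, (body, (sign, head)) in rules.items():
            if not body <= derived:
                continue
            defeated = any(
                other != name and o_body <= derived
                and o_sign != sign and o_head == head
                and (other, name) in prefs
                for other, (o_body, (o_sign, o_head)) in rules.items()
            )
            if not defeated and (sign, head) not in derived:
                new.add((sign, head))
        if not new:
            break
        derived |= new
    return derived

print((False, "Tweedy flies") in solve(facts, rules, prefs))  # True: Tweedy does not fly
```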
Our dataset creation follows a backward story generation strategy similar to [53; 22]. Each example in the dataset contains a (defeasible) theory \(\mathcal{T}^{(d)}\) and a question \(q\). The goal is to predict whether \(\mathcal{T}^{(d)}\models q\), or \(\mathcal{T}^{(d)}\models\ \!!\! 2- \(\forall X:(X,p_{1},e_{1})\wedge(X,p_{2},e_{2})\Rightarrow(X,p_{3},e_{3})\), 3- \((e_{1},p_{1},e_{2})\Rightarrow(e_{2},p_{2},e_{3})\), 4- \((e_{1},p_{1},e_{2})\wedge(e_{3},p_{2},e_{2})\Rightarrow(e_{2},p_{3},e_{4})\), 5- \((e_{1},\hat{p},\hat{e})\Rightarrow(e_{1},p_{2},e_{2})\), and 6- \(\exists X(X,p_{1},e_{1})\Rightarrow(e_{2},p_{2},e_{3})\), where \(X\) represents a universally or existentially bounded variable, each \(e_{i}\) represents an entity, and each \(p_{j}\) represents a predicate. The fifth rule template corresponds to a rule where the predicate (or object entity) in the rule body may not be an element of \(\mathcal{P}\) (resp. \(\mathcal{E}\)). For more information, see below. **Selecting a question:** To generate each example, we first sample a question \(q=(e_{i},p_{j},e_{k})\) that should be proved or disproved, where \(e_{i}\) and \(e_{k}\) are sampled from \(\mathcal{E}\) and \(p_{j}\) is sampled from \(\mathcal{P}\). We also sample the sign of the question (positive or negative). For example, we might sample the question _!(dog, attack the fields, lion)_ asking whether _the dog does not attack the fields of the lion_. The question is then converted into natural language using a template (see Appendix C.3). **Theory generation:** The theory generation is the main component of the dataset generation that constructs the facts, rules and question to be used in each example. A high-level description is provided in Algorithm 16 and an example generation is shown in Appendix C. We first sample some sub-questions \(\mathcal{Q}=q_{1},\ldots,q_{n}\) and a rule \(r\) which has \(\mathcal{Q}\) in its body and \(q\) in its head, such that \(q\) can be derived from \(\mathcal{Q}\) and \(r\). The sampling is done by first selecting one of the aforementioned rule types, then matching the head of the rule to the question \(q\), and then sampling sub-questions \(\mathcal{Q}\) based on the body of the rule. For example for the question _!(dog, attack the fields, lion)_, we might sample the first rule type (see the six types above), then \(p_{2}\) will be mapped to _attack the fields_ and \(e_{2}\) will be mapped to _lion_, and we also sample a sub-question such as _(dog, unite with, cat)_ and add the rule \(\forall X:(X,\textit{unite with},cat)\Rightarrow!(X,\textit{attacks the fields},\textit{ lion})\) to our set of rules. We then make a recursive call for each \(q_{i}\) to generate new rules and facts for them. ``` 0: Question \(q\), Depth \(d\) 1:if d == 0 then 2:addToFacts(q) 3:else 4: Sample sub-questions \(\mathcal{Q}=\{q_{1},...,q_{n}\}\) and rule \(r\) s.t. \(q\) can be derived from \(\mathcal{Q}\) and \(r\). 5:addToRules(r) 6:ifCoinFlip(\(p_{\textit{Conv}}\)) == Conflict then 7: Sample sub-questions \(\mathcal{Q}^{\prime}=\{q^{\prime}_{1},...,q^{\prime}_{m}\}\) and rule \(r^{\prime}\) s.t. \(!q\) can be derived from \(\mathcal{Q}^{\prime}\) and \(r^{\prime}\). 
```
Algorithm 1: GenerateTheory(q, d)
Input: Question q, Depth d
if d == 0 then
    addToFacts(q)
else
    Sample sub-questions Q = {q_1, ..., q_n} and a rule r s.t. q can be derived from Q and r
    addToRules(r)
    if CoinFlip(p_Conf) == Conflict then
        Sample sub-questions Q' = {q'_1, ..., q'_m} and a rule r' s.t. !q can be derived from Q' and r'
        addToRules(r')
        if CoinFlip(p_ConfType1) == Type1 then
            Q = Q + SubSample(Q')
            addToPreferences(r, r')
        else
            Q = Q + RemoveOneSubquestion(Q')
            addToPreferences(r', r)
    for q_i in Q do
        GenerateTheory(q_i, d - 1)
```

| Category | Description | Example Facts | Example Rule |
|---|---|---|---|
| Time Conversion | Compares the age of an entity to a certain age specified with different units. | The dog is 13 months and a half old. | If the dog is more than a year old, then... |
| Orthography | Asks about the letters in names. | The dog is named Paco. The cat is named Pashbank. | If the dog has a name that starts with the same letter as the name of the cat, then... |
| Number Comparisons | Some numbers are required to be summed and then compared to other numbers. | The dog has two friends that are nice and five that are not. | If the dog has less than 10 friends, then... |
| Lexical Entailment | The fact and the rule body are not identical but the fact entails the rule body. | The dog assassinated the mayor. | If the dog killed the mayor, then... |
| World Knowledge | Some knowledge about the world is required to connect the fact to the rule body. | The dog is currently in Montreal. | If the dog is currently in Canada, then... |
| Event Times | Knowledge about the times of events is required. | The dog is watching a movie that was released in 2005. | If the dog is watching a movie that was released after Covid started, then... |
| Part Of | The fact and the rule body have a part-of relation. | The dog is a nurse. | If the dog works in healthcare, then... |
| Affordance | The rule body is about a certain feature/affordance of the fact. | The dog has a knife. | If the dog has a sharp object, then... |
| Volumes | Knowledge of what objects fit in what other objects is required. | The dog has a ball with a radius of 15 inches. | If the dog has a ball that fits in a 28 x 35 x 3 inches box, then... |

Table 2: Categories, descriptions, and examples of incomplete information in BoardgameQA. For lexical entailment, world knowledge, event times, and affordance, a list of examples is written manually from which the sampling procedure can select. In the others, examples are generated automatically.

Whenever a conflict is introduced, its type (Type1 or Type2) is selected with a biased coin flip with probability \(p_{\textit{ConfType1}}\). If the first case is selected, then \(r>r^{\prime}\) is added to the preferences. In this case, we can make recursive calls for all or a subset of the facts in \(\mathcal{Q}^{\prime}\). Otherwise, \(r^{\prime}>r\) is added to the preferences. In this case, we make recursive calls for _all but one_ of the facts in \(\mathcal{Q}^{\prime}\) (selected randomly) to ensure that \(r^{\prime}\) does not activate.

**Proofs:** We keep track of the facts, rules, and preferences during the generation process and turn them into proofs for the examples.

**Stopping criterion:** Every time we make a recursive call to the function in Algorithm 1, the example will contain one extra hop in its proof. We set the stopping criterion as the number of hops in the proof.
Toward this goal, we included an argument \(d\) in Algorithm 1 which corresponds to the target maximum number of hops in the proof; \(d\) decreases by one every time we make a recursive call. When the algorithm is called with \(d=0\), instead of generating rules and sub-questions for the input question \(q\), we simply add \(q\) to our set of facts.

**Incomplete information:** We generate examples with incomplete information where part of the knowledge should come from the LM (this corresponds to rule type 5). For a question \(q\) in the theory generation phase, we sample sub-questions \(\mathcal{Q}\) and a rule \(r\) such that \(\hat{\mathcal{Q}}\) can be derived based on \(\mathcal{Q}\) and \(q\) can be derived from \(\hat{\mathcal{Q}}\) and \(r\). We then hide \(\hat{\mathcal{Q}}\) from the model so the model has to derive it itself. We use a separate body of world knowledge, commonsense knowledge, mathematical reasoning, and orthography reasoning for generating \(\mathcal{Q}\) and \(\hat{\mathcal{Q}}\) (see Table 2 for a high-level description and Appendix C.2 for more details). For example, for the goal _"the dog unites with the cat"_, we generate the sub-question _"The dog is in Montreal."_ and the rule _"If the dog is in Canada, then the dog unites with the cat."_. Then, an extra reasoning step is needed for the model to recognize that Montreal is in Canada. We generate sub-questions and rules that require extra knowledge and reasoning with probability \(p_{\textit{MisInfo}}\); otherwise, we create sub-questions and rules that require no extra knowledge and reasoning. To make the problem more challenging, we only include some categories of extra knowledge and reasoning in the training set; this ensures that the models cannot simply learn the extra knowledge from the training set and use it in the test set.

**Conversion to natural language:** Finally, once we generate the facts, rules, preferences, and question, we use manually constructed templates to turn each of them into a textual format. To make the problem more challenging, we use multiple templates per rule type and use some of the templates only in the test set (see Appendix C.3 for details). A comparison of BoardgameQA with other prominent deductive reasoning datasets in terms of the average length of examples and the average number of unique tokens per example is provided in Figure 3.

**Disproved and unknown examples:** So far, we described how to generate examples with the label _proved_. Generating examples with the label _disproved_ can be done simply by first generating an example with the label _proved_ and then negating the question. Also, generating examples with the label _unknown_ can be done by perturbing the theory until the statement in the question cannot be derived from the theory (e.g., reducing the amount of money of the frog to 50 dollars in the example of Figure 2). We randomly select and apply the following perturbations to the theory, and run a defeasible solver implemented based on the scalable solver in [27] on the resulting theory, until the label becomes unknown: 1- change the predicate of a fact or a rule, 2- change the sign of a fact or an element of the rule, 3- replace a fact with a new fact, and 4- flip the order of a preference.

## 5 Experiments

One of the primary goals of our experiments is to verify whether LMs are capable of reasoning in a defeasible setup.
For this reason, we conduct experiments with various LM architectures (encoder-only, encoder-decoder, and decoder-only) and various pre-training and learning paradigms (finetuning with and without proofs, prompt tuning, few-shot in-context learning, and instruction tuning). Specifically, we test 1) finetuning BERT-large [14] with a classification head to predict the label directly, 2) finetuning T5 1.1 XXL [34] to generate the entire proof and then the label, 3) few-shotting PaLM 62B and PaLM 540B [9], where we provide demonstration examples and chain-of-thought (CoT) in the prompt (the CoT corresponds to the proof), 4) few-shotting the instruction-finetuned FLAN-PaLM 540B [10] with CoT, and 5) soft prompt-tuning [25] PaLM 62B with CoT, where instead of providing a static prompt, we make the prompt embedding learnable and tune its parameters using the training data (the rest of the LM parameters are frozen). We report classification accuracy as the metric. We also report the _majority class_ baseline (\(\sim\)33% since our labels are balanced).

Figure 3: A comparison of BoardgameQA with ProofWriter [46] and PrOntoQA [39] in terms of average length of examples and average number of unique tokens per example on depth 3 of the datasets.

**Dataset sizes:** To gain a more detailed understanding of the models' defeasible reasoning capacity, we create several variations of BoardgameQA. The nature of each variation will be discussed in the remainder of this section with each experiment. For each variation, we sample \(1000\) examples for train, \(500\) for validation, and \(1000\) for test. We sample an equal number of examples from each label.

### Can LMs Reason with Contradictory Inputs?

As explained in Section 4, BoardgameQA makes use of a number of variables that control various aspects of the dataset, such as the amount and types of conflict and the amount of extra knowledge required. We start by creating a default version of the dataset that exhibits each of these properties to some degree by setting \(p_{\textit{Conf}}=0.5\), \(p_{\textit{ConfType1}}=0.5\), and \(p_{\textit{MisInfo}}=0.5\). We then generate three datasets with depths 1–3 (i.e., requiring 1–3 hop(s) of reasoning, respectively), and measure the performance of our baselines on these datasets. The results are in Figure 4. The tuned models perform reasonably on depth 1, but their performance substantially degrades on depths 2–3. This contrasts with previous observations for monotonic reasoning (e.g., in [11; 46]), where finetuned LMs reach near-perfect performance even on higher depths. This indicates that reasoning with contradictory inputs is more difficult even with finetuning. Moreover, we see that the few-shot models perform poorly across all depths, showing that conflict resolution is not achieved out-of-the-box with pretrained models. This includes both the PaLM and the instruction-finetuned FLAN-PaLM models. PaLM 540B performs better than PaLM 62B, showing that larger models may have a higher capacity for defeasible reasoning. More insights from full confusion matrices can be found in Appendix A. Hereafter, due to inference costs, we only experiment with finetuned BERT and T5, prompt-tuned PaLM 62B, and few-shot PaLM 540B, and with examples of depth 2, to keep a medium level of difficulty in terms of reasoning hops and enable measuring the effect of the other factors.

### Does Correct Label Prediction Mean Correct Proof?
Recently, it has been shown that although large LMs achieve high accuracy on label prediction for (monotonic) reasoning tasks, they do so by generating spurious proofs that do not represent valid steps of reasoning [22]. There is also evidence that LMs frequently exploit spurious correlations in the data distribution to achieve high label accuracy, rather than reasoning purely deductively [57]. Hence, we design evaluation metrics to reflect a more rigorous measure of accurate defeasible reasoning. In the case where a model predicts the label correctly, and the label is one of _proved_ or _disproved_ (where an actual proof exists), we measure whether the proof generated by the model is correct or not. For this purpose, we compute two automated proof accuracy metrics (named _Rule F1_ and _Conflict F1_) and one manual metric (named _Overall Proof Accuracy_), as described below.

Figure 4: The model performances on depths 1–3 of the BoardgameQA dataset. Many models struggle on this dataset, especially with higher depths.

For _Rule F1_, we extract the rules used in the gold proof and the ones in the proof generated by the model that are used to derive new facts (and ultimately, the goal). Then we compute the F1-score of the overlap of the two sets. For _Conflict F1_, we extract the conflict resolutions (corresponding to pairs of rules) used in the gold proof and the ones in the proof generated by the model, and compute the F1-score of their overlap. For _Overall Proof Accuracy_, we manually verify whether the proof is correct for \(50\) sampled examples per model. We compute these metrics on depth 2 of the dataset. According to the results in Figure 5, all models perform relatively well in selecting the correct set of rules for the proof. The few-shot model performs poorly on conflict resolution, whereas the tuned models perform substantially better, suggesting that preference understanding and conflict resolution do not surface with simple few-shot prompting, and tuning is required for models to exhibit this capacity. Moreover, the models often generate wrong proofs, even when they predict the label correctly. The issue is less severe in the case of the prompt-tuned model but becomes more severe for the finetuned and few-shot models. We provide examples of proof failures in Appendix A.

### Do Conflicts Make Reasoning More Difficult?

We create four versions of BoardgameQA named NoConflict, LowConflict, MediumConflict, and HighConflict, with \(p_{\textit{Conf}}\) set to 0.0, 0.2, 0.5, and 0.8, respectively; other factors are kept the same. Note that MediumConflict corresponds to the dataset in Figure 4. The results of the models on these datasets are reported in Figure 6. The performance of all models monotonically degrades as the number of conflicts increases, showing that conflict resolution is indeed a major factor in the difficulty of the problems. For example, BERT performs above random for the NoConflict and LowConflict cases, but the model performance drops to near random in the MediumConflict and HighConflict cases.

### Which Conflict Type is More Difficult to Resolve?

To test which type of conflict (see Sec. 4) is more difficult for the models, we create three versions of the dataset with varying proportions of Type1 vs. Type2 conflicts, by setting \(p_{\textit{ConfType1}}\) to 0.2, 0.5, and 0.8, respectively. The first dataset mostly contains conflicts of Type1, the second contains both conflict types in similar amounts, and the third dataset mostly contains Type2 conflicts.
The other factors are kept constant across the datasets. The results of the models are reported in Figure 7. We see that models perform slightly better on the dataset with mostly Type1 conflicts. This discrepancy between the performance on Type1 and Type2 conflicts is intuitive because, in the case of Type1 conflicts, the model can ignore the conflicting rule and whether its body can be proved, but in the case of Type2 conflicts, the model has to show that at least one of the elements in the body of the conflicting rule cannot be proved. In the case of tuned models, we furthermore observe that biasing the dataset toward one conflict type results in better performance overall. This might be because the model mostly needs to learn to resolve one type of conflict, which may be easier than learning both.

Figure 5: Proof accuracy metrics for various models on depth 2 of the dataset, when the label is predicted correctly.

Figure 6: The model performances on four versions of the BoardgameQA dataset with various amounts of conflicts in them.

Figure 7: The model performances on three versions of the BoardgameQA dataset with different distributions over the type of conflicts.

### Does Information Incompleteness Make Reasoning More Difficult?

As described in Section 4, we can control the amount of information incompleteness using a parameter which we named \(p_{\textit{MisInfo}}\). To test how information incompleteness affects the performance of various models, we create three versions of our dataset with \(p_{\textit{MisInfo}}\) set to \(0.2\), \(0.5\), and \(0.8\), which we name _KnowledgeLight_, _KnowledgeMedium_, and _KnowledgeHeavy_, respectively. The results are reported in Figure 8. We observe that as the amount of required knowledge increases, the performance of the finetuned models decreases accordingly. However, the performance of the prompt-tuned and few-shot models remains relatively unchanged, likely due to these models' larger size and the extra knowledge present in them, as well as the fact that working with real-world knowledge might be easier for these models than working with artificial knowledge.

### Do Distractors Make Reasoning More Difficult?

We also measure the effect of distracting facts and rules on model performance. A distracting fact or rule is one that does not appear in the proof and does not change the label. In Figure 2, for example, _"the frog has a knife"_ is a distracting fact. To this end, each time we call Algorithm 1, besides the sampled sub-questions, we also sample some distracting sub-questions and add them to the set of sub-questions. We create three versions of the BoardgameQA dataset where we add 0, 1, and 2 distracting facts in each step, which we name _NoDistractors_, _SomeDistractors_, and _ManyDistractors_, respectively. According to the results in Figure 9, the performance of the tuned models does not substantially degrade with a small number of distractors, potentially because the distractors can help the model avoid learning spurious correlations. However, their performance drops substantially with more distractors. Also, with more distractors, the performance of the few-shot model decreases monotonically, although only marginally (this observation is consistent with the results of [40]). This shows that distractors (which are common in real applications) can also compound the problem difficulty.
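Before concluding, here is a minimal sketch of the proof-accuracy metrics from Section 5.2 (_Rule F1_ and _Conflict F1_) as set-overlap F1 scores; this is our own illustrative code, and the extraction of rules and conflict resolutions from proof strings is assumed to have happened upstream:

```python
def overlap_f1(gold: set, predicted: set) -> float:
    """F1 of the overlap between a gold set and a predicted set."""
    if not gold and not predicted:
        return 1.0
    if not gold or not predicted:
        return 0.0
    tp = len(gold & predicted)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Rule F1: rules used to derive new facts in the gold vs. generated proof.
gold_rules = {"r1", "r2", "r4"}
pred_rules = {"r1", "r2", "r3"}
print(overlap_f1(gold_rules, pred_rules))          # ~0.667

# Conflict F1: conflict resolutions, encoded as (winning rule, losing rule).
gold_conflicts = {("r4", "r3")}
pred_conflicts = set()
print(overlap_f1(gold_conflicts, pred_conflicts))  # 0.0
```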
## 6 Conclusion

In this work, we introduced BoardgameQA, a dataset for measuring the natural language reasoning ability of language models (LMs) in the presence of conflicting input sources. Our dataset furthermore includes scenarios in which the knowledge required for reasoning is only partially provided as input and additional information needs to come from the model itself. We tested several types of LMs on different variations of the dataset and observed that LMs perform poorly when reasoning with conflicting inputs. In the case of smaller models, the performance was also poor when additional knowledge from the LM was needed. Since reasoning over contradicting and incomplete sets of information is a common scenario in real-world applications, our results highlight an important gap in the reasoning capacity of current LMs. We hope our dataset can guide future work on developing methods to improve the reasoning ability of LMs under this setup, or on finding alternative formulations of conflict resolution that better facilitate LM reasoning.

Figure 8: The model performances on three versions of BoardgameQA with various degrees of incomplete information.

Figure 9: The model performances on three versions of BoardgameQA with various amounts of distracting facts and rules.
2309.01518
Curvature sensing of curvature-inducing proteins with internal structure
Many types of peripheral and transmembrane proteins can sense and generate membrane curvature. Laterally isotropic proteins and crescent proteins with twofold rotational symmetry, such as Bin/Amphiphysin/Rvs superfamily proteins, have been studied theoretically. However, proteins often have an asymmetric structure or a higher rotational symmetry. We theoretically studied the curvature sensing of proteins with asymmetric structures and structural deformations. First, we examined proteins consisting of two rod-like segments. When proteins have mirror symmetry, their sensing ability is similar to that of single-rod proteins; hence, with increasing protein density on a cylindrical membrane tube, a second- or first-order transition occurs at a middle or small tube radius, respectively. As asymmetry is introduced, this transition becomes a continuous change, and metastable states appear at high protein densities. A protein with threefold, fivefold, or higher rotational symmetry has laterally isotropic bending energy. However, when a structural deformation is allowed, the protein can have a preferred orientation and stronger curvature sensing.
Hiroshi Noguchi
2023-09-04T10:55:44Z
http://arxiv.org/abs/2309.01518v2
# Curvature sensing of curvature-inducing proteins with internal structure ###### Abstract Many types of peripheral and transmembrane proteins can sense and generate membrane curvature. Laterally isotropic proteins and crescent proteins with two-fold rotational symmetry, such as Bin/Amphiphysin/Rvs superfamily proteins, have been studied theoretically. However, proteins often have an asymmetric structure or a higher rotational symmetry. We theoretically studied the curvature sensing of proteins with asymmetric structures and structural deformations. First, we examined proteins consisting of two rod-like segments. When proteins have mirror symmetry, their sensing ability is similar to that of single-rod proteins, and second- or first-order transition occurs on the cylindrical membrane of a middle or small radius, respectively. As asymmetry is introduced, this transition becomes a continuous change, and metastable states appear at high protein densities. Protein with three-, five-, or more-fold rotational symmetry has laterally isotropic bending energy. However, when an asymmetric structural deformation is allowed, they can have a preferred orientation and stronger curvature sensing. ## I Introduction In living cells, biomembranes are primarily composed of lipids and proteins. Transmembrane proteins span the membrane, while peripheral proteins bind and unbind to the membrane surface. Many of these proteins modify membrane properties, such as bending rigidity, spontaneous curvature, membrane thickness, and viscosity. Curvature-inducing proteins, such as Bin/Amphiphysin/Rvs (BAR) superfamily proteins, regulate cell and organelle membrane shapes [1; 2]. The BAR superfamily proteins have a crescent binding-domain (BAR domain), which is a dimer with two-fold rotational symmetry. The BAR domain bends membranes along its axis and generates a cylindrical membrane tube [1; 2; 3; 4; 5; 6; 7]. Clathrin and coat protein molecules assemble to form spherical cargo, generating spherical membrane buds [8; 9; 10; 11; 3; 12]. These curvature-inducing proteins sense membrane curvature and are concentrated at the membrane locations of their preferred curvatures. Curvature sensing of BAR proteins [13; 14; 15; 16; 17], dynamin [18], annexins [19], G-protein coupled receptors (GPCRs) [20], ion channels [21; 22], and Ras proteins [23] has been reported using tethered vesicles. The dependence of protein binding on vesicle size also indicated curvature sensing [24; 25; 23]. Theoretically, curvature-inducing proteins have been modeled as laterally isotropic or crescent objects. For isotropic objects, the Canham-Helfrich model [26; 27] was applied to the bending energy [16; 17; 28; 29; 30; 31]. For crescent objects, anisotropic bending energies were considered [28; 32; 33; 34; 35; 36; 37]. An elliptical shape was typically considered, such that a two-fold rotational and mirror-symmetric shape was assumed. However, actual proteins often have more complicated shapes. BAR domains have two-fold rotational symmetry but are chiral and are not mirror symmetric. Their chirality is the origin of the helical assembly of the BAR domains [6; 7] and is important for generating membrane tubes with a constant radius [38]. Many of BAR and other curvature-inducing proteins have intrinsically disordered domains [39], and recent experiments have demonstrated that these disordered domains have significant effects on curvature generation [40; 25; 41]. Theoretically, they are treated as excluded-volume linear polymer chains. 
At a low polymer density on the membrane surface, polymer-membrane interactions can weakly induce a spontaneous curvature in a laterally isotropic manner [42; 43; 44; 45; 46]. Conversely, at high densities, inter-polymer interactions can induce a large spontaneous curvature [42; 47; 48; 49; 46; 49] and promote membrane tubulation or prevent it because of the repulsion between polymers [50]. In this study, we consider two types of curvature-inducing proteins: asymmetric proteins, and proteins with three- or more-fold rotational symmetry. Dynamin [51; 52; 53] has an asymmetric structure, and its helical assembly induces membrane fission by choking a membrane neck. Melittin and amphipathic peptides [54; 55; 56; 57] bind onto the membrane, and their circular assembly forms a membrane pore. Gomez-Llobregat et al. reported the curvature sensing of three amphipathic peptides using a coarse-grained simulation of a buckled membrane [58]. They revealed that melittin and the amphipathic peptides LL-37 (PDB: 2k6O) exhibited asymmetric curvature sensing, which means the angle distribution with respect to the buckled axis was not symmetric. We use a protein model consisting of two crescent-rod-like segments connected by a kink, like melittin (see Fig. 1(a)), and investigate how the asymmetry modifies curvature sensing. Many transmembrane proteins, such as ion channels [59; 60] and GPCRs [61; 62; 63; 64], form rotational symmetric structures. Several types of microbial rhodopsins form a trimer or pentamer with three- or five-fold symmetry, respectively [64]. Moreover, peripheral proteins can have three-fold symmetry. For example, the clathrin monomer has three-fold symmetry [8], and annexin A5 molecules form a trimer with a triangular shape [65; 66]. Recently, deformation of the lipid bilayer induced by the hydrophobic mismatch of rotationally symmetric transmembrane proteins was theoretically studied [67]. In this study, we investigate curvature sensing of rotationally symmetric proteins. The rigid rotationally symmetric proteins exhibit isotropic bending energy. However, the anisotropy can be induced by protein deformation. The previous theoretical models of curvature-inducing proteins are outlined in Sec. II. The curvature sensing of asymmetric proteins is described in Sec. III. The protein model is presented in Sec. III.1. Curvature sensing at low-density limits and at finite densities is described in Sec. III.2 and III.3, respectively. Sec IV discusses rotationally symmetric proteins. Sec. V concludes the paper. ## II Protein models with anisotropic bending energy Crescent proteins were modeled to have different bending rigidities and spontaneous curvatures along the protein axis and in the perpendicular (side) direction. Note that this protein axis is set along the main preferred curvature of the protein on the membrane, so that it can be different from the protein axis of the elliptical approximation (e.g., BAR-PH domains [5; 6]). The membrane curvatures along these two directions are given by \[C_{\ell 1} = C_{1}\cos^{2}(\theta_{\rm pc})+C_{2}\sin^{2}(\theta_{\rm pc})= H+D\cos(2\theta_{\rm pc}), \tag{1}\] \[C_{\ell 2} = C_{1}\sin^{2}(\theta_{\rm pc})+C_{2}\cos^{2}(\theta_{\rm pc})= H-D\cos(2\theta_{\rm pc}), \tag{2}\] where \(\theta_{\rm pc}\) is the angle between the protein axis and direction of either principal membrane curvature (the azimuthal direction is chosen for a cylindrical membrane as depicted in Fig. 1(b)). 
\(H=(C_{1}+C_{2})/2\) and \(D=(C_{1}-C_{2})/2\) represent the mean and deviatoric curvatures of the membrane, respectively, where \(C_{1}\) and \(C_{2}\) represent the principal curvatures. The bending energy of a protein is expressed as [28; 36; 68]

\[U_{\rm 1rod} = \frac{\kappa_{\rm p}a_{\rm p}}{2}(C_{\ell 1}-C_{\rm p})^{2}+\frac{\kappa_{\rm s}a_{\rm p}}{2}(C_{\ell 2}-C_{\rm s})^{2} \tag{3}\]
\[= a_{\rm p}\bigg\{\frac{(\kappa_{\rm p}+\kappa_{\rm s})}{2}\bigg[H^{2}+\frac{D^{2}}{2}(\cos(4\theta_{\rm pc})+1)\bigg]-(\kappa_{\rm p}C_{\rm p}+\kappa_{\rm s}C_{\rm s})H+\frac{\kappa_{\rm p}C_{\rm p}^{2}+\kappa_{\rm s}C_{\rm s}^{2}}{2}+(\kappa_{\rm p}-\kappa_{\rm s})HD\cos(2\theta_{\rm pc})-(\kappa_{\rm p}C_{\rm p}-\kappa_{\rm s}C_{\rm s})D\cos(2\theta_{\rm pc})\bigg\}, \tag{4}\]

where \(a_{\rm p}\) is the contact area of the bound protein, \(\kappa_{\rm p}\) and \(C_{\rm p}\) are the bending rigidity and spontaneous curvature along the protein axis, respectively, and \(\kappa_{\rm s}\) and \(C_{\rm s}\) are those along the side axis. From the comparison with the experimental data of tethered vesicles [16; 17], the bending rigidity and spontaneous curvature along the protein axis were estimated as \(\kappa_{\rm p}/k_{\rm B}T=82\pm 20\) and \(C_{\rm p}({\rm nm}^{-1})=-0.047+0.0003(\kappa_{\rm p}/k_{\rm B}T-82)\pm 0.001\) for the I-BAR domain, and \(30\lesssim\kappa_{\rm p}/k_{\rm B}T\lesssim 60\) and \(0.06\lesssim C_{\rm p}({\rm nm}^{-1})\lesssim 0.09\) for the N-BAR domain [37].

Different forms of the anisotropic bending energy have also been used. In Ref. [32], only the linear terms of \(H\) and \(D\) were considered in addition to the tilt energy. In Ref. [33], the energy was considered to be

\[U_{\rm grad} = \frac{k_{\rm m}}{2}(H-H_{0})^{2}+\frac{k_{\rm m}+k_{\rm d}}{4}\big(D^{2}-2DD_{0}\cos(2\theta_{\rm pc})+D_{0}^{2}\big). \tag{5}\]

The second term assumes an energy proportional to a rotational average of the squared gradient of \(C_{\ell}-C_{\rm p}\) with respect to the protein rotation. In this form, the energy depends only weakly on the protein orientation; the cross term of \(HD\) does not appear and the \(D^{2}\) term is independent of the angle \(\theta_{\rm pc}\).

In these protein models, the bending energy depends on the angle only as a function of \(\cos(2\theta_{\rm pc})\), owing to symmetry. For asymmetric proteins, the energy can include an odd function of the angle \(\theta_{\rm pc}\). To the best of our knowledge, such a term was previously considered only in the model by Akabori and Santangelo [34]. They added the following term to Eq. (4):

\[U_{\rm asy}=k_{\rm asy}(D\sin(2\theta_{\rm pc})-C_{\rm asy})^{2}, \tag{6}\]

where \(D\sin(2\theta_{\rm pc})\) is the non-diagonal element of the curvature tensor. In Ref. [58], this model was used to estimate the bending rigidities of amphipathic peptides. However, this model does not have a microscopic basis. In this study, we examine the bending energies of asymmetric proteins using a 2-rod protein model.

Figure 1: Schematic of an asymmetric curvature-inducing protein. (a) Model of the protein with two rod-like segments. (b) Protein on a cylindrical membrane. The angles between the nematic direction \(\mathbf{S}\), azimuthal direction, and/or protein axis are depicted.

## III Protein consisting of two rods

### Protein Model

We consider a protein or peptide consisting of two segments (segments \(a\) and \(b\) in Fig. 1(a)).
Each segment is modeled as the symmetric protein model (in the absence of side bending rigidity for simplicity), and the orientations of the two segments have an angle \(\omega\) on the membrane surface. Melittin is an example of this type of molecule, in which two alpha helices are connected by a kink. The bending energy of one protein is expressed as

\[U_{\rm 2rod} = \frac{\kappa_{\rm pa}a_{\rm pa}}{2}(C_{\ell 1\rm a}-C_{\rm pa})^{2}+\frac{\kappa_{\rm pb}a_{\rm pb}}{2}(C_{\ell 1\rm b}-C_{\rm pb})^{2} \tag{7}\]
\[= \kappa_{\rm pm}a_{\rm p}\bigg[(H-C_{\rm pm})^{2}+C_{\rm pd}^{2}+2(H-C_{\rm pm})D\cos(\omega)\cos(2\theta_{\rm pc})+2C_{\rm pd}D\sin(\omega)\sin(2\theta_{\rm pc})+\frac{D^{2}}{2}(\cos(2\omega)\cos(4\theta_{\rm pc})+1)\bigg]+\kappa_{\rm pd}a_{\rm p}\bigg[-2HC_{\rm pd}+2C_{\rm pm}C_{\rm pd}-2C_{\rm pd}D\cos(\omega)\cos(2\theta_{\rm pc})-2(H-C_{\rm pm})D\sin(\omega)\sin(2\theta_{\rm pc})-\frac{D^{2}}{2}\sin(2\omega)\sin(4\theta_{\rm pc})\bigg],\]

where \(C_{\rm pm}=(C_{\rm pa}+C_{\rm pb})/2\), \(C_{\rm pd}=(C_{\rm pa}-C_{\rm pb})/2\), \(\kappa_{\rm pm}a_{\rm p}=(\kappa_{\rm pa}a_{\rm pa}+\kappa_{\rm pb}a_{\rm pb})/2\), and \(\kappa_{\rm pd}a_{\rm p}=(\kappa_{\rm pa}a_{\rm pa}-\kappa_{\rm pb}a_{\rm pb})/2\). We use \(\kappa_{\rm pm}=50k_{\rm B}T\) and \(a_{\rm p}C_{\rm pm}^{2}=0.1\). These values are typical of curvature-inducing proteins. The angle \(\omega=\pi/6\) is used, unless otherwise specified. Note that \(\kappa_{\rm pd}\) varies according to the bending-rigidity difference and the area difference between the two segments.

In Eq. (7), the deviatoric curvature \(D\) and angle \(\theta_{\rm pc}\) always appear as pairs as a function of \(D\cos(2\theta_{\rm pc})\) and/or \(D\sin(2\theta_{\rm pc})\). The asymmetric terms \(\propto HD\sin(2\theta_{\rm pc})\) and \(\propto D^{2}\sin(4\theta_{\rm pc})\) exist in addition to the term \(\propto D\sin(2\theta_{\rm pc})\). Therefore, the asymmetric energy described in Eq. (6) [34] is insufficient to express the asymmetric bending energy.

For a symmetric protein (\(C_{\rm pd}=\kappa_{\rm pd}=0\)), it is expressed as

\[U_{\rm 2rod}^{\rm sym} = \frac{\kappa_{\rm pm}a_{\rm p}}{2}(1+\cos(\omega))(C_{\ell 1}-C_{\rm pm})^{2}+\frac{\kappa_{\rm pm}a_{\rm p}}{2}(1-\cos(\omega))(C_{\ell 2}-C_{\rm pm})^{2}-\frac{\kappa_{\rm pm}a_{\rm p}}{2}D^{2}(1-\cos(2\omega))\cos(4\theta_{\rm pc}). \tag{8}\]

The first and second terms correspond to the bending energies along the main and side axes of the protein in Eq. (3), respectively. However, the last term is new. At \(\omega=0\), the second and last terms vanish, and with increasing \(\omega\), they increase.

### Isolated Proteins

First, we consider protein binding at the low-density limit, in which bound proteins are isolated on a membrane and inter-protein interactions are negligible. Hence, the density \(\phi\) of bound proteins is given by \(\phi=(1/2\pi)\int_{-\pi}^{\pi}\exp[\beta(\mu-U_{\rm 2rod})]\ {\rm d}\theta_{\rm pc}\), where \(\mu\) is the binding chemical potential and \(\beta=1/k_{\rm B}T\).
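As a numerical companion to Eqs. (7) and (8), the sketch below (our own code; \(\kappa_{\rm pm}=50k_{\rm B}T\) and \(a_{\rm p}C_{\rm pm}^{2}=0.1\) are taken from the text, while setting \(C_{\rm pm}=1\) to fix the curvature unit is our choice) evaluates the two-rod bending energy with each segment axis at \(\theta_{\rm pc}\pm\omega/2\), consistent with the expansion in Eq. (7), and checks that it reduces to the symmetric form Eq. (8) when \(C_{\rm pd}=\kappa_{\rm pd}=0\):

```python
import numpy as np

kBT = 1.0
kappa_pm, ap_Cpm2 = 50.0, 0.1          # kappa_pm = 50 kBT, a_p C_pm^2 = 0.1
C_pm = 1.0                             # our choice of curvature unit
a_p = ap_Cpm2 / C_pm**2
omega = np.pi / 6

def u_2rod(H, D, theta, C_pd=0.0, kappa_pd=0.0):
    """Two-rod bending energy, Eq. (7): segments along theta +/- omega/2."""
    ka_a = (kappa_pm + kappa_pd) * a_p      # kappa_pa * a_pa
    ka_b = (kappa_pm - kappa_pd) * a_p      # kappa_pb * a_pb
    C_pa, C_pb = C_pm + C_pd, C_pm - C_pd
    Cl_a = H + D * np.cos(2 * (theta + omega / 2))
    Cl_b = H + D * np.cos(2 * (theta - omega / 2))
    return 0.5 * ka_a * (Cl_a - C_pa) ** 2 + 0.5 * ka_b * (Cl_b - C_pb) ** 2

def u_sym(H, D, theta):
    """Symmetric limit, Eq. (8)."""
    Cl1 = H + D * np.cos(2 * theta)
    Cl2 = H - D * np.cos(2 * theta)
    k = kappa_pm * a_p
    return (0.5 * k * (1 + np.cos(omega)) * (Cl1 - C_pm) ** 2
            + 0.5 * k * (1 - np.cos(omega)) * (Cl2 - C_pm) ** 2
            - 0.5 * k * D ** 2 * (1 - np.cos(2 * omega)) * np.cos(4 * theta))

# Cylindrical membrane of radius R_cy: H = D = 1/(2 R_cy).
H = D = 0.8 / 2                        # i.e., 1/(R_cy C_pm) = 0.8
theta = np.linspace(-np.pi, np.pi, 7)
print(np.allclose(u_2rod(H, D, theta), u_sym(H, D, theta)))  # True
```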
The binding ratio of proteins to a cylindrical membrane tube with respect to a flat membrane is expressed as

\[\frac{\phi_{\rm cy}}{\phi_{\rm flat}}=\frac{\exp(\beta U_{\rm 2rod}^{\rm flat})}{2\pi}\int_{-\pi}^{\pi}\exp(-\beta U_{\rm 2rod}^{\rm cy})\ {\rm d}\theta_{\rm pc}, \tag{9}\]

where \(U_{\rm 2rod}^{\rm flat}\) is the bending energy for the flat membrane (\(H=D=0\)) and \(U_{\rm 2rod}^{\rm cy}\) is that for the cylindrical membrane (\(H=D=1/2R_{\rm cy}\)). This ratio \(\phi_{\rm cy}/\phi_{\rm flat}\) is independent of \(\mu\) at the low-density limit (\(\phi_{\rm cy}\ll 1\) and \(\phi_{\rm flat}\ll 1\)).

Figure 2 shows the dependence on the curvature \(1/R_{\rm cy}\) of the cylindrical membrane for symmetric proteins (Eq. (8)) with a fixed angle \(\omega\). The binding density reaches a maximum at \(1/R_{\rm cy}C_{\rm pm}\simeq 1.2\), and the maximum level decreases with increasing \(\omega\). The density distribution is mirror symmetric with respect to \(\theta_{\rm pc}=0\) and has one or two peaks (\(\theta_{\rm peak}\)) at low or high membrane curvatures, respectively (see Fig. 2(b) and the dashed lines in Fig. 3(c)). This peak split occurs since the membrane curvature becomes higher than the preferred curvature of the protein at high curvatures. Each protein segment has the lowest bending energy when it lies along the azimuthal direction for \(1/R_{\rm cy}C_{\rm pm}\leq 1\), whereas it deviates from the azimuthal direction as \(\theta_{\rm pc}\pm\omega/2=\pm\arccos(\sqrt{R_{\rm cy}C_{\rm pm}})\) for \(1/R_{\rm cy}C_{\rm pm}>1\). For \(\omega=\pi/6\), the split point is shifted to a slightly higher membrane curvature (see Fig. 2(b)), since the two segments are tilted by \(\pm\omega/2\) when the protein is oriented in the azimuthal direction (\(\theta_{\rm pc}=0\)). When the orthogonal protein model given in Eq. (3) is used (i.e., the last term in Eq. (8) is not accounted for), the protein behavior can be reproduced well at low membrane curvatures but not at high curvatures (see the dashed lines in Fig. 2). Therefore, the last term in Eq. (8) significantly modifies the protein behavior at high membrane curvatures.

Figure 2: Binding of symmetric proteins (\(\kappa_{\rm pd}=C_{\rm pd}=0\)) at the low-density limit. (a) Binding density \(\phi_{\rm cy}\) on a cylindrical membrane with respect to the density \(\phi_{\rm flat}\) on a flat membrane. The solid lines represent the data for \(\omega/\pi=1/12\), \(1/6\), and \(1/3\) (from top to bottom in the left region, respectively). (b) Peak position of the angle \(\theta_{\rm pc}\) at \(\omega/\pi=1/6\). The dashed lines in (a) and (b) represent the data obtained using the orthogonal approximation at \(\omega/\pi=1/6\).

Next, we consider the asymmetric proteins with \(\omega=\pi/6\) (see Figs. 3 and 4). Figure 3 shows the case in which the spontaneous curvatures of the two segments differ while keeping \(\kappa_{\rm pd}=0\). Since segment \(a\) has a larger spontaneous curvature, it is more oriented in the azimuthal direction than segment \(b\). Hence, the peak angle of \(\theta_{\rm pc}\) becomes negative and decreases continuously with increasing \(1/R_{\rm cy}\) (see Fig. 3(b)). The upper peak becomes the second maximum for a finite range of \(1/R_{\rm cy}\) (see the solid lines in Fig. 3(c)). The width of this range decreases with increasing \(C_{\rm pd}\) (see the dashed lines in Fig. 3(b)). However, the binding protein ratio \(\phi_{\rm cy}/\phi_{\rm flat}\) is only slightly modified (see Fig. 3(a)).
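Eq. (9) is a one-dimensional integral and is easy to evaluate numerically. The sketch below (our own code, reusing `u_2rod`, `kBT`, and the parameter choices from the earlier snippet) computes \(\phi_{\rm cy}/\phi_{\rm flat}\) for a few tube curvatures; since the parameter values are partly our assumptions, it should reproduce the qualitative shape of Fig. 2(a) rather than the published curves:

```python
import numpy as np
from scipy.integrate import quad

# Reuses u_2rod(H, D, theta, ...) and kBT from the bending-energy sketch above.

def binding_ratio(inv_Rcy, **protein_kwargs):
    """phi_cy / phi_flat from Eq. (9) at the low-density limit."""
    H = D = inv_Rcy / 2.0                             # cylinder: H = D = 1/(2 R_cy)
    u_flat = u_2rod(0.0, 0.0, 0.0, **protein_kwargs)  # flat membrane: H = D = 0
    integrand = lambda th: np.exp(-(u_2rod(H, D, th, **protein_kwargs) - u_flat) / kBT)
    val, _ = quad(integrand, -np.pi, np.pi)
    return val / (2.0 * np.pi)

for x in (0.4, 0.8, 1.2, 1.6):                        # x = 1/(R_cy C_pm), C_pm = 1
    print(x, binding_ratio(x))
```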
Figure 3: Binding of asymmetric proteins with \(\kappa_{\rm pd}=0\) and \(\omega/\pi=1/6\) at the low-density limit. (a) Binding density \(\phi_{\rm cy}\) on a cylindrical membrane with respect to the density \(\phi_{\rm flat}\) on a flat membrane. From top to bottom: \(C_{\rm pd}/C_{\rm pm}=0.2\), \(0.1\), and \(0.05\). (b) Peak position of the angle \(\theta_{\rm pc}\). The solid and dashed lines represent the first and second peaks, respectively. (c) Distribution of the angle \(\theta_{\rm pc}\) at \(1/R_{\rm cy}C_{\rm pm}=0.8\) and \(1.6\). The solid and dashed lines represent the data for \(C_{\rm pd}/C_{\rm pm}=0.1\) and \(0\), respectively.

When the bending rigidities of the two segments are different, the proteins exhibit more complicated behavior. For a small curvature of \(1/R_{\rm cy}\), the angle distribution is slightly asymmetric and has a peak at \(\theta_{\rm pc}<0\), as in the previous case (compare Figs. 3(c) and 4(c)). However, the peak position shifts to \(\theta_{\rm pc}>0\) with increasing \(1/R_{\rm cy}\), and a second peak appears at \(\theta_{\rm pc}<0\). At \(1/R_{\rm cy}C_{\rm pm}>2\), the peak at \(\theta_{\rm pc}<0\) becomes larger than the other one (see Fig. 4(b) and (c)). These peak behaviors are caused by the last two terms in Eq. (7). The increase in \(\theta_{\rm peak}\) at \(1/R_{\rm cy}C_{\rm pm}\simeq 1\) is mainly due to the last term. When both the bending rigidities and spontaneous curvatures of the two segments are different, the ratio \(\phi_{\rm cy}/\phi_{\rm flat}\) can differ considerably from that of the symmetric protein, and the angle distribution can be more asymmetric (see the uppermost line in Fig. 4(a) and the dashed line in Fig. 4(c)). This increase in \(\phi_{\rm cy}/\phi_{\rm flat}\) is due to the enhancement of protein curvature induction by the effectively large protein curvature (\((\kappa_{\rm pa}a_{\rm pa}C_{\rm pa}+\kappa_{\rm pb}a_{\rm pb}C_{\rm pb})/2=\kappa_{\rm pm}a_{\rm p}C_{\rm pm}+\kappa_{\rm pd}a_{\rm p}C_{\rm pd}\)).

Further, we consider the conformational fluctuations in the protein. To allow an angle fluctuation of \(\omega\), a harmonic potential \(U_{\omega}=(k_{\omega}k_{\rm B}T/2)(\omega-\omega_{0})^{2}\) is added, where \(\omega_{0}=\pi/6\). At \(k_{\omega}=0\), the two segments act as two separate rods, and the binding ratio \(\phi_{\rm cy}/\phi_{\rm flat}\) exhibits a smaller peak and broader tail, since the effective bending rigidity is smaller but the orientation is less constrained, respectively (see Fig. 5). As \(k_{\omega}\) increases, the ratio continuously changes into that at the fixed angle.

### Density Dependence

As the binding density increases, inter-protein interactions have more significant effects on protein binding. Here, we use the mean-field theory [35; 36; 37], including orientation-dependent excluded-volume interactions based on Nascimento's theory for three-dimensional liquid crystals [69]. Although 2-rod proteins likely form a smectic liquid crystal at high densities, we consider only the isotropic and nematic phases in this study.

Figure 5: Binding density of asymmetric proteins with the harmonic angle potential at the low-density limit. The potential strength is varied as \(k_{\omega}=0\), 1, and 10 at \(\omega_{0}/\pi=1/6\). The lowest lines in the right region (\(1/R_{\rm cy}C_{\rm pm}>2\)) represent the data when the angle is fixed at \(\omega/\pi=1/6\). (a) \(\kappa_{\rm pd}=0\) and \(C_{\rm pd}/C_{\rm pm}=0.2\).
(b) \(\kappa_{\rm pd}/\kappa_{\rm pm}=0.5\) and \(C_{\rm pd}/C_{\rm pm}=0.1\).

Figure 6: Binding of symmetric proteins (\(\kappa_{\rm pd}=C_{\rm pd}=0\)) for finite densities \(\phi_{\rm cy}\) at \(\omega/\pi=1/6\). The second- and first-order transitions occur at \(1/R_{\rm cy}C_{\rm pm}=1.6\) and 1.8, respectively. (a) Angle \(\theta_{\rm sc}\) between the orientational order and azimuthal direction. (b) Orientational degree \(S\) of the proteins. The right line represents the maximum density \(\phi_{\rm lim}(S)\). (c) Distribution of the angle \(\theta_{\rm pc}\). The solid lines represent the data for \(\phi_{\rm cy}=0.2\), 0.57, and 0.58 at \(1/R_{\rm cy}C_{\rm pm}=1.6\). The dashed lines represent the data for \(\phi_{\rm cy}=0.5\) and 0.6 at \(1/R_{\rm cy}C_{\rm pm}=1.8\).

The free energy \(F_{\rm p}\) of the bound proteins is expressed
At \(1/R_{\rm cy}C_{\rm pm}\gtrsim 1.3\), the preferred direction is tilted symmetrically to the positive and negative angles, as previously explained (see Fig. 2). At low densi Figure 7: Phase diagram for (a) symmetric proteins and (b) asymmetric proteins. (a) The dashed line represents the phase boundary of the second-order transition. Two states coexist between two solid lines. (b) Boundaries of the metastable states. From top to bottom: The upper three lines represent the data for \(C_{\rm pd}/C_{\rm pm}=0.2\), \(0.1\), and \(0.05\) at \(\kappa_{\rm pd}=0\), from top to bottom. The lowest line represents the data for \(\kappa_{\rm pd}/\kappa_{\rm pm}=0.5\) and \(C_{\rm pd}=0\). Figure 8: Binding of asymmetric proteins with \(C_{\rm pd}/C_{\rm pm}=0.1\), \(\kappa_{\rm pd}=0\), and \(\omega/\pi=1/6\) at finite densities \(\phi_{\rm cy}\). (a) Angle \(\theta_{\rm sc}\) between the orientational order and azimuthal direction at \(1/R_{\rm cy}C_{\rm pm}=1.6\) and \(1.8\). (b) Orientational degree \(S\) of the proteins at \(1/R_{\rm cy}C_{\rm pm}=1.6\) and \(1.8\). (c) Distribution of the angle \(\theta_{\rm pc}\) for \(\phi_{\rm cy}=0.4\) and \(0.6\) at \(1/R_{\rm cy}C_{\rm pm}=1.8\). The solid and dashed lines represent the equilibrium and metastable states, respectively. ties, proteins with positive and negative preferred angles can coexist at the same amount with keeping \(\theta_{\rm sc}=0\). In contrast, at high densities, this coexistence is prevented by the larger excluded-volume interactions between proteins of the different angles. Second- and first-order phase transitions occur between these two states for middle membrane curvatures (\(1/R_{\rm cy}C_{\rm pm}<1.6\)) and high membrane curvatures (\(1/R_{\rm cy}C_{\rm pm}>1.6\)), respectively (see Figs. 6 and 7(a)). At the first-order transition, the distribution of \(\theta_{\rm pc}\) changes from two symmetrical peaks to either peak (see the dashed lines in Fig. 6(c)), and \(\theta_{\rm sc}\) and \(S\) exhibit discrete changes (see Fig. 6(a) and (b)). Conversely, for the second-order transition, the two peaks are pushed to \(\theta_{\rm pc}=0\) and unified to reduce the excluded volume before the transition, following which the single peak continuously moves into either the positive or negative direction above the transition point (see the solid lines in Fig. 6(c)). In the phase diagram, the curves of the second- and first-order transitions meet at a single point as shown in Fig. 7(a). A similar phase diagram is obtained for the 1-rod proteins (\(\omega=0\)). For the asymmetric proteins (\(\kappa_{\rm pd}\neq 0\) or \(C_{\rm pd}\neq 0\)), the transition becomes a continuous change; however, a metastable state appears at a high density (see Figs. 8 and 9). At \(\kappa_{\rm pd}=0\) and \(C_{\rm pd}>0\), the negative angles of \(\theta_{\rm pc}\) have lower bending energies (see Fig. 3(c)), such that the branch of \(\theta_{\rm sc}<0\) becomes the equilibrium state (see Fig. 8). The other branch becomes the metastable state that appears at higher membrane curvatures, and the lower-bound curvature increases with increasing \(C_{\rm pd}\) (see Fig. 7(b)). Interestingly, at \(\kappa_{\rm pd}/\kappa_{\rm pm}=0.5\) and \(C_{\rm pd}=0\), the equilibrium value of \(\theta_{\rm sc}\) changes the sign with increasing \(\phi_{\rm cy}\) (see Fig. 9(a)). This is due to high and low peaks at \(\theta_{\rm pc}=\theta_{1}\) and \(-\theta_{2}\) with \(0<\theta_{1}<\theta_{2}\) (see the middle solid line in Fig. 4(c)). 
With increasing \(\phi_{\rm cy}\), the lower peak is reduced and subsequently disappears in the equilibrium state (see the solid lines in Fig. 9(c)). Thus, the asymmetry of proteins causes the transition to become a continuous change. It resembles with the aforementioned change from the first-order to continuous change at \(1/R_{\rm cy}C_{\rm pm}\simeq 0.2\) in the symmetric proteins. Note that taking a different protein axis for the elliptical approximation does not change this binding behavior except for the protein angles. When the axis of segment \(a\) is taken, the values of \(\theta_{\rm sc}\) and \(\theta_{\rm pc}\) are shifted by \(\omega/2\), while \(S\) is unchanged. ## IV Proteins of three-fold or higher rotational symmetry Single proteins or protein assemblies often exhibit \(N\)-fold rotational symmetry with \(N\geq 3\). First, we consider a case with perfect rotational symmetry. The bending energy of an \(N\)-fold rotationally symmetric protein is generically expressed as \[U_{\rm r,N}(H,K,D,\theta_{\rm p1})= \tag{16}\] \[\sum_{j=1}^{N}u_{0}\Big{(}H,K,D\cos\big{(}2(\theta_{\rm p1}+\frac {2\pi j}{N})\big{)},D\sin\big{(}2(\theta_{\rm p1}+\frac{2\pi j}{N})\big{)} \Big{)},\] where \(K=C_{1}C_{2}\) is the Gaussian curvatures, \(u_{0}(H,K,D\cos(2(\theta_{\rm p1}+2\pi j/N)),D\sin(2(\theta_{\rm p1}+2\pi j/N)))\) is the bending energy of the \(j\)-th segment (or protein), and \(\theta_{\rm p1}\) is the angle between the axis of the first segment and direction of either principal membrane curvature. Here, we only consider the linear and squared terms, as is usual for bending energies. For the symmetry, \(U_{\rm r,N}(H,K,D,\theta+2\pi/N)=U_{\rm r,N}(H,K,D,\theta)\). To satisfy this relation, the linear terms (\(\propto\cos(2(\theta_{\rm p1}+\frac{2\pi j}{N}))\) and \(\sin(2(\theta_{\rm p1}+\frac{2\pi j}{N}))\)) vanish for \(N\geq 3\). The squared terms (\(\propto\cos(4(\theta_{\rm p1}+\frac{2\pi j}{N}))\) and \(\sin(4(\theta_{\rm p1}+\frac{2\pi j}{N}))\)) vanish for \(N=3\) and \(N\geq 5\), because \(e^{8\pi{\rm i}/N}=1\) is satisfied at \(N=1\), \(2\), and \(4\) but otherwise not. Therefore, for the rotational symmetry of \(N=3\) and \(N\geq 5\), the bending energy is independent of \(\theta_{\rm p1}\) but is a function of \(H\) and \(K\), since \(D^{2}=H^{2}-K\). Hence, it is laterally isotropic, Figure 9: Binding of asymmetric proteins with \(\kappa_{\rm pd}/\kappa_{\rm pm}=0.5\), \(C_{\rm pd}=0\), and \(\omega/\pi=1/6\) at finite densities \(\phi_{\rm cy}\). (a) Angle \(\theta_{\rm sc}\) between the orientational order and azimuthal direction at \(1/R_{\rm cy}C_{\rm pm}=1.6\) and \(1.8\). (b) Orientational degree \(S\) of the proteins at \(1/R_{\rm cy}C_{\rm pm}=1.6\) and \(1.8\). (c) Distribution of the angle \(\theta_{\rm pc}\) for \(\phi_{\rm cy}=0.5\) and \(0.6\) at \(1/R_{\rm cy}C_{\rm pm}=1.8\). The solid and dashed lines represent the equilibrium and metastable states, respectively. and the Canham-Helfrich energy [26; 27] is applicable. For \(N=4\), the \(\theta_{\rm p1}\)-dependent term remains. When \(u_{0}=(\kappa_{\rm p}a_{\rm p}/2)(H+D\cos(2(\theta_{\rm p1}+\frac{2\pi j}{N}))- C_{\rm p})^{2}\) is used, the protein bending energy is given by \(U_{\rm r,4}(H,K,D,\theta_{\rm p1})=\kappa_{\rm p}a_{\rm p}[2H^{2}+D^{2}(\cos(4 \theta_{\rm p1})+1)+2C_{\rm p}^{2}]\). Even when a protein has rotational symmetry in its native structure, the proteins can take asymmetric shapes under protein deformation. 
We consider a protein with three-fold rotational symmetry, as shown in the inset of Fig. 10(a). Three crescent-rod-like segments are connected at the branching point with harmonic angle potentials: \[U_{\rm 3rod} = \sum_{j=1}^{3}\frac{\kappa_{\rm p}a_{\rm p}}{2}\Big{(}H+D\cos \Big{(}2\big{(}\theta_{\rm p1}+\frac{2\pi j}{N}\big{)}\Big{)}-C_{\rm p}\Big{)} ^{2} \tag{17}\] \[+\frac{k_{\omega}k_{\rm B}T}{2}\Big{(}\omega_{j}-\frac{2\pi}{3} \Big{)}^{2},\] where \(\omega_{j}\) is the angle between neighboring segments. We use \(\kappa_{\rm p}=50k_{\rm B}T\) and \(a_{\rm p}C_{\rm p}^{2}=0.1\). The protein deformation is quantified by a shape parameter \(\alpha_{3}=\sqrt{\langle(r_{\rm G}/\ell_{\rm p})^{2}\rangle}\), where \(r_{\rm G}\) is the distance between the center of mass and branching point of the protein, and \(\ell_{\rm p}\) is the length of each protein segment. The orientational order \(S_{z}\) along the (\(z\)) axis of the membrane tube is given by \(S_{z}=2(z_{\rm G}/r_{\rm G})^{2}-1\), where \(z_{\rm G}\) is the \(z\) component of the center of mass of the protein (the branching point is the origin of the coordinate). As the coefficient \(k_{\omega}\) of the angle potentials decreases, the protein exhibits a larger deformation (see Fig. 10(b)) so that each segment can take its preferred orientation more frequently. Thus, the binding ratio \(\phi_{\rm cy}/\phi_{\rm flat}\) increases with decreasing \(k_{\omega}\) (see Fig. 10(a)). The deformed protein is oriented along the azimuthal and tube axes at low and high membrane curvatures, respectively (see Fig. 10(c)). Therefore, protein deformation can induce anisotropic bending energy in rotationally symmetric proteins and enhance curvature sensing. ## V Summary We have studied curvature sensing of proteins with asymmetric shapes and/or protein deformation. Protein asymmetry breaks the symmetry of sensing with respect to the azimuthal direction on cylindrical membranes, such that the transition between the symmetrical and asymmetrical angle distributions disappears and the other branch becomes a metastable state. The \(N\)-fold rotationally symmetric proteins with \(N=3\) or \(N\geq 5\) exhibit laterally isotropic bending energies, when the protein deformation is negligible. However, their deformation can generate asymmetry in the protein shape and enhance protein binding to membranes with preferred curvatures. In this study, we consider the proteins consisting of two rods as asymmetric proteins. The internal structures affect the curvature sensing at membrane curvatures higher than their preferred curvatures, whereas only small modifications occur at lower curvatures. In general, proteins can have more complicated internal structures. The protein bending energy can have nine independent coefficients in Eq. (7) for \(H^{2}\), \(H\), \(K\), \(D\cos(2\theta_{\rm pc})\), \(HD\cos(2\theta_{\rm pc})\), \(D^{2}\cos(4\theta_{\rm pc})\), \(D\sin(2\theta_{\rm pc})\), \(HD\sin(2\theta_{\rm pc})\), and \(D^{2}\sin(4\theta_{\rm pc})\). However, it is difficult to determine such many parameters. The number of parameters should practically be reduced based on each protein structure and experimental/simulation data. For atomistic and coarse-grained molecular simulations, binding of a single protein is relatively easy to investigate. The angle distribution of the protein axis on cylindrical or buckled membranes [70; 71], and the curvature sensing of proteins can be evaluated. 
A few types of proteins and peptides (amphipathic peptides [58] and the F-BAR protein Pacsin1 [72]) have been investigated, only on buckled membranes of a single membrane shape. Protein bending properties can be more quantitatively evaluated using membranes with various curvatures. In highly buckled membranes, the membrane curvature under the proteins can vary along the protein axis. This local curvature difference can also modify curvature sensing. These protein properties are important for a quantitative understanding of curvature sensing and generation.

Figure 10: Binding of three-fold rotationally symmetric proteins at the low-density limit. (a) Binding density \(\phi_{\rm cy}\) on a cylindrical membrane with respect to the density \(\phi_{\rm flat}\) on a flat membrane. Upper four lines: \(k_{\omega}=0.2\), 1, 5, and 25, from top to bottom. Lowest line: the angles are fixed as \(\omega_{1}=\omega_{2}=\omega_{3}=2\pi/3\). The schematic of the protein is shown in the inset. (b) Deformation degree \(\alpha_{3}\) for \(k_{\omega}=0.2\), 1, 5, and 25. (c) Orientational degree \(S_{z}\) along the (\(z\)) axis of the membrane tube for \(k_{\omega}=0.2\), 1, 5, and 25.

###### Acknowledgements.

This work was supported by JSPS KAKENHI Grant Number JP21K03481.
2310.14710
Random Forest Kernel for High-Dimension Low Sample Size Classification
High dimension, low sample size (HDLSS) problems are numerous among real-world applications of machine learning. From medical images to text processing, traditional machine learning algorithms are usually unsuccessful in learning the best possible concept from such data. In a previous work, we proposed a dissimilarity-based approach for multi-view classification, the Random Forest Dissimilarity (RFD), that performs state-of-the-art results for such problems. In this work, we transpose the core principle of this approach to solving HDLSS classification problems, by using the RF similarity measure as a learned precomputed SVM kernel (RFSVM). We show that such a learned similarity measure is particularly suited and accurate for this classification context. Experiments conducted on 40 public HDLSS classification datasets, supported by rigorous statistical analyses, show that the RFSVM method outperforms existing methods for the majority of HDLSS problems and remains at the same time very competitive for low or non-HDLSS problems.
Lucca Portes Cavalheiro, Simon Bernard, Jean Paul Barddal, Laurent Heutte
2023-10-23T08:49:39Z
http://arxiv.org/abs/2310.14710v2
# Random Forest Kernel for High-Dimension Low Sample Size Classification

###### Abstract

High dimension, low sample size (HDLSS) problems are numerous among real-world applications of machine learning. From medical images to text processing, traditional machine learning algorithms are usually unsuccessful in learning the best possible concept from such data. In a previous work, we proposed a dissimilarity-based approach for multi-view classification, the Random Forest Dissimilarity (RFD), that achieves state-of-the-art results for such problems. In this work, we transpose the core principle of this approach to solving HDLSS classification problems, by using the RF similarity measure as a learned precomputed SVM kernel (RFSVM). We show that such a learned similarity measure is particularly suited and accurate for this classification context. Experiments conducted on 40 public HDLSS classification datasets, supported by rigorous statistical analyses, show that the RFSVM method outperforms existing methods for the majority of HDLSS problems and remains at the same time very competitive for low or non-HDLSS problems.

keywords: High Dimension Low Sample Size, Classification, Random Forest, Similarity learning, SVM, Kernel

Footnote †: journal: Statistics and Computing

## 1 Introduction

In many modern machine learning problems, data are made available as small samples while being described in high dimensions. These datasets are generally referred to as "High Dimension, Low Sample Size" (HDLSS) datasets. These situations occur when the data are intrinsically complex and need to be described with many features, and when, at the same time, they are available only in potentially very limited quantities. Formally, a dataset \(T\) composed of \(\{(\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\ldots,(\mathbf{x}_{n},y_{n})\}\) instances, where \(\mathbf{x}_{i}\) is the vector of descriptive features belonging to \(\mathbb{R}^{m}\) and \(y_{i}\) is its corresponding class label, is considered to be an HDLSS dataset when \(m\gg n\) [1; 2]. However, it should be noted that there is no consensus on the threshold to apply to the ratio between \(m\) and \(n\) to unambiguously decide whether a dataset pertains to an HDLSS learning problem [1; 2; 3]. HDLSS datasets are recurring in many real-world applications, including, but not limited to, medical imaging, DNA microarrays, and text processing [1; 4]. For instance, for pattern recognition tasks in medical imaging, numerical image representations are known to be high dimensional, whether they are built with hand-crafted features or with automatically learned deep features [5]. This is combined with the fact that acquiring samples for medical-related pattern recognition applications is not an easy task, due to health data privacy policies, political concerns, or even the homogenization of medical protocols. In general, this means dealing with particularly small-sized datasets. HDLSS datasets pose a number of challenges to general-purpose machine learning algorithms. These datasets usually embed very complex concepts due to a large number of relevant features, while they generally do not contain a sufficient number of instances for learning these concepts. This leads to several machine learning issues, well known to the community and often referred to as the curse of dimensionality.
For example, for the many methods based on distance metrics, like the \(k\)-Nearest Neighbors methods, the notion of "neighborhood" becomes progressively ill-defined in high dimensions [6]. This is because the distances between instances become more and more similar as the dimension increases [7]. Another example of difficulties encountered in learning HDLSS data is the presence of outliers. Some methods are known to be particularly sensitive to outliers [8], and such data are quite common in high-dimensional feature spaces [6]. A last example of well-known learning problems often encountered in the HDLSS context is overfitting. In this context, many general-purpose machine learning techniques are likely to overfit. This is the case for SVM-based methods, for example, for which the so-called data-piling phenomenon is particularly salient in the HDLSS context and leads to strong overfitting [9]. The most common approach in the literature for dealing with HDLSS learning tasks is based on dimensionality reduction. The key idea is to reduce the dimension so as to transform a high-dimensional problem into a learning problem where \(m\approx n\). However, these methods can be considered workarounds rather than true solutions for learning HDLSS datasets. Moreover, they often suffer from two major drawbacks:

* Using dimensionality reduction often results in a significant loss of information if the features are mostly relevant and poorly correlated [10; 11].
* Selecting a small subset of the most relevant features may not yield good results if the amount of data is too small [12; 11].

In contrast, few methods have been proposed in the literature specifically to handle HDLSS learning tasks, which we review in Section 2. The most efficient ones are derived from the SVM principle, with a modified formulation of the underlying optimization problem in order to better adapt to the HDLSS specificities [13; 11]. However, we think that another promising approach is to use an HDLSS-compliant similarity measure as a kernel for SVM, instead of relying on a specific problem formulation. This idea has recently been applied to multi-view learning since, in this context, similarity representations make it easy to merge the different views [10]. This approach, named Random Forest Dissimilarity (RFD), leans on Random Forest classifiers to build dissimilarity representations that are then used as pre-computed kernels in SVM classifiers. We show in this paper that this principle can be straightforwardly and efficiently applied to HDLSS classification problems, for which we think it presents real assets: (i) learning based on (dis)similarities between instances is a good way to deal with particularly small-sized datasets, and (ii) the Random Forest (dis)similarity measure is known to be particularly robust to high dimensions. Therefore, this work presents:

* a transposition of the RFD method to HDLSS classification tasks, with an emphasis on its strengths in facing the HDLSS challenges;
* a rigorous experimental validation, including comparisons with several state-of-the-art HDLSS learning methods on 40 real-world problems, along with a thorough statistical analysis of the results.

Note that, for simplicity, we focus only on classification problems in this study. However, our proposal is straightforwardly applicable to regression tasks, as are all the other methods used for comparison. The remainder of this paper is organized as follows. Section 2 reviews the main state-of-the-art approaches for HDLSS learning.
Section 3 presents the Random Forest SVM method and discusses its assets for HDLSS classification. Section 4 describes the experimental setting, followed by the presentation and analysis of the results in Section 5. Finally, Section 6 gives our conclusions and future work.

## 2 Related Work

This section focuses on solutions to address the challenges posed by HDLSS learning tasks and provides an overview of the leading solutions that can be found in the literature, with the exception of dimensionality reduction approaches, not included for the reasons explained in the introduction.

### Limitations of traditional methods for dealing with HDLSS problems

Traditional general-purpose machine learning techniques like Discriminant Analysis [14], \(k\)-Nearest Neighbors [15], and Support Vector Machines [9] usually fail to handle HDLSS datasets, mainly because such a context leads to ill-posed problems or to unsuitable learning conditions. For example, Linear Discriminant Analysis (LDA) suffers from a well-known problem when the dimension is larger than the number of training instances, i.e., \(m\gg n\). The underlying principle of LDA is to find a projection of the data in which the between-class separability is maximized while the within-class variability is minimized. To do so, it uses a within-class scatter matrix that is known to be singular when \(m\gg n\). This is an important problem since the non-singularity of this matrix is required to find the LDA basis vectors. The usual way to circumvent this situation is either to perform dimensionality reduction beforehand (which is beyond the scope of this work, as previously explained) or to use regularization techniques, as in Regularized Discriminant Analysis [14], which has been successfully used for real-world HDLSS problems [16]. Distance-based classification techniques, like the \(k\)-Nearest Neighbors family of methods, are classifiers that usually perform well when \(n>m\). However, they are also known to suffer from the curse of dimensionality in the opposite situation because the pairwise distances between all observations concentrate around a single value in this case [17]. As before, a large part of the solutions proposed in the literature are based on dimensionality reduction techniques, e.g., [18]. A few others, however, propose alternative mechanisms for dealing with HDLSS data, such as weighted voting schemes [19], fuzzy neighborhoods [20], or new proximity measurements [21], to name a few. Nevertheless, most state-of-the-art methods tailored for HDLSS classification in the literature are based on adaptations of SVM classifiers. SVMs are known to be particularly prone to overfitting in the HDLSS context, which is often illustrated through the data-piling phenomenon [9]. The following section details this phenomenon as well as the different solutions that have been proposed in the literature.

### SVM-inspired methods

In binary classification, the underlying principle of Support Vector Machines (SVM) [22] is to find a hyperplane that best separates the instances according to their classes by maximizing the distance (called the margin) to its closest instances (called the support vectors). The solution is the hyperplane whose parameters minimize the following problem:

\[\min_{\mathbf{w},b,\{\xi_{i}\}} \frac{1}{2}\|\mathbf{w}\|^{2}+C\sum_{i=1}^{n}\xi_{i} \tag{1}\]
\[\text{s.t. } y_{i}\left(\mathbf{w}^{\top}\mathbf{x}_{i}+b\right)\geq 1-\xi_{i},\ i=1,\ldots,n\]
\[\xi_{i}\geq 0,\ i=1,\ldots,n\]
where \(\mathbf{w}\) is the normal vector of the hyperplane, \(b\) its intercept term, and \(C\) is a regularization hyperparameter controlling the trade-off between maximizing the margin and tolerating some training errors when the problem is not strictly linearly separable. The behavior of this SVM method for HDLSS classification has been extensively analyzed and discussed in [9]. The authors show that in such a setting, SVM classifiers face the so-called _data-piling_ problem. This problem arises specifically with HDLSS datasets because, in this case, a large proportion of the training instances are support vectors, i.e. they lie on the margin boundaries resulting from the minimization of Equation 1. When projected in the discriminant direction (i.e. onto the normal vector \(\mathbf{w}\) obtained by minimizing Equation 1), all these support vectors "pile up on top of each other", i.e. they are projected onto exactly two points, one for each class. In this discriminant projection, the separating hyperplane is exactly halfway between these two points. Nevertheless, this usually reflects severe overfitting since independent test instances may not be projected the same way in the discriminant direction. As a result, while the resulting linear classifier perfectly fits (most of) the training instances, there is a strong risk that it will not generalize well to new data points. This phenomenon has been widely illustrated and analyzed in the literature and we refer the reader to [9, 23, 2] for further details. Variations have been proposed to solve this problem, mainly by modifying the underlying optimization problem. For example, Distance Weighted Discrimination (DWD) [9] leans on a minimization problem for which the best separating hyperplane is the one that maximizes the harmonic mean of all distances to the hyperplane:

\[\min_{\mathbf{w},b,\{\xi_{i}\}} \sum_{i=1}^{n}\left(\frac{1}{r_{i}}+C\xi_{i}\right) \tag{2}\]
\[\text{s.t. } r_{i}=y_{i}\left(\mathbf{w}^{\top}\mathbf{x}_{i}+b\right)+\xi_{i},\quad r_{i}\geq 0,\ \xi_{i}\geq 0,\ \|\mathbf{w}\|^{2}\leq 1\]

In this process, all training instances are taken into account to find the hyperplane, instead of relying only on the support vectors. A variant of this principle, named Weighted Distance Weighted Discrimination (wDWD) [23], has been proposed to allow for more flexibility and robustness, as the original method is known to be sensitive to imbalanced classes and quickly becomes computationally expensive as the number of training instances increases [2]. Another SVM-inspired method proposed in [2], the Population-Guided Large Margin Classification (PGLMC) method, aims to take up the idea of DWD while improving its computational performance. Equation 1 is modified to take into account the distance between the centroids of the classes (instead of all the training instances), ensuring the training instances of both classes are as far apart as possible along the projection direction \(\mathbf{w}\):

\[\min_{\mathbf{w},b,\{\xi_{i}\}} \frac{\|\mathbf{w}\|^{2}}{(m_{1}-m_{2})^{\top}\mathbf{w}}+C\sum_{i=1}^{n}\xi_{i} \tag{3}\]
\[\text{s.t. } y_{i}\left(\mathbf{w}^{\top}\mathbf{x}_{i}+b\right)\geq 1-\xi_{i},\ i=1,\ldots,n\]
\[(m_{1}-m_{2})^{\top}\mathbf{w}\geq 2 \tag{4}\]
\[\xi_{i}\geq 0,\ i=1,\ldots,n\]

where \(m_{1}\) (resp. \(m_{2}\)) is the centroid of the first class (resp. the second class).
Following a similar principle, a slightly different formulation is proposed in [11] that still maximizes the distance between classes but also strives to distribute the points as much as possible in the projection direction to avoid data-piling. The resulting method is named the No-separated Data Maximum Dispersion classifier (NPDMD).

### SVM with an HDLSS-robust kernel

The solutions mentioned above are all based on different formulations of the underlying optimization problem, whose solution leads to the final linear classifier. However, SVMs are known to allow the learning of non-linear classifiers using the kernel trick. Intuitively, it consists in applying a non-linear projection of the data into an _implicit_ feature space in which a separating hyperplane exists. The key feature of this projection is that it does not require coordinates to be calculated explicitly, as all operations are performed through scalar products in this space, calculated with a kernel function. Formally, this trick is based on the Lagrangian dual form of the SVM optimization problem:

\[\max_{\boldsymbol{\alpha}} \sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}y_{i}y_{j}\mathbf{x}_{j}^{\top}\mathbf{x}_{i} \tag{5}\]
\[\text{s.t. } \alpha_{i}\geq 0,\quad i=1,\ldots,n \tag{6}\]
\[\sum_{i=1}^{n}\alpha_{i}y_{i}=0 \tag{7}\]

where the \(\alpha_{i}\) are the Lagrange multipliers. This dual form can be efficiently solved by a quadratic programming algorithm, with the notable advantage of searching for \(n\) parameters (the Lagrange multipliers) instead of the \(m+1\) parameters of the primal version (the \(m\) values of \(\mathbf{w}\) and \(b\)). This makes it particularly suitable for HDLSS problems where \(m\gg n\). It also allows the resulting classifier to be expressed as a function of the support vectors:

\[h(\mathbf{x})=\sum_{i=1}^{n}\alpha_{i}y_{i}\mathbf{x}_{i}^{\top}\mathbf{x}+b=\sum_{i=1}^{n}\alpha_{i}y_{i}K(\mathbf{x}_{i},\mathbf{x})+b \tag{8}\]

As a consequence, all instances are only accessed through scalar products, allowing the use of any kernel function \(K\), as in the right-hand expression of Equation 8. We refer the reader to [24] for further details about the kernel trick, and kernel methods in general. There are several popular kernels for SVM in the literature, but the best-performing one for a wide range of real-world problems is the radial basis function (RBF) kernel, defined as:

\[K(\mathbf{x}_{i},\mathbf{x}_{j})=\exp\left(-\gamma\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}\right) \tag{9}\]

where \((\mathbf{x}_{i},\mathbf{x}_{j})\) is any pair of instances and \(\gamma\) is a hyperparameter. It is important to note that the behavior of the resulting model is highly sensitive to the value of \(\gamma\) and that it must be tuned in conjunction with the regularization parameter \(C\). The traditionally recommended protocol for tuning these hyperparameters is detailed in Section 4. The asymptotic behaviors of SVM with the RBF kernel in the HDLSS context are further investigated in [3]. This study shows that nonlinear SVM classifiers with RBF kernels are highly biased in the HDLSS context, especially with imbalanced classes, and we therefore believe that using an HDLSS-robust kernel would be more relevant. For this, one can rely on the fact that kernels can be directly interpreted as similarity measurements [25]. In fact, any similarity measure can be used as a kernel provided that it fulfills specific mathematical conditions [26; 10].
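To make the precomputed-kernel mechanism concrete before introducing the RF similarity, the sketch below (scikit-learn, synthetic HDLSS-shaped data, illustrative \(\gamma\) and \(C\) values) checks that an SVC fed an externally computed RBF Gram matrix reproduces the built-in RBF kernel; any p.s.d. similarity matrix can be substituted for the Gram matrix in exactly the same way:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=2000, random_state=0)
X_tr, X_te, y_tr, y_te = X[:40], X[40:], y[:40], y[40:]

gamma = 1e-4
svm_rbf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X_tr, y_tr)
# Same model, but with the kernel supplied as a precomputed Gram matrix
svm_pre = SVC(kernel="precomputed", C=1.0).fit(rbf_kernel(X_tr, gamma=gamma), y_tr)

pred_rbf = svm_rbf.predict(X_te)
pred_pre = svm_pre.predict(rbf_kernel(X_te, X_tr, gamma=gamma))  # (n_test, n_train)
print(np.array_equal(pred_rbf, pred_pre))  # True: same decision function
```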
Therefore, the HDLSS classification challenges could be tackled within the SVM framework by proposing a suitable similarity measure, instead of modifying the underlying SVM optimization problem. This is the main idea of the method we propose to evaluate in this work, described in the next section.

## 3 The RFSVM method

### Using Random Forest as a kernel

Random Forests are very versatile general-purpose learning methods that have been shown to be accurate for many real-world problems [27]. These methods are also known to provide a number of mechanisms for analysis and interpretability, such as a similarity (or proximity) measure [10]. For computing the RF similarity between any pair of instances, one must have a previously trained RF classifier, noted \(H(\mathbf{x})=\{h_{k}(\mathbf{x})\ |\ 1\leq k\leq M\}\), made up of \(M\) decision trees \(h_{k}\). Any RF learning algorithm can be used for that purpose, as the similarity measurement leans on the final ensemble of decision trees grown during learning. Note that for all experiments of this work, Breiman's Random Forest method [28] has been used, via the implementation proposed in the Scikit-learn Python library [29]. The similarity between two instances \((\mathbf{x}_{i},\mathbf{x}_{j})\) is inferred by the forest by comparing the descending paths followed by both instances in each tree: let \(\mathcal{L}_{k}\) denote the set of leaves of \(h_{k}\), and let \(l_{k}(\mathbf{x})\) denote a function from the input domain \(\mathcal{X}\) to \(\mathcal{L}_{k}\) that returns the leaf of \(h_{k}\) where \(\mathbf{x}\) lands when one wants to predict its class. The similarity measure \(s_{k}\) is defined as in Equation 10: if the two instances \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) land in the same leaf of \(h_{k}\), then the similarity between both instances is set to 1, else it is equal to 0.

\[s_{k}(\mathbf{x}_{i},\mathbf{x}_{j})=\left\{\begin{array}{ll}1,&\text{if }l_{k}(\mathbf{x}_{i})=l_{k}(\mathbf{x}_{j})\\ 0,&\text{otherwise}\end{array}\right. \tag{10}\]

The RF similarity measure \(s_{H}(\mathbf{x}_{i},\mathbf{x}_{j})\) derived from the whole forest \(H\) consists in calculating \(s_{k}\) for each tree \(h_{k}\), and in averaging the resulting values over the \(M\) trees:

\[s_{H}(\mathbf{x}_{i},\mathbf{x}_{j})=\frac{1}{M}\sum_{k=1}^{M}s_{k}(\mathbf{x}_{i},\mathbf{x}_{j}) \tag{11}\]

A remarkable property of this similarity measure is that it can be assimilated to a positive semi-definite (p.s.d.) kernel function [10]. Similarly, one can show that the similarity matrix \(\mathbf{S}_{H}\):

\[\mathbf{S}_{H}=\begin{bmatrix}s_{H}(x_{1},x_{1})&s_{H}(x_{1},x_{2})&\dots&s_{H}(x_{1},x_{n})\\ s_{H}(x_{2},x_{1})&s_{H}(x_{2},x_{2})&\dots&s_{H}(x_{2},x_{n})\\ \dots&\dots&\dots&\dots\\ s_{H}(x_{n},x_{1})&s_{H}(x_{n},x_{2})&\dots&s_{H}(x_{n},x_{n})\end{bmatrix} \tag{12}\]

whose elements are the RF similarity measures between each pair of training instances, is a positive semi-definite matrix (see the proof in Appendix A of [10]). As a consequence, such a matrix can be used in a kernel method as a pre-computed kernel. In a nutshell, using such a pre-computed kernel with an SVM classifier, noted RFSVM in the following, consists of:

1. Training a Random Forest classifier \(H\) on the training set \(T\);
2. Building a similarity matrix \(\mathbf{S}_{H}\) from \(H\);
3. Feeding an SVM classifier with \(\mathbf{S}_{H}\) as a pre-computed kernel.
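This three-step recipe, together with the prediction phase described next, maps directly onto standard Python tooling. The sketch below is an illustrative implementation rather than the authors' exact code: it relies on scikit-learn's `RandomForestClassifier.apply`, which returns the leaf reached by each instance in each tree, and vectorizes the leaf comparison of Equation 10 with NumPy broadcasting (hyperparameter values are placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def rf_similarity(leaves_a, leaves_b):
    """Fraction of trees in which two instances land in the same leaf (Eq. 11)."""
    return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=2)

def fit_rfsvm(X_tr, y_tr, n_trees=500, C=1.0):
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X_tr, y_tr)
    leaves_tr = rf.apply(X_tr)                  # (n_train, M) leaf indices
    S_tr = rf_similarity(leaves_tr, leaves_tr)  # (n_train, n_train) precomputed kernel
    svm = SVC(kernel="precomputed", C=C).fit(S_tr, y_tr)
    return rf, leaves_tr, svm

def predict_rfsvm(rf, leaves_tr, svm, X_te):
    S_te = rf_similarity(rf.apply(X_te), leaves_tr)  # (n_test, n_train)
    return svm.predict(S_te)
```

Note that the broadcasted comparison allocates an \(n\times n\times M\) boolean array, which is unproblematic precisely in the HDLSS setting where \(n\) is small, and that the diagonal of ones in Equation 12 emerges automatically since every instance shares its own leaf in all trees.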
As for the prediction phase, once the RFSVM classifier is trained this way, an unseen testing instance \(\mathbf{x}\) can be predicted as follows:

1. The similarity values between \(\mathbf{x}\) and all the \(n\) training instances from \(T\) are computed, leading to an \(n\)-sized vector of \(s_{H}(\mathbf{x},\mathbf{x}_{i}),i=1,\ldots,n\);
2. This vector is given as input to the RFSVM classifier for prediction.

The whole learning procedure is further detailed in Algorithm 1. Note that the Random Tree building method used in this work is the CART-based implementation from the Scikit-learn library [29], and that the SVM solver is LIBSVM, which implements the sequential minimal optimization (SMO) algorithm for kernelized SVMs [30].

```
input  : T, a training set composed of n instances (x_i, y_i)
input  : M, the number of trees in the random forest
input  : Theta, the random tree learning hyperparameter values
input  : C, the SVM regularization hyperparameter value
output : SVM_H, a RFSVM model trained on T

begin
    // 1. Train a random forest composed of M random trees
    H <- {}
    for k = 1 to M do
        T_k <- BootstrapSampling(T)
        h_k <- RandomTree(T_k, Theta)
        H <- H U {h_k}
    end for
    // Note: the bootstrap sampling and random tree methods used in this
    // work are those of the Scikit-learn random forest implementation [29]

    // 2. Compute the similarity matrix from the random forest
    S_H <- I_n                        // init. S_H to an n x n identity matrix
    for i = 1 to n - 1 do             // for each pair x_i, x_j in T
        for j = i + 1 to n do
            for k = 1 to M do         // for each tree h_k in H
                l_i <- getLeaf(h_k, x_i)   // leaf of h_k where x_i lands
                l_j <- getLeaf(h_k, x_j)   // leaf of h_k where x_j lands
                if l_i = l_j then
                    S_H(i, j) <- S_H(i, j) + 1
                end if
            end for
            S_H(i, j) <- S_H(i, j) / M
            S_H(j, i) <- S_H(i, j)
        end for
    end for

    // 3. Use S_H as a precomputed kernel for the SVM learning
    SVM_H <- LIBSVMSolver(S_H, C)
    // Note: the LIBSVM solver implements the sequential minimal
    // optimization (SMO) algorithm for kernelized SVM [30]
end
```
**Algorithm 1** The RFSVM learning method

### Discussion

It is worth noting that the RFSVM procedure can be applied with any similarity measure, provided that the resulting similarity matrix is p.s.d. For example, the well-known cosine measure could be used to replace the Random Forest similarity measure. However, we would like to stress that the Random Forest similarity measure is particularly suitable for classification tasks. The reason is that it is computed in such a way that it reflects the class membership of the instances: according to this measure, two instances that belong to the same class are more likely to be similar than two instances from different classes. This is due to the fact that the Random Forest classifier is built beforehand by taking the class into account, so that the leaves of all the trees are expected to gather instances from the same class. Similarly, metric learning methods could replace the RF classifier for inferring the similarity values.
However, we argue that the state-of-the-art metric learning methods require a formulation of the metric beforehand, which is not the case for RF methods, and, more importantly, they are known to be sensitive to high dimensions. For example, most of the state-of-the-art metric learning methods that are based on the Mahalanobis distance suffer from two important drawbacks for HDLSS classification:

1. They are computationally intractable in high dimensions since the number of parameters to learn (the covariance matrix elements) is \(\mathcal{O}(m^{2})\).
2. They face a strong risk of overfitting since the number of training instances used to estimate these parameters is very low in the HDLSS context.

Consequently, we argue that classical metric learning methods are not suitable for HDLSS problems, contrary to RF classifiers, which are known to be very robust to high dimensions without the need for a proportionally large training set. To support this statement, the most popular metric learning method, namely Large Margin Nearest Neighbors (LMNN) [31], has been included in the experiments as an alternative to RF classifiers for building a pre-computed kernel in SVM. The resulting method is named LMNNSVM in the following. The RFSVM method explained in the previous section has been successfully applied to multi-view classification problems [32; 10; 33]. These studies have mainly focused on using RF (dis)similarity matrices on each view and fusing them in order to benefit from the complementarity between the different views. However, the extent to which the success of this approach depends on how well it exploits complementarities in multi-view learning, or on the use of the RF similarity measure itself, has not yet been studied. Therefore, the main contribution of this work is to extend the experimental study and validate this approach on regular HDLSS single-view classification problems.

## 4 Experiments

This section details the experimental protocol, as well as its underlying goal of evaluating how the proposed method compares to general-purpose machine learning methods, similarity-based methods, and HDLSS-specific methods for a variety of datasets that exhibit different levels of HDLSS.

### Datasets

Our experimentation encompasses 40 public datasets, among which 21 datasets were acquired from the OpenML repository [34], 3 text processing datasets from the UCI repository [35], and 16 medical datasets from [36]. In the literature, the only consensus to formally define an HDLSS problem is "\(m\gg n\)" [3; 2; 1]. Therefore, _HDLSSness_ could be measured by the ratio between the number of instances and the number of features in the associated dataset. However, this does not take into account the number and imbalance of classes, which can have a major impact on the difficulties arising from HDLSS learning. Therefore, we propose to quantify a level of HDLSS for each dataset, with the measure defined as:

\[\Omega=\frac{1}{m}\times\frac{\sum_{j=1}^{c}n_{j}}{c}, \tag{13}\]

where \(c\) is the number of classes, \(m\) is the number of features, and \(n_{j}\) is the number of instances from the \(j\)-th class in a dataset. This \(\Omega\) measure corresponds to the average number of instances per class divided by the number of features in a dataset. Consequently, the smaller the value of \(\Omega\), the more HDLSS a dataset is.
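The \(\Omega\) measure of Equation 13 is trivial to compute from a labeled dataset; a small sketch, assuming integer-encoded class labels:

```python
import numpy as np

def hdlss_level(y, n_features):
    """Omega of Eq. (13): average number of instances per class / number of features."""
    class_counts = np.bincount(np.asarray(y))  # n_j for each class j
    return class_counts.mean() / n_features

# e.g. the leukemia dataset: 72 instances, 2 classes, 7129 features
# hdlss_level(y_leukemia, 7129) -> approximately 0.005, as in Table 1
```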
Using the average number of instances per class, instead of the total number of instances in the dataset, helps limit the bias introduced by strong class imbalances, such as those found in some of the datasets we selected in our experiments. Table 1 gives a description of all the datasets used in our experiments, with the number of instances, the number of features, the imbalance ratio (IR), the number of classes, and the \(\Omega\) value. The IR is computed by dividing the number of instances from the majority class by the number of instances from the minority class. In this table, datasets are sorted by increasing value of \(\Omega\). One can observe that as \(\Omega\) increases, the number of instances also increases and the dimensionality decreases. This table also contains a separation between HDLSS datasets in the upper part and non-HDLSS datasets in the lower part, the separation being placed at the threshold \(\Omega=1\). This highlights that the datasets have been chosen to cover a wide range of cases, including datasets corresponding to traditional classification problems. This will also allow us to show that the RFSVM method remains competitive in a more classical learning context.

### Methods and parametrization

In addition to the RFSVM method, several learning methods were selected for comparison, in three families of methods: general-purpose methods, SVM variants, and similarity-based methods. In the first group, the Random Forest classifier and the Extreme Gradient Boosting (XGBoost) method were selected because of their state-of-the-art performance on various ML problems [28; 37; 27]. Regarding SVM variants, the regular method with RBF kernel [22] has been retained, as well as the DWD method presented in Section 2. The reasons why the DWD method has been retained in our experiments instead of the other SVM variants listed in Section 2 are (i) that the results obtained in [11] from the experimental comparison between all these methods are very similar, without any statistical test of significance supporting the superiority of one method over the others, and (ii) that it is the only one with a freely available implementation. As for similarity-based methods, the cosine distance and the LMNN variant of the Mahalanobis distance were used in the same way as with the RFSVM method, i.e. as a precomputed kernel, to support the claims given in the discussion subsection of Section 3. For each dataset, all classifiers had their hyperparameters tuned using a 3-fold cross-validation procedure. The tuning process was performed using the hyperopt library [38]. This library requires as input a search space for each hyperparameter and the number of evaluations for the tuning process. In the following experiments, the number of evaluations was set to 100, and the optimization criterion was accuracy maximization. Internally, hyperopt conducts hyperparameter optimization by converting the search process into a generative process using Tree-structured Parzen Estimators (TPEs) [39]. We refer the reader to [38] for more details about its functioning.
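As an illustration of this tuning protocol, the sketch below optimizes the \(C\) and \(\gamma\) hyperparameters of an RBF SVM over the grids of Table 2 (below) with hyperopt's TPE algorithm and a 3-fold cross-validated accuracy objective; `X_train` and `y_train` are placeholders for a given training split:

```python
from hyperopt import fmin, tpe, hp
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def objective(params):
    model = SVC(kernel="rbf", C=params["C"], gamma=params["gamma"])
    acc = cross_val_score(model, X_train, y_train, cv=3, scoring="accuracy").mean()
    return -acc  # hyperopt minimizes, so negate the accuracy

space = {
    "C": hp.choice("C", [10.0 ** i for i in range(-2, 5)]),
    "gamma": hp.choice("gamma", [10.0 ** i for i in range(-4, 3)]),
}
# best holds the index of the selected value within each grid
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)
```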
\begin{table} \begin{tabular}{|c c c c c c|} \hline Name & Instances & Features & IR & Classes & \(\Omega\) \\ \hline \hline UMIST\_Faces\_Cropped [34] & 575 & 10304 & 2.526 & 20 & 0.003 \\ \hline leukemia [34] & 72 & 7129 & 1.88 & 2 & 0.005 \\ \hline alizadeh-2000-v3 [36] & 62 & 2091 & 2.333 & 4 & 0.007 \\ \hline tr45.wc [34] & 690 & 8261 & 11.429 & 10 & 0.008 \\ \hline laiho-2007 [36] & 37 & 2202 & 3.625 & 2 & 0.008 \\ \hline bittner-2000 [36] & 38 & 2201 & 1.0 & 2 & 0.009 \\ \hline arcene [34] & 200 & 10000 & 1.273 & 2 & 0.010 \\ \hline ramaswamy-2001 [36] & 190 & 1363 & 3.0 & 14 & 0.010 \\ \hline armstrong-2002-v2 [36] & 72 & 2194 & 1.4 & 3 & 0.011 \\ \hline su-2001 [36] & 174 & 1571 & 4.667 & 10 & 0.011 \\ \hline lapointe-2004-v2 [36] & 110 & 2496 & 3.727 & 4 & 0.011 \\ \hline golub-1999-v2 [36] & 72 & 1868 & 4.222 & 3 & 0.013 \\ \hline Dexter [34] & 600 & 20000 & 1.0 & 2 & 0.015 \\ \hline yeoh-2002-v2 [36] & 248 & 2526 & 5.267 & 6 & 0.016 \\ \hline tomlins-2006-v2 [36] & 92 & 1288 & 2.462 & 4 & 0.018 \\ \hline khan-2001 [36] & 83 & 1069 & 2.636 & 4 & 0.019 \\ \hline west-2001 [36] & 49 & 1198 & 1.042 & 2 & 0.020 \\ \hline eating [34] & 945 & 6373 & 1.176 & 7 & 0.021 \\ \hline bhattacharjee-2001 [36] & 203 & 1543 & 23.167 & 5 & 0.026 \\ \hline micro-mass [34] & 360 & 1300 & 1.0 & 10 & 0.028 \\ \hline oh15.wc [34] & 913 & 3100 & 2.962 & 10 & 0.029 \\ \hline oh10.wc [34] & 1050 & 3238 & 3.173 & 10 & 0.032 \\ \hline shipp-2002-v1 [36] & 77 & 798 & 3.053 & 2 & 0.048 \\ \hline cane-9-half [34] & 540 & 856 & 1.283 & 9 & 0.070 \\ \hline OVA\_Colon [34] & 1545 & 10935 & 4.402 & 2 & 0.071 \\ \hline OVA\_Breast [34] & 1545 & 10935 & 3.491 & 2 & 0.071 \\ \hline imdb [35] & 748 & 3047 & 1.066 & 2 & 0.123 \\ \hline cnae-9 [34] & 1080 & 856 & 1.0 & 9 & 0.140 \\ \hline lsvt [34] & 126 & 310 & 2.0 & 2 & 0.203 \\ \hline yelp [35] & 1000 & 2033 & 1.0 & 2 & 0.246 \\ \hline amazon [35] & 1000 & 1847 & 1.0 & 2 & 0.271 \\ \hline chowary-2006 [36] & 104 & 182 & 1.476 & 2 & 0.286 \\ \hline \hline chen-2002 [36] & 179 & 85 & 1.387 & 2 & 1.053 \\ \hline gina [34] & 3153 & 970 & 1.034 & 2 & 1.625 \\ \hline madelon [34] & 2600 & 500 & 1.0 & 2 & 2.600 \\ \hline scene [34] & 2407 & 299 & 4.585 & 2 & 4.025 \\ \hline wdbc [34] & 569 & 30 & 1.684 & 2 & 9.483 \\ \hline led24 [34] & 3200 & 24 & 1.139 & 10 & 13.333 \\ \hline segment [34] & 2310 & 19 & 1.0 & 7 & 17.368 \\ \hline spambase [34] & 4601 & 57 & 1.538 & 2 & 40.360 \\ \hline \end{tabular} \end{table} Table 1: Datasets description with the number of instances, the number of features, the imbalance ratio (IR), the number of classes, and the \(\Omega\) value. Datasets with \(\Omega<1\) are considered to be HDLSS datasets, whereas \(\Omega\geq 1\) are non-HDLSS datasets. Regarding the Random Forest method, the maximum tree depth1, the maximum number of features assessed for a split decision, the minimum number of samples at leaf nodes, the minimum number of samples for a split, and the number of trees have been tuned following the values given in the upper part of Table 2. The search space for XGBoost is given in the middle part of the same table. XGBoost search space follows the suggestion given in [40], where the maximum tree depth, instance subsample ratio, column sample by tree portion, regularization lambda, and the maximum number of iterations were tuned. Finally, the search space for SVM with the gaussian kernel is depicted in the bottom part of Table 2, where the hyperparameter \(C\) and \(\gamma\) were optimized. For all other SVM-based methods, i.e. 
RFSVM, DWD, COSSVM and LMNNSVM, only \(C\) needs to be optimized, and this was also done following the values in Table 2. Footnote 1: The None option stands for fully grown trees, i.e., the trees are grown until all leaf nodes have pure class distributions.

\begin{table} \begin{tabular}{|l c|} \hline \multicolumn{2}{|c|}{**Random Forest**} \\ \hline Max. Depth & \(\{10^{i}|i=1,\ldots,10\}\) and None \\ \hline Max. Features & \(\{1\%,\,5\%,\,10\%,\,20\%,\,30\%\}\) \\ \hline Min. Samples Leaf & \(\{1,\,2,\,4\}\) \\ \hline Min. Samples Split & \(\{2,\,5,\,10\}\) \\ \hline Number of trees & \(500\) \\ \hline \hline \multicolumn{2}{|c|}{**XGBoost**} \\ \hline Max. Depth & \(\{x|x\in\mathbb{N},4\leq x\leq 15\}\) \\ \hline Subsample & \([0.8,1]\) \\ \hline Column sample by tree & \([0.5,1]\) \\ \hline Regularization Lambda & \([0,1]\) \\ \hline Max. number of iterations & \(500\) \\ \hline \hline \multicolumn{2}{|c|}{**SVM**} \\ \hline C & \(\{10^{i}|\ i=-2,\ldots,4\}\) \\ \hline \(\gamma\) & \(\{10^{i}|\ i=-4,\ldots,2\}\) \\ \hline \hline \multicolumn{2}{|c|}{**RFSVM, COSSVM, LMNNSVM, and DWD**} \\ \hline C & \(\{10^{i}|\ i=-2,\ldots,4\}\) \\ \hline \end{tabular} \end{table} Table 2: Hyperparameter search space for Random Forest, XGBoost, SVM, and other SVM-based methods (RFSVM, COSSVM, LMNNSVM, and DWD).

### Validation protocol and implementation details

For each dataset used in this experimental comparison, all the methods were tested 10 times, each time with a random half of the dataset for training and the remaining half for testing. The hyperparameter setting procedure was performed on the training set, and the performance was measured with the traditional accuracy measure. Two different statistical tests of significance have been used to analyze the differences in performance between the methods: (i) the Friedman test along with the Nemenyi post-hoc test [41], and (ii) the Bayesian sign test [42]. The Friedman/Nemenyi test is classically used to assess the statistical significance of a comparison of several methods over multiple datasets, based on the average ranks of the methods. We refer the reader to [41] for more details about its functioning and the reasons why this test is advised for this type of comparison. In contrast, the Bayesian sign test is a pairwise test based on the difference in performance of two methods over multiple datasets. Unlike frequentist null-hypothesis tests such as the Friedman test, Bayesian analysis can provide more insight than simply rejecting the hypothesis that the two classifiers are of equivalent performance. In particular, it outputs the probability that one classifier \(a\) is practically better than another classifier \(b\), based on a set of results from multiple datasets. It also makes it possible to integrate into the analysis a "region of practical equivalence" (_rope_), defined as the average performance gap below which two classifiers are considered practically equivalent. For instance, with _rope_\(=0.05\), if the mean accuracy of classifier \(a\) is \(0.980\) and the mean accuracy of classifier \(b\) is \(0.976\), both classifiers will be considered practically equivalent in the Bayesian analysis because the difference \(0.980-0.976=0.004\) is less than the _rope_. In our experiments, we considered two values for _rope_, \(0.005\) and \(0.01\). Finally, all the experiments were executed in Python.
Random Forest (RF), SVM, and cosine distance implementations were those available in the _scikit-learn_ library [29] version 0.23.2. The implementation of DWD was found in the IDC9 repository2, and the PYLMNN library [43] version 1.6.4 was used for computing the LMNN distance. XGBoost was performed using the implementation provided in its original paper (version 1.1.1). Footnote 2: [https://github.com/idc9/dwd](https://github.com/idc9/dwd) ## 5 Results and Discussion Table 3 presents the mean accuracy rates (and standard deviations) obtained by the seven methods across all 40 datasets. The values in bold are the best mean accuracy rates obtained for each of the datasets. In this table, the datasets are separated into three groups: the very-HDLSS datasets at the top (when \(\Omega<0.015\)), the mid-HDLSS datasets in the middle (when \(0.015\leq\Omega<1\)), and non-HDLSS datasets at the bottom. In this section, we analyze these results first globally and then in more detail for each of these three groups. ### Analysis of overall results The first observation that can be made from Table 3 is that most of the values in bold, the best mean accuracy rates, are obtained by the methods on the right-hand side of the table, that is to say the kernel based SVM. More precisely, as shown in the last row of Table 3, RFSVM is the method that achieves the best performance on most of the datasets (on 16 of the 40 datasets), followed by COSSVM (on 7 of the 40 datasets), and LMNNSVM (on 5 of the 40 datasets, _ex aequo_ with Random Forest). For a more precise analysis of these global results, it is necessary to look at the results of post-hoc statistical tests. Figure 1 shows the Critical Difference diagram [41] drawn from the results of the Friedman/Nemenyi test. This diagram sorts the seven methods by their average rank across the 40 datasets and indicates whether the difference between them is statistically significant: methods that are connected by a thick line are the one for which the statistical test does not reject the null-hypothesis that the classifiers perform equally well. RFSVM and COSSVM are the two best methods on average but closely followed by Random Forest. Considering statistical significance, RFSVM is the method for which the difference is statistically significant with the most methods. The differences in performance between all other methods are globally not significant according to the Friedman/Nemenyi test. These first overall results confirm that RFSVM is particularly relevant for HDLSS classification. However, this requires further confirmation, with careful analysis of the results obtained on the HDLSS datasets, which we give in the following section. \begin{table} \begin{tabular}{|c c c c c c c c|} \hline Dataset & XGB & RF & SVM & DWD & RFSVM & COSSVM & LMNNSVM \\ \hline UMIST. & 0.928 \(\pm\) 0.016 & 0.985 \(\pm\) 0.013 & 0.365 \(\pm\) 0.402 & 0.948 \(\pm\) 0.009 & 0.988 \(\pm\) 0.010 & 0.970 \(\pm\) 0.011 & **0.989 \(\pm\) 0.006** \\ \hline leukemia & 0.944 \(\pm\) 0.045 & 0.964 \(\pm\) 0.031 & 0.950 \(\pm\) 0.051 & 0.922 \(\pm\) 0.057 & **0.969 \(\pm\) 0.023** & 0.961 \(\pm\) 0.045 & 0.956 \(\pm\) 0.052 \\ \hline aliz. 
& 0.771 \(\pm\) 0.071 & 0.877 \(\pm\) 0.064 & 0.890 \(\pm\) 0.060 & 0.897 \(\pm\) 0.061 & 0.897 \(\pm\) 0.054 & **0.906 \(\pm\) 0.053** & 0.903 \(\pm\) 0.050 \\ \hline tr45.wc & **0.970 \(\pm\) 0.008** & 0.949 \(\pm\) 0.012 & 0.815 \(\pm\) 0.070 & 0.665 \(\pm\) 0.107 & 0.954 \(\pm\) 0.007 & 0.925 \(\pm\) 0.008 & 0.900 \(\pm\) 0.046 \\ \hline lailho-2007 & 0.805 \(\pm\) 0.024 & 0.805 \(\pm\) 0.034 & 0.874 \(\pm\) 0.059 & 0.826 \(\pm\) 0.041 & 0.858 \(\pm\) 0.041 & 0.879 \(\pm\) 0.058 & **0.911 \(\pm\) 0.041** \\ \hline bittmer-2000 & 0.716 \(\pm\) 0.111 & 0.784 \(\pm\) 0.044 & 0.753 \(\pm\) 0.097 & 0.816 \(\pm\) 0.035 & 0.784 \(\pm\) 0.050 & **0.837 \(\pm\) 0.044** & 0.800 \(\pm\) 0.039 \\ \hline arcene & 0.722 \(\pm\) 0.026 & 0.767 \(\pm\) 0.043 & 0.633 \(\pm\) 0.116 & 0.820 \(\pm\) 0.036 & 0.807 \(\pm\) 0.038 & 0.835 \(\pm\) 0.045 & **0.853 \(\pm\) 0.042** \\ \hline ramas. & 0.705 \(\pm\) 0.037 & 0.751 \(\pm\) 0.027 & 0.635 \(\pm\) 0.038 & 0.669 \(\pm\) 0.035 & **0.773 \(\pm\) 0.021** & 0.760 \(\pm\) 0.042 & 0.617 \(\pm\) 0.026 \\ \hline arms. & 0.942 \(\pm\) 0.032 & **0.975 \(\pm\) 0.015** & 0.928 \(\pm\) 0.042 & 0.961 \(\pm\) 0.025 & 0.972 \(\pm\) 0.018 & 0.975 \(\pm\) 0.015 & 0.942 \(\pm\) 0.049 \\ \hline su-2001 & 0.895 \(\pm\) 0.026 & 0.883 \(\pm\) 0.027 & 0.897 \(\pm\) 0.021 & 0.883 \(\pm\) 0.025 & **0.920 \(\pm\) 0.024** & 0.894 \(\pm\) 0.025 & 0.897 \(\pm\) 0.027 \\ \hline lapo. & 0.804 \(\pm\) 0.044 & 0.809 \(\pm\) 0.030 & 0.818 \(\pm\) 0.047 & 0.815 \(\pm\) 0.041 & 0.851 \(\pm\) 0.039 & 0.833 \(\pm\) 0.034 & **0.853 \(\pm\) 0.043** \\ \hline gol. & 0.939 \(\pm\) 0.055 & 0.931 \(\pm\) 0.031 & 0.881 \(\pm\) 0.071 & 0.864 \(\pm\) 0.040 & **0.944 \(\pm\) 0.030** & 0.903 \(\pm\) 0.031 & 0.914 \(\pm\) 0.019 \\ \hline Dexter & 0.916 \(\pm\) 0.015 & 0.925 \(\pm\) 0.010 & 0.869 \(\pm\) 0.030 & 0.916 \(\pm\) 0.010 & 0.934 \(\pm\) 0.013 & **0.936 \(\pm\) 0.006** & 0.927 \(\pm\) 0.013 \\ \hline yoh. & 0.842 \(\pm\) 0.018 & 0.809 \(\pm\) 0.015 & 0.787 \(\pm\) 0.018 & 0.715 \(\pm\) 0.022 & **0.848 \(\pm\) 0.024** & 0.718 \(\pm\) 0.023 & 0.815 \(\pm\) 0.031 \\ \hline tomi. & 0.715 \(\pm\) 0.052 & 0.730 \(\pm\) 0.050 & 0.811 \(\pm\) 0.071 & 0.798 \(\pm\) 0.046 & 0.763 \(\pm\) 0.047 & 0.837 \(\pm\) 0.049 & **0.859 \(\pm\) 0.044** \\ \hline khan-2001 & 0.960 \(\pm\) 0.035 & **0.983 \(\pm\) 0.015** & 0.955 \(\pm\) 0.044 & 0.955 \(\pm\) 0.034 & 0.979 \(\pm\) 0.020 & 0.957 \(\pm\) 0.030 & 0.974 \(\pm\) 0.033 \\ \hline west-2001 & 0.860 \(\pm\) 0.041 & **0.884 \(\pm\) 0.045** & 0.860 \(\pm\) 0.048 & 0.856 \(\pm\) 0.057 & 0.880 \(\pm\) 0.040 & 0.860 \(\pm\) 0.057 & 0.864 \(\pm\) 0.060 \\ \hline eating & **0.560 \(\pm\) 0.022** & 0.532 \(\pm\) 0.025 & 0.148 \(\pm\) 0.000 & 0.551 \(\pm\) 0.027 & 0.556 \(\pm\) 0.018 & 0.403 \(\pm\) 0.206 & 0.300 \(\pm\) 0.159 \\ \hline bhatt. & 0.946 \(\pm\) 0.014 & 0.929 \(\pm\) 0.016 & 0.937 \(\pm\) 0.017 & 0.923 \(\pm\) 0.022 & **0.957 \(\pm\) 0.019** & 0.936 \(\pm\) 0.018 & 0.934 \(\pm\) 0.017 \\ \hline micro. & 0.929 \(\pm\) 0.032 & 0.931 \(\pm\) 0.023 & 0.429 \(\pm\) 0.040 & 0.912 \(\pm\) 0.040 & **0.937 \(\pm\) 0.024** & 0.908 \(\pm\) 0.020 & 0.904 \(\pm\) 0.027 \\ \hline oh15.wc & 0.824 \(\pm\) 0.014 & 0.825 \(\pm\) 0.008 & 0.742 \(\pm\) 0.041 & 0.558 \(\pm\) 0.170 & **0.835 \(\pm\) 0.006** & 0.803 \(\pm\) 0.025 & 0.763 \(\pm\) 0.010 \\ \hline oh10.wc & 0.831 \(\pm\) 0.012 & **0.836 \(\pm\) 0.013** & 0.759 \(\pm\) 0.021 & 0.654 \(\pm\) 0.103 & 0.835 \(\pm\) 0.017 & 0.765 \(\pm\) 0.019 & 0.736 \(\pm\) 0.014 \\ \hline ship. 
& 0.841 \(\pm\) 0.057 & 0.828 \(\pm\) 0.051 & \\ \hline \end{tabular} \end{table} Table 3: Mean accuracy rates and standard deviations obtained by the seven methods on the 40 datasets (the remaining rows are truncated in the source).

### Analysis of the results on HDLSS datasets

Since this work focuses on HDLSS classification, we now deepen the analysis for the HDLSS datasets, that is to say, for the datasets with \(\Omega<1.0\). Figure 2 gives the Critical Difference diagram obtained by considering the HDLSS datasets only, i.e. 32 of the 40 datasets. The main difference one can observe from this diagram, compared to the one given in Figure 1, is that LMNNSVM performs slightly better. This supports the conclusion that the kernel-based approaches are generally effective for HDLSS classification. It can also be noted that, in contrast, the performance of DWD is surprisingly poor compared to general-purpose machine learning methods. On the other hand, in line with the analysis given in [9], SVM is the least successful method for these tasks.

Figure 1: Critical difference diagram from the Friedman/Nemenyi test results on all the 40 datasets.

Figure 2: Critical difference diagram from the Friedman/Nemenyi test results on the HDLSS datasets (\(\Omega<1.0\)).

The results of the Bayesian test can now be used to make a more detailed comparison, by giving the probabilities that one classifier is more accurate than another. In the following, \(p(a>b)\) denotes the probability for the classifier \(a\) to be more accurate than the classifier \(b\) according to the Bayesian test. Similarly, \(p(a\sim b)\) denotes the probability that classifiers \(a\) and \(b\) perform equally well. In a nutshell, to estimate these probabilities, the Bayesian test uses the mean differences in accuracy between classifiers \(a\) and \(b\) on all the datasets, and deduces a distribution for \(p(a>b)\), \(p(a\sim b)\) and \(p(b>a)\). We refer the reader to [42] for more details on how this distribution is obtained. Then \(M\) trinomial vectors of probabilities are drawn at random from this distribution. These vectors are typically represented as points in the simplex having vertices \(\{(1,0,0),(0,1,0),(0,0,1)\}\). Three examples of such representations are given in Figure 3, for three pairwise comparisons, RFSVM vs RF, RFSVM vs COSSVM and SVM vs DWD, on the HDLSS datasets. These representations allow one to observe the proportion of points falling in each of the three zones corresponding to each of the three situations. By considering the number of points that fall in the three regions, one can estimate all three probabilities, \(p(a>b)\), \(p(a\sim b)\) and \(p(b>a)\).

Figure 3: Examples of trinomial vector visual representations for the Bayesian analysis on the 32 HDLSS datasets.

For our experimental comparison, these estimates are given as two color maps (the left one for _rope_\(=0.005\) and the right one for _rope_\(=0.01\)) in Figure 4. It should be read as follows: the value in the cell for the row of classifier \(a\) and the column of classifier \(b\) is \(p(a>b)\) according to the Bayesian test. Note that \(p(a\sim b)\) is the complement of the other two probabilities, i.e. \(p(a\sim b)=1-p(a>b)-p(b>a)\), and is not given in these color maps. Therefore, smaller values of both \(p(a>b)\) and \(p(b>a)\) are expected with larger _rope_ values. However, it should be noted that this analysis is subject to random draw, and so for cases where the differences in performance between \(a\) and \(b\) are larger than the _rope_, the opposite behavior may be observed, i.e., marginally larger probabilities may occur with larger _rope_ values.
In both color maps of Figure 4, the column corresponding to the RFSVM method is the one with the lowest values. This means that, overall, it is the method with the lowest probability of being outperformed by any other method. Thus, although RFSVM, COSSVM, and RF are not significantly different according to the Friedman/Nemenyi post-hoc test, the detailed pairwise analysis shows that both COSSVM and RF have very low probabilities of giving better results than RFSVM. Conversely, RFSVM has a high probability of being better than the cited methods. As for the comparison between SVM and DWD, these results confirm that the DWD method gives slightly better results on HDLSS datasets than the regular SVM method, but with \(p(\text{DWD}>\text{SVM})\) and \(p(\text{SVM}>\text{DWD})\) being very close to 0.5 each. Note that for _rope_\(=0.01\), most of the probabilities decrease as expected, as more of the pairwise comparisons now lie within the _rope_. However, the observed patterns did not change noticeably. This analysis is given considering all the datasets with \(\Omega<1\), that is to say, the ones at the top and in the middle part of Table 3. It is now interesting to focus on the 13 very-HDLSS datasets, i.e., where \(\Omega<0.015\). Figure 5 gives the Critical Difference diagram for these datasets only. What is interesting to note here is that LMNNSVM is now a lot more competitive and that all three similarity-based methods are clearly in the lead. The difference in average rank between these three methods and the other methods is even greater than it was in the previous results. On the other hand, these differences are considered less statistically significant by the Friedman/Nemenyi test. Nevertheless, when looking at the Bayesian analysis for these 13 very-HDLSS datasets in Figure 6, the probability of any similarity-based method being outperformed by XGBoost, RF, SVM or DWD is very low (see the upper right part of the color maps). This shows that in the most extreme cases of HDLSS classification, SVMs with a well-chosen similarity measure as kernel are particularly relevant.

Figure 4: Pairwise Bayesian analysis for HDLSS datasets. The value in each cell is \(p(a>b)\), i.e. the probability that the row classifier \(a\) outperforms the column classifier \(b\).

_A note on class imbalance_

Given the results in Table 1, one can see that some of the datasets have quite high imbalance ratios. In particular, three datasets show very high values in the 'IR' column of Table 1: the _tr45.wc_ dataset, the _bhattacharjee-2001_ dataset and the _yeoh-2002-v2_ dataset, for which the imbalance ratio is greater than 5. In class imbalance scenarios, it is well known that accuracy is not a suitable performance evaluation measure. To determine whether the high IR values affect the results presented so far, we give additional results for these three specific datasets. Table 4 gives the results obtained by the seven methods on these three datasets in terms of F1 (top) and accuracy scores (bottom). F1 represents the harmonic mean between precision and recall, and it is widely applied in the assessment of imbalanced classification tasks [44]. When working with non-binary problems, we used the micro-average for F1, which computes the metric globally by counting the total number of true positives, false negatives and false positives per class.
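For reference, this micro-averaged F1 is available directly in scikit-learn; a toy illustration with hypothetical labels:

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2, 2]   # hypothetical 3-class ground truth
y_pred = [0, 1, 1, 1, 2, 2, 0]   # hypothetical predictions
print(f1_score(y_true, y_pred, average="micro"))  # global TP/FP/FN aggregation
```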
When comparing values in both parts of the table, one can note that most of them are comparable and that the rank of all seven methods is globally similar with respect to both evaluation measures. It means that the high IR values do not call into question the conclusions drawn earlier, including for the imbalanced HDLSS datasets.

Figure 5: Critical Difference diagram from the Friedman/Nemenyi test results on the very-HDLSS datasets (\(\Omega<0.015\)).

Figure 6: Pairwise Bayesian analysis for very-HDLSS datasets. The value in each cell is \(p(a>b)\), i.e. the probability that the row classifier \(a\) outperforms the column classifier \(b\).

### Analysis of the results on non-HDLSS datasets

Finally, we present the results on non-HDLSS datasets in order to analyze whether the similarity-based approaches are still competitive for regular classification tasks, i.e., on datasets with \(\Omega\geq 1\). For that purpose, Figure 7 gives the Critical Difference diagram on the non-HDLSS datasets, that is to say, the ones in the bottom part of Table 1. From this diagram, one can see that the global ranking is quite different, with COSSVM and LMNNSVM in particular being far less competitive. In contrast, XGBoost is unsurprisingly far more accurate on average on these datasets. Nevertheless, this time, none of the differences in rank is statistically significant according to the Friedman/Nemenyi post-hoc test. However, we would like to emphasize that the RFSVM method is still ranked first on average and is particularly competitive with the state-of-the-art general-purpose classification methods, namely XGBoost and Random Forest. For a more detailed analysis, we also give the results of the Bayesian test in Figure 8. The competitiveness of RFSVM is confirmed by the fact that the probabilities in the RFSVM column of these color maps are still very low for these datasets.

## 6 Conclusion

HDLSS classification problems are unavoidable in many real-world pattern recognition problems, and having methods that provide a satisfactory solution to such problems is of crucial importance. Usually, they are tackled with dimensionality reduction techniques followed by the induction of general-purpose machine learning models. However, in many situations, dimensionality reduction techniques give unsatisfactory results and genuine HDLSS learning methods are needed. In this work, we show that one of these methods, RFSVM, is particularly efficient, regardless of the "degree of HDLSS" of the problem.
\begin{table} \begin{tabular}{|c c c c c c c c|} \hline dataset & XGB & RF & SVM & DWD & RFSVM & COSSVM & LMNNSVM \\ \hline \hline tr45.wc & **0.972 \(\pm\) 0.007** & 0.946 \(\pm\) 0.001 & 0.745 \(\pm\) 0.017 & 0.754 \(\pm\) 0.017 & 0.954 \(\pm\) 0.006 & 0.813 \(\pm\) 0.019 & 0.784 \(\pm\) 0.013 \\ \hline bhattacharjee-2001 & 0.949 \(\pm\) 0.020 & 0.930 \(\pm\) 0.016 & 0.937 \(\pm\) 0.017 & 0.932 \(\pm\) 0.025 & **0.959 \(\pm\) 0.021** & 0.936 \(\pm\) 0.018 & 0.934 \(\pm\) 0.017 \\ \hline yeoh-2002-v2 & 0.848 \(\pm\) 0.017 & 0.811 \(\pm\) 0.017 & 0.787 \(\pm\) 0.018 & 0.782 \(\pm\) 0.029 & **0.849 \(\pm\) 0.027** & 0.718 \(\pm\) 0.023 & 0.817 \(\pm\) 0.034 \\ \hline \hline tr45.wc & **0.970 \(\pm\) 0.008** & 0.949 \(\pm\) 0.012 & 0.815 \(\pm\) 0.070 & 0.665 \(\pm\) 0.107 & 0.954 \(\pm\) 0.007 & 0.925 \(\pm\) 0.008 & 0.900 \(\pm\) 0.046 \\ \hline bhattacharjee-2001 & 0.946 \(\pm\) 0.014 & 0.929 \(\pm\) 0.016 & 0.937 \(\pm\) 0.017 & 0.923 \(\pm\) 0.022 & **0.957 \(\pm\) 0.019** & 0.936 \(\pm\) 0.018 & 0.934 \(\pm\) 0.017 \\ \hline yeoh-2002-v2 & 0.842 \(\pm\) 0.018 & 0.809 \(\pm\) 0.015 & 0.787 \(\pm\) 0.018 & 0.715 \(\pm\) 0.022 & **0.848 \(\pm\) 0.024** & 0.718 \(\pm\) 0.023 & 0.815 \(\pm\) 0.031 \\ \hline \end{tabular} \end{table} Table 4: F1 score (top) and accuracy (bottom) results obtained on the three imbalanced datasets.

Figure 7: Critical Difference diagram from the Friedman/Nemenyi test results on the non-HDLSS datasets (\(\Omega\geq 1.0\)).

This method, which was designed in our previous
While Random Forests are known to be robust to high dimensions, they are also known to suffer from data sparsity. However, these methods are frequently used on sparse HDLSS data as they embed efficient pre-processing and/or interpretability tools. For example, they may serve as feature selectors in a sparse data classification context, as in [45] where they are used for gene selection and classification of microarray data. For this reason, we believe it is relevant to further investigate the behavior of our method in the context of sparse data learning.

Figure 8: Pairwise Bayesian analysis for non-HDLSS datasets. The value in each cell is \(p(a>b)\), i.e. the probability that the row classifier \(a\) outperforms the column classifier \(b\).

## 7 Acknowledgments

This work is part of the DAISI project, co-financed by the European Union with the European Regional Development Fund (ERDF) and by the Normandy Region.
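To make the similarity-based pipeline concrete, here is a minimal sketch in Python/scikit-learn. It uses the classical definition of Random Forest proximity, i.e., the fraction of trees in which two samples fall in the same leaf; the exact kernel used by RFSVM may differ in its details, and the random data below merely stands in for a real HDLSS dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def rf_proximity(forest, A, B):
    """Similarity = fraction of trees in which the two samples share a leaf."""
    la, lb = forest.apply(A), forest.apply(B)   # leaf indices, shape (n_samples, n_trees)
    return (la[:, None, :] == lb[None, :, :]).mean(axis=2)

# Toy HDLSS-like data: 30 samples, 2000 features (placeholder for a real dataset)
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(30, 2000)), rng.integers(0, 2, size=30)
X_te = rng.normal(size=(10, 2000))

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
K_tr = rf_proximity(forest, X_tr, X_tr)        # precomputed train kernel (30 x 30)
K_te = rf_proximity(forest, X_te, X_tr)        # test-vs-train kernel (10 x 30)

svm = SVC(kernel="precomputed", C=1.0).fit(K_tr, y_tr)
print(svm.predict(K_te))
```

Because the similarity matrix is passed as a precomputed kernel, any kernel-capable SVM implementation can be used unchanged.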
2302.02377
Self-induced Transparency in a Semiconductor Quantum Dot medium at ultra-cold temperatures
We investigate the feasibility of minimum absorption and minimum broadening of pulse propagation in an inhomogeneously broadened semiconductor quantum dot medium. The phonon interaction is inevitable in studying any semiconductor quantum dot system. We have used the polaron transformation technique to deal with quantum dot phonon interaction in solving system dynamics. We demonstrate that a short pulse can propagate inside the medium with minimal absorption and broadening in pulse shape. The stable pulse area becomes slightly higher than the prediction of the pulse area theorem and is also dependent on the environment temperature. The change in the final pulse shape is explained very well by numerically solving the propagation equation supported by the susceptibility of the medium. Our system also exhibits the pulse breakup phenomena for higher input pulse areas. Therefore, the considered scheme can have important applications in quantum communication, quantum information, and mode-locking with the advantage of scalability and controllability.
Samit Kumar Hazra, P. K. Pathak, Tarak Nath Dey
2023-02-05T13:04:27Z
http://arxiv.org/abs/2302.02377v2
# Self-induced Transparency in a Semiconductor Quantum Dot medium at ultra-cold temperatures

###### Abstract

We investigate the feasibility of minimum absorption and minimum broadening of pulse propagation in an inhomogeneously broadened semiconductor quantum dot medium. The phonon interaction is inevitable in studying any semiconductor quantum dot system. We have used the polaron transformation technique to deal with quantum dot phonon interaction in solving system dynamics. We demonstrate that a short pulse can propagate inside the medium with minimal absorption and broadening in pulse shape. The stable pulse area becomes slightly higher than the prediction of the pulse area theorem and is also dependent on the environment temperature. The change in the final pulse shape is explained very well by numerically solving the propagation equation supported by the susceptibility of the medium. Our system also exhibits the pulse breakup phenomena for higher input pulse areas. Therefore, the considered scheme can have important applications in quantum communication, quantum information, and mode-locking with the advantage of scalability and controllability.

## I Introduction

In self-induced transparency (SIT), an optical pulse propagates resonantly through a two-level absorbing medium without any loss or distortion. This pioneering work was carried out by McCall and Hahn [1; 2]. SIT originates from the coherence generated by a strongly coupled light-medium interaction. Therefore, for observing SIT, the incident pulse should be short compared to the various relaxation times present in the system, such that the coherence will not vanish during the pulse propagation. Further, the pulse should also be strong enough to excite the atom from the ground state. One of the best theoretical estimations of the input pulse was reported in the "area theorem" [2]. This theorem dictates that a \(2\pi\) secant pulse can propagate through the medium without any loss and distortion in the pulse shape. In general, an initial pulse area \(\theta_{0}\) obeying the condition \((n+1)\pi>\theta_{0}>n\pi\) evolves towards \((n+1)\pi\) or \(n\pi\), depending on whether \(n\) is odd or even. Therefore, an input pulse with a larger area of \(2n\pi\) breaks up into \(n\) separate \(2\pi\) pulses with different propagation velocities. These effects have been observed experimentally in an atomic rubidium medium by Slusher and Gibbs [3]. In particular, they found excellent agreement between numerical simulations and experimental results. These fundamental properties of SIT have been investigated several times, both theoretically and experimentally [4; 5; 6]. However, in an atomic medium, the preparation and trapping of the atomic gas requires a vast and sophisticated setup. Moreover, due to the gaseous nature of the medium, the velocity distribution of the atoms introduces Doppler broadening in the output. For the last two decades, solid-state semiconductor media have emerged as a potential candidate for optical applications, particularly for scalable on-chip quantum technology. Earlier studies found that resonant coherent pulse propagation in bulk and quantum-well semiconductors behaves differently compared to a two-level atomic medium. This discrepancy occurs due to the many-body Coulomb interaction of the different momentum states present in a bulk medium [7; 8; 9]. This problem has been overcome in three-dimensionally confined excitons in quantum dots (QDs).
The quantum dots can easily be engineered to obtain the desired transition frequency, avoiding the problem of laser availability. The scalability and fabrication technology make semiconductor QDs suitable for modern quantum optics experiments. There have been some interesting theoretical proposals about the possibility of observing SIT in self-organized InGaAs QDs [10]. Excitonic transitions in InGaAs QDs have large transition dipole moments and long dephasing times, in the range of nanoseconds at cryogenic temperatures [11], and are therefore a promising candidate for SIT. Though the QD medium is a potential candidate for observing SIT, it also has a few drawbacks. All the QDs inside the medium are not identical, so an inhomogeneous level broadening is always present in the system. In semiconductors, the longitudinal acoustic phonon interaction is important at finite environment temperature. Interactions between phonons and excitons lead to dephasing in the coupled dynamics of the exciton-photon interaction [12; 13]. Several theoretical models and experiments have recently explained SIT in the semiconductor QD medium [14; 15; 16]. A few of them consider the effect of the phonon environment on the system dynamics in the context of group velocity dispersion [17]. Other recent experimental work showed SIT mode-locking and the area theorem for a semiconductor QD medium and for rubidium atoms [18; 19]. In this paper, we discuss the possibility of SIT in a semiconductor QD medium, incorporating the effect of the phonon bath in our model. We utilize the recently developed polaron-transformed master equation keeping all orders of the exciton-phonon interaction [20; 21; 22]. Our model's pulse propagation dynamics depend on system and bath parameters. Hence, the propagation dynamics become more transparent by knowing both the system's and the bath's contribution. The motivation behind this work is to achieve long-distance, low-loss optical communication in an array of QDs. Due to the strong confinement of electron-hole pairs, QDs have discrete energy levels; thus, QD arrays mimic an atomic medium with the added advantage of scalability and controllability with advanced semiconductor technology. It is also possible to create QD fibers, which can be used as quantum communication channels [23; 24]. Motivated by these works, we theoretically investigate the self-induced transparency effect in a semiconductor QD medium. Our paper is organized as follows. Sec. I contains a brief introduction to SIT in a QD medium and its applications. In Sec. II, we present our considered model system along with the theoretical formalism of the polaron master equation. In Sec. III, we discuss the results after numerically solving the relevant system equations. Finally, we draw a conclusion in Sec. IV.

## II Model system

The phonon contribution to the QD dynamics must be included even at low temperature. We assume the propagation of an optical pulse along the \(z\)-direction. Accordingly, we define the electric field of the incident optical pulse as \[\vec{E}(z,t)=\hat{e}\mathcal{E}(z,t)e^{i(kz-\omega_{L}t)}+c.c, \tag{1}\] where \(\mathcal{E}(z,t)\) is the slowly varying envelope of the field. The bulk QD medium comprises multiple alternating InGaAs/GaAs QD deposition layers. Every QD inside the medium interacts strongly with the electric field due to its significant dipole moment. Since all the QDs inside the medium are not identical, the exciton energy of the different QDs will vary depending on the dot size.
The \(l^{th}\) type QD can be modeled as a two-level system with exciton state \(\ket{1}_{l}\) and ground state \(\ket{2}_{l}\), with energy gap \(\hbar\omega_{l}\), by a proper choice of biexciton binding energy and polarisation, as shown in Fig. 1.

Figure 1: A schematic diagram of the QD level system with ground state \(\ket{2}\) and exciton state \(\ket{1}\) driven by the optical pulse with effective coupling \(\langle B\rangle\Omega\) (blue line). The spontaneous decay from the exciton state to the ground state is shown using a curly red line. The parallel violet lines represent the phonon modes interacting with the exciton state. The red and blue dashed lines represent the phonon-induced decay and pumping rate, respectively.

The raising and lowering operators for the \(l^{th}\) type QD can be written as \(\sigma_{l}^{+}=\ket{1(\omega_{l})}_{l}\bra{2(\omega_{l})}\) and \(\sigma_{l}^{-}=\ket{2(\omega_{l})}_{l}\bra{1(\omega_{l})}\). In the case of semiconductor QDs, the optical properties get modified due to the lattice modes of vibration, _i.e._, acoustic phonons. Hence, a model of the QD exciton transition coupled to an acoustic phonon bath captures the desired interaction. The phonon bath consists of a large number of closely spaced harmonic oscillator modes. Therefore, we introduce the annihilation and creation operators associated with the \(k^{th}\) phonon mode, having frequency \(\omega_{k}\), as \(b_{k}\) and \(b_{k}^{\dagger}\). The mode frequency can be expressed as \(\omega_{k}=c_{s}k\), where \(k\) and \(c_{s}\) are the wave vector and the velocity of sound. The Hamiltonian for the described model system, after making the dipole and rotating-wave approximations, is given by \[H =\sum_{l}\Bigl[-\hbar\delta_{l}\sigma_{l}^{+}\sigma_{l}^{-}+\frac{1}{2}\hbar\Bigl(\Omega(z,t)\sigma_{l}^{+}+\Omega^{*}(z,t)\sigma_{l}^{-}\Bigr)+\hbar\sigma_{l}^{+}\sigma_{l}^{-}\sum_{k}\lambda_{k}\left(b_{k}+b_{k}^{\dagger}\right)\Bigr]+\hbar\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}, \tag{2}\] where \(\lambda_{k}\) is the exciton-phonon mode coupling constant and \(\Omega(z,t)=-2\vec{d}_{12}\cdot\hat{e}\mathcal{E}(z,t)/\hbar\) is the Rabi frequency, with transition dipole moment vector \(\vec{d}_{12}\). The detuning of the optical field from the QD transition is defined as \(\delta_{l}=\omega_{L}-\omega_{l}\). We notice that the Hamiltonian contains an infinite sum over phonon modes. Keeping all orders of the exciton-phonon interaction, we transform to the polaron frame. The transformation rule for the modified Hamiltonian is given by \(H^{\prime}=e^{P}He^{-P}\), where the operator \(P=\sum_{l}\sigma_{l}^{+}\sigma_{l}^{-}\sum_{k}\lambda_{k}(b_{k}^{\dagger}-b_{k})/\omega_{k}\). This transformation also helps us to separate the system Hamiltonian from the total Hamiltonian, which is our primary interest. The transformed Hamiltonian is divided into system, bath, and interaction parts, \(H^{\prime}=H_{s}+H_{b}+H_{I}\), where \[H_{s} =\sum_{l}-\hbar\Delta_{l}\sigma_{l}^{+}\sigma_{l}^{-}+\langle B\rangle X_{l}^{g}, \tag{3}\] \[H_{b} =\hbar\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}, \tag{4}\] \[H_{I} =\sum_{l}\xi_{g}X_{l}^{g}+\xi_{u}X_{l}^{u}, \tag{5}\] and \(\Delta_{l}\) is the detuning redefined to include the polaron shift \(\sum_{k}\lambda_{k}^{2}/\omega_{k}\). The definition of the phonon-modified
system operators is given by \[X_{l}^{g} = \frac{\hbar}{2}\left(\Omega(z,t)\sigma_{l}^{+}+\Omega^{*}(z,t)\sigma_{l}^{-}\right), \tag{6}\] \[X_{l}^{u} = \frac{i\hbar}{2}\left(\Omega(z,t)\sigma_{l}^{+}-\Omega^{*}(z,t)\sigma_{l}^{-}\right). \tag{7}\] The phonon bath fluctuation operators are \[\xi_{g} = \frac{1}{2}\left(B_{+}+B_{-}-2\langle B\rangle\right), \tag{8}\] \[\xi_{u} = \frac{1}{2i}\left(B_{+}-B_{-}\right), \tag{9}\] where \(B_{+}\) and \(B_{-}\) are the coherent-state phonon displacement operators. Explicitly, the phonon displacement operators in terms of the phonon mode operators can be written as \[B_{\pm}=\exp\left[\pm\sum_{k}\frac{\lambda_{k}}{\omega_{k}}\left(b_{k}^{\dagger}-b_{k}\right)\right].\] From this expression, it is clear that the exponential of the phonon operator takes care of all the higher-order phonon processes. Therefore, the phonon displacement operator averaged over all closely spaced phonon modes at a temperature T obeys the relation \(\langle B_{+}\rangle=\langle B_{-}\rangle=\langle B\rangle\), where \[\langle B\rangle=\exp\left[-\frac{1}{2}\int_{0}^{\infty}d\omega\frac{J(\omega)}{\omega^{2}}\coth\left(\frac{\hbar\omega}{2K_{B}T}\right)\right], \tag{10}\] and \(K_{B}\) is the Boltzmann constant. The phonon spectral density function \(J(\omega)=\alpha_{p}\omega^{3}\exp[-\omega^{2}/2\omega_{b}^{2}]\) describes longitudinal acoustic (LA) phonon coupling via a deformation potential [25] for a QD system, where the parameters \(\alpha_{p}\) and \(\omega_{b}\) are the electron-phonon coupling and cutoff frequency, respectively. Next, we use the master equation (ME) approach to solve the polaron-transformed system Hamiltonian dynamics by considering the phonon bath as a perturbation. The Born-Markov approximation can be performed with respect to the polaron-transformed perturbation in the case of nonlinear excitation. Hence, the density matrix equation for the reduced system under the Born-Markov approximation can be written as \[\dot{\rho}=\frac{1}{i\hbar}[H_{s},\rho]+\sum_{l}\Bigl(\mathcal{L}_{ph}\rho+\frac{\gamma}{2}\mathcal{L}[\sigma_{l}^{-}]\rho+\frac{\gamma_{d}}{2}\mathcal{L}[\sigma_{l}^{+}\sigma_{l}^{-}]\rho\Bigr), \tag{11}\] where \(\gamma\) is the spontaneous decay rate of the exciton state. The spontaneous decay originates from the quantum fluctuations of the vacuum state. Similarly, for thermal fluctuations, we have adopted the final Lindbladian form of the dephasing interaction model described by a simple stochastic Hamiltonian [26]. Therefore, we incorporate the pure-dephasing process phenomenologically in the ME with a decay rate \(\gamma_{d}\). This additional dephasing term explains the broadening of the zero-phonon line (ZPL) in QDs with increasing temperature [27; 28]. The Lindblad superoperator \(\mathcal{L}\) is expressed as \(\mathcal{L}[\mathcal{O}]\rho=2\mathcal{O}\rho\mathcal{O}^{\dagger}-\mathcal{O}^{\dagger}\mathcal{O}\rho-\rho\mathcal{O}^{\dagger}\mathcal{O}\), under the operation of the \(\mathcal{O}\) operator. The term \(\mathcal{L}_{ph}\) represents the effect of the phonon bath on the system dynamics.
Therefore, the explicit form of \(\mathcal{L}_{ph}\rho\) in terms of the previously defined system operators can be expressed as \[\mathcal{L}_{ph}\rho = -\frac{1}{\hbar^{2}}\int_{0}^{\infty}d\tau\sum_{j=g,u}G_{j}(\tau)[X_{l}^{j}(z,t),X_{l}^{j}(z,t,\tau)\rho(t)]+H.c., \tag{12}\] where \(X_{l}^{j}(z,t,\tau)=e^{-iH_{s}\tau/\hbar}X_{l}^{j}(z,t)e^{iH_{s}\tau/\hbar}\), and the polaron Green's functions are \(G_{g}(\tau)=\langle B\rangle^{2}\{\cosh{[\phi(\tau)]}-1\}\) and \(G_{u}(\tau)=\langle B\rangle^{2}\sinh[\phi(\tau)]\). The phonon Green's functions depend on the phonon correlation function given below: \[\phi(\tau)=\int_{0}^{\infty}d\omega\frac{J(\omega)}{\omega^{2}}\left[\coth\left(\frac{\hbar\omega}{2K_{B}T}\right)\cos(\omega\tau)-i\sin(\omega\tau)\right]. \tag{13}\] The polaron ME formalism is not generally valid for arbitrary excitation strength and exciton-phonon coupling. The validity of the polaron ME is stated as [20] \[\left(\frac{\Omega}{\omega_{b}}\right)^{2}\left(1-\langle B\rangle^{4}\right)\ll 1. \tag{14}\] It is clear from the above equation that, at low temperatures, \(\langle B\rangle\approx 1\) and \(\Omega/\omega_{b}<1\) fulfil the above criterion. Hence, we restrict our calculation to the weak-field regime, satisfying \(\Omega/\omega_{b}<1\) at a low phonon bath temperature. The full polaron ME (11) contains multiple commutator brackets and complex operator exponents, which require involved numerical treatment for studying the time dynamics. We make some simplifications of the full ME by using various useful identities. These reduce the ME to a simple analytical form with decay rates corresponding to the various phonon-induced processes. Though we have not made any additional approximation, the simplified ME scales down the numerical computation effort and gives better insight into the physical processes.
By expanding all the commutators in Eq. (11) and rearranging using fermion operator identities, we get the simplified ME as \[\dot{\rho} = \frac{1}{i\hbar}[H_{s},\rho]+\sum_{l}\Bigl(\frac{\gamma}{2}\mathcal{L}[\sigma_{l}^{-}]\rho+\frac{\gamma_{d}}{2}\mathcal{L}[\sigma_{l}^{+}\sigma_{l}^{-}]\rho+\frac{\Gamma_{l}^{\sigma^{+}}}{2}\mathcal{L}[\sigma_{l}^{+}]\rho+\frac{\Gamma_{l}^{\sigma^{-}}}{2}\mathcal{L}[\sigma_{l}^{-}]\rho-\Gamma_{l}^{cd}(\sigma_{l}^{+}\rho\sigma_{l}^{+}+\sigma_{l}^{-}\rho\sigma_{l}^{-})-i\Gamma_{l}^{sd}(\sigma_{l}^{+}\rho\sigma_{l}^{+}-\sigma_{l}^{-}\rho\sigma_{l}^{-})+i\Delta_{l}^{\sigma^{+}\sigma^{-}}[\sigma_{l}^{+}\sigma_{l}^{-},\rho]-[i\Gamma_{l}^{gu+}(\sigma_{l}^{+}\sigma_{l}^{-}\rho\sigma_{l}^{+}+\sigma_{l}^{-}\rho-\sigma_{l}^{+}\sigma_{l}^{-}\rho\sigma_{l}^{-})+H.c.]-[\Gamma_{l}^{gu-}(\sigma_{l}^{+}\sigma_{l}^{-}\rho\sigma_{l}^{+}-\sigma_{l}^{-}\rho+\sigma_{l}^{+}\sigma_{l}^{-}\rho\sigma_{l}^{-})+H.c.]\Bigr). \tag{15}\] The phonon-induced decay rates are given by \[\Gamma_{l}^{\sigma^{+}/\sigma^{-}} =\frac{\Omega_{R}(z,t)^{2}}{2}\int_{0}^{\infty}\Bigg(\operatorname{Re}\Big\{(\cosh(\phi(\tau))-1)f(z,t,\tau)+\sinh(\phi(\tau))\cos(\eta(z,t)\tau)\Big\}\mp\operatorname{Im}\Big\{(e^{\phi(\tau)}-1)\frac{\Delta_{l}\sin(\eta(z,t)\tau)}{\eta(z,t)}\Big\}\Bigg)\,d\tau, \tag{16}\] \[\Gamma_{l}^{cd} =\frac{1}{2}\int_{0}^{\infty}\operatorname{Re}\Big\{\Omega_{S}(z,t)\sinh(\phi(\tau))\cos(\eta(z,t)\tau)-\Omega_{S}(z,t)(\cosh(\phi(\tau))-1)f(z,t,\tau)+\Omega_{T}(z,t)(e^{-\phi(\tau)}-1)\frac{\Delta_{l}\sin(\eta(z,t)\tau)}{\eta(z,t)}\Big\}d\tau, \tag{17}\] \[\Gamma_{l}^{sd} =\frac{1}{2}\int_{0}^{\infty}\operatorname{Re}\Big\{\Omega_{T}(z,t)\sinh(\phi(\tau))\cos(\eta(z,t)\tau)-\Omega_{T}(z,t)(\cosh(\phi(\tau))-1)f(z,t,\tau)-\Omega_{S}(z,t)(e^{-\phi(\tau)}-1)\frac{\Delta_{l}\sin(\eta(z,t)\tau)}{\eta(z,t)}\Big\}d\tau, \tag{18}\] \[\Delta_{l}^{\sigma^{+}\sigma^{-}} =\frac{\Omega_{R}(z,t)^{2}}{2}\int_{0}^{\infty}\operatorname{Re}\Big\{(e^{\phi(\tau)}-1)\frac{\Delta_{l}\sin(\eta(z,t)\tau)}{\eta(z,t)}\Big\}d\tau, \tag{19}\] \[\Gamma_{l}^{gu+} =\frac{\Omega_{R}(z,t)^{2}}{2}\int_{0}^{\infty}\Big\{(\cosh(\phi(\tau))-1)\operatorname{Im}[\langle B\rangle\Omega]h(z,t,\tau)+\sinh(\phi(\tau))\frac{\operatorname{Re}[\langle B\rangle\Omega]\sin(\eta(z,t)\tau)}{\eta(z,t)}\Big\}\,d\tau, \tag{20}\] \[\Gamma_{l}^{gu-} =\frac{\Omega_{R}(z,t)^{2}}{2}\int_{0}^{\infty}\Big\{(\cosh(\phi(\tau))-1)\operatorname{Re}[\langle B\rangle\Omega]h(z,t,\tau)-\sinh(\phi(\tau))\frac{\operatorname{Im}[\langle B\rangle\Omega]\sin(\eta(z,t)\tau)}{\eta(z,t)}\Big\}\,d\tau, \tag{21}\] where \(f(z,t,\tau)=(\Delta_{l}^{2}\cos(\eta(z,t)\tau)+\Omega_{R}(z,t)^{2})/\eta(z,t)^{2}\), \(h(z,t,\tau)=\Delta_{l}(1-\cos(\eta(z,t)\tau))/\eta^{2}(z,t)\) and \(\eta(z,t)=\sqrt{\Omega_{R}(z,t)^{2}+\Delta_{l}^{2}}\), with the polaron-shifted Rabi frequency \(\Omega_{R}(z,t)=\langle B\rangle|\Omega(z,t)|\), \(\Omega_{S}(z,t)=\operatorname{Re}[\langle B\rangle\Omega(z,t)]^{2}-\operatorname{Im}[\langle B\rangle\Omega(z,t)]^{2}\), and \(\Omega_{T}(z,t)=2\operatorname{Re}[\langle B\rangle\Omega(z,t)]\operatorname{Im}[\langle B\rangle\Omega(z,t)]\).
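To give a feel for how these rates behave, the sketch below numerically evaluates the phonon correlation function of Eq. (13) and the pumping/decay rates \(\Gamma^{\sigma^{\pm}}\) of Eq. (16) on a grid of detunings, at the peak of the pulse where \(\Omega\) is real. This is only a rough illustration: it uses the parameter values quoted later in Sec. III (\(\alpha_{p}=0.03\) ps\({}^{2}\), \(\omega_{b}=1\) meV, T = 4.2 K, \(\Omega_{0}=0.2\) meV), and the grid sizes and cutoffs are chosen ad hoc rather than for publication-quality accuracy.

```python
import numpy as np
from scipy.integrate import trapezoid

HBAR, KB = 0.6582, 0.08617                 # meV*ps and meV/K
alpha_p, wb, T = 0.03, 1.0 / 0.6582, 4.2   # coupling (ps^2), cutoff (rad/ps), bath T (K)

# Phonon correlation function phi(tau), Eq. (13), with J(w) = alpha_p w^3 exp(-w^2/2 wb^2)
w = np.linspace(1e-6, 8 * wb, 2000)                       # phonon frequencies (rad/ps)
J_over_w2 = alpha_p * w * np.exp(-w**2 / (2 * wb**2))     # J(w)/w^2
coth = 1.0 / np.tanh(HBAR * w / (2 * KB * T))
tau = np.linspace(0.0, 20.0, 800)                         # ps; phi decays well before 20 ps
wt = np.outer(tau, w)
phi = trapezoid(J_over_w2 * (coth * np.cos(wt) - 1j * np.sin(wt)), w, axis=1)

Omega_R = 0.95 * (0.2 / HBAR)   # polaron-shifted peak Rabi frequency <B>|Omega| (rad/ps)

def gamma_sigma(delta, sign):
    """Gamma^{sigma+} (sign=+1) or Gamma^{sigma-} (sign=-1) from Eq. (16), real Omega."""
    eta = np.hypot(Omega_R, delta)
    f = (delta**2 * np.cos(eta * tau) + Omega_R**2) / eta**2
    re = np.real((np.cosh(phi) - 1.0) * f + np.sinh(phi) * np.cos(eta * tau))
    im = np.imag((np.exp(phi) - 1.0) * delta * np.sin(eta * tau) / eta)
    return 0.5 * Omega_R**2 * trapezoid(re - sign * im, tau)

for d_meV in (-1.0, -0.3, 0.0, 0.3, 1.0):                 # detuning in meV
    d = d_meV / HBAR
    print(f"Delta = {d_meV:+.1f} meV: Gamma+ = {gamma_sigma(d, +1):.2e}, "
          f"Gamma- = {gamma_sigma(d, -1):.2e}  (1/ps)")
```

At \(\Delta=0\) the imaginary term vanishes and the two rates coincide; away from resonance they split, which is the low-temperature asymmetry discussed in Sec. III.A.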
Next, we use the Maxwell wave equation to describe the propagation dynamics of the electromagnetic field inside the QD medium, \[\bigg(\nabla^{2}-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\bigg)\vec{E}(z,t)=\mu_{0}\frac{\partial^{2}}{\partial t^{2}}\vec{P}(z,t), \tag{22}\] where \(\mu_{0}\) is the permeability of free space. The induced polarisation \(\vec{P}(z,t)\) originates from the alignment of the medium dipoles in the presence of the applied field. Therefore, it depends on the coherence term of the density matrix equation. For the \(l^{th}\) QD, the coherence term of the density matrix equation can be written as \(\rho_{12}(\Delta_{l},z,t)=\langle 1(\omega_{l})|\rho(z,t)|2(\omega_{l})\rangle_{l}\). The medium consists of a large number of QDs with a continuous frequency distribution centered at \(\omega_{c}\). Therefore, we can safely replace the summation with an integration by redefining the discrete variable \(\Delta_{l}\) as a continuous variable \(\Delta\). The induced macroscopic polarisation can be written in terms of the density matrix element as \[\vec{P}(z,t)=N\int_{-\infty}^{\infty}\bigg(\vec{d}_{12}\rho_{12}(\Delta,z,t)e^{i(kz-\omega_{L}t)}+c.c.\bigg)\,g(\Delta)d\Delta, \tag{23}\] where \(N\) is the QD volume number density. The inhomogeneous level broadening function in the frequency domain is defined by \(g(\Delta)\). In our calculation, the form of \(g(\Delta)\) is \[g(\Delta)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(\Delta-\Delta_{c})^{2}}{2\sigma^{2}}}, \tag{24}\] where the standard deviation is \(\sigma\). The detuning between the applied field and the QDs' central frequency is represented by \(\Delta_{c}\). By applying the slowly varying envelope approximation, one can cast the inhomogeneous second-order partial differential equation (22) into the first-order differential equation \[\bigg(\frac{\partial}{\partial z}+\frac{1}{c}\frac{\partial}{\partial t}\bigg)\Omega(z,t)=i\eta\int_{-\infty}^{\infty}\rho_{12}(\Delta,z,t)g(\Delta)d\Delta, \tag{25}\] where the coupling constant \(\eta\) is defined by \[\eta=-3N\lambda^{2}\gamma/4\pi \tag{26}\] and \(\lambda\) is the carrier wavelength of the QD transition. The self-consistent solution of Eqs. (15) and (25) with proper initial conditions can display the spatiotemporal evolution of the field inside the medium. Moreover, the analytical solution of the coupled partial differential equations is known only under some special conditions; hence, we adopted numerical integration of Eqs. (15) and (25) to obtain the results. For the numerical computation, a frame transformation \(\tau=t-z/c\) and \(\zeta=z\) is useful, as it removes the explicit time derivative from Eq. (25), which then evolves only with the single variable \(\zeta\).

## III Numerical results

### Phonon-induced scattering rates

First, we discuss the various decay rates for the QD system in experimentally available parameter regions [29; 30]. The medium comprises InGaAs/GaAs QDs with volume density \(N=5\times 10^{20}\) m\({}^{-3}\) and a length of 1 mm. The central QD excitation energy is \(\hbar\omega_{c}=\)1.3 eV with a Gaussian spectral distribution having FWHM of 23.5 meV. The QD is driven by the optical pulse at \(\zeta=0\) with a hyperbolic secant profile \[|\Omega(0,\tau)|=\Omega_{0}\,\mathrm{sech}\left(\frac{\tau-\tau_{c}}{\tau_{0}}\right), \tag{27}\] where \(\tau_{0}\) and \(\tau_{c}\) define the width and center of the pulse, respectively. For the numerical computation, the amplitude and width of the pulse are taken to be \(\Omega_{0}=\) 0.2 meV and \(\tau_{0}=\) 6.373 ps.
The phonon bath temperature T = 4.2 K gives \(\langle B\rangle=0.95\). The other parameters are \(\alpha_{p}=0.03\) ps\({}^{2}\) and \(\omega_{b}=1\) meV. The system under consideration has a relaxation rate \(\gamma=\gamma_{d}=2\) \(\mu\)eV (2 ns). In order to normalize all the system parameters to dimensionless quantities, we have chosen the normalization frequency \(\gamma_{n}=\) 1 rad/ps. In Fig. 2, the color bar represents the variation of the various phonon-induced scattering rates as a function of detuning and time, both in normalized units, along the \(x\)- and \(y\)-axes, respectively. In the QD system, various phonon processes are connected with exciton transitions. In the case of the ground-state-to-exciton transition, phonon absorption occurs, while in the opposite process, phonon emission occurs. Now we discuss the physical processes associated with the phonon scattering rates \(\Gamma^{\sigma^{+}}\) and \(\Gamma^{\sigma^{-}}\). For positive detuning, the applied field frequency is larger than the QD transition frequency. Subsequently, a phonon of frequency \(\Delta\) is emitted in order to make the QD transition resonant. These emitted phonons develop an incoherent excitation in the system, described by \(\Gamma^{\sigma^{+}}\). Oppositely, for negative detuning, the applied field frequency is smaller than the QD transition frequency, and a resonant QD transition is possible only when a phonon of frequency \(\Delta\) is absorbed from the bath. With this mechanism, the QD exciton-to-ground-state decay enhances the radiation, which is represented by \(\Gamma^{\sigma^{-}}\). This low-temperature asymmetry is clearly visible in Figs. 2(a) and 2(b). At higher temperatures, this asymmetry gets destroyed, and both rates overlap and are centered at \(\Delta=0\). Fig. 2(c) shows the variation of \(\Gamma^{cd}\), which is only present in the off-diagonal density matrix element and is responsible for the additional dephasing in the system dynamics. The additional detuning \(\Delta^{\sigma^{+}\sigma^{-}}\) from the simplified master equation, plotted in Fig. 2(d), is very small compared to the system detuning \(\Delta\). We also notice that the sign of \(\Delta^{\sigma^{+}\sigma^{-}}\) changes according to the system detuning \(\Delta\). It is important to keep in mind that we display the variation along the \(y\)-axis around \(\gamma_{n}\tau=\)40, which is the centre of the pulse with the secant profile.

Figure 2: The variation of phonon-induced scattering rates with detuning and time of a QD at \(\zeta=0\) for the applied secant pulse in Eq. (27). (a) Phonon-induced pumping rate \(\Gamma^{\sigma^{+}}\) [Eq. (16)]; (b) phonon-induced decay rate \(\Gamma^{\sigma^{-}}\) [Eq. (16)]; (c) phonon-induced dephasing \(\Gamma^{cd}\) [Eq. (17)]; (d) phonon-induced detuning \(\Delta^{\sigma^{+}\sigma^{-}}\) [Eq. (19)] for peak Rabi frequency \(\Omega_{0}=\) 0.2 meV, pulse width \(\tau_{0}=\) 6.373 ps and pulse center \(\gamma_{n}\tau_{c}=\) 40. The phonon bath temperature T = 4.2 K corresponds to \(\langle B\rangle=\) 0.95, with spectral density function parameters \(\alpha_{p}=\) 0.03 ps\({}^{2}\), \(\omega_{b}=\) 1 meV.

### Pulse area theorem

It is well known from Beer's law that a weak pulse gets absorbed inside the medium due to the opacity at the resonance condition. However, McCall and Hahn showed that certain specific envelope pulse shapes remain intact over a long distance without absorption, even at resonance [1; 2]. Inspired by this phenomenon, we consider a time-varying pulse whose envelope shape is stated in Eq. (27). The area \(\Theta(z)\) enclosed by its hyperbolic secant envelope is defined as \[\Theta(z)=\int_{-\infty}^{+\infty}\Omega(z,t^{{}^{\prime}})dt^{{}^{\prime}}\,. \tag{28}\] By formally integrating Eq. (25) over time and detuning, one can find the spatial variation of the pulse area, closely following the work of McCall and Hahn. The evolution of the pulse area \(\Theta(z)\) during its propagation in a two-level absorbing QD medium is given by \[\frac{d\Theta(z)}{dz}=-\frac{\alpha}{2}\sin\Theta(z), \tag{29}\] where \(\alpha\) is the optical extinction per unit length. The optical extinction depends on the various system parameters as \(\alpha=2\pi\eta g(0)\).
The optical extinction depends on the various system param Figure 2: The variation of phonon-induced scattering rates with detuning and time of a QD at \(\zeta=0\) for the applied secant pulse in Eq.(27). a) Phonon-induced pumping rate \(\Gamma^{\sigma^{+}}\)[Eq.(16)] b) Phonon-induced decay rate \(\Gamma^{\sigma^{-}}\)[Eq.(16)] c) Phonon induced dephasing \(\Gamma^{cd}\)[Eq.(17)] d) Phonon induced detuning \(\Delta^{\sigma^{+}\sigma^{-}}\)[Eq.(19)] for peak Rabi frequency \(\Omega_{0}=\) 0.2 meV, pulse width \(\tau_{0}=\) 6.373 ps and pulse center \(\gamma_{n}\tau_{c}=\) 40. The phonon bath temperature T = 4.2 K corresponds to \(\langle B\rangle=\) 0.95 with spectral density function parameters \(\alpha_{p}=\) 0.03 ps\({}^{2}\), \(\omega_{b}=\) 1 meV. eters as \(\alpha=2\pi ng(0)\). The solution of the Eq.(29) is \[\tan\frac{\Theta(z)}{2}=\tan\frac{\Theta(0)}{2}e^{-\alpha z/2}, \tag{30}\] where \(\Theta(0)\) is the pulse area at \(z=\)0. It is clear from the above expression that \(\Theta(z)=2n\pi\) is the stable solution, whereas \(\Theta(z)=(2n+1)\pi\) is an unstable one. The pulse area of the given envelope as stated in Eq.(27) is \(\Theta(0)=\pi\Omega_{0}\tau_{0}\). Thus, the envelope with amplitude \(\Omega_{0}=2/\tau_{0}\) gives \(2\pi\) area pulse. This envelope shape remains preserve for the long propagation distance even though it interacts resonantly with the medium. Fig.(3) exhibits the variation of pulse area with the propagation distance inside the QD medium. It is evident from this figure that the propagation dynamics of \(2\pi\) area pulse through the medium of length \(L\) has negligible loss in pulse area. In the absence of phonon(black line) interaction, the system behaves identical to the atomic system and hence follows \(\Theta\approx 2\pi(1-\tau_{0}/T_{2}^{{}^{\prime}})\) reported earlier by McCall and Hahn [2]. The loss in pulse area comes from the finite lifetime \(T_{2}^{{}^{\prime}}\) of the QD which is inversely proportional to \(\gamma_{d}\). Ideally, the pulse will retain initial pulse area for an arbitrary distance in absence of decay and decoherence. However, in presence of phonon contribution, we have noticed the pulse area gets enhanced by a small amount. The amount of raise in the pulse area linearly depends on the bath temperature as indicated in Fig.(3). This effect can be explained by carefully examing the definition of an effective Rabi frequency \(\Omega_{R}(z,t)=\langle B\rangle|\Omega(z,t)|\) where \(\langle B\rangle\) is dependent on the bath temperatures. The inset of Fig.(3) illustrate the convergence of the pulse area shifted from the \(2\pi\) value at different temperatures. To explain the behavior of Fig.(3), we study the absorption and dispersion properties of the medium as a function of detuning at various time intervals of the pulse. Fig.(4) delineates the physical process behind the dispersion and absorption. We assume all the population in the ground state, before the leading edge of the pulse reaches the medium. The peak of incident pulse enters inside the medium at \(\gamma_{n}\tau_{c}=40\). It is clear from Fig.4(a) that most of the leading edge pulse energy gets absorbed by the ground state population and the population goes to the excited state. Hence the medium shows maximum absorption at \(\gamma_{n}\tau=30\), hence elucidating the absorption phenomenon at resonance. Simultaneously, the nature of the dispersion curve is anomalous as previously reported [31]. 
The fast light that would accompany the anomalous dispersion is completely suppressed due to the huge absorption at the resonance condition. The medium becomes saturated as the centre of the pulse enters it; consequently, the medium becomes less absorbing for the pulse. Nonetheless, a tiny absorption peak still exists at the resonance condition due to the presence of various decay processes of the medium, as indicated by Fig. 4(b). Therefore, the excited state gets populated during the passage of the leading edge of the pulse. This population can leave the excited state and return to the ground state by stimulated emission in the presence of the trailing edge of the pulse. As a result, gain can be experienced by the incident pulse at \(\gamma_{n}\tau\) = 50, as revealed in Fig. 4(c). From these three panels, we can conclude that the leading edge of the pulse gets absorbed by the medium, while the trailing edge of the pulse experiences gain.

Figure 3: Evolution of the pulse area (\(\Theta\)) as a function of propagation distance \(\zeta\), starting with a \(2\pi\) sech-type pulse, for different temperatures. The applied pulse has a width of \(\tau_{0}=6.373\) ps and is centered at \(\gamma_{n}\tau_{c}=40\). The system under consideration is shown without a phonon bath (black) and with a phonon bath maintained at temperature T = 4.2 K (red), 10 K (blue), 20 K (green), with electron-phonon coupling \(\alpha_{p}=0.03\) ps\({}^{2}\) and cutoff frequency \(\omega_{b}=1\) meV. The central QD detuning is \(\Delta_{c}=0\), with spontaneous decay and pure dephasing rates \(\gamma=\gamma_{d}=2\) \(\mu\)eV (2 ns). The optical extinction per unit length is \(\alpha=10\) mm\({}^{-1}\). The inset shows the stability of the pulse area higher than \(2\pi\) for different phonon bath temperatures.

Towards the trailing end of the pulse, the dispersive nature of the medium changes from anomalous to normal, as shown in Fig. 4(d). The positive slope of the dispersion curve leads to a slow group velocity, starting at \(\gamma_{n}\tau=60\), as shown in Fig. 4(d). Figs. 4(d) to 4(f) indicate that the optical pulse regeneration process is completed due to the medium-assisted gain; hence, the pulse shape remains preserved. This is the mechanism underpinning SIT. The above physical picture can be supported by studying the population dynamics of the excited state. For this purpose, we have plotted the excited-state population as a function of the pulse area in Fig. 5. A noticeable population redistribution among the levels is feasible within a few widths of the incident pulse, wherein the intensity is appreciable. As soon as the pulse intensity diminishes at the trailing end, spontaneous emission takes care of the depletion of the excited-state population. This leads to a vanishing population in the excited state after a sufficiently long time from the pulse centre. As a consequence, it is crucial to choose the observation time of the QD population. Hence, we display the exciton population just at the end of the pulse, \(\gamma_{n}\tau=60\), to capture the outcome of the pulse. It is clear from Fig. 5 that the excited-state population shows a decaying Rabi-oscillation-like behaviour. It is also confirmed that the population is never fully transferred to the excited state, nor fully returned to the ground state, for any pulse area, indicating that non-constant phonon-induced decay and gain processes are involved in the system. The decaying feature of the local population maxima can be justified by examining the photon- and phonon-induced decay rates.
The various phonon-induced decay rates are given in Eqs. (16)-(21), where an increasing incident pulse amplitude \(\Omega(z,t)\) results in the enhancement of these decay rates. This field-amplitude-dependent phonon decay, together with the constant photon decay, can explain the gradual decay of the local population maxima. On the contrary, the dip of the local minima increases due to the presence of the phonon-induced gain process \(\Gamma^{\sigma+}\), as suggested in Eq. (16). The local maxima and minima of the exciton population are located near odd and even integer multiples of \(\pi\) pulse area, respectively. The maxima signify pulse absorption by the medium, resulting in population inversion. Similarly, the minima manifest the transparency of the medium. Thus, the leading edge of the pulse excites the population, whereas the trailing edge assists in stimulated emission, leaving the population in the ground state of the medium. It is evident that only pulses with areas equal to even integer multiples of \(\pi\) can propagate through the medium without absorption, which is consistent with the pulse area theorem. That the local maxima and minima of the exciton population never match the integer values exactly will become clear later, when investigating the pulse propagation dynamics. Previously, we found that the stable pulse area is higher than \(2\pi\), as shown in Fig. 3, which also agrees with the above observation. Therefore, the analysis of coherence and population ensures that the SIT phenomena can be accomplished in the QD medium.

Figure 5: The variation of the excited-state population with input pulse area at the resonance condition \(\Delta_{c}=0\). The system and bath parameters are \(\tau_{0}=6.373\) ps, \(\gamma_{n}\tau_{c}=40\), T = 4.2 K, \(\alpha_{p}=0.03\) ps\({}^{2}\), \(\omega_{b}=1\) meV, \(\gamma=\gamma_{d}=2\) \(\mu\)eV (2 ns).

### Self Induced Transparency

A homogeneous QD medium of length 1 mm is considered for studying the spatio-temporal evolution of the hyperbolic secant optical pulse. To achieve stable pulse propagation, we have chosen the initial pulse area to be \(2\pi\). Fig. 6 confirms the area theorem by showing stable optical pulse propagation over a long distance. However, the pulse shape at larger distances suffers some distortion and absorption. Fig. 6 also indicates that the pulse's peak value gradually decreases with increasing propagation distance. This suggests a finite absorption in the QD medium that prevents complete transparency in the system. In particular, this statement agrees well with the small absorption peak at resonance in the absorption profile shown in Fig. 4(b). Figure 7 displays the individually normalized pulse for different propagation distances. Inspection shows that the input pulse experiences a delay and a little broadening during propagation through the medium. The sole reason behind the pulse broadening is the dispersive nature of the system. In the frequency domain, a temporal pulse can be treated as a linear superposition of many travelling plane waves with different frequencies. These individual frequency components gather different phases and move with varying velocities during pulse propagation in a dispersive medium. Therefore, the pulse gets broader as the leading part (low frequency) moves faster and the trailing end (high frequency) moves slower. In the QD system, the pure dephasing rate is also responsible for this broadening, as it destroys the coherence. From Fig. 7, a distinct peak shift is observed while the optical pulse propagates through the medium.
This peak shift arises because the normally dispersive medium induces a slow group velocity of the optical pulse inside the medium. We adopt the analytical expression for the time delay in the ideal case, considering \(\sigma\gg 1/\tau_{0}\), reported earlier [32]. The analytical time delay is found to be \(\gamma_{n}\tau_{d}=\alpha L\gamma_{n}\tau_{0}/4\). Here, the absorption coefficient \(\alpha\) is approximately 10 mm\({}^{-1}\), calculated from the chosen parameters. Therefore, the calculated analytical time delay \(\gamma_{n}\tau_{d}\approx 15\) shows excellent agreement with the numerical result.

Figure 7: The Rabi frequency normalized with the individual peak value, plotted against retarded time at different propagation distances inside the medium at the resonance condition \(\Delta_{c}=0\). All the other parameters are the same as in Fig. 6.

The inhomogeneous level broadening \(\sigma\) plays an important role in the pulse propagation dynamics. In our calculation, we are in the regime where the pulse width is greater than the inhomogeneous broadening time, \(\sigma\tau_{0}\gg 1\). Therefore, a higher spread of the QD parameter \(\sigma\) leads to fewer QDs resonantly interacting with the propagating pulse. This results in a negligibly small change in pulse shape. Alternatively, the effective QD density becomes smaller, indicating a lower value of the optical extinction parameter \(\alpha\). Henceforth, a lower time delay is expected in the final output pulse due to its presence in the right-hand term of Eq. (25). In Fig. 8, the final output pulse shape variation is presented for three different QD spreads. The pulse delay decreases with increasing QD broadening \(\sigma\). On the other hand, the pulse peak value decreases for lower QD spreads. This observation matches our previous prediction that a higher \(\sigma\) produces a lower pulse delay in the medium. Also, more resonant QDs absorb more energy from the pulse, resulting in a lower peak value in the final pulse shape. Hence, the spread of the QDs is also a determining factor for the shape and delay of the output pulse. Recalling the pulse area theorem again, we observe that the pulse area is almost constant throughout the propagation, near \(2\pi\). The result is consistent because, as the pulse amplitude decreases, the pulse width increases, maintaining a constant area under the curve. Therefore, an absorbing QD medium can exhibit the SIT phenomena at low temperatures.

Figure 8: The normalized Rabi frequency displayed with retarded time after passing the medium for three different QD broadenings \(\sigma\). All the other parameters are the same as in Fig. 6.

### Phonon bath parameter dependence on SIT

In the simplified master equation (15), the various phonon-induced scattering rates depend on both the system and bath parameters. Hence, it is crucial to study the effect of the phonon bath on the SIT dynamics. The phonon contribution comes into the picture in two ways: one is the reduced Rabi frequency, which depends on \(\langle B\rangle\), and the other is the phonon-induced scattering rates connected with the phonon spectral density function. Therefore, increasing the phonon bath temperature reduces the value of \(\langle B\rangle\) and of \(\hbar\omega/2K_{B}T\) present in the expression for \(\phi(\tau)\) given in Eq. (13). Consequently, the effective coupling between the QD and the applied field gets reduced, but the phonon-induced decay rates get enhanced.
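To make this temperature dependence concrete, Eq. (10) can be integrated numerically. The short sketch below is a rough estimate using the spectral-density parameters quoted above (\(\alpha_{p}=0.03\) ps\({}^{2}\), \(\omega_{b}=1\) meV); it reproduces \(\langle B\rangle\approx 0.95\) at T = 4.2 K and also evaluates the validity parameter of Eq. (14) at the peak Rabi frequency.

```python
import numpy as np
from scipy.integrate import quad

HBAR, KB = 0.6582, 0.08617          # meV*ps and meV/K
alpha_p, wb = 0.03, 1.0 / 0.6582    # coupling (ps^2) and cutoff frequency (rad/ps)

def mean_B(T):
    # <B> from Eq. (10) with J(w) = alpha_p * w^3 * exp(-w^2 / 2 wb^2)
    integrand = lambda w: (alpha_p * w * np.exp(-w**2 / (2 * wb**2))
                           / np.tanh(HBAR * w / (2 * KB * T)))
    I, _ = quad(integrand, 1e-9, 12 * wb)   # start slightly above 0 to avoid 0/0
    return np.exp(-0.5 * I)

Omega0 = 0.2 / HBAR                  # peak Rabi frequency, 0.2 meV, in rad/ps
for T in (4.2, 10.0, 20.0):
    B = mean_B(T)
    validity = (Omega0 / wb) ** 2 * (1.0 - B**4)   # Eq. (14): must be << 1
    print(f"T = {T:4.1f} K:  <B> = {B:.3f},  validity parameter = {validity:.3f}")
```

The decrease of \(\langle B\rangle\) with temperature directly weakens the effective coupling \(\Omega_{R}=\langle B\rangle|\Omega|\), which is the trend examined next.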
From Fig. 9, we notice that the final pulse shape experiences more deformation at higher temperatures. The peak of the output pulse is also strongly reduced at the higher temperature T = 20 K. Therefore, the bath temperature should be minimised to observe SIT in the QD medium.

Figure 9: The plot of the Rabi frequency envelope with time at a propagation distance \(\zeta\eta/\gamma_{n}=50\) for different phonon bath temperatures at the resonance condition \(\Delta_{c}=0\). The common parameters are \(\Theta(0)=2\pi\), \(\tau_{0}=6.373\) ps, \(\gamma_{n}\tau_{c}=40\), \(\alpha_{p}=0.03\) ps\({}^{2}\), \(\omega_{b}=1\) meV, \(\gamma=\gamma_{d}=2\) \(\mu\)eV (2 ns). The figure displays four different configurations: the system without a phonon bath (black) and with a phonon bath at a temperature T = 4.2 K (red), 10 K (blue), 20 K (green).

Another controlling factor for SIT is the interaction strength between the QD and the phonon bath. An increment of the system-bath coupling leads to a reduction of the coherence in the system. This statement is understandable by looking at the phonon correlation function shown in Eq. (13). Thus, the final pulse shape for equal propagation distances is significantly modified by the electron-phonon coupling constant, as shown in Fig. 10.

Figure 10: The Rabi frequency envelope with time at a propagation distance \(\zeta\eta/\gamma_{n}=50\) for different electron-phonon coupling strengths \(\alpha_{p}\) at the resonance condition \(\Delta_{c}=0\). All the parameters are the same as in Fig. 9, except T = 4.2 K, with electron-phonon couplings \(\alpha_{p}=0.03\) ps\({}^{2}\) (red), 0.06 ps\({}^{2}\) (blue), 0.12 ps\({}^{2}\) (green).

Therefore, we also have to ensure that the QD interacts weakly with the bath to obtain the SIT phenomena in the QD medium.

### Higher pulse area and pulse breakup

Finally, we discuss the behaviour of a pulse propagating through the absorbing QD medium with a pulse area higher than \(2\pi\). We therefore consider the next stable pulse area solution, \(4\pi\), for further investigation. The numerical result of the pulse propagation in both space and time is shown in Fig. 11. Unlike the \(2\pi\) pulse case, here the initial pulse breaks into two pulses as it travels through the medium. This phenomenon is also well explained by the pulse area theorem, where a \(2n\pi\) pulse is split into \(n\) separate \(2\pi\) pulses. Surprisingly, the two pulses emerging from the initial breakup are not identical in shape. One pulse gets sharper and the other gets broader in the time domain, each adjusting its peak value such that the area under the curve is \(2\pi\). The broader pulse component shows a prominent time delay, whereas the sharper pulse component propagates with a tiny time delay. As a result, the total pulse area is approximately constant throughout the propagation distance, near \(4\pi\).

Figure 11: The propagation dynamics of a \(4\pi\) area pulse in an absorbing QD medium as a function of both space and time at the resonance condition \(\Delta_{c}=0\). All other parameters are the same as in Fig. 6.

## IV Conclusions

We have investigated the SIT phenomena in an inhomogeneously broadened semiconductor QD medium. In our model, we have included the effect of phonons in the total Hamiltonian to describe the modified optical properties of the QD in the presence of a thermal environment. We then adopted the polaron ME formalism to analytically derive the simplified ME with various phonon-induced decay rates.
These phonon-induced scattering rates are plotted against detuning and time, which verifies the presence of the low-temperature asymmetry between phonon-induced pumping and decay in our system. We numerically solve the density matrix equation and the Maxwell equation self-consistently with suitable parameters. We observe that stable pulse propagation is possible in the QD medium with a pulse area slightly higher than \(2\pi\), depending on the phonon bath temperature. The physical mechanism of SIT is clearly understood by analyzing the absorption and dispersion of the medium. The leading edge of the pulse gets absorbed by the medium, whereas the trailing edge of the pulse experiences gain; hence, the pulse shape remains intact while propagating through a medium of short length. However, for longer propagation distances, we find that even though pulse propagation through the medium is possible, the propagating pulse gets absorbed and broadened. The final pulse shape is preserved on exiting the medium. Increasing the phonon bath temperature and coupling produces more deformation in the final pulse shape, as they destroy the coherence in the system. Finally, we explore the propagation of a \(4\pi\) pulse in the QD medium, which shows the prominent pulse breakup phenomena reported earlier in the literature. Therefore, our investigation ensures that a short pulse can propagate through the considered QD medium with a tiny change in shape. Hence, this work may have potential applications in quantum communication, quantum information, and mode-locking.
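As a compact numerical companion to the pulse-area picture used throughout this work, the following sketch integrates the bare area theorem of Eq. (29) (which ignores the phonon-induced corrections discussed above), assuming \(\alpha=10\) mm\({}^{-1}\) as in Fig. 3:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 10.0   # optical extinction per unit length (mm^-1), as in Fig. 3

# McCall-Hahn area theorem, Eq. (29): dTheta/dz = -(alpha/2) sin(Theta)
rhs = lambda z, theta: -0.5 * alpha * np.sin(theta)

for theta0 in (0.6, 1.2, 2.6, 3.6):   # initial areas, in units of pi
    sol = solve_ivp(rhs, (0.0, 2.0), [theta0 * np.pi], rtol=1e-8)
    print(f"Theta(0) = {theta0:.1f} pi  ->  Theta(z = 2 mm) = {sol.y[0, -1] / np.pi:.3f} pi")

# The areas flow to the nearest even multiple of pi (here 0, 2pi, 2pi, 4pi):
# 2n*pi pulses are stable, odd multiples of pi are unstable, and a 4pi input
# evolving at the 4pi fixed point is consistent with its breakup into two
# 2pi pulses, as seen in Fig. 11.
```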
2306.10284
Far-Ultraviolet to Near-Infrared Observations of SN 2023ixf: A high energy explosion engulfed in complex circumstellar material
We present early-phase panchromatic photometric and spectroscopic coverage spanning far-ultraviolet (FUV) to the near-infrared (NIR) regime of the nearest hydrogen-rich core-collapse supernova in the last 25 years, SN 2023ixf. We observe early 'flash' features in the optical spectra due to a confined dense circumstellar material (CSM). We observe high-ionization absorption lines (FeII, MgII) in the ultraviolet spectra from very early on. We also observe a multi-peaked emission profile of H-alpha in the spectrum beginning ~16 d, which indicates ongoing interaction of the SN ejecta with a pre-existing shell-shaped CSM having an inner radius of ~75 AU and an outer radius of ~140 AU. The shell-shaped CSM is likely a result of enhanced mass loss ~35-65 years before the explosion assuming a standard Red-Supergiant wind. The UV spectra are dominated by multiple highly ionized narrow absorption features and broad emission features from elements such as C, N, O, Si, Fe, and Ni. Based on early light curve models of Type II SNe, we infer that the nearby dense CSM confined to (7+-3)e14cm (~45 AU) is a result of enhanced mass loss (10^{-3.0+-0.5} Msol/yr) two decades before the explosion.
Rishabh Singh Teja, Avinash Singh, Judhajeet Basu, G. C. Anupama, D. K. Sahu, Anirban Dutta, Vishwajeet Swain, Tatsuya Nakaoka, Utkarsh Pathak, Varun Bhalerao, Sudhanshu Barway, Harsh Kumar, Nayana A. J., Ryo Imazawa, Brajesh Kumar, Koji S Kawabata
2023-06-17T07:47:12Z
http://arxiv.org/abs/2306.10284v2
# Far-Ultraviolet to Near-Infrared Observations of SN 2023ixf: A high energy explosion engulfed in complex circumstellar material

###### Abstract

We present early-phase panchromatic photometric and spectroscopic coverage spanning far-ultraviolet (FUV) to the near-infrared (NIR) regime of the nearest hydrogen-rich core-collapse supernova in the last 25 years, SN 2023ixf. We observe early 'flash' features in the optical spectra due to a confined dense circumstellar material (CSM). We observe high-ionization absorption lines (Fe ii, Mg ii) in the ultraviolet spectra from very early on. We also observe a multi-peaked emission profile of H \(\alpha\) in the spectrum beginning \(\sim 16\) d, which indicates ongoing interaction of the SN ejecta with a pre-existing shell-shaped CSM having an inner radius of \(\sim\) 75 AU and an outer radius of \(\sim\) 140 AU. The shell-shaped CSM is likely a result of enhanced mass loss \(\sim\) 35 - 65 years before the explosion assuming a standard Red-Supergiant wind. The UV spectra are dominated by multiple highly ionized narrow absorption features and broad emission features from elements such as C, N, O, Si, Fe, and Ni. Based on early light curve models of Type II SNe, we infer that the nearby dense CSM confined to \(7\pm 3\times 10^{14}\) cm (\(\sim\) 45 AU) is a result of enhanced mass loss (\(10^{-3.0\pm 0.5}\) M\({}_{\odot}\) yr\({}^{-1}\)) two decades before the explosion.

Core-collapse supernovae (304); Type II supernovae (1731); Supernova dynamics (1664); Red supergiant stars (1375); Supernovae (1668); Observational astronomy (1145)

## 1 Introduction

Massive stars (\(\gtrsim 8\) M\({}_{\odot}\)) that meet their fate in explosive phenomena are termed core-collapse supernovae (CCSNe). CCSNe are either hydrogen-rich (Type II) or hydrogen-poor (Ib, Ic) (Filippenko, 1997). Recent advancements in all-sky surveys (e.g., ZTF, ATLAS) have made it possible to discover young supernovae when rapid changes occur in their light curves, spectral energy distributions, and spectral evolution, apart from the increasing brightness (Khazov et al., 2016; Bruch et al., 2022). The early evolution of a good fraction (\(>36\%\)) of Type II SNe is dominated by narrow emission features associated with confined dense circumstellar material (CSM, Bruch et al., 2021, 2022). The characteristics of the nearby dense CSM are visible in the spectral sequence as "flash" features consisting of narrow high-ionization lines that last a few to several days depending on the radius and density of the CSM (Gal-Yam et al., 2014; Yaron et al., 2017; Jacobson-Galan et al., 2022). The flash features are caused by the ionizing photons that result when the shock breaks out from the stellar surface and flashes/ionizes the nearby circumstellar material.
Some authors (Kochanek, 2019; Jacobson-Galan et al., 2022) have noted that the ionization from shock breakout lasts for a few hours only, and to get prolonged flash features, another photon source is required, such as ejecta-CSM interaction. The earliest detailed time-series observations of "flash spectroscopy" were obtained for SN 2013fs (Yaron et al., 2017). The confined CSM (\(<10^{15}\) cm) of SN 2013fs was indicated by the disappearance of the flash features, and it was consistent with the radio non-detection (Yaron et al., 2017). It was argued that this could only result if the progenitor had undergone a short-lived episode of enhanced mass loss just a few years before the explosion. Even though the rise of all-sky surveys has led to an order-of-magnitude increase in the early detection (and follow-up) of such events (Blagorodnova et al., 2018; Nicholl, 2021), the physics behind the specifics of such interaction and the origins of the CSM are still not definitively understood, and the associated observables such as light curves are not very well constrained (Fuller, 2017; Wu and Fuller, 2021; Dessart and Hillier, 2022; Ko et al., 2022; Moriya et al., 2023). CCSNe have been studied extensively in the optical and near-infrared (NIR) regimes, but extensive studies in the ultraviolet (UV) regime are still limited (Pritchard et al., 2014; Brown et al., 2007; Vasylyev et al., 2023). A crucial aspect of observational investigations in the UV is the requirement of observations from space-based missions (Vasylyev et al., 2022; Bostroem et al., 2023), for which scheduling time-disruptive Target-of-Opportunity (ToO) observations is not rapid for a majority of missions. The flux in the UV declines very rapidly, requiring prompt observations and follow-ups. The UV emission from young CCSNe allows the investigation of hot and dense ejecta and/or the presence of CSM when the photosphere is located in the outer layers of the progenitor star (Bufano et al., 2009). Many Type II SNe show a nearly featureless early optical spectral sequence, unlike the far-UV and near-UV, which showcase a plethora of metal features. The detection of these features can be used to determine the composition of the outer envelope of the pre-SN star, the temperature of the outer layers of the ejecta, or the CSM and its characteristics (Dessart et al., 2022; Bostroem et al., 2023). SN 2023ixf was discovered on 2023 May 19 17:27:15.00 UT (JD 2460084.23) in the galaxy M 101 at \(\sim 14.9\) mag in the 'clear' filter (Itagaki, 2023) and classified as a Type II SN (Perley and Gal-Yam, 2023; Teja et al., 2023). The pre-discovery photometry from the Zwicky Transient Facility (ZTF) and other Transient Name Server (TNS) alerts provides tight constraints on the explosion epoch. Using the last non-detection (JD 2460083.31) and first detection (JD 2460083.32) (Chufarin et al., 2023), we find the explosion epoch \(\rm t_{exp}=JD~{}2460083.315\pm 0.005\), which has been used throughout this work. We note that the last non-detection used is not very deep (\(>18\) mag), and if we consider the deeper non-detection (\(>20.5\) mag, Mao et al., 2023) on JD 2460083.16, the explosion epoch has a marginal change (of \(\sim 0.08\) d) to JD 2460083.235. Several professional and amateur astronomers have followed up on SN 2023ixf, one of the nearest CCSNe in the last 25 years. Various time-domain groups across the globe have been monitoring it since soon after its discovery, and results based on the early observations have already been presented.
The early-phase optical and NIR photometry and optical spectroscopy have been presented by Yamanaka et al. (2023); Hosseinzadeh et al. (2023); Jacobson-Galan et al. (2023). The presence of flash features in the spectra and the increased luminosity are interpreted as due to the presence of nitrogen/helium-rich dense CSM and the interaction of the supernova ejecta with it (Yamanaka et al., 2023; Jacobson-Galan et al., 2023). By comparing the early light curve with shock-cooling emission models, Hosseinzadeh et al. (2023) suggested that the progenitor of SN 2023ixf could be a red supergiant with radius \(410\pm 10\) R\({}_{\odot}\). High-resolution spectroscopy revealed that the confined CSM is asymmetric (Smith et al., 2023). Pre-explosion imaging data of the SN 2023ixf site were analyzed in recent works, constraining the mass of the progenitor between \(12-17\) M\({}_{\odot}\) (Jencson et al., 2023; Pledger and Shara, 2023; Soraisam et al., 2023). These estimates are well within the range of previously detected CCSNe progenitors (Smartt, 2009; Van Dyk, 2017). This letter presents the panchromatic evolution of SN 2023ixf spanning the far-ultraviolet (FUV) to near-infrared (NIR) wavelengths during the first three weeks since its discovery. The flow of the paper is as follows: In Section 2, we estimate the distance to the host galaxy and its extinction and briefly describe the sources of data acquisition and the reduction procedure. Further, we present our spectroscopic observations in Section 3, along with their analysis and modeling in different regimes. In Section 4, we describe the light curve evolution and its early-phase analysis. We summarize and discuss this early-phase work in Section 5.

## 2 Observations and Data Reduction

SN 2023ixf exploded in the outer spiral arm of the host galaxy, M 101, a face-on giant spiral galaxy that lies comparatively close to the Local Group. Tikhonov et al. (2015) estimated a mean distance of \(6.79\pm 0.14\) Mpc (\(\mu=29.15\pm 0.05\) mag) to M 101 using the tip of the RGB method (Lee et al., 1993) with low uncertainty. Riess et al. (2022) used Cepheids to estimate a distance of \(6.85\pm 0.15\) Mpc (\(\mu=29.18\pm 0.04\) mag). We adopt a mean distance of \(6.82\pm 0.14\) Mpc (\(\mu=29.17\pm 0.04\) mag). The gas-phase metallicity was computed by Garner et al. (2022) using various H ii regions in the galaxy, who estimated an oxygen abundance of \(12+\log[{\rm O/H}]\sim 8.7\) in the outer spiral arms of the galaxy, which is similar to solar metallicity (Asplund et al., 2009). Galactic reddening in the line of sight of SN 2023ixf inferred from the dust-extinction map of Schlafly and Finkbeiner (2011) is \(E(B-V)\) = 0.0077\(\pm\)0.0002 mag. Using high-resolution data, Lundquist et al. (2023) computed equivalent widths of the Na I D1 and D2 lines to be 0.118 A and 0.169 A, respectively. Using the relation from Poznanski et al. (2012), we infer a mean host reddening of \(E(B-V)\) = 0.031 \(\pm\) 0.011 mag from Na I D1 and D2. A total reddening of \(E(B-V)\) = 0.039 \(\pm\) 0.011 mag is adopted for SN 2023ixf, which is consistent with Smith et al. (2023).
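As a quick numerical check of the adopted quantities, the sketch below reproduces the distance modulus and reddening to within rounding. It assumes the combined Na I D1+D2 calibration of Poznanski et al. (2012), \(\log_{10}E(B-V)=1.17\,\mathrm{EW}-1.85\), which matches the host-reddening value quoted above; whether the authors used exactly this form of the relation is an assumption.

```python
import numpy as np

# Distance: mean of the TRGB (Tikhonov et al. 2015) and Cepheid (Riess et al. 2022) values
d_mpc = np.mean([6.79, 6.85])                      # -> 6.82 Mpc
mu = 5.0 * np.log10(d_mpc * 1e6 / 10.0)            # distance modulus -> 29.17 mag

# Host reddening from Na I D equivalent widths (Lundquist et al. 2023)
ew_tot = 0.118 + 0.169                             # EW(D1) + EW(D2), in Angstrom
ebv_host = 10.0 ** (1.17 * ew_tot - 1.85)          # -> ~0.031 mag
ebv_gal = 0.0077                                   # Schlafly & Finkbeiner (2011)

print(f"mu = {mu:.2f} mag, E(B-V)_host = {ebv_host:.3f} mag, "
      f"E(B-V)_total = {ebv_gal + ebv_host:.3f} mag")   # total ~0.039 mag
```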
### Optical and Near-Infrared

We carried out broadband optical photometric observations in the SDSS \(u^{\prime}g^{\prime}r^{\prime}i^{\prime}z^{\prime}\) filters beginning 2023 May 20 UT, using the robotic 0.7-m GROWTH-India telescope (GIT, Kumar et al., 2022) located at the Indian Astronomical Observatory (IAO), Hanle, India. Data were downloaded and processed with the standard GIT image processing pipeline described in Kumar et al. (2022). While standard processing was sufficient for the \(g^{\prime}r^{\prime}i^{\prime}z^{\prime}\) bands, the \(u^{\prime}\) band data did not have enough stars for automated astrometry using astrometry.net (Lang et al., 2010) and subsequent zero-point estimation. The zero point was computed manually using several non-variable SDSS stars available in the SN field for the GIT images. Optical spectroscopic observations of SN 2023ixf were carried out using the HFOSC instrument mounted on the 2-m Himalayan Chandra Telescope (HCT), IAO (Prabhu, 2014). The spectroscopic data were reduced in a standard manner using the packages and tasks in IRAF (for details, refer to Teja et al., 2023). Near-infrared (NIR) data were obtained from the Hiroshima Optical and Near-InfraRed Camera (HONIR; Akitaya et al., 2014) mounted on the 1.5-m Kanata Telescope. The NIR data were reduced using standard procedures in IRAF, and the calibration was done using secondary stars from the 2MASS catalog (Skrutskie et al., 2006).

### Ultraviolet

SN 2023ixf was observed by the UltraViolet Imaging Telescope (UVIT; Kumar et al., 2012; Tandon et al., 2017) on board _AstroSat_ on 2023 May 25 & 30 UT in both imaging and spectroscopic modes. However, we could only use the imaging data from May 30 for photometry, since the images from the earlier epoch are saturated. The spectra obtained at all epochs are of good quality and have been used for this study. We also triggered _UVIT_ through several Target-of-Opportunity (ToO) proposals, but due to technical constraints, observations against our ToO request could be undertaken only on 2023 June 11. However, data obtained through ToO observations are immediately made public at the Indian Space Science Data Center (ISSDC) portal1, and we have used the Level 1 (raw) and Level 2 (processed) data files available at ISSDC in this work. All the UVIT observations are listed in Table 1. _UVIT_ observations were performed with the FUV _F172M_ and _F148W_ filters and with the FUV gratings _Grating1_ and _Grating2_. These two gratings are mounted on the FUV filter wheel at positions F4 and F6, respectively (Kumar et al., 2012), and have perpendicular dispersion axes. The _AstroSat-UVIT_ data were pre-processed with CCDLAB (Postma and Leahy, 2017) following the steps described in Postma and Leahy (2021). Aperture photometry was performed using a 12-pixel (\(5^{\prime\prime}\)) aperture and calibrated following the procedures mentioned in Tandon et al. (2020). Spectral extraction and calibrations were performed manually following the procedures described in Tandon et al. (2020) and Dewangan (2021) using IRAF and Python.

Footnote 1: ISSDC Portal

SN 2023ixf was also monitored extensively by the Ultraviolet Optical Telescope (UVOT; Roming et al., 2005) onboard the Neil Gehrels Swift Observatory (Gehrels et al., 2004) beginning 2023 May 21. We utilize the publicly available data obtained from the Swift Archives2. Photometry was performed using the UVOT data analysis software in HEASoft, following the procedure described in Teja et al. (2022).
To check for contamination, we looked at the archival _Swift_ data of the host galaxy M 101 (ID 00032081). The count rates at the SN site for an aperture similar to that used for the SN photometry are insignificant and comparable to the background. As the SN was very bright, most photometric data points were saturated. We checked the saturate and sss_factor flags from the output and discarded all the saturated and unusable data points based on those flags. Spectroscopic data reduction for the _Swift_ UV-grism data was performed using the standard UVOTPY package, which includes the latest grism calibrations and corrections (Kuin, 2014). Further, multiple spectra captured intra-night were summed using the uvotspec.sum_PHAfiles program in UVOTPY to increase the overall SNR. The first two spectra, separated by just 0.1 d, showed intra-night flux variability due to the rapid rise; hence, these two spectra were not summed. Around 1800 A, a few spectra were contaminated by a strong source; therefore, we have considered the UVOT spectra beyond 1900 A only.

### X-rays

SN 2023ixf was also observed with the Soft X-ray Telescope (SXT) covering the 0.3-7.0 keV energy band (Singh et al., 2016, 2017) aboard _AstroSat_ (Singh et al., 2014). Data were obtained in photon counting (PC) and fast window (FW) modes over multiple orbits starting May 25 (see Table 1). Orbit-wise Level 2 data were downloaded from ISSDC and merged into a single cleaned event file using the standard _Julia_-based merger tool. Images, spectra, and light curves were produced using XSELECT v2.5a from HEASoft 6.30.1. We do not obtain a statistically significant detection of the source in the data obtained from the SXT observations, possibly due to low exposure times and pointing offsets. However, it was detected by other X-ray facilities, primarily in hard X-rays, with the following reports on ATel: _NuSTAR_ (May 22, Grefenstette, 2023), _ART-XC_ (May 26 and 29, Mereminskiy et al., 2023), and _Chandra_ (May 31, Chandra et al., 2023).

### Other Data Sources

As a nearby SN in one of the most well-observed host galaxies, M 101, SN 2023ixf has been monitored by many amateur astronomers and professional observatories. We supplemented our photometric dataset with various detections and non-detections of SN 2023ixf from Astronomer's Telegrams3 and TNS AstroNotes4, and include the magnitudes reported by Perley and Irani (2023); Filippenko et al. (2023); Fulton et al. (2023); Zhang et al. (2023); Limeburner (2023); Kendurkar and Balam (2023); Mao et al. (2023); Gonzalez-Carballo et al. (2023); Vannini (2023); Desrosiers et al. (2023); Fowler et al. (2023); Koltenbah (2023); Chufarin et al. (2023); D'Avanzo et al. (2023); Balam and Kendurkar (2023); Vannini and Julio (2023a,b); Singh et al. (2023).

Footnote 3: Astronomer’s Telegrams

Footnote 4: TNS AstroNote

## 3 Spectral Analysis

### Optical Spectra

The first optical spectrum of SN 2023ixf was obtained within 5 hrs of discovery by the Liverpool Telescope (Perley and Gal-Yam, 2023). Our spectroscopic follow-up with HCT began \(\sim 2\) days after the explosion. We present the spectral data obtained from HCT until \(\sim 19\) days after the explosion. The spectral sequence is shown in Figure 1. The early spectra, until \(\sim\) 10 d, show a prominent blue continuum with strong high-ionization emission features due to C iv, N iv and He ii, specifically, C iv 5805 A, C iv 7061 A, N iv 7115 A, He ii 4540 A, He ii 4686 A and He ii 5411 A, along with the Balmer lines H\(\alpha\), H\(\beta\), H\(\gamma\), and H\(\delta\).
Weak signatures of C iii 5696 A, N iii 4641 A, and He i 5876 A are also seen in the spectra. The highly ionized emission features at \(\sim\) 2.1 d are well reproduced by a combination of a narrow Lorentzian (limited by the resolution) and an intermediate-width Lorentzian of 2500 km s\({}^{-1}\). Our findings during the flash-ionization phase are similar to those reported in Yamanaka et al. (2023); Jacobson-Galan et al. (2023); Smith et al. (2023); Bostroem et al. (2023). The strength of the narrow component fades gradually, in contrast to the intermediate-width component, as the SN flux rises in the optical wavelengths. Most of the flash features in our spectral sequence disappear after \(+\)7 d. In the spectrum of 7.9 d, we observe an intermediate-width H\(\alpha\) emission at \(\sim\) 1,000 km s\({}^{-1}\) in addition to the emergence of a broad P-Cygni feature with an absorption trough. This could possibly be a residual of the ongoing interaction with the dense CSM responsible for the flash-ionized phase. A similar profile is also seen for the H\(\beta\) line. Beginning \(\sim\) 16 d (bottom-right panel in Figure 1), we observe a blue-shifted multi-peaked emission profile of H\(\,\alpha\) with a broad absorption feature, which mimics the profile of a detached atmosphere (Jeffery and Branch, 1990), and is an indication of the fast-moving SN shock encountering a low-density shell-shaped CSM (Pooley et al., 2002). The multi-peaked emission profile seen here is similar to the boxy emission profile seen during the photospheric phase in SN 2007od (Andrews et al., 2010), SN 2016gfy (Singh et al., 2019) and SN 2016esw (de Jaeger et al., 2018). We observe two absorption troughs blueward of H\(\alpha\), at 8,000 km s\({}^{-1}\) (PV; Photospheric Velocity) and 15,000 km s\({}^{-1}\) (HV; High Velocity), in the spectrum of \(\sim\) 16 d. The HV feature, labeled "Cachito" in the literature, could instead be due to the presence of Si ii 6355 A (Gutierrez et al., 2017) in the blue wing of H\(\,\alpha\). The estimated velocity (\(\sim\) 5000 km s\({}^{-1}\)) is lower than the photospheric velocity if the feature is due to Si ii.

\begin{table} \begin{tabular}{l l l l l} \hline ObsID & Date & Phase & Instrument & Time \\ & & (d) & & (ks) \\ \hline T05\_108T01\_ & 2023-05-25 & +6.9 & UVIT FUV & 7.32 \\ 9000005664 & & & SXT (FW) & 7.90 \\ T05\_110T01\_ & 2023-05-30 & +11.9 & UVIT FUV & 4.32 \\ 9000005672 & & & SXT (PC) & 8.24 \\ T05\_116T01\_ & 2023-06-11 & +23.4 & UVIT FUV & 3.48 \\ 9000005682\({}^{a}\) & & & SXT (PC) & 15.24 \\ \hline \end{tabular} \({}^{a}\) Observation against our ToO \end{table} Table 1: Log of AstroSat observations.

We also detect an analogous profile bluewards of H \(\beta\) with a similar velocity as seen in the H\(\alpha\) profile, indicating that the feature is likely due to hydrogen only. However, the possibility of Si ii blended with the HV feature of hydrogen cannot be ruled out altogether. We estimated the photospheric velocity using the minima of the absorption troughs of H \(\beta\), H \(\gamma\) and He i 5876 A. Although velocities estimated from Fe ii act as a reliable tracer of photospheric velocities (Dessart & Hillier, 2005), we used H and He line velocities as they fairly resemble the photospheric velocities early in the photospheric phase (Faran et al., 2014). Using the ejecta velocities (PV and HV) estimated above, we compute an inner radius of \(\sim 75\) AU and an outer radius of \(\sim 140\) AU for the shell-shaped CSM encountered by the SN ejecta.
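The radii quoted above follow from simple kinematics, \(r\approx vt\), evaluated at the epoch of the spectrum, and (anticipating the next paragraph) they can be converted into a wind-travel time for a standard RSG wind. A minimal sketch with the values from the text:

```python
# CSM shell radii from r ~ v * t, and the corresponding wind-travel times
KM_PER_AU = 1.496e8
DAY_S, YR_S = 86400.0, 3.156e7

t = 16 * DAY_S                   # epoch of the spectrum (~16 d), in seconds
v_pv, v_hv = 8000.0, 15000.0     # PV and HV components (km/s)

r_in = v_pv * t / KM_PER_AU      # ~ 74 AU (inner radius)
r_out = v_hv * t / KM_PER_AU     # ~ 139 AU (outer radius)

v_wind = 10.0                    # standard RSG wind (km/s; Smith 2014)
t_in = r_in * KM_PER_AU / v_wind / YR_S    # ~ 35 yr
t_out = r_out * KM_PER_AU / v_wind / YR_S  # ~ 66 yr
print(f"r ~ {r_in:.0f}-{r_out:.0f} AU; mass loss ~{t_in:.0f}-{t_out:.0f} yr ago")
```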
Assuming a standard RSG wind velocity of 10 km s\({}^{-1}\) (Smith, 2014), the progenitor of SN 2023ixf likely experienced this enhanced mass loss \(\sim 35\) - 65 years before the explosion. If we consider the wind velocity of \(\sim 115\) km s\({}^{-1}\) inferred by Smith et al. (2023) using high-resolution optical spectra, we estimate that the mass-loss episode likely occurred \(\sim 3-6\) years before the explosion.

Figure 1: Optical spectral evolution for SN 2023ixf from HCT, Perley & Gal-Yam (2023) and Stritzinger et al. (2023). The spectra are corrected for the redshift of the host galaxy M 101, and the epochs are labeled with respect to our adopted explosion epoch. **Top:** _Left:_ Early time spectral sequence of flash features in SN 2023ixf with line identification of high-ionization features and Balmer lines. The inset depicts the H \(\alpha\) profile on \(+7.9\) d having a broad P-Cygni feature and an intermediate-width Lorentzian emission. _Right:_ Evolution of the line profile of H \(\alpha\) during the flash phase. **Bottom:** _Left:_ Spectral sequence of SN 2023ixf during the photospheric phase. _Right:_ Evolution of the multi-peaked emission profile of H \(\alpha\) during the photospheric phase. HV and PV refer to the high-velocity and photospheric velocity components in the blue-shifted absorption wing of H \(\alpha\).

### UV Spectra

We present the FUV (1250 - 1800 A) and NUV (1900 - 3400 A) spectral evolution of SN 2023ixf obtained with _AstroSat_ and _Swift_, respectively, in Figure 2. Predominantly, the UV lines arise due to re-emitted UV emission from highly ionized species created by the shock wave expanding into the ambient material (Williams, 1967; Chevalier, 1981; Fransson, 1984; Chevalier & Fransson, 1994). Along with the emission lines, the UV spectra are dominated by a large number of absorption lines from the interstellar matter (ISM) in the Milky Way and the host galaxy due to highly ionized states of C, N, O, Si, etc. (Fransson, 1984). Further, the UV spectra are not a simple continuum with isolated emissions and absorptions but a continuous blend of emission and absorption features, which at times are hard to identify (Pun et al., 1995; Dessart & Hillier, 2010; Bostroem et al., 2023a). The UV spectra of Type II SNe are scarcely studied, particularly in the FUV domain, which is largely unexplored. SN 1979C (Panagia et al., 1980) was the first Type II SN observed extensively in the FUV, and SN 2022acko (Bostroem et al., 2023a) was the most recent one. For the present work, we restrict ourselves to describing the UV spectra qualitatively.

#### 3.2.1 FUV spectra

The FUV spectra of SN 2023ixf were obtained at three epochs, \(\sim\) 7 d, \(\sim\) 12 d, and \(\sim\) 23 d (see Table 1). The first FUV spectrum of SN 2023ixf was obtained around the optical maximum (Section 4). In the spectrum of \(\sim\) 7 d, we observe two strong absorption bands in the wavelength regions 1340-1400 A and 1500-1560 A, which can be attributed to a blend of all, or potentially a subset, of the following species: Ni ii 1370-1399 A and Si iv 1394-1403 A lines, and C iv, Si ii 1527 A, and Ni ii 1511 A lines, respectively (Figure 2). Due to the low redshift of SN 2023ixf and the available spectral resolution, it is difficult to discern whether the interstellar absorptions are Galactic or due to the host galaxy.
We further identify Doppler-broadened emission features originating from C iv 1550 A, He ii 1640 A, and N iii] 1750 A, marked in the top-right panel of Figure 2, similar to SN 1979C (Fransson, 1984) and SN 2022acko (Bostroem et al., 2023a). In the spectrum obtained at \(\sim\) 12 d, we continue to observe the two absorption bands but with diminishing depth. Other than the emission features observed in the spectrum of \(\sim\) 7 d, we find emission from C ii 1335 A, which could earlier have been blended with the strong absorption. Si iv and N iv] could also be observed in the wavelength region 1400-1500 A. As the flux continues to reduce in the FUV region, we see the disappearance of the He ii and N iii] emission features. We corroborate the presence of these features by modeling the FUV spectra at \(\sim\) 7 d and \(\sim\) 12 d using the synthetic spectra generation code SYNAPPS (Thomas et al., 2011). Many of the features in the spectra could be reproduced in the synthetic spectrum using the high-ionization (up to IV) species of He, C, N, O, S, Si, and Ni. More detailed spectral modeling with multiple elements is required to study these features extensively (Dessart & Hillier, 2010; Bostroem et al., 2023a). As the SN evolves further, the high density of low-ionization lines of iron-group elements (especially Fe ii and Fe iii) (Mazzali, 2000) amplifies the line blanketing in the UV regime, as is evident in the FUV spectrum of \(\sim\) 23 d, which is noisy and featureless owing to the completely extinguished continuum flux. The complete extinction of the FUV flux around +20 d is also evident in other Type II objects such as SN 2021yja (Vasylyev et al., 2022), SN 2022wsp (Vasylyev et al., 2023), and SN 2022acko (Bostroem et al., 2023a).

#### 3.2.2 NUV spectra

The first NUV spectrum, obtained at + 1.7 d, is the earliest-ever NUV spectrum for any CCSN observed after SN 1987A. Contrary to the FUV, many Type II SNe have been observed in the NUV at multiple epochs. The NUV spectral coverage of SN 2023ixf is the most comprehensive ever up to + 20 d after the explosion, with 12 spectra. We observe weak and blended absorption features in the first spectrum in the wavelength range 2300-3000 A. These absorption features continue to grow in strength and width and fully dominate the SN spectra at + 6.4 d. The features arise particularly due to Fe ii, Ni ii and Mg ii species (Brown et al., 2007; Bostroem et al., 2023a; Vasylyev et al., 2023). The prominence of these absorption features weakens along with the increased line blanketing, except for the feature present around 2900 A, which is observed even in the last spectrum presented here, at +19.5 d. The flux in the NUV started rising from the first epoch and reached a maximum at \(\sim\) 5 d after the explosion as the SED peak transitioned into the NUV. In the subsequent epochs, the NUV flux starts declining and drops to the level of the first epoch at around \(\sim\) 14 d. There is a significant drop in the flux between +5.5 d and +6.4 d in the region \(<\) 2200 A, observed with the change in the shape of the SED, as apparent in the left panel of Figure 2. This is probably due to the rapid cooling of the SN ejecta coupled with increased line blanketing in the UV wavelengths due to metal lines (Bufano et al., 2009). The effect of line blanketing in the region \(<\) 3000 A is much more prominent after \(+\) 13.5 d, and it continues to dominate, with fluxes declining in this region.
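As a back-of-the-envelope illustration of the cooling argument above (ignoring line blanketing and the change in photospheric radius), the Planck function can be compared at two illustrative temperatures; the temperatures below are assumed for illustration, not fitted values:

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # Planck, c, Boltzmann (cgs)

def planck_lambda(wav_cm, temp):
    """Blackbody spectral radiance B_lambda (cgs units)."""
    x = H * C / (wav_cm * KB * temp)
    return (2.0 * H * C**2 / wav_cm**5) / np.expm1(x)

for wav_A in (2000.0, 5000.0):
    ratio = planck_lambda(wav_A * 1e-8, 2.0e4) / planck_lambda(wav_A * 1e-8, 1.2e4)
    print(f"{wav_A:.0f} A: flux ratio (20 kK / 12 kK) ~ {ratio:.0f}")
# -> a drop of roughly a factor of ~11 at 2000 A versus only ~3 at 5000 A,
#    so modest cooling suppresses the NUV far more strongly than the optical.
```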
The NUV spectrum of SN 2023ixf is compared with a few Type II SNe such as ASASSN-15oz (Bostroem et al., 2019), SN 2017ew (Szalai et al., 2019), and SN 2021yja (Vasylyev et al., 2022) at similar epochs in the bottom-right panel of Figure 2. Two spectra of SN 2021yja (\(+\) 9 d and \(+\) 14 d) are from HST. All other spectra used for comparison are from Swift/UVOT. Initially, the UV spectra of Type IIP SNe were thought to be homogeneous (Gal-Yam et al., 2008), but as the number grew, the dissimilarities became more evident (Bostroem et al., 2023; Vasylyev et al., 2023). The absorption feature around 2700 A arising from Mg ii is observed in all the SNe. The feature around 2900 A was observed in SN 2023ixf, SN 2017ew (IIL) (Szalai et al., 2019), SN 2022wsp (IIP) (Vasylyev et al., 2023) and SN 2022acko (IIP) (Bostroem et al., 2023). Detailed modeling for SN 2022acko revealed it to be an absorption window between the nearby Fe ii, Cr ii, and Ti ii absorption complexes (Bostroem et al., 2023). This absorption feature is also observed in SN 2021yja in the spectrum of \(+\) 14 d. The shape of the continuum is very similar prior to \(+\) 10 d for SN 2021yja and SN 2023ixf. As the spectra evolve, a sharp cutoff in flux \(<3000\) A could be observed beyond \(+\) 10 d in all the SNe compared, indicating significant line blanketing. Around \(+\) 14 d, the differences in the spectra are very apparent, especially in ASASSN-15oz, where we find strong emission/absorption features in the spectrum below 2700 A, whereas the others are devoid of flux in comparison to the regions beyond 2700 A. Slightly higher flux beyond 3000 A could indicate ongoing interaction (Vasylyev et al., 2022). More SNe need to be observed in the UV, specifically within the first three weeks of the explosion. This will be crucial in understanding the progenitor characteristics, its environment, and its effects on the early evolution, and will aid in testing the homogeneity of their spectra (Kulkarni et al., 2021; Bostroem et al., 2023).

Figure 2: **Left:** NUV spectral evolution for SN 2023ixf obtained using Swift/UVOT. **Right:** _Top:_ FUV spectral evolution obtained using Astrosat/UVIT and the SYNAPPS fit to the spectrum of \(\sim\) 7 d and \(\sim\) 12 d. _Bottom:_ Spectral comparison of NUV spectra with other Type II SNe.

## 4 Light Curve Analysis

The multiband light curves based on observations from the various facilities described in Section 2 are shown in Figure 3. We converted all the pre- and post-discovery public data to the AB magnitude scale and included them with our dataset using the transformations described in Blanton and Roweis (2007). The public dataset reported is very helpful in putting tight constraints on the explosion epoch (Hosseinzadeh et al., 2023). We do not see, in the \(V\)-band light curve of SN 2023ixf, the fast-declining phase after maximum (the \(s1\) phase) usually attributed to the initial cooling phase post-breakout (Anderson et al., 2014). Instead, it declines very slowly for the initial few days, at \(1.18^{+0.49}_{-0.51}\) mag 100 d\({}^{-1}\), right after it reached a peak \(V-\)band magnitude of \(-18.06\pm 0.07\) mag around \(\sim 5\) d after the explosion. The peak magnitude falls at the brighter end of the Type II SN distribution. The peak V-band brightness is comparable with SN 2013by (Valenti et al., 2015) and SN 2014G (Terreran et al., 2016), which were classified as Type IIL, although with many similarities to the Type IIP subclass. SN 2014G also showed flash-ionization features in its early spectral evolution.
While the initial decline of SN 2023ixf is inconsistent with that of Type IIL SNe, its evolution at later phases is yet to be probed. Although the early spectra indicate interaction with a nearby dense CSM, SN 2023ixf is not extremely bright in the UV bands like Type IIn SNe. The observed rise time of \(\sim 4-5\) d is shorter than that of other normal Type II SNe, which, on average, take \(\sim 10\) days to reach the peak (Valenti et al., 2016). We compare the \(g-r\) color with similar events that showed flash features, such as SN 2013by (Valenti et al., 2015; Black et al., 2017), SN 2014G (Terreran et al., 2016), and the bluest Type II SN 2020pni (Terreran et al., 2022). The color evolution is similar to these events for the initial \(\sim\) 20 d but slightly redder than SN 2020pni. The NIR light curves are also presented in Yamanaka et al. (2023) up to a week post-explosion. We show the evolution beyond that and observe that the flux increases in the NIR, possibly due to pre-existing dust around the ejecta. The presence of pre-SN dust is also described in Neustadt et al. (2023). The early prolonged flash features indicated the presence of a dense CSM around the progenitor. Recently, Moriya et al. (2023) provided a comprehensive grid of model light curves that could shed light on the structure of the CSM and its effects on the early light curves of interacting Type II SNe. In their work, a confined CSM is attached above a radius \(R_{0}\) for five progenitors with mass ranging from 10 to 18 M\({}_{\odot}\). The CSM density structure follows from Moriya et al. (2018), whereas the wind velocity \(v_{wind}\) at a distance \(r\) was taken to have the form

\[v_{wind}(r)=v_{0}+(v_{\infty}-v_{0})\left(1-\frac{R_{0}}{r}\right)^{\beta}, \tag{1}\]

where \(v_{0}\) and \(v_{\infty}\) are the initial wind velocity at the surface of the progenitor and the terminal wind velocity, respectively, and \(\beta\) is a wind structure parameter that determines the efficiency of wind acceleration. These model light curves can be used to constrain the very early light curve behavior of Type II SNe. Our work utilizes the well-sampled \(g\)-band light curve of SN 2023ixf for comparison with the model grid of interacting Type II SNe generated by Moriya et al. (2023). We used the models with \({}^{56}\)Ni mass in the typical range of 0.01 to 0.04 M\({}_{\odot}\) (Anderson et al., 2014). Furthermore, we found that the initial light curves are insensitive to the \({}^{56}\)Ni mass. We iterated over each parameter (E\({}_{\rm exp}\), \(\beta\), \(R_{CSM}\), and \(\dot{M}\)) in succession, keeping the others fixed, across its full range in a single run. This procedure was repeated for the 12, 14, 16, and 18 M\({}_{\odot}\) progenitor models. We categorically reject models which show significant deviations from the observed light curves based on their peak luminosities and rise times. We then repeated this for the other parameters, progressively constraining the values of the previously explored ones. The best-fitting models for each progenitor are shown in Figure 3. We note that the slow early rise till day 2 is not captured by any of the models, and the later evolution is such that either the rise or the plateau could be matched, but not the entire light curve. Since we are primarily concerned with the initial rise, we do not probe this further; detailed hydrodynamical modeling specific to this particular event will be required to understand the entire light curve evolution.
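To make Equation 1 and the CSM parameters discussed below concrete, a minimal sketch is given here. It assumes the standard steady-wind density \(\rho(r)=\dot{M}/[4\pi r^{2}v_{wind}(r)]\) (the Moriya et al. (2018) CSM structure differs in detail), and the parameter values are illustrative picks from the ranges discussed in this work, not fitted values.

```python
import numpy as np

M_SUN_G, YR_S = 1.989e33, 3.156e7

def v_wind(r, r0, v0=1.0e5, v_inf=1.0e6, beta=1.0):
    """Beta-law wind velocity of Equation 1 (cgs: r, r0 in cm; v in cm/s)."""
    return v0 + (v_inf - v0) * (1.0 - r0 / r) ** beta

def rho_csm(r, mdot_msun_yr, **wind_kw):
    """Steady-wind CSM density rho = Mdot / (4 pi r^2 v_wind), in g/cm^3."""
    mdot = mdot_msun_yr * M_SUN_G / YR_S
    return mdot / (4.0 * np.pi * r**2 * v_wind(r, **wind_kw))

r0 = 6.0e13                               # illustrative RSG surface radius
r = np.array([2.0e14, 5.0e14, 1.0e15])    # within/around the confined CSM
print(rho_csm(r, mdot_msun_yr=1e-3, r0=r0, v0=1e5, v_inf=1e6, beta=1.0))
# -> densities from ~1e-13 down to a few times 1e-15 g/cm^3, bracketing the
#    ~1e-14 g/cm^3 average quoted in the next paragraph.
```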
Further, the degeneracy in the progenitor masses could not be lifted by these models, but they give a very tight constraint on the radius of the outer CSM by utilizing the rise times of the model light curves. The dense CSM is confined to \(4.0-10.0\times 10^{14}\) cm. Further, \(\beta\) varies from 0.5 to 1.5 depending on the progenitor mass, which is close to the typical values for RSGs (\(\beta>1\)). The \(\beta<1\) values obtained for some progenitor masses would imply slightly faster wind acceleration and hence a less dense CSM in the vicinity, which is not the case for SN 2023ixf. The mass-loss rate is also slightly on the higher end (\(10^{-3.0\pm 0.5}\) M\({}_{\odot}\) yr\({}^{-1}\)). The average density of the CSM comes out to be \(\sim 10^{-14}\) g cm\({}^{-3}\), which is in line with the values obtained in Bostroem et al. (2023b) but below the values inferred in Jacobson-Galan et al. (2023) from detailed spectral modeling. The mass-loss rates align with the density limits of the CSM derived from the non-detection of radio emission (230 GHz) at early times (Berger et al., 2023). For a typical RSG radius (\(\sim 500~{}R_{\odot}\)), the above would translate to mass loss \(\sim 14-18\) years before the explosion. But as seen in Smith et al. (2023), wind speeds measured using high-resolution early spectra are one order of magnitude higher than what is assumed in the model parameters, which would place an eruptive mass-loss episode around 2 years before the explosion. However, wind acceleration cannot be ruled out. Another parameter that is tightly constrained by the models is the explosion energy. Only the models with explosion energies of more than 2.0 foe (1 foe \(=10^{51}\) erg) could match the observed \(g\)-band flux. The required explosion energy increases as the progenitor mass is increased, and the values obtained are higher than for the usual Type II SNe. In a recent work, Khatami & Kasen (2023) presented various light curves of transients arising from interacting SNe, covering cases from no CSM to a very heavy CSM. Considering the latent space of luminosity and rise time presented in that work, we find that the light curve evolution of SN 2023ixf (for the period presented in this work) appears to be similar to the model light curves for shock breakout in a light-CSM scenario. Comparing the rise time and peak luminosity of SN 2023ixf with the case of shock breakout happening inside the CSM, we find that it falls within \(0.01\,{<}\,\)M\({}_{CSM}\,[{\rm M}_{\odot}]\,{<}\,0.1\). Using the parameters obtained from the light curve analysis, we get a CSM mass ranging from 0.001-0.03 \({\rm M}_{\odot}\) (assuming \(v_{\rm wind}=10\) km \(s^{-1}\)), where the upper limit is well within the range obtained from Khatami & Kasen (2023). This indicates the mass-loss rate could have been even higher than \(10^{-2.5}\) \({\rm M}_{\odot}\) yr\({}^{-1}\), as also reported in Jacobson-Galan et al. (2023); Hiramatsu et al. (2023).

## 5 Summary

This work presents an extensive set of early-phase observations of the closest CCSN in the last 25 years, SN 2023ixf, which exploded in M 101. The panchromatic observations covered wavelengths from the FUV to the NIR regime using both ground- and space-based observatories. The multi-band photometry spans up to \(\sim\,\)23 d since the explosion.
Detailed spectral coverage in the FUV, NUV, and optical during the first \(\sim 25\) days since the explosion is presented, beginning within 2 days of the explosion. Lines due to Mg ii and Fe ii in the NUV, and C iii, C ii, Si iv, and He ii in the FUV, were identified. The early (\(<\,\)7 d) spectral sequence of SN 2023ixf indicates the presence of a dense CSM. There are no significant interaction signatures subsequently, except for an intermediate-width emission feature of H \(\alpha\) after \(+\)7 d. The high-resolution spectra presented by Smith et al. (2023) show the presence of an intermediate-width P-Cygni profile during this phase, lasting for about a week, arising in the post-shock, swept-up CSM shell. The line profile during the photospheric phase beginning \(\sim\,\)16 d shows a multi-peaked/boxy profile of H \(\alpha\), indicating ongoing interaction with a shell-shaped CSM with an inner radius of \(\sim\,\)75 AU and an outer radius of \(\sim\,\)140 AU. Considering a standard RSG wind velocity, the progenitor likely experienced enhanced mass loss \(\sim\,\)35 - 65 years before the explosion. All the above inferences from our multi-wavelength observations indicate multi-faceted circumstellar matter around the progenitor of SN 2023ixf. The early phase light curve of SN 2023ixf is influenced by the presence of the dense nearby CSM, which was likely accumulated due to enhanced mass loss(es) during the later stages of the progenitor's evolution. SN 2023ixf was found to have a very bright peak luminosity (\(M_{V}\approx-18.1\) mag), much higher than the average luminosity for Type II SNe (\(M_{V}\approx-16.7\) mag). Light curves were compared with a large model grid of interacting SNe with varied progenitor masses and CSM properties to infer the properties of the dense CSM in SN 2023ixf. Based on our comparison with the light curve models, the high luminosity is likely a mix of interaction with a confined CSM and an inherently energetic explosion. We cannot conclusively determine the relative contributions of the above components to the overall luminosity of SN 2023ixf; hence, further monitoring is required. We will continue to carry out the multi-wavelength follow-up of SN 2023ixf.

## 6 Software and Third Party Data Repository Citations

_Facilities:_ HCT: 2-m, GIT: 0.7-m, KT: 1.5-m, Swift (UVOT), and AstroSat (UVIT and SXT)

_Software:_ astropy (Astropy Collaboration et al., 2013, 2018), emcee (Foreman-Mackey et al., 2013), IRAF (Tody, 1993), HEASoft (Nasa High Energy Astrophysics Science Archive Research Center (Heasarc), 2014), matplotlib (Hunter, 2007), pandas (Wes McKinney, 2010), numpy (Harris et al., 2020), scipy (Virtanen et al., 2020), Jupyter-notebook (Kluyver et al., 2016), seaborn (Waskom, 2021), and SYNAPPS (Thomas et al., 2011).

## Acknowledgments

We thank the anonymous referee for an in-depth review that helped improve the manuscript. RST thanks Sergiy S. Vasylyev for providing HST spectra of SN 2021yja. The GROWTH India Telescope (GIT) is a 70-cm telescope with a 0.7-degree field of view, set up by the Indian Institute of Astrophysics (IIA) and the Indian Institute of Technology Bombay (IITB) with funding from the Indo-US Science and Technology Forum and the Science and Engineering Research Board, Department of Science and Technology, Government of India. It is located at the Indian Astronomical Observatory (IAO, Hanle).
We acknowledge funding by the IITB alumni batch of 1994, which partially supports the operation of the telescope. Telescope technical details are available at [https://sites.google.com/view/growthindia/](https://sites.google.com/view/growthindia/). The HCT observations were made under our accepted ToO proposal HCT-2023-C2-P25. We thank the staff of IAO, Hanle, and CREST, Hosakote, that made these observations possible. The facilities at IAO and CREST are operated by the Indian Institute of Astrophysics, Bangalore. DKS acknowledges the support provided by DST-JSPS under grant number DST/INT/JSPS/P 363/2022. This research has made use of the High-Performance Computing (HPC) resources5 made available by the Computer Center of the Indian Institute of Astrophysics, Bangalore. This research made use of RedPipe6(Singh, 2021), an assemblage of data reduction and analysis scripts written by AS. This work uses the SXT and UVIT data from the _AstroSat_ mission of the Indian Space Research Organisation (ISRO). The SN was observed through multiple ToO proposals, and the data were made public through the ISSDC data archive. We thank the SXT and UVIT payload operation centers for verifying and releasing the data via the ISSDC data archive and providing the necessary software tools. This work has also used software and/or web tools obtained from NASA's High Energy Astrophysics Science Archive Research Center (HEASARC), a service of the Goddard Space Flight Center and the Smithsonian Astrophysical Observatory. This work was also partially supported by a Leverhulme Trust Research Project Grant. This research also made use of the NASA/IPAC Extragalactic Database (NED7), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. Footnote 5: [https://www.iiap.res.in/?q=facilities/computing/nova](https://www.iiap.res.in/?q=facilities/computing/nova) Footnote 6: [https://github.com/sPaMFouR/RedPipe](https://github.com/sPaMFouR/RedPipe) Footnote 7: [https://ned.ipac.caltech.edu](https://ned.ipac.caltech.edu)
2307.06917
LLM-assisted Knowledge Graph Engineering: Experiments with ChatGPT
Knowledge Graphs (KG) provide us with a structured, flexible, transparent, cross-system, and collaborative way of organizing our knowledge and data across various domains in society and industrial as well as scientific disciplines. KGs surpass any other form of representation in terms of effectiveness. However, Knowledge Graph Engineering (KGE) requires in-depth experiences of graph structures, web technologies, existing models and vocabularies, rule sets, logic, as well as best practices. It also demands a significant amount of work. Considering the advancements in large language models (LLMs) and their interfaces and applications in recent years, we have conducted comprehensive experiments with ChatGPT to explore its potential in supporting KGE. In this paper, we present a selection of these experiments and their results to demonstrate how ChatGPT can assist us in the development and management of KGs.
Lars-Peter Meyer, Claus Stadler, Johannes Frey, Norman Radtke, Kurt Junghanns, Roy Meissner, Gordian Dziwis, Kirill Bulert, Michael Martin
2023-07-13T17:31:41Z
http://arxiv.org/abs/2307.06917v1
# LLM-assisted Knowledge Graph Engineering: Experiments with ChatGPT

###### Abstract

Knowledge Graphs (KG) provide us with a structured, flexible, transparent, cross-system, and collaborative way of organizing our knowledge and data across various domains in society and industrial as well as scientific disciplines. KGs surpass any other form of representation in terms of effectiveness. However, Knowledge Graph Engineering (KGE) requires in-depth experiences of graph structures, web technologies, existing models and vocabularies, rule sets, logic, as well as best practices. It also demands a significant amount of work. Considering the advancements in large language models (LLMs) and their interfaces and applications in recent years, we have conducted comprehensive experiments with ChatGPT to explore its potential in supporting KGE. In this paper, we present a selection of these experiments and their results to demonstrate how ChatGPT can assist us in the development and management of KGs.

Keywords: ChatGPT, knowledge graph engineering, RDF, large language model, use cases, AI application.

## 1 Introduction

In recent years, Artificial Intelligence (AI) has shown great promise in improving or revolutionizing various fields of research and practice, including knowledge engineering. The recent big leap in AI-based assistant chatbots, like the ChatGPT (Generative Pre-trained Transformer) model, has created new opportunities to automate knowledge engineering tasks and reduce the workload on human experts. With the growing volume of information in different fields, the need for scalable and efficient methods to manage and extract knowledge from data, which also adapt to new sources, is critical. Despite the advances in research w.r.t. (semi-)automation, knowledge engineering tasks still rely heavily on human experts. On the one hand, this process can be time-consuming, resource-intensive, and susceptible to errors. On the other hand, the reliance on human expertise in knowledge engineering exposes it to workforce shortages (as knowledge engineers are scarce and the demand is growing) and the risk of expertise loss. These factors can impact the resilience and sustainability of systems and operations that rely on knowledge engineering. AI-based assistant bot approaches, such as ChatGPT, could bridge this gap by providing a unified tool for knowledge engineering tasks, reducing the workload of knowledge engineers themselves, but also making knowledge engineering accessible to a broader audience. ChatGPT, in particular, has shown promise in generating responses in a variety of syntactical representations (including code and markup languages) to user queries or task descriptions written in natural language. In this paper, we discuss and investigate the potential of ChatGPT to support or automate various knowledge engineering tasks (e.g. ontology generation, SPARQL query generation). We explore the benefits, pitfalls and challenges of using it and identify potential avenues for future research.

## 2 Related Work

_ChatGPT_, a Large Language Model (LLM) published by OpenAI1, raised interest in the broad field of Machine Learning (ML)2 and especially LLMs [4] on a large scale. While there are current discussions and analyses of the capabilities of LLMs like ChatGPT in general (e.g. [1]), there is little work in the area of knowledge graph engineering. Ekaputra et al. [3] give a general overview of current research on the combination of the broad field of ML and the semantic web.
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)

Footnote 2: [https://aiindex.stanford.edu/wp-content/uploads/2023/04/HALAI-Index-Report_2023.pdf](https://aiindex.stanford.edu/wp-content/uploads/2023/04/HALAI-Index-Report_2023.pdf)

Searching Google Scholar and Semantic Scholar for "knowledge graph ChatGPT", "ontology ChatGPT" and "rdf ChatGPT" at the beginning of April 2023 yields only two relevant papers. The first one, [7], reviews the differences between conversational AI models, most prominently ChatGPT, and state-of-the-art question-answering systems for knowledge graphs. In their survey and experiments, they examine the capabilities of the frameworks used and highlight ChatGPT's explainability and robustness. The second one, [6], discusses the usage of ChatGPT for database management tasks when the tabular schema is expressed in natural language. They conclude, among other things, that ChatGPT is able to assist in complex semantic integration and table joins to simplify database management and enhance productivity. The applied approaches and results of these two papers indicate that the idea of using LLMs like ChatGPT in the field of KG engineering is encouraging and that LLMs might assist KG engineers in their workflows. Still, research on the usage of LLMs for knowledge graph engineers is scarce and seems to be a new research area. There exist some non- and semi-scientific resources which approach the topic from a practical, experience-driven perspective. We want to highlight here a helpful blog post by Kurt Cagle [2] on ChatGPT for "knowledge graph workers" and a blog post by Konrad Kalicinski [5] on knowledge graph generation in Neo4j assisted by ChatGPT.

## 3 LLM-Assisted Knowledge Graph Engineering - Potential Application Areas

In discussion rounds with knowledge graph engineering experts, we identified the following preliminary list of potential use cases for LLM assistance in the domain of knowledge graph engineering:

* Assistance in knowledge graph usage:
  * Generate SPARQL queries from natural language questions (related experiments in Section 4.1 and Section 4.3)
  * Exploration and summarization of existing knowledge graphs (related experiment in Section 4.5)
  * Conversion of competency questions to SPARQL queries
  * Code generation or configuration of tool(chain)s for data pipelines
* Assistance in knowledge graph construction:
  * Populating knowledge graphs (related experiment in Section 4.4) and vice versa
  * Creation or enrichment of knowledge graph schemas / ontologies
  * Getting hints about problematic graph design by analysing ChatGPT usage problems with a knowledge graph
  * Semantic search for concepts or properties defined in other already existing knowledge graphs
  * Creation and adjustment of knowledge graphs based on competency questions

Given the limited space of this paper, we evaluate a subset of the application areas with experiments in the following section.

## 4 Experiments

To evaluate the capabilities of LLMs, using ChatGPT as an example, for assisting with knowledge graph engineering, we present several experiments and their results. Further details about them are given in the Supplemental Online Resources. Most experiments were conducted with ChatGPT based on the LLM GPT-3.5-turbo6 (named _ChatGPT-3_ from here on), some additionally with ChatGPT based on the LLM GPT-47 (named _ChatGPT-4_ from here on).
Footnote 6: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)

Footnote 7: [https://platform.openai.com/docs/models/gpt-4](https://platform.openai.com/docs/models/gpt-4)

### SPARQL Query Generation for a Custom Small Knowledge Graph

For a first evaluation, we designed a small knowledge graph as shown in Listing 1. Specifically, we wanted to know whether GPT can (1) explain connections between indirectly related entities, (2) create SPARQL queries over the given model, and (3) reconstruct the model if all properties and classes were relabelled. We issued the following prompt, which includes the knowledge graph from Listing 1, on ChatGPT-3 and ChatGPT-4:

**Prompt 1:** Given the RDF/Turtle model below, are there any connections between US and UK? \(<\)rdf-model\(>\)

In the knowledge graph of Listing 1, there is a connection between the two countries via the two people living in them, who hold jobs in different departments of the same company. While ChatGPT-3 fails to identify this relation, ChatGPT-4 successfully identifies it in all cases. We further asked both ChatGPT models with Prompt 2 and received five SPARQL queries each, which we analysed for their syntactic correctness, plausible query structure, and result quality. The results for Prompt 2 are listed in Table 1 and show that both models produce syntactically correct queries, which in most cases are plausible and produce correct results in 3/5 (ChatGPT-3) and 2/5 (ChatGPT-4) cases.

**Prompt 2:** Given the RDF/Turtle model below, create a SPARQL query that lists for every person the country, company and department and role. Please adhere strictly to the given model. \(<\)rdf-model\(>\)

In essence, AI-based query generation is possible, and it can produce valid queries. However, the process needs result validation in two dimensions: 1) validating the query itself by matching it against static information, like the available classes and properties in the graph, as well as 2) validating the executed query results, letting ChatGPT generate new queries in case of empty result sets in order to find working queries in a trial & error approach. As a last prompt on the knowledge graph from Listing 1, we created a derived RDF graph by relabelling all classes and properties with sequentially numbered IRIs in the example namespace, like _eg:prop1_ and _eg:class2_. Given the relabelled model, we tasked ChatGPT:

**Prompt 3:** Given the RDF/Turtle model below, please replace all properties and classes with the most likely standard ones. <rdf-model>

With ChatGPT-3, only 2/5 iterations succeeded in carrying out all substitutions. In those succeeding cases, the quality was still not as expected because of limited ontology reuse: only IRIs in the example namespace were introduced, rather than reusing the _foaf_, _vcard_, and _org_ vocabularies. Yet, the ad-hoc properties and classes were reasonably named, such as _eg:firstName_, _eg:countryName_ or _eg:departmentName_. In contrast, ChatGPT-4 delivered better results: all classes and properties were substituted with those from standard vocabularies - foaf, vcard, and org were correctly identified. In some iterations, ChatGPT-4 used the schema.org vocabulary instead of the org vocabulary as an alternative approach.

### Token Counts for Knowledge Graph Schemas

After the results with the small custom knowledge graph, we wanted to check the size of some well-known knowledge graphs with respect to LLMs.
The LLMs behind ChatGPT can handle at the moment only 4,096 tokens (GPT-3.56) or 8,192 and 32,768 tokens, respectively, for GPT-47.

Footnote 8: [https://github.com/openai/tiktoken](https://github.com/openai/tiktoken)

We counted tokens for various public knowledge graphs in different serialization formats with the library _tiktoken_8, as recommended for ChatGPT. Table 2 lists the token counts for a couple of combinations, ordered by token count. More data and information are available in the Supplemental Online Resources. The Turtle serialization seems to result in the lowest token counts, but these are still larger than for the similar SQL schema added for comparison. All knowledge graphs exceed the token limit for GPT-3.5, and 3 of the 4 knowledge graphs listed here exceed the limit for GPT-4.

\begin{table} \begin{tabular}{l c c} & ChatGPT-3 & ChatGPT-4 \\ \hline syntactically correct & 5/5 & 5/5 \\ plausible query structure & 4/5 & 3/5 \\ producing correct result & 3/5 & 2/5 \\ \hline using only defined classes and properties & 3/5 & 4/5 \\ correct usage of classes and properties & 5/5 & 5/5 \\ correct prefix for the graph & 5/5 & 4/5 \\ \end{tabular} \end{table} Table 1: Findings in generated SPARQL queries for Prompt 2.

### SPARQL Query Generation for the Mondial Knowledge Graph

In addition to the experiments with the small custom knowledge graph (see Section 4.1), we tested ChatGPT with the bigger Mondial knowledge graph9, which has been published for decades, with the latest "main revision" in 2015.

Footnote 9: [https://www.dbis.informatik.uni-goettingen.de/Mondial](https://www.dbis.informatik.uni-goettingen.de/Mondial)

We asked ChatGPT to generate a SPARQL query for a natural language question from a SPARQL university lecture10. We used the following prompt five times with ChatGPT-3 and ChatGPT-4 each:

Footnote 10: [https://www.dbis.informatik.uni-goettingen.de/Teaching/SWPr-SS20/swpr-1.pdf](https://www.dbis.informatik.uni-goettingen.de/Teaching/SWPr-SS20/swpr-1.pdf)

The results are documented in the Supplemental Online Resources together with detailed comments on the given queries. Table 3 gives some statistics. In summary, all SPARQL queries given by ChatGPT were syntactically correct, but none of them worked when executed. In fact, all queries had at least one error preventing correct execution, such as referencing a wrong namespace, wrong usage of properties, or referencing undefined classes.
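The failure modes just listed (wrong namespace, undefined classes and properties) suggest the automatic validation loop sketched in Section 4.1: parse the generated query, run it, and re-prompt on empty results. A minimal sketch with rdflib is shown below; the file name, namespace, and query string are illustrative placeholders rather than the actual experiment code.

```python
from rdflib import Graph
from rdflib.plugins.sparql import prepareQuery

g = Graph()
g.parse("mondial.rdf", format="xml")   # hypothetical local copy of the KG

candidate = """
PREFIX : <http://www.semwebtech.org/mondial/10/meta#>
SELECT ?name WHERE { ?c a :Country ; :name ?name . } LIMIT 5
"""  # e.g. a ChatGPT-generated query

# 1) Validate the query itself: does it parse as SPARQL?
try:
    query = prepareQuery(candidate)
except Exception as err:
    print("reject (syntax):", err)
else:
    # 2) Validate the result: empty result sets trigger a re-prompt
    rows = list(g.query(query))
    if not rows:
        print("reject (empty result) -> ask ChatGPT for a new query")
    else:
        print("accepted, sample rows:", rows[:3])
```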
\begin{table} \begin{tabular}{l c c} Graph & Serialisation Type & Token Count \\ \hline Mondial Oracle DB schema & SQL schema & 2,608 token \\ Mondial RDF schema & turtle & 5,339 token \\ Mondial RDF schema & functional syntax & 9,696 token \\ Mondial RDF schema & manchester syntax & 11,336 token \\ Mondial RDF schema & xml/rdf & 17,179 token \\ Mondial RDF schema & json-ld & 47,229 token \\ Wine Ontology & turtle & 13,591 token \\ Wine Ontology & xml/rdf & 24,217 token \\ Pizza Ontology & turtle & 5,431 token \\ Pizza Ontology & xml/rdf & 35,331 token \\ DBpedia RDF schema & turtle & 471,251 token \\ DBpedia RDF schema & xml/rdf & 2,338,484 token \\ \end{tabular} \end{table} Table 2: Token counts for selected knowledge graphs and serialisations

\begin{table} \begin{tabular}{l c c} & ChatGPT-3 & ChatGPT-4 \\ \hline syntactically correct & 5/5 & 5/5 \\ plausible query structure & 2/5 & 4/5 \\ producing correct result & 0/5 & 0/5 \\ \hline using only defined classes and properties & 1/5 & 3/5 \\ correct usage of classes and properties & 0/5 & 3/5 \\ correct prefix for mondial graph & 0/5 & 1/5 \\ \end{tabular} \end{table} Table 3: Findings in generated SPARQL queries for Prompt 4.

### Knowledge Extraction from Fact Sheets

As an experiment to evaluate knowledge extraction capabilities, we used PDF fact sheets of 3D printer specifications from different additive manufacturing (AM) vendor websites. The goal is to build a KG about existing 3D printers and their types as well as capabilities. We fed plaintext excerpts (extracted via pdfplumber) from these PDFs into ChatGPT-3 and prompted it to:

**Prompt 5:** Convert the following $vendor$ 3d printer specification into a JSON_LD formatted Knowledge Graph. The node for this KG should be Printer as a main node, Type of 3d printer such as FDM, SLA, and SLS, Manufacturer, Material, Applications, and Technique.

Since the fact sheets are usually formatted using a table scheme, the nature of these plain texts is that the printer entity is mostly mentioned at the beginning of the text and is then further characterized in a key-value style. As a result, the text typically does not use full sentences and contains only one entity that is described in detail, but several dependent entities (like printing materials). However, the format of the key-value pairs can be noisy. Key names can be separated by colons or newlines, or, in contrast, multiple key-value pairs can appear in the same line, which could pose a challenge. Nevertheless, ChatGPT was able to identify the key-value pairs of the evaluation document in a reliable way. Unfortunately, out of 5 test runs for this document, it delivered 4 partial and 1 complete JSON document. In spite of that, we summarize first insights gained from a knowledge engineering perspective (for the sake of brevity, we refer to the output documents in the experiment supplements):

* The JSON-LD output format prioritizes usage of the schema.org vocabulary in the 5 evaluation runs. This works well for well-known entities and properties (e.g. Organization@type for the manufacturer, or the name property); however, for the AM-specific feature key names or terms like printer, ChatGPT-3 invents reasonable but non-existent property names (in the schema.org namespace) instead of accurately creating a new namespace or using a dedicated AM ontology for that purpose.
* Requesting Turtle as the output format instead leads to different results. E.g.,
the property namespace prefix is based on the printer ID, and therefore printer descriptions are not interoperable and cannot be queried in a unified way in a joint KG.
* Successfully splitting the x, y, and z values of the maximum print dimension (instead of extracting all dimensions into one string literal) works in 3 runs. Although ChatGPT-3 accurately appends the unit of measurement to all x, y, z values (which is only mentioned after the z value in the input) in those cases, this is a modelling flaw, as querying the KG will be more complex. In one run it addressed this issue by separating the units into a dedicated unit code field.
* A similar effect was observed when it comes to modelling the dependent entities. E.g., in 4 runs, the manufacturer was modelled correctly as a separate typed entity, in 1 run as a string literal instead.

As a general conclusion of the experiment, ChatGPT-3 has overall solid skills in extracting the key-value pairs from the sheets, but the correctness of the modelling or representation in terms of a KG varies significantly from run to run. Consequently, none of the generated JSON documents contained sufficient information on its own, but only a subset that was modelled accurately. A question for future research is whether cherry-picking individual JSON elements from the outputs of several runs and combining them into one final document, or iteratively refining the output by giving ChatGPT generic modelling feedback (like use an ontology, or separate unit information, etc.), can be automated in a good and scalable way.

### Knowledge Graph Exploration

Experts in the field of knowledge graphs are familiar with concepts from RDF Schema (RDFS) (domain/range, subPropertyOf, subClassOf) and the Web Ontology Language (OWL) (ObjectProperty, DatatypeProperty, FunctionalProperty, ...). Often, each of these experts has their preferred tools and methods for gaining an overview of an ontology they are not yet familiar with. We asked ChatGPT-3 two different questions requesting a mermaid11 visualization of the most important concepts and their connections:

Footnote 11: “... a JavaScript-based diagramming and charting tool...” [https://mermaid.js.org/](https://mermaid.js.org/)

**Prompt 6:** Can you create me a visualization showing the most important classes and concepts and how they are linked for dbpedia ontology, serialized for mermaid?

**Prompt 7:** Can you create me a visualization of the most common concepts of the DBpedia ontology and their connections focusing on domain and range defined in properties.

We expected a graph with at least eight nodes and their corresponding edges. The identifiers for the nodes and edges are expected to follow the Turtle or SPARQL prefix:concept notation. If the first question did not achieve the goal, we posed additional questions or requests to ChatGPT-3. The results are presented in Table 4, and we evaluated the displayed graphs based on the criteria listed there. Prompt 6 led to an answer with a hierarchical graph representation of the important classes defined in the DBpedia ontology. The diagram already met our requirements regarding the minimum node count and labelling after the first answer and can be seen in the Supplemental Online Resources. The class hierarchy was represented by the rdfs:subClassOf relation, and the nodes were labelled in prefix notation, as were the edges. By arranging it as a tree using the subClassOf pattern, only two different properties were used for the relations (edges).
The root node was of type owl:Thing; the other nodes are connected as (sub)classes from the DBpedia ontology. These were: Place, Organization, Event, Work, Species, and Person. The class Work had one more subClassOf relation to the class MusicalWork. The class Person had the most complex representation, with two more subClassOf relations leading to foaf:Person and foaf:Agent, the latter of which is a subclass of the root node (owl:Thing). For the second prompt (Prompt 7), ChatGPT-3 referred to a graphic file within the answer text that no longer existed. Upon further inquiry, a mermaid diagram was generated. It was of type "Graph" and contained thirteen common concepts and seventeen edges, which were all unique. The labels of both nodes and edges contained no prefixes, but these could be added with further inquiry. Only the generated concept dbo:Occupation is non-existent. All remaining nodes and edges comply with the rules of the ontology, even if the concepts used are derived through further subclass relationships. The resulting diagram is shown in the Supplemental Online Resources. While Prompt 6 leads to a result that can be more comprehensively achieved with conventional tools for visualizing RDF, the result from Prompt 7 provides an overview of concepts (classes) and properties that can be used to relate instances of these classes to each other.

\begin{table} \begin{tabular}{l c c} & Prompt 6 & Prompt 7 \\ \hline \hline Mermaid Type & graph & graph\({}^{*}\) \\ Labels of Nodes & prefix and concept & prefix and concept\({}^{**}\) \\ Labels of Edges & prefix and concept & prefix and concept\({}^{**}\) \\ Number of Nodes (total/existing/dbo) & 10/10/8 & 13/12/12 \\ Number of Edges (total/unique) & 12/2 & 17/17 \\ \hline \end{tabular} \({}^{*}\) One more prompt was needed to serialize a graph \({}^{**}\) One more prompt was needed to add prefixed labels \end{table} Table 4: Diagram content overview.

## 5 Conclusion and Future Work

From the perspective of a knowledge graph engineer, ChatGPT has demonstrated impressive capabilities. It successfully generated knowledge graphs from semi-structured textual data, translated natural language questions into syntactically correct and well-structured SPARQL queries for the given knowledge graphs, and even generated overview diagrams for large knowledge graph schemas, as outlined in Section 4. A detailed analysis revealed that the generated results contain mistakes, some of which are subtle. For some use cases, this might be harmless and can be tackled with additional validation steps in general, like the metrics we used for SPARQL queries. In general, our conclusion is that one needs to keep in mind ChatGPT's tendency to _hallucinate_12, especially
Another research focus should be methods that let the LLM access a broader or the necessary context, to increase the chance of correct answers. #### Acknowledgements This work was partially supported by grants from the German Federal Ministry for Economic Affairs and Climate Action (BMWK) to the CoyPu project (01MK21007A) and the KISS project (01MK22001A), as well as from the German Federal Ministry of Education and Research (BMBF) to the project StahlDigital (13XP5116B) and the project KupferDigital (13XP5119F).
2310.05609
Edge-Locating Coloring of Graphs
An edge-locating coloring of a simple connected graph $G$ is a partition of its edge set into matchings such that the vertices of $G$ are distinguished by their distances to the matchings. The minimum number of matchings of $G$ that admits an edge-locating coloring is the edge-locating chromatic number of $G$, denoted by $\chi'_L(G)$. In this paper we introduce the concept of edge-locating coloring and determine the exact value of $\chi'_L(G)$ for some well-known graphs. The graphs $G$ with $\chi'_L(G)\in \{2,m\}$ are characterized, where $m$ is the size of $G$. We investigate the relationship between the order, the diameter, and the edge-locating chromatic number of $G$. For a complete graph $K_n$, we obtain the exact values of $\chi'_L(K_n)$ and $\chi'_L(K_n-M)$, where $M$ is a maximum matching; indeed, this result is also extended to arbitrary graphs. We determine the edge-locating chromatic number of the join graph $G+H$ for some well-known graphs $G$ and $H$. In particular, for any graph $G$, we show a relationship between $\chi'_L(G+K_1)$ and $\Delta(G)$. We investigate the edge-locating chromatic number of trees and present a characterization bound for any tree in terms of its maximum degree, number of leaves, and support vertices. Finally, we prove that any edge-locating coloring of a graph is an edge distinguishing coloring.
M. Korivand, D. A. Mojdeh, Edy Tri Baskoro, A. Erfanian
2023-10-09T10:52:13Z
http://arxiv.org/abs/2310.05609v1
# Edge-Locating Coloring of Graphs ###### Abstract An edge-locating coloring of a simple connected graph \(G\) is a partition of its edge set into matchings such that the vertices of \(G\) are distinguished by their distances to the matchings. The minimum number of matchings of \(G\) that admits an edge-locating coloring is the edge-locating chromatic number of \(G\), denoted by \(\chi^{\prime}_{L}(G)\). In this paper we introduce the concept of edge-locating coloring and determine the exact value of \(\chi^{\prime}_{L}(G)\) for some well-known graphs. The graphs \(G\) with \(\chi^{\prime}_{L}(G)\in\{2,m\}\) are characterized, where \(m\) is the size of \(G\). We investigate the relationship between the order, the diameter, and the edge-locating chromatic number of \(G\). For a complete graph \(K_{n}\), we obtain the exact values of \(\chi^{\prime}_{L}(K_{n})\) and \(\chi^{\prime}_{L}(K_{n}-M)\), where \(M\) is a maximum matching; indeed, this result is also extended to arbitrary graphs. We determine the edge-locating chromatic number of the join graph \(G+H\) for some well-known graphs \(G\) and \(H\). In particular, for any graph \(G\), we show a relationship between \(\chi^{\prime}_{L}(G+K_{1})\) and \(\Delta(G)\). We investigate the edge-locating chromatic number of trees and present a characterization bound for any tree in terms of its maximum degree, number of leaves, and support vertices. Finally, we prove that any edge-locating coloring of a graph is an edge distinguishing coloring. **Key words:** edge-locating coloring, matching, join graphs, distinguishing chromatic index. **AMS Subj. Class:** 05C15. ## 1 Introduction One of the structural and applied topics in graph theory is distinguishing the vertices and edges of a graph by means of different tools. This approach has a relatively old history in graph theory and has used various tools such as distances and automorphisms of graphs. In the following, we describe the history of some known concepts that follow such an approach. In 1977, Babai proposed a concept that today inspires many methods for distinguishing elements of graphs by automorphisms [2]. After Albertson and Collins [1] studied this concept in detail and proposed applications of it, it became widely known under the name of _asymmetric coloring_ (or _distinguishing labelling_). Among the parameters defined along the lines of this concept, we can mention _distinguishing coloring_ (or _proper distinguishing coloring_), _distinguishing index_, _distinguishing arc-coloring_ and _distinguishing threshold_ [10, 16, 17, 24]. Another index related to automorphisms is the _determining set_, in which the goal is to identify an automorphism by a subset of the graph's vertices. This concept was introduced independently by Boutin [4] and Erwin & Harary [12]. The determining numbers of Kneser graphs and of Cartesian products of graphs are provided in [4, 6, 5]. One of the most important and well-known concepts that distinguishes the vertices of a graph with respect to distance is the _metric dimension_. In 1975-76, Slater [25] and Harary & Melter [14] independently introduced and studied this concept for connected graphs. This introduction was a turning point for a branch of research that occupied many researchers, so that after about 50 years this concept is still the foundation of many research projects and applications, even in other sciences such as chemistry and computer science. Owing to its many applications in different sciences, several other versions of the metric dimension have also been introduced.
In recent years, this concept has received more attention than in the past. We recommend that readers who need more information about this concept refer to two recent surveys that discuss in detail the different versions of the metric dimension and its applications [20, 26]. The _edge metric dimension_ is one of the concepts derived from the metric dimension, where the goal is to distinguish the edges by a set of graph vertices [18]. In the metric dimension literature, two different concepts are in fact known under the name edge metric dimension. The second one, which is also discussed in this article, is the least number of edges that resolve the vertices of a graph with respect to distance [22]. In 2002, Chartrand et al. introduced a coloring that we know as _locating coloring_ [9]. In this coloring, the goal is to distinguish the vertices of a graph by their distances to the parts of a partition of the vertex set. Locating colorings have been studied by many researchers; for more details, see [3, 8, 15, 21]. In this paper, our goal is to distinguish the vertices of a connected graph by their distances to the matchings that partition the edge set. In fact, this definition can be seen as the edge version of locating coloring. We give the exact definition below. Let \(G\) be a simple connected graph. Let \(c:E(G)\longrightarrow\mathbb{N}\) be a proper edge coloring of \(G\), in which adjacent edges of \(G\) have different colors. Let \(\pi=(\mathcal{C}_{1},\mathcal{C}_{2},\ldots,\mathcal{C}_{k})\) denote the ordered partition of \(E(G)\) into the color classes admitted by \(c\). For a vertex \(v\) of \(G\), the _edge color code_ \(c_{\pi}(v)\) is the ordered \(k\)-tuple \((d(v,\mathcal{C}_{1}),d(v,\mathcal{C}_{2}),\ldots,d(v,\mathcal{C}_{k}))\), where \(d(v,\mathcal{C}_{i})=\min\{d(v,e)|e\in\mathcal{C}_{i}\}\) for \(1\leq i\leq k\), and \(d(v,e)=\min\{d(v,x),d(v,y)|e=xy\}\). The coloring \(c\) is called an _edge-locating coloring_ of \(G\) if distinct vertices of \(G\) have different edge color codes. The _edge-locating chromatic number_ \(\chi^{\prime}_{L}(G)\) is the minimum number of colors needed for an edge-locating coloring of \(G\). In this paper, we generally seek to investigate the behavior of edge-locating colorings in some families of graphs. Specifically, in Section 2, we compute the edge-locating chromatic number of paths, cycles, and complete bipartite graphs. Also, we characterize all graphs \(G\) of size \(m\) with the property that \(\chi_{L}^{\prime}(G)=k\), where \(k\in\{2,m\}\). Moreover, we present some bounds for the edge-locating chromatic number. In Section 3, we derive the edge-locating chromatic number of complete graphs and of complete graphs minus some matchings. Moreover, in this section, we derive a sharp upper bound for the edge-locating chromatic number of a graph having a perfect matching, and we extend it to maximum matchings. In Section 4, we determine the edge-locating chromatic number of the join graph \(G+H\) for some well-known graphs \(G\) and \(H\). In Section 5, we examine the edge-locating chromatic number of trees. In particular, we compute the edge-locating chromatic number of the double star graphs and generalize it. Moreover, we present a characterization bound for any tree in terms of its maximum degree, number of leaves, and number of support vertices. We have seen that there are several automorphism-based and distance-based colorings and indices in graph theory.
In general, these two families of concepts travel their research paths without paying attention to each other. However, some relationships between some of these parameters have been proven. It has been shown that any resolving set of a graph is a determining set. Determining sets and resolving sets were jointly studied in [7, 13, 23]. Also, Korivand, Erfanian, and Baskoro recently showed that any locating coloring is a distinguishing coloring [19]. In Section 6, we prove that any edge-locating coloring of a graph is an edge distinguishing coloring. Also, we bound the edge-locating chromatic number in terms of the edge metric dimension and the chromatic index. ## 2 General results The edge-locating chromatic number is defined for graphs with more than two vertices. Since the graphs are simple, if all edges are assigned distinct colors then clearly the edge color codes of the vertices are different. For any simple connected graph \(G\) with size \(m>2\), \[2\leq\chi_{L}^{\prime}(G)\leq m.\] Another natural bound for the edge-locating chromatic number is \(\chi^{\prime}(G)\leq\chi_{L}^{\prime}(G)\). Since \(\chi^{\prime}(P_{n})=2\), we have \(\chi_{L}^{\prime}(P_{n})\geq 2\), for \(n\geq 3\). Clearly \(\chi_{L}^{\prime}(P_{3})=2\). Assume that \(n>3\). If we consider an edge \(2\)-coloring of \(P_{n}\), then any two vertices of \(P_{n}\) that are not pendant vertices have the same edge color code. Thus \(\chi_{L}^{\prime}(P_{n})\geq 3\). Now, for an edge-locating \(3\)-coloring of \(P_{n}\), \(n\geq 4\), it is enough to assign color \(3\) to an edge with a pendant end vertex, and to color the other edges of \(P_{n}\) with colors \(1\) and \(2\), alternately. Therefore, \(\chi_{L}^{\prime}(P_{n})=3\). Now, we can present the next proposition. **Proposition 2.1**.: For positive integer \(n\), \(\chi_{L}^{\prime}(P_{n})=\begin{cases}2,&\text{if }n=3\\ 3,&\text{if }n\geq 4.\end{cases}\) The distance between two edges \(e_{1}\) and \(e_{2}\) is defined by \(\min\{d(a_{i},b_{j})\mid 1\leq i,j\leq 2,\ e_{1}=a_{1}a_{2},\ e_{2}=b_{1}b_{2}\}\). **Theorem 2.2**.: For any integer \(n\geq 3\), \(\chi_{L}^{\prime}(C_{n})=\begin{cases}3,&\text{if }n=3\\ 4,&\text{if }n\geq 4.\end{cases}\) Proof.: For \(n=3\), \(\chi^{\prime}(C_{n})=\chi^{\prime}_{L}(C_{n})=3\). Now, we claim that \(\chi^{\prime}_{L}(C_{n})>3\), for \(n\geq 4\). For a contradiction, assume that the edges of \(C_{n}\) are colored by three colors. Without loss of generality, we may suppose that color \(3\) is the least used color in \(C_{n}\). First suppose that \(e=v_{1}v_{2}\) is the only edge colored by \(3\). Since \(n\geq 4\), for \(n\) odd the vertices \(v_{3},v_{n}\) have the same edge color code, and for \(n\) even the vertices \(v_{1},v_{2}\) have the same edge color code. Hence, color \(3\) is assigned to at least two edges. Assume that \(e\) and \(f\) are two edges with color \(3\) such that \(d(e,f)=\min\{d(e_{1},e_{2})|e_{1},e_{2}\in\mathcal{C}_{3}\}\). If the distance between \(e\) and \(f\) is at least two, then \(c_{\pi}(a)=c_{\pi}(b)\), where \(a\sim v\sim u\sim b\) and \(e=vu\). Let \(\{e_{1},e_{2},\ldots,e_{m}\}\) be a maximal alternating matching such that \(d(e_{i},e_{i+1})=1\) and \(e_{i}\in\mathcal{C}_{3}\), for \(1\leq i\leq m\). Since color \(3\) is the least used color in \(C_{n}\), the vertices \(a\) and \(b\) with the property that \(d(a,e_{1})=d(b,e_{m})=1\), where \(a\), \(b\) are not end points of any \(e_{i}\), \(1\leq i\leq m\), have the same edge color code \((0,0,1)\). Therefore, in all cases we have two vertices with the same edge color code, a contradiction.
Finally, we present an edge-locating \(4\)-coloring of \(C_{n}\): assign colors \(3\) and \(4\) to two incident edges, and color the other edges with \(1\) and \(2\), alternately. **Proposition 2.3**.: Let \(G\) be a graph. Then \(\chi^{\prime}_{L}(G)=2\) if and only if \(G\cong P_{3}\). Proof.: Only one implication requires proof. Assume that \(\chi^{\prime}_{L}(G)=2\). Hence \(\Delta(G)\leq 2\). This implies that \(G\) is a cycle or a path. On the other hand, by Proposition 2.1 and Theorem 2.2, we know that all cycles and paths except \(P_{3}\) need at least three colors for an edge-locating coloring. So the result is immediate. **Theorem 2.4**.: For distinct integers \(n,m\geq 2\), \(\chi^{\prime}_{L}(K_{n,m})=\max\{n,m\}+1\) and \(\chi^{\prime}_{L}(K_{n,n})=n+2\). Proof.: For ease of calculation, we consider the \(n\times m\) matrix \(A=[a_{b_{i}c_{j}}]\), where \(\{b_{1},b_{2},\ldots,b_{n}\}\) and \(\{c_{1},c_{2},\ldots,c_{m}\}\) are the partite sets of \(K_{n,m}\), and \(a_{b_{i}c_{j}}\) is the color of the edge \(b_{i}c_{j}\). Thus, for any fixed integer \(i\) (\(1\leq i\leq n\)), the row \((a_{b_{i}c_{j}})_{j=1}^{m}\) lists the colors assigned to the edges incident to \(b_{i}\). Similarly, for any fixed integer \(j\) (\(1\leq j\leq m\)), the column \((a_{b_{i}c_{j}})_{i=1}^{n}\) lists the colors assigned to the edges incident to \(c_{j}\). An edge-locating coloring of \(K_{n,m}\) imposes the following conditions on \(A\). * All elements in each row (column) are distinct. * For \(i\) and \(j\) (\(1\leq i,j\leq n\)), \(\{a_{b_{i}c_{k}}\}_{k=1}^{m}\neq\{a_{b_{j}c_{k}}\}_{k=1}^{m}\). * For \(i\) and \(j\) (\(1\leq i,j\leq m\)), \(\{a_{b_{k}c_{i}}\}_{k=1}^{n}\neq\{a_{b_{k}c_{j}}\}_{k=1}^{n}\). Let \(n>m\). To satisfy conditions (i) and (iii) we need more than \(n\) colors. We claim that with \(n+1\) colors a matrix \(A\) satisfying conditions (i), (ii) and (iii) can be constructed. For this, let \(S\) be the \((n+1)\times(n+1)\) matrix consisting of the column matrices of colors \([1,2,\ldots,n,n+1]^{t}\), \([n+1,1,\ldots,n-1,n]^{t},\ldots,[2,3,\ldots,n+1,1]^{t}\). Now, assume that \(A\) is the sub-matrix of \(S\) consisting of the first \(n\) rows and \(m\) columns, where \((a_{b_{i}c_{1}})_{i=1}^{n}=[1,2,\ldots,n]^{t}\), \((a_{b_{i}c_{2}})_{i=1}^{n}=[n+1,1,\ldots,n-1]^{t},\ldots,(a_{b_{i}c_{m}})_{i=1}^{n}=[n+3-m,n+4-m,\ldots,n+1,1,2,\ldots,n+1-m]^{t}\). Then, we can see that conditions (i), (ii), and (iii) are all satisfied by \(A\), and the result follows. For the second claim, let \(m=n\). In this case, to construct the matrix \(A\), we need condition (iv) in addition to conditions (i), (ii), and (iii). 4. For \(i\) and \(j\) (\(1\leq i,j\leq n\)), \(\{a_{b_{i}c_{k}}\}_{k=1}^{n}\neq\{a_{b_{k}c_{j}}\}_{k=1}^{n}\). According to the additional condition (iv), all \(2n\) vertices should have distinct sets of colors on their incident edges. First, we show that \(K_{n,n}\) cannot be edge-locating colored with \(n+1\) colors. For a contradiction, assume that it can. Without loss of generality, let the first column be colored with the \(n\) colors \(1,\ldots,n\), not using color \(n+1\). Hence, every row must use \(n+1\) as a color at some vertex (otherwise its color set would equal that of the first column). This shows that every row misses one of the colors \(i\), \(1\leq i\leq n\). On the other hand, since the first column does not use color \(n+1\), at least one other column has \(n+1\) as a color and misses one of the colors \(1\leq i\leq n\), say \(j\); thus, this column and the row which misses \(j\) as a color have the same edge color code. That is a contradiction.
Therefore, \(\chi_{L}^{\prime}(K_{n,n})\geq n+2\). In the following, we give an edge-locating coloring of \(K_{n,n}\) by presenting the \(n\times n\) matrix \[A=\begin{pmatrix}1&2&3&\dots&n-1&n\\ 2&3&4&\dots&n&n+1\\ 3&4&5&\dots&n+1&n+2\\ 4&5&6&\dots&n+2&1\\ \vdots&\vdots&\vdots&&\vdots&\vdots\\ n-1&n&n+1&\dots&n-5&n-4\\ n+1&n+2&1&\dots&n-3&n-2\end{pmatrix}.\] All colors are taken modulo \(n+2\). Also, \(a_{b_{n},c_{i}}=a_{b_{n-1},c_{i}}+2\), for \(1\leq i\leq n\). One can check that the matrix \(A\) satisfies conditions (i) - (iv). In Figure 1, we give an illustration of Theorem 2.4 when \(n=3\) and \(m=2\). In this case, the edge-locating chromatic number is \(4\), and the matrix \(A\) is \(\begin{pmatrix}1&4\\ 2&1\\ 3&2\end{pmatrix}\). For an integer \(n\), the graph \(K_{1,n}\) is called a star graph and is denoted by \(S_{n}\). Figure 1: An edge-locating coloring of \(K_{3,2}\). **Theorem 2.5**.: Let \(G\) be a graph with size \(m\geq 2\). Then \(\chi_{L}^{\prime}(G)=m\) if and only if \(G\in\{P_{4},C_{3},C_{4},S_{m}\}\). Proof.: If \(G\in\{P_{4},C_{3},C_{4},S_{m}\}\), then there is nothing to prove. For the converse, assume first that \(\Delta(G)=2\). Then Proposition 2.1 and Theorem 2.2 imply that \(G\in\{P_{4},C_{3},C_{4},S_{2}\}\). Let \(\Delta(G)=k\), for \(k\geq 3\). For a contradiction, suppose that \(G\not\cong S_{m}\), for any \(m\geq 3\). Let \(v\) be a vertex of \(G\) with \(deg(v)=k\leq m-1\). Thus, there exists at least one vertex \(u\neq v\) of \(G\) such that \(1<deg(u)\leq k\). Hence, there exists an edge \(e=uw\) in \(G\), where \(w\neq v\). Since \(k\geq 3\), we have edges \(e^{\prime}=vz\) and \(e^{\prime\prime}=vx\) such that at least one of \(z\) or \(x\) is not in \(\{u,w\}\). Now, we assign color \(1\) to the edges \(e\) and \(e^{\prime}\), and color the other edges with distinct colors \(2,3,\ldots,m-1\) such that, without loss of generality, color \(2\) is assigned to the edge \(vu\) and color \(3\) to the edge \(e^{\prime\prime}=vx\). We will show that this coloring is an edge-locating coloring of \(G\). For this, we have \(c_{\pi}(v)=(0,0,0,\ldots)\), \(c_{\pi}(u)=(0,0,1,\ldots)\), \(c_{\pi}(z)=(0,1,1,\ldots)\), \(c_{\pi}(w)=(0,1,2,\ldots)\) if \(w\) is not adjacent to \(x\); and if \(w\) is adjacent to \(x\) and the color of \(wx\) is \(4\), then \(d(w,C_{4})=0\) and \(d(z,C_{4})\leq 1\). Therefore, these five vertices have distinct edge color codes. A vertex \(y\notin\{v,u,w,z,x\}\) is incident to at least one new edge with a new color. Hence \(c_{\pi}(y)\neq c_{\pi}(t)\) for \(t\neq y\). This is a contradiction, and therefore \(G=S_{m}\). In the following, we present some bounds for the edge-locating chromatic number of a graph. **Theorem 2.6**.: Let \(G\) be a graph with order \(n\) and \(diam(G)=d\geq 3\). Then, \[\log_{d}n+2\leq\chi^{\prime}_{L}(G).\] Proof.: The edge color code of any vertex of \(G\) has \(\chi^{\prime}_{L}(G)\) coordinates. Since each vertex is incident to at least one edge, at least one coordinate is \(0\). Let \(v\) be a vertex, and let \(e=vu\) be an edge incident to it. There exists an edge \(e^{\prime}=uw\) with \(w\neq v\). The color of \(e^{\prime}\) is different from that of \(e\), and the coordinate of the edge color code of \(v\) corresponding to the color of \(e^{\prime}\) is \(1\). So, two coordinates of the edge color code of any vertex of \(G\) are determined, and the other coordinates can be filled by values \(k\), \(0\leq k\leq d-1\). Since in any edge-locating coloring each vertex must have a unique edge color code, \(n\leq d^{(\chi^{\prime}_{L}(G)-2)}\), and the result is obtained.
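The definitions above are straightforward to check mechanically. The following minimal sketch (assuming the networkx library; the function and variable names are ours) tests whether a proper edge coloring is edge-locating, and confirms the \(4\)-coloring of \(K_{3,2}\) given by the matrix above:

```python
import itertools
import networkx as nx

def is_edge_locating(G, coloring):
    """coloring: dict mapping each edge, as a tuple (u, v), to a color."""
    # Proper edge coloring: edges sharing an endpoint get different colors.
    for e, f in itertools.combinations(coloring, 2):
        if set(e) & set(f) and coloring[e] == coloring[f]:
            return False
    colors = sorted(set(coloring.values()))
    classes = [[e for e, c in coloring.items() if c == col] for col in colors]
    dist = dict(nx.all_pairs_shortest_path_length(G))

    def code(v):  # edge color code: distance from v to each color class
        return tuple(min(min(dist[v][x], dist[v][y]) for (x, y) in cls)
                     for cls in classes)

    codes = [code(v) for v in G.nodes]
    return len(set(codes)) == len(codes)

# The edge-locating 4-coloring of K_{3,2} from Figure 1 (matrix A above).
G = nx.complete_bipartite_graph(3, 2)   # parts {0, 1, 2} and {3, 4}
A = [[1, 4], [2, 1], [3, 2]]            # A[i][j] = color of edge b_i c_j
coloring = {(i, 3 + j): A[i][j] for i in range(3) for j in range(2)}
print(is_edge_locating(G, coloring))    # True: 4 colors locate K_{3,2}
```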
**Theorem 2.7**.: Let \(G\) be a graph with \(diam(G)=d\geq 3\) and \(\chi^{\prime}_{L}(G)=k\). Then, \[\log_{(d-1)}[\frac{n_{i}}{{k\choose i}}]+i\leq k,\ \ for\ \ 1\leq i\leq\Delta,\] where \(n_{i}\) is the number of vertices of degree \(i\). Proof.: The incident edges of a vertex of \(G\) of degree \(i\) can be colored in \({k\choose i}\) ways. Thus, at least \([\frac{n_{i}}{{k\choose i}}]\) vertices of degree \(i\) have the same colors on their incident edges. So, the other coordinates of these vertices can be filled by values \(\ell\), \(1\leq\ell\leq d-1\). Therefore, \([\frac{n_{i}}{{k\choose i}}]\leq(d-1)^{(\chi^{\prime}_{L}(G)-i)}\), for any \(1\leq i\leq\Delta\), and the result is immediate. ## 3 Complete graphs and matchings In this section, we determine the edge-locating chromatic number of complete graphs and of complete graphs minus some matchings. Then we generalize this subject to arbitrary graphs. ### Complete graphs A _matching_ \(M\) of a graph is a set of independent edges. A vertex is \(M\)_-saturated_ if it is incident with an edge of \(M\), and \(M\)-_unsaturated_ otherwise. A matching \(M\) is said to be _maximum_ if \(|M|\geq|M^{*}|\) for any other matching \(M^{*}\). A matching \(M\) is perfect if it saturates all vertices of \(G\). Let \(K_{n}-e\) denote the complete graph \(K_{n}\) minus one edge. **Theorem 3.1**.: For any even \(n\geq 4\), \(\chi^{\prime}_{L}(K_{n})=n+1\). Proof.: First, we show that \(\chi^{\prime}_{L}(K_{n})\geq n+1\). Let \(n\) be an even integer with \(n\geq 4\), and let \(V(K_{n})=\{v_{1},v_{2},\ldots,v_{n}\}\). Then \(\chi^{\prime}_{L}(K_{n})\geq n\). However, we will show that \(\chi^{\prime}_{L}(K_{n})\neq n\) for any even \(n\geq 4\). Let \(c\) be any proper edge coloring of \(K_{n}\) with \(n\) colors. Then, each of at least \(n/2\) colors appears exactly \(n/2\) times, and each of at most \(n/2\) colors appears at most \(n/2-1\) times. A simple verification shows that, precisely, \(n/2\) different colors (say, colors \(1,2,\ldots,\frac{n}{2}\)) appear \(n/2\) times each, and the other colors (namely, colors \(\frac{n}{2}+1,\frac{n}{2}+2,\ldots,n\)) appear exactly \(n/2-1\) times each. Therefore, every vertex is incident to all colors except one color \(k\in\{\frac{n}{2}+1,\frac{n}{2}+2,\cdots,n\}\). This means that there are only \(\frac{n}{2}\) different edge color codes for all \(n\) vertices of \(K_{n}\) with respect to the coloring \(c\). Thus, \(c\) is not an edge-locating coloring of \(K_{n}\), and so \(\chi^{\prime}_{L}(K_{n})\geq n+1\) for any even \(n\geq 4\). Now we provide an edge-locating coloring of \(K_{n}\) with \(n+1\) colors. In this coloring, the edge color code of any vertex \(v\) is formed by \(n+1\) coordinates, two of which are \(1\) while the others are \(0\). Let \(e_{ij}\) be the edge of \(K_{n}\) with end vertices \(v_{i}\) and \(v_{j}\), where \(i<j\). To define the \((n+1)\)-edge-locating coloring function \(\alpha\) on \(K_{n}\), we consider two cases. If \(3\nmid n+1\), then we define \(\alpha\) on \(K_{n}\) as follows. \[\alpha(e_{ij})=j+i-2\ (\textit{mod}\ n+1)\ \textit{for}\ 1\leq i<j\leq n.\] In this case, for any vertex \(v_{i}\), the two coordinates \(2i-2\) and \(i-2\) of the edge color code of \(v_{i}\) are \(1\) and the others are \(0\). If \(i\neq j\) and \(\{2i-2,i-2\}=\{2j-2,j-2\}\), then \(2i-2=j-2\) and \(i-2=2j-2\), which imply \(3i\equiv 0\ (\textit{mod}\ n+1)\); this is impossible since \(3\nmid n+1\). Hence, if \(i\neq j\), then \(\{2i-2,i-2\}\neq\{2j-2,j-2\}\). If \(3\mid n+1\) and \(n+1=3k\), then we define \(\alpha\) on \(K_{n}\) as follows.
If \(e_{ij}\notin\{e_{l(k-1)},e_{lk}:1\leq l\leq k-2\}\), then \[\alpha(e_{ij})=j+i-2\ (\textit{mod}\ n+1)\ \textit{for}\ 1\leq i<j\leq n.\] For \(e_{ij}\in\{e_{l(k-1)},e_{lk}:1\leq l\leq k-2\}\), we define \[\alpha(e_{l(k-1)})=k+l-2\ (l<k-2);\ \alpha(e_{lk})=k+l-3\ (\textit{mod}\ n+1).\] In this case, for any vertex \(v_{i}\), \(i\neq k-1\), the two coordinates \(2i-2\) and \(i-2\) of the edge color code of \(v_{i}\) are \(1\) and the others are \(0\). For \(v_{k-1}\), the two coordinates \(k-2\) and \(k-3\) of the edge color code of \(v_{k-1}\) are \(1\) and the others are \(0\). Similar to the above method, one can show that, for \(k-1\notin\{i,j\}\), \(\{2i-2,i-2\}\neq\{2j-2,j-2\}\), and for \(j\neq k-1\), \(\{2j-2,j-2\}\neq\{k-2,k-3\}\). For instance, consider the edge-locating colorings of \(K_{8}\) and \(K_{10}\) represented by the \((8\times 8)\)- and \((10\times 10)\)-matrices below, where \(8+1=9=3\times 3\) and \(3\nmid 10+1\). The entry in position \(ij\) is the color of the edge \(e_{ij}\) with end-vertices \(v_{i}\) and \(v_{j}\), where \(i<j\). For example, in \(K_{8}\) the vertex \(v_{3}\) is incident to the colors \(\{c(e_{13})=3,\ c(e_{23})=4,\ c(e_{34})=5,\ c(e_{35})=6,\ c(e_{36})=7,\ c(e_{37})=8,\ c(e_{38})=9\}\), and in \(K_{10}\) the vertex \(v_{4}\) is incident to the colors \(\{c(e_{14})=3,\ c(e_{24})=4,\ c(e_{34})=5,\ c(e_{45})=7,\ c(e_{46})=8,\ c(e_{47})=9,\ c(e_{48})=10,\ c(e_{49})=11,\ c(e_{4\ 10})=1\}\). \[K_{8}:\begin{pmatrix}-&1&3&2&4&5&6&7\\ -&-&4&3&5&6&7&8\\ -&-&-&5&6&7&8&9\\ -&-&-&-&7&8&9&1\\ -&-&-&-&-&9&1&2\\ -&-&-&-&-&-&2&3\\ -&-&-&-&-&-&-&4\\ -&-&-&-&-&-&-&-\end{pmatrix}3\mid 8+1=9\] \[K_{10}:\begin{pmatrix}-&1&2&3&4&5&6&7&8&9\\ -&-&3&4&5&6&7&8&9&10\\ -&-&-&5&6&7&8&9&10&11\\ -&-&-&-&7&8&9&10&11&1\\ -&-&-&-&-&9&10&11&1&2\\ -&-&-&-&-&-&11&1&2&3\\ -&-&-&-&-&-&-&2&3&4\\ -&-&-&-&-&-&-&-&4&5\\ -&-&-&-&-&-&-&-&-&6\\ -&-&-&-&-&-&-&-&-&-\end{pmatrix}3\nmid 10+1=11\] **Theorem 3.2**.: For any odd \(n\geq 3\), \(\chi^{\prime}_{L}(K_{n})=n\). Proof.: Let \(n\) be an odd integer with \(n\geq 3\), and let \(V(K_{n})=\{v_{1},v_{2},\cdots,v_{n}\}\). Since \(n\) is odd, every color class (a matching) has at most \((n-1)/2\) edges, and hence \(\chi^{\prime}_{L}(K_{n})\geq\chi^{\prime}(K_{n})=n\). We are going to show that \(\chi^{\prime}_{L}(K_{n})=n\) for any odd \(n\geq 3\). We define \(\alpha\) on \(K_{n}\) as follows: \[\alpha(e_{ij})=j+i-2\ (\text{\it mod}\ n)\ \text{for}\ 1\leq i<j\leq n.\] In this case, for any vertex \(v_{i}\), one coordinate \((2i-2)\) of the edge color code of \(v_{i}\) is \(1\) and the others are \(0\). Since \(2i-2\not\equiv 2j-2\ (\text{\it mod}\ n)\) for \(i\neq j\) and \(n\) odd, this coloring is an edge-locating coloring. Thus \(\alpha\) is an edge-locating coloring of \(K_{n}\), and so \(\chi^{\prime}_{L}(K_{n})=n\) for odd \(n\). \[K_{11}:\begin{pmatrix}-&1&2&3&4&5&6&7&8&9&10\\ -&-&3&4&5&6&7&8&9&10&11\\ -&-&-&5&6&7&8&9&10&11&1\\ -&-&-&-&7&8&9&10&11&1&2\\ -&-&-&-&-&9&10&11&1&2&3\\ -&-&-&-&-&-&11&1&2&3&4\\ -&-&-&-&-&-&-&2&3&4&5\\ -&-&-&-&-&-&-&-&4&5&6\\ -&-&-&-&-&-&-&-&-&6&7\\ -&-&-&-&-&-&-&-&-&-&8\\ -&-&-&-&-&-&-&-&-&-&-\end{pmatrix}\]
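The construction of Theorem 3.2 can also be sanity-checked computationally. In \(K_{n}\) every two vertices are adjacent, so \(d(v,e)\) is \(0\) if \(v\) is an endpoint of \(e\) and \(1\) otherwise; the edge color code of \(v_{i}\) is therefore determined by the single color missing at \(v_{i}\), namely \(2i-2\ (\textit{mod}\ n)\). A short plain-Python sketch (no external libraries; all names are ours):

```python
def codes_for_complete_graph(n):
    # alpha(e_ij) = i + j - 2 (mod n), as in the proof of Theorem 3.2
    color = {(i, j): (i + j - 2) % n
             for i in range(1, n + 1) for j in range(i + 1, n + 1)}
    codes = []
    for v in range(1, n + 1):
        incident = {c for (i, j), c in color.items() if v in (i, j)}
        codes.append(tuple(0 if c in incident else 1 for c in range(n)))
    return codes

for n in (3, 5, 7, 11):                # odd n: all edge color codes distinct
    assert len(set(codes_for_complete_graph(n))) == n
assert len(set(codes_for_complete_graph(4))) < 4   # even n: codes collide
```

The failing check for even \(n\) is consistent with Theorem 3.1, where \(n+1\) colors are needed.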
For \(k\geq 1\), let \(M_{k}\) denote a matching with \(k\) edges. In the proof of the theorem below, we again use the coloring constructed in the proof of Theorem 3.2; the same method is also used in the proof of Theorem 3.4. **Theorem 3.3**.: For \(n\geq 1\) and \(1\leq k\leq n\), we have that \[\chi^{\prime}_{L}(K_{2n+1}\backslash M_{k})=\left\{\begin{array}{ll}2n+1&,\text{ if }1\leq k\leq n-1,\\ 2n&,\text{ if }k=n.\end{array}\right.\] Proof.: For \(1\leq k\leq n-1\), the graph \(K_{2n+1}\backslash M_{k}\) has at least two vertices of degree \(2n\); with at most \(2n\) colors, two such vertices would each be incident to all colors and would share the same edge color code. Thus, \(\chi^{\prime}_{L}(K_{2n+1}\backslash M_{k})\geq 2n+1\). But if \(k=n\), since \(K_{2n+1}\setminus M_{n}\) has exactly one vertex of degree \(2n\), we only get \(\chi^{\prime}_{L}(K_{2n+1}\backslash M_{n})\geq 2n\). To obtain an edge-locating \((2n)\)-coloring of \(K_{2n+1}\backslash M_{n}\) from the edge-locating \((2n+1)\)-coloring of \(K_{2n+1}\) (in Theorem 3.2), we can remove the edges of a monochromatic matching \(M_{n}\), i.e., a full color class. This completes the proof. Looking at the proof of Theorem 3.1, there exists an edge-locating coloring of \(K_{2n}\) with \(2n+1\) colors in which each color class has at least \(n-1\) edges; it is easy to see that exactly \(2n\) color classes have \(n-1\) edges and one color class has \(n\) edges. More precisely, the coloring can be chosen so that color \(2n-1\) is used on \(n\) edges and each of the remaining colors on \(n-1\) edges. Therefore, we have the following. **Theorem 3.4**.: Let \(n\geq 2\). Then \(\chi^{\prime}_{L}(K_{2n}\backslash M_{k})=2n\) if \(k\in\{n-1,n\}\). Proof.: By Theorem 3.1, we have \(\chi^{\prime}_{L}(K_{2n})=2n+1\) for \(n\geq 2\). As mentioned above, there is exactly one monochromatic perfect matching \(M_{n}\), and there are \(2n\) monochromatic matchings \(M_{n-1}\). Thus, removing one of them, we get an edge-locating \(2n\)-coloring of \(K_{2n}\backslash M_{k}\) for \(k\in\{n-1,n\}\). As an immediate result of Theorems 3.1 and 3.4, we have the following. **Corollary 3.5**.: Let \(m\) be a positive integer and \(m\leq n-1\). Then there exist \(m\) matchings \(M_{n-1}\) such that \(\chi^{\prime}_{L}(K_{2n}-(m(M_{n-1})\cup M_{n}))=2n-m\). ### Matchings In other words, an edge-locating coloring of a graph \(G\) is a partition of its edge set into matchings such that the vertices of \(G\) are distinguished by their distances to the matchings. The minimum number of matchings of \(G\) that admits an edge-locating coloring is the edge-locating chromatic number of \(G\). **Theorem 3.6**.: Let \(G\) be a graph with order \(n\geq 5\) and size \(m\). If \(G\) has a perfect matching, then \(\chi_{L}^{\prime}(G)\leq m-n/2+1\). This bound is sharp for the cycle \(C_{6}\) and the path \(P_{6}\). Proof.: Let \(M\) be a perfect matching of \(G\). Color all edges of \(M\) with color \(1\), and the other edges with distinct colors. We will show that this coloring is an edge-locating coloring of \(G\). Note that the vertices of \(G\) cannot be distinguished by color \(1\) alone. Consider an arbitrary vertex \(v\) of \(G\). Suppose first that \(N(v)=\{u\}\). So, \(vu\in M\). It is enough to investigate the vertices that have distance one from the edge(s) \(e=uw\), for \(w\in N(u)\setminus\{v\}\). Let \(N(u)\setminus\{v\}=\{w\}\). If \(\deg(w)\geq 3\), there exists a vertex \(x\) adjacent to \(w\) such that \(xw\notin M\). Hence, any vertex of \(N(w)\setminus\{u\}\) and the vertex \(v\) are distinguished by the color of \(xw\). If \(\deg(w)=2\), then since \(n\geq 5\), there exists an edge \(f=zy\) such that \(\{z\}=N(w)\setminus\{u\}\) and \(y\notin\{v,u,w\}\). Thus, \(f\notin M\), and \(v\) and \(z\) are distinguished by the color of \(f\). Assume that \(|N(u)\setminus\{v\}|\geq 2\). A vertex \(z\) has distance one from all edges \(e=uw\), \(w\in N(u)\setminus\{v\}\), only when \(N(u)\setminus\{v\}\subseteq N(z)\). In this situation, there exists a vertex \(x\in N(u)\setminus\{v\}\) such that \(xz\notin M\). Therefore, \(v\) and \(z\) have different distances from \(xz\), and the result is immediate. Assume now that \(\deg(v)=2\). There exists at least one edge \(e=vu\notin M\), for a vertex \(u\) of \(G\); say \(e\) has color \(2\).
The only vertex whose edge color code could equal that of \(v\) is \(u\). Since \(u\) cannot be a pendant vertex, we have \(N(u)\setminus\{v\}\neq\emptyset\). If \(|N(u)\setminus\{v\}|\geq 2\), there exists a vertex \(w\in N(u)\setminus\{v\}\) such that \(uw\notin M\). Thus, the color of \(uw\) distinguishes \(v\) and \(u\). Let \(|N(u)\setminus\{v\}|=1\), and suppose that \(N(u)\setminus\{v\}=\{z\}\). So, \(uz\in M\). If \(\deg(z)\geq 2\), then there exists a vertex \(w\) such that \(zw\notin M\). If the edge \(vw\) exists, then we have a cycle on the vertices \(v,u,z\), and \(w\). Hence, since \(n\geq 5\) and \(G\) is connected, at least one vertex of this cycle must have degree greater than \(2\); clearly, only the vertices \(z\) or \(w\) can have degree more than \(2\). Since \(vw\in M\), in all possible cases the vertices \(v\) and \(u\) have different edge color codes. Also, if \(\deg(z)=1\), there exists an edge \(f\notin M\) with \(d(v,f)=d(u,f)-1\), and the result follows. Finally, let \(\deg(v)\geq 3\). In this case, consider vertices \(x,y\), and \(z\) as neighbors of \(v\) such that \(xv\in M\). This implies that \(yv,zv\notin M\). Now, the colors of \(yv\) and \(zv\) distinguish \(v\) from the other vertices of \(G\). Theorem 3.6 can be extended to maximum matchings. **Theorem 3.7**.: Let \(G\) be a graph with order \(n\geq 5\) and size \(m\). If \(G\) has a maximum matching \(M\) with \(|M|=k\), then \(\chi_{L}^{\prime}(G)\leq m-k+1\). This bound is sharp for the cycle \(C_{5}\), the path \(P_{5}\), the star \(K_{1,n-1}\) for \(n\geq 2\), and the double star \(S_{p,1}\). Proof.: Let \(M\) be a maximum matching of \(G\) with \(|M|=k\). It is clear that \(M\) saturates \(2k\) vertices, and \(n-2k\) vertices are not saturated by \(M\). We add \(n-2k\) new vertices to \(G\) and make each of them adjacent to one unsaturated vertex. Then, the resulting graph has order \(2n-2k\), size \(m+n-2k\), and a perfect matching of size \(k+n-2k=n-k\). Now, Theorem 3.6 implies that \(\chi_{L}^{\prime}(G)\leq m+n-2k-(n-k)+1=m-k+1\). **Theorem 3.8**.: Let \(G\) be a graph with order \(n\geq 4\) and size \(m\). If \(G\) has \(k\) edge-disjoint perfect matchings \(M=M_{1}\cup M_{2}\cup\cdots\cup M_{k}\) and \(G\backslash M\) is a connected spanning subgraph of \(G\), then \(\chi^{\prime}_{L}(G)\leq m-kn/2+k\). Proof.: Let \(M=M_{1}\cup M_{2}\cup\cdots\cup M_{k}\) be the union of \(k\) edge-disjoint perfect matchings in \(G\), and let \(G\backslash M\) be connected. Establish an edge coloring \(\alpha\) of \(G\) by assigning a distinct color to each matching \(M_{i}\) and distinct further colors to all remaining edges of \(G\backslash M\). Certainly, this coloring \(\alpha\) is a proper edge coloring of \(G\). Since \(G\backslash M\) is a connected spanning subgraph, no two vertices are incident to the same set of colors. This means that every vertex has a distinct edge color code. Therefore, \(\alpha\) is an edge-locating coloring of \(G\). As a closing remark, we raise the following question: is the edge-locating chromatic number of a graph monotonic? Precisely, is it true that if \(G\) is a proper subgraph of \(H\), then \(\chi^{\prime}_{L}(G)\leq\chi^{\prime}_{L}(H)\)? We know that the metric dimension of a graph is not monotonic, since if \(G\) is the star \(K_{1,n-1}\) with \(n\geq 5\) and \(H\) is the graph formed from \(G\) by adding one edge connecting two pendant vertices, then \(dim(G)>dim(H)\).
The locating chromatic number of a graph is also not monotonic, since if \(G=C_{4}\) and \(H\) is the graph formed from \(C_{4}\) by adding two pendant edges to two consecutive vertices of \(C_{4}\), then \(4=\chi_{L}(G)>\chi_{L}(H)=3\). **Theorem 3.9**.: The edge-locating chromatic number of a graph is not necessarily monotonic. Proof.: For a positive integer \(n\), let \(T_{n}\) denote the perfect binary tree, i.e., \(T_{n}\) is a tree with a root \(r\) of degree \(2\) and other vertices of degree \(3\) or \(1\), in which the distance between the root vertex \(r\) and any leaf is \(n\). Let \(G\) denote the graph obtained from \(T_{3}\) by attaching a pendant edge to \(r\). We will show that \(\chi^{\prime}_{L}(G)\geq 5\). Since \(G\) has at least two vertices of degree \(3\), \(\chi^{\prime}_{L}(G)\geq 4\); for a contradiction, we assume that \(\chi^{\prime}_{L}(G)=4\). Without loss of generality, we may suppose that the edges incident to \(r\) are colored by \(1,2\), and \(3\), with the pendant edge colored by \(1\). It is clear that \(G\) has seven vertices of degree \(3\). The distance between any vertex of degree \(3\) and any color class is at most \(2\). So, we cannot have more than two vertices of \(G\) with the same set of colors on their incident edges. Since \(\binom{4}{3}=4\), there are \(4\) possible ways of coloring the incident edges of a vertex of degree \(3\). Hence, exactly one of these sets of three colors occurs at a single vertex of degree \(3\), while each of the remaining three sets occurs at exactly two vertices of degree \(3\). The vertex \(r\) is the only vertex whose incident edges are colored \(1,2\), and \(3\); any other vertex with these incident colors would have distance \(1\) from color \(4\). We say that the vertices of depth \(i\) in \(G\) are the vertices of \(T_{3}\) at distance \(i\) from \(r\). In \(T_{3}\), the root \(r\) has two children, \(r_{L}\) and \(r_{R}\), on its left and right. The children and grandchildren of \(r_{L}\) and \(r_{R}\) are called the left part and the right part, respectively. Now, we want to determine the positions of the two vertices of degree \(3\) whose incident edges are colored \(2,3\), and \(4\). Clearly, these two vertices cannot both be at depth \(1\), nor can they be in the same part. If these two vertices are in different parts, then one edge between a depth-\(1\) vertex and a depth-\(2\) vertex must be colored by \(1\), and the other one is not colored by \(1\). This implies that we have three vertices with incident colors \(2,3\), and \(4\), which is a contradiction. Assume then that the two vertices of degree \(3\) with incident colors \(2,3\), and \(4\) are at depth \(1\) and depth \(2\). Similarly, distinguishing these two vertices gives us another vertex with incident colors \(2,3\), and \(4\), a contradiction. Therefore, \(\chi^{\prime}_{L}(G)\geq 5\), and obviously, by assigning \(5\) colors to the edges of \(G\), one can show that \(\chi^{\prime}_{L}(G)=5\). Let \(H\) denote the graph obtained from \(G\) by joining \(r\) to a pendant vertex at depth 3. One can check that \(\chi^{\prime}_{L}(H)=4\) (see Figure 2). Therefore, there exist graphs \(G\) and \(H\) such that \(G\subset H\) and \(\chi^{\prime}_{L}(H)<\chi^{\prime}_{L}(G)\). ## 4 Join graphs For any graphs \(G\) and \(H\), the _join graph_ of \(G\) and \(H\), denoted by \(G+H\), is the graph obtained by connecting all vertices of \(G\) with all vertices of \(H\).
In particular, if \(G=K_{1}\) and \(H\) is a cycle \(C_{n}\), the graph \(K_{1}+C_{n}\) is called a _wheel_, and it is denoted by \(W_{n}\). The graph \(K_{1}+P_{n}\) is called a _fan_ graph, and it is denoted by \(F_{n}\). The graph \(K_{1}+nK_{2}\) is called a _windmill_ graph, and it is denoted by \(Wm(n)\). The graph \(K_{2}+nK_{2}\) is called a _book_ graph with \(n\) pages, and it is denoted by \(B_{n}\). In this section, we will determine the edge-locating chromatic number of the join graph \(G+H\). **Theorem 4.1**.: For any graphs \(G\) and \(H\), \(\chi^{\prime}_{L}(G+H)\geq\max\{\Delta(G)+|V(H)|,|V(G)|+\Delta(H)\}\). Proof.: It is straightforward, since \(\Delta(G+H)=\max\{\Delta(G)+|V(H)|,|V(G)|+\Delta(H)\}\). This bound is sharp and is achieved by a wheel, a fan, or a windmill, as stated in the following theorem. **Theorem 4.2**.: The following are the edge-locating chromatic numbers of special join graphs: * For \(n\geq 4\), \(\chi^{\prime}_{L}(W_{n})=n\). * For \(n\geq 4\), \(\chi^{\prime}_{L}(F_{n})=n\); \(\chi^{\prime}_{L}(F_{2})=3\), and \(\chi^{\prime}_{L}(F_{3})=4\). * For \(n\geq 3\), \(\chi^{\prime}_{L}(Wm(n))=2n\), and \(\chi^{\prime}_{L}(Wm(2))=5\). * For \(n\geq 3\), \(\chi^{\prime}_{L}(B_{n})=2n+2\), and \(\chi^{\prime}_{L}(B_{2})=6\). Figure 2: Graph \(H\) with edge-locating chromatic number 4. Proof.: For wheels and fans, let \(V(W_{n})=V(F_{n})=\{c,v_{1},v_{2},\ldots,v_{n}\}\) with a center \(c\) and \(n\geq 4\). Since \(\Delta(W_{n})=\Delta(F_{n})=n\), we have \(\chi^{\prime}_{L}(W_{n})\geq n\) and \(\chi^{\prime}_{L}(F_{n})\geq n\). Now, construct an edge \(n\)-coloring \(\alpha\) of \(W_{n}\) (as well as of \(F_{n}\)) as follows. \[\alpha(e)=\left\{\begin{array}{ll}i&\mbox{, if }e=cv_{i},\\ i+2\ (\mbox{mod }n)&\mbox{, if }e=v_{i}v_{i+1}.\end{array}\right.\] Note that all indices are taken modulo \(n\). In wheels, the color code of vertex \(v_{i}\) under \(\alpha\) will have zero entries in the \(i^{th}\), \((i+1)^{th}\), and \((i+2)^{th}\) (in modulo \(n\)) positions. In fans, the color code of \(v_{1}\) has zeros in the \(1^{st}\) and \(3^{rd}\) positions; the color code of \(v_{n}\) has zeros in the \(1^{st}\) and the \(n^{th}\) positions. The color codes of the other vertices are the same as for wheels. The color code of the vertex \(c\) has all entries zero. Therefore all color codes are different, for wheels as well as for fans. For the small cases, it is easy to verify that \(\chi^{\prime}_{L}(F_{2})=3\) and \(\chi^{\prime}_{L}(F_{3})=4\). In the windmill \(Wm(n)\), for \(n\geq 3\), let \(V(Wm(n))=\{c,v_{1},v_{2},\ldots,v_{2n}\}\) with a center \(c\) and \(E(Wm(n))=\{v_{i}v_{i+1}|\mbox{ for all odd }i\leq 2n\}\cup\{cv_{i}|\mbox{ for all }i\leq 2n\}\). Since \(\Delta(Wm(n))=2n\), we have \(\chi^{\prime}_{L}(Wm(n))\geq 2n\). Now, construct an edge \((2n)\)-coloring \(\alpha\) of \(Wm(n)\) as follows. \[\alpha(e)=\left\{\begin{array}{ll}i&\mbox{, if }e=cv_{i},\\ i+2\ (\mbox{mod }2n)&\mbox{, if }e=v_{i}v_{i+1}\mbox{ and }i\mbox{ is odd.}\end{array}\right.\] Note that all indices are taken modulo \(2n\). This coloring \(\alpha\) is easily verified to be an edge-locating coloring. In the book \(B_{n}\), for \(n\geq 3\), let \(V(B_{n})=\{c_{1},c_{2},v_{1},v_{2},\cdots,v_{2n-1},v_{2n}\}\) and \(E(B_{n})=\{c_{1}c_{2}\}\cup\{c_{1}v_{2i-1},c_{1}v_{2i},c_{2}v_{2i-1},c_{2}v_{2i}|\)\(1\leq i\leq n\}\)\(\cup\{v_{2i-1}v_{2i}|\)\(1\leq i\leq n\}\). Since \(\Delta(B_{n})=2n+1\) and there are two vertices of degree \(2n+1\), we have \(\chi^{\prime}_{L}(B_{n})\geq 2n+2\).
Now, construct an edge \((2n+2)\)-locating coloring \(\alpha\) of \(B_{n}\) as follows. \[\alpha(e)=\left\{\begin{array}{ll}1&\mbox{, if }e\in\{c_{1}c_{2},v_{2i-1}v_{2i}|\mbox{ }1\leq i\leq n\},\\ i+1\ (\mod 2n)&\mbox{, if }e\in\{c_{1}v_{2i-1},c_{2}v_{2i}|\mbox{ }1\leq i\leq n\},\\ n+2&\mbox{, if }e=c_{1}v_{2},\\ n+2+i\ (\mod 2n)&\mbox{, if }e\in\{c_{1}v_{2i+2},c_{2}v_{2i-1}|\mbox{ }1\leq i\leq n-1\},\\ 2n+2&\mbox{, if }e=c_{2}v_{2n-1}.\end{array}\right.\] It is easy to verify that this coloring \(\alpha\) is an edge-locating coloring. **Theorem 4.3**.: Let \(G\) be a connected graph and \(H=G+K_{1}\). Then we have * If \(G\) is a graph of order \(2n\) and \(\Delta(G)\leq 2n-2\), then \(\chi^{\prime}_{L}(H)\leq 2n\). Furthermore, \(\chi^{\prime}_{L}(H)=2n+1\) if and only if \(G\) has at least one vertex of degree \(2n-1\). * If \(G\) is a graph of order \(2n+1\) and \(\Delta(G)\leq 2n-1\), then \(\chi^{\prime}_{L}(H)\leq 2n+2\), and equality holds if \(G\) has at least one vertex of degree \(2n\). Proof.: (i). Let \(|V(G)|=2n\). If \(\Delta(G)\leq 2n-2\), then \(G\subseteq K_{2n}\setminus M_{n}\). From Theorem 3.4, \(\chi^{\prime}_{L}(K_{2n}\setminus M_{n})=2n\), and then \(\chi^{\prime}_{L}(G)\leq 2n\). Now we have \(H\subseteq K_{2n+1}\setminus M_{n}\), and from Theorem 3.3, \(\chi^{\prime}_{L}(K_{2n+1}\setminus M_{n})=2n\); then \(\chi^{\prime}_{L}(H)\leq 2n\). Now suppose that \(G\) has at least one vertex of degree \(2n-1\). Then \(H\) has at least two vertices of degree \(2n\), and hence \(\chi^{\prime}_{L}(H)\geq 2n+1\). On the other hand, \(H\subseteq K_{2n+1}\setminus M_{k}\) for some \(k\leq n-1\). By Theorem 3.3, \(\chi^{\prime}_{L}(K_{2n+1}\setminus M_{k})=2n+1\), and thus \(\chi^{\prime}_{L}(H)\leq 2n+1\). Therefore, the equality holds. Conversely, suppose that the equality holds and, for a contradiction, that \(G\) has no vertex of degree \(2n-1\), which means that \(\Delta(G)\leq 2n-2\). From the first part of the proof, since the order of \(G\) is \(2n\), we get \(\chi^{\prime}_{L}(H)\leq 2n\), a contradiction. (ii). Let \(|V(G)|=2n+1\). If \(\Delta(G)\leq 2n-1\), then \(G\subseteq K_{2n+1}\setminus(M_{n}\cup M_{k})\) where \(k\geq 1\). From Theorem 3.3, \(\chi^{\prime}_{L}(K_{2n+1}\setminus(M_{n}\cup M_{k}))\leq 2n\), and then \(\chi^{\prime}_{L}(G)\leq 2n\). In this case, \(H=G+K_{1}\) is a connected graph of order \(2n+2\), with exactly one vertex of maximum degree \(\Delta(H)=2n+1\). Thus we have \(H\subseteq K_{2n+2}\setminus(M_{n}\cup M_{k})\), and from Theorem 3.4, \(\chi^{\prime}_{L}(K_{2n+2}\setminus(M_{n}\cup M_{k}))\leq 2n+2\); then \(\chi^{\prime}_{L}(H)\leq 2n+2\). Now suppose that \(G\) has at least one vertex of degree \(2n\). Then \(H\) has at least two vertices of degree \(2n+1\), and hence \(\chi^{\prime}_{L}(H)\geq 2n+2\). On the other hand, \(H\subseteq K_{2n+2}\setminus M_{n}\). By Theorem 3.4, \(\chi^{\prime}_{L}(K_{2n+2}\setminus M_{n})\leq 2n+2\), and thus \(\chi^{\prime}_{L}(H)\leq 2n+2\). Therefore, the equality holds. ## 5 Trees **Theorem 5.1**.: For any double star \(S_{p,q}\), \(\chi^{\prime}_{L}(S_{p,q})=\begin{cases}p+1,&\text{if }p>q\\ p+2,&\text{if }p=q.\end{cases}\) Proof.: Let \(G=S_{p,q}\), where \(p>q\), with support vertices \(v,u\) of degrees \(p+1\) and \(q+1\), and pendant vertices \(v_{1},\ldots,v_{p}\) and \(u_{1},\ldots,u_{q}\), respectively. Then, by König's Theorem [11, Theorem 10.8], \(\chi^{\prime}(G)=p+1\), and hence \(\chi^{\prime}_{L}(S_{p,q})\geq p+1\). On the other hand, if we assign color \(i\) to \(vv_{i}\) and \(uu_{i}\), and assign color \(p+1\) to the edge \(vu\), then \(c_{\pi}(v_{i})=(1,1,\ldots,d(v_{i},C_{i})=0,1,\ldots,1)\) for \(1\leq i\leq p\), \(c_{\pi}(v)=(0,0,\ldots,0)\), \(c_{\pi}(u)=(0,0,\ldots,0,d(u,C_{q+1})=1,\ldots,d(u,C_{p})=1,d(u,C_{p+1})=0)\), and \(c_{\pi}(u_{j})=(1,1,\ldots,d(u_{j},C_{j})=0,1,\ldots,d(u_{j},C_{q})=1,d(u_{j},C_{q+1})=2,\ldots,d(u_{j},C_{p})=2,d(u_{j},C_{p+1})=1)\) for \(1\leq j\leq q\). Therefore \(\chi^{\prime}_{L}(S_{p,q})=p+1\). Let \(p=q\). Then \(\chi^{\prime}(S_{p,q})=p+1\); assign color \(i\) to the edges \(vv_{i}\) and \(uu_{i}\) for \(1\leq i\leq p\), and color \(p+1\) to \(vu\). In this case, \(c_{\pi}(v)=(0,0,\ldots,0)=c_{\pi}(u)\). Now change the color of the edge \(uu_{1}\) from \(1\) to \(p+2\). Then, using the above method, it can be seen that all vertices have distinct edge color codes. Therefore, \(\chi^{\prime}_{L}(S_{p,q})=p+2\).
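The coloring just described can be verified in the same computational fashion as before; unlike in complete graphs, distances larger than \(1\) now appear in the codes (for instance \(d(u_{j},C_{i})=2\) for \(q<i\leq p\)). A self-contained sketch (assuming networkx; the \(p>q\) case with the illustrative values \(p=4\), \(q=2\)):

```python
import networkx as nx

# Double star S_{p,q}: color i on v v_i and u u_i, color p+1 on the edge vu.
p, q = 4, 2
G = nx.Graph([("v", "u")]
             + [("v", f"v{i}") for i in range(1, p + 1)]
             + [("u", f"u{j}") for j in range(1, q + 1)])
coloring = {("v", "u"): p + 1}
coloring.update({("v", f"v{i}"): i for i in range(1, p + 1)})
coloring.update({("u", f"u{j}"): j for j in range(1, q + 1)})

dist = dict(nx.all_pairs_shortest_path_length(G))
classes = {}
for e, c in coloring.items():
    classes.setdefault(c, []).append(e)
# Edge color code of each vertex: distance to every color class.
codes = {w: tuple(min(min(dist[w][x], dist[w][y]) for x, y in classes[c])
                  for c in sorted(classes))
         for w in G}
assert len(set(codes.values())) == G.number_of_nodes()  # p+1 colors suffice
```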
In general, we have the following. **Theorem 5.2**.: Let \(m\geq 4\). There exists a tree \(T\) of size \(m\) having edge-locating chromatic number \(k\) if and only if \(k\in\{3,4,\ldots,m-1,m\}\). Proof.: For \(k=3\), consider \(T=P_{m+1}\) and apply Proposition 2.1. For \(k\geq 4\), let \(T\) be a tree with vertex set \(\{v_{1},v_{2},\ldots,v_{m+1}\}\), where the vertex \(v_{2}\) is of degree \(k\), the vertices \(v_{1},v_{3},v_{4},\ldots,v_{k},v_{m+1}\) are of degree \(1\), and the other vertices are of degree \(2\). Now, if we assign color \(i\) to the edge \(v_{2}v_{i}\) for \(1\leq i\leq k+1\), \(i\neq 2\), and assign colors \(2\) and \(1\) to the other edges alternately, then for this \(T\) it is easy to see that \(\chi^{\prime}_{L}(T)=k\). **Theorem 5.3**.: Let \(T\) be a tree with \(k\) support vertices \(v_{1},v_{2},\cdots,v_{k}\), and \(\ell_{i}\) leaves adjacent to \(v_{i}\), where \(\ell_{1}\leq\ell_{2}\leq\cdots\leq\ell_{k}\). Let \(e_{i,j}\), \(1\leq j\leq\ell_{i}\), be the pendant edges corresponding to the support vertex \(v_{i}\), and let \(T^{\prime}\) be the subgraph of \(T\) induced by the non-leaves. If \(\Delta(T)<m=\sum_{i=1}^{k}\ell_{i}\), then \(\chi_{L}^{\prime}(T)\leq\Delta(T^{\prime})+\ell_{k}+k-1\). Equality holds if and only if \(T=S_{p,p}\). Proof.: We can consider a proper edge \(\Delta(T^{\prime})\)-coloring of \(T^{\prime}\), with colors \(1,2,\ldots,\Delta(T^{\prime})\). Also, color the \(\ell_{k}\) pendant edges of \(v_{k}\) with the distinct colors \(\Delta(T^{\prime})+1,\Delta(T^{\prime})+2,\ldots,\Delta(T^{\prime})+\ell_{k}\). Now assign colors \(\Delta(T^{\prime})+1,\Delta(T^{\prime})+2,\ldots,\Delta(T^{\prime})+\ell_{i}-1,\Delta(T^{\prime})+\ell_{k}+i\) to the edges \(e_{i,1},\cdots,e_{i,\ell_{i}}\) if \(\ell_{i}\geq 2\), or assign color \(\Delta(T^{\prime})+\ell_{k}+i\) to the edge \(e_{i,\ell_{i}}\) if \(\ell_{i}=1\). Now, let \(v\) and \(u\) be two arbitrary vertices of \(T\). Let \(P_{v-u}\) denote the path between \(v\) and \(u\), and let \(P\) be a maximal path that contains \(P_{v-u}\). There exist two pendant edges \(e\) and \(e^{\prime}\) on \(P\) such that the colors of \(e\) and \(e^{\prime}\) are distinct and distinguish the vertices \(v\) and \(u\). For the equality, if \(T=S_{p,p}\) (\(p\geq 2\)), Theorem 5.1 yields the result. Conversely, let \(\chi_{L}^{\prime}(T)=\Delta(T^{\prime})+\ell_{k}+k-1\) and suppose \(T\neq S_{p,p}\). If \(T=S_{p,q}\), where \(p\geq q+1\), then Theorem 5.1 shows that \(\chi_{L}^{\prime}(T)=p+1\neq 1+p+1=p+2\), a contradiction. Hence \(T\) has at least \(k\geq 3\) support vertices, \(\Delta(T^{\prime})\geq 2\), and \(T^{\prime}\) has at least two leaves, say \(v_{r},v_{t}\), and one non-leaf, say \(v_{i}\).
Suppose \(v_{k}\) is not a leaf in \(T^{\prime}\). Since \(\ell_{r}\leq\ell_{k}\), the pendant edges \(e_{r,j}\) corresponding to \(v_{r}\) can be colored with the colors of the pendant edges \(e_{k,j}\) corresponding to \(v_{k}\). Suppose \(v_{k}\) is a leaf in \(T^{\prime}\); then the pendant edges \(e_{i,j}\) corresponding to \(v_{i}\) can be colored with the colors of the pendant edges \(e_{k,j}\) corresponding to \(v_{k}\). In both of the above cases, the other pendant edges corresponding to the other support vertices can be colored by the method in the first part of the proof. Therefore \(\chi_{L}^{\prime}(T)\leq\Delta(T^{\prime})+\ell_{k}+k-2\). This contradiction shows that \(T=S_{p,p}\). **Theorem 5.4**.: Let \(T\) be a tree with \(m\geq 3\) leaves. If \(\Delta(T)=m\), then \(\chi_{L}^{\prime}(T)=m\). If \(\Delta(T)<m\), then \(\chi_{L}^{\prime}(T)\leq\Delta(T^{\prime})+m\), where \(T^{\prime}\) is the subgraph of \(T\) induced by the non-pendant vertices. Proof.: Assume first that \(\Delta(T)=m\). Let \(N(v)=\{w_{1},w_{2},\ldots,w_{m}\}\), for a vertex \(v\) of \(T\). Let \(v_{1}u_{1},v_{2}u_{2},\ldots,v_{m}u_{m}\) be the pendant edges of \(T\), where the \(v_{i}\) are pendant vertices, for \(1\leq i\leq m\). For any \(i\), \(1\leq i\leq m\), suppose that \(P_{i}\) is the \(v-v_{i}\) path that contains the vertex \(w_{i}\), \(1\leq i\leq m\). Since \(\Delta(T)=m\), \(V(P_{i})\cap V(P_{j})=\{v\}\), for any \(i\) and \(j\), \(1\leq i,j\leq m\). Consider a coloring of \(T\) in such a way that, for any \(i\), \(1\leq i\leq m-1\), the edges of \(P_{i}\) (\(P_{m}\)) are colored by colors \(i\) (\(m\)) and \(i+1\) (\(1\)), alternately, such that the edges \(vw_{i}\) are colored by \(i\), for \(1\leq i\leq m\). Any non-pendant vertex of \(P_{i}\) (\(P_{m}\)) has distance zero from \(\mathcal{C}_{i}\) (\(\mathcal{C}_{m}\)) and \(\mathcal{C}_{i+1}\) (\(\mathcal{C}_{1}\)), and distance more than zero from the other colors. Hence, each non-pendant vertex of \(T\) is distinguished from the other vertices. On the other hand, \(v_{i}\) (\(v_{m}\)) has distance zero from one of the color classes \(\mathcal{C}_{i}\) (\(\mathcal{C}_{m}\)) and \(\mathcal{C}_{i+1}\) (\(\mathcal{C}_{1}\)), and distance one from the other class, for any \(i\), \(1\leq i\leq m\), such that \(|V(P_{i})|\geq 3\). Only some elements of \(N(v)\setminus V(P_{i})\) (\(N(v)\setminus V(P_{m})\)) can have the same coordinates with respect to the color classes \(\mathcal{C}_{i}\) (\(\mathcal{C}_{m}\)) and \(\mathcal{C}_{i+1}\) (\(\mathcal{C}_{1}\)). Let \(z\in N(v)\cap V(P_{i})\). If \(|V(P_{i})|\geq 3\), then the degree of \(z\) is \(2\) and the result is obtained. If \(z\) is a pendant vertex, then since \(m\geq 3\), there exists a color class \(\mathcal{C}_{j}\) such that the distance of \(z\) from \(\mathcal{C}_{j}\) is one and the distance of \(v_{i}\) from \(\mathcal{C}_{j}\) is more than one. Therefore, all vertices of \(T\) have different edge color codes, and the result follows. For the second statement, by [11, Theorem 10.8], we can consider a proper edge \(\Delta(T^{\prime})\)-coloring of \(T^{\prime}\), with colors \(1,2,\ldots,\Delta(T^{\prime})\). Also, color the pendant edges with the distinct colors \(\Delta(T^{\prime})+1,\Delta(T^{\prime})+2,\ldots,\Delta(T^{\prime})+m\). Now, let \(v\) and \(u\) be two arbitrary vertices of \(T\). Let \(P_{v-u}\) denote the path between \(v\) and \(u\), and let \(P\) be a maximal path that contains \(P_{v-u}\). There exist two pendant edges \(e\) and \(e^{\prime}\) on \(P\).
The colors of \(e\) and \(e^{\prime}\) distinguish the vertices \(v\) and \(u\), and the proof is complete. ## 6 Edge metric dimension and distinguishing chromatic index The minimum size of a subset \(S\) of edges of a graph \(G\) such that for any two edges \(e\) and \(e^{\prime}\) there exists \(f\in S\) with \(d(e,f)\neq d(e^{\prime},f)\) is the _edge metric dimension_ of \(G\), denoted by \(\dim_{E}(G)\). We say that such a set \(S\) is an _edge basis_ of \(G\). Actually, the edge metric dimension of a graph \(G\) is the standard metric dimension of the line graph \(L(G)\). This concept was introduced and studied by Nasir et al. [22]. Also, Kalinowski and Pilsniak introduced the distinguishing chromatic index in [16], wherein an edge distinguishing coloring is a proper edge coloring such that the only color-preserving automorphism is the trivial automorphism. The _distinguishing chromatic index_ \(\chi^{\prime}_{D}(G)\) of a graph \(G\) is the minimum number of colors that admits an edge distinguishing coloring. In this section, we study some relations between edge-locating colorings and these concepts. For any subset \(S\) of edges of \(G\), let \(G-S\) denote the subgraph of \(G\) with vertex set \(V(G)\) and edge set \(E(G)\setminus S\). Let \(S\) be an edge basis of \(G\). Consider the graph \(H:=G-S\) and assign colors \(1,2,\ldots,\chi^{\prime}(H)\) to the edges of \(H\) according to a proper edge coloring of \(H\). Also, give the distinct colors \(\chi^{\prime}(H)+1,\chi^{\prime}(H)+2,\ldots,\chi^{\prime}(H)+|S|\) to the elements of \(S\). Clearly, this coloring is an edge-locating coloring of \(G\). Since \(\chi^{\prime}(H)\leq\chi^{\prime}(G)\), we have the following bound. \[\chi^{\prime}(G)\leq\chi^{\prime}_{L}(G)\leq\chi^{\prime}(G)+\dim_{E}(G). \tag{1}\] Clearly, this bound is sharp; for instance, let \(G=C_{2n}\). Let \(vu\) and \(wx\) be two edges in a graph \(G\) and \(f\in Aut(G)\). We say that \(f(vu)=wx\) if \(f(v)=w\) and \(f(u)=x\). **Theorem 6.1**.: Any edge-locating coloring of a graph is an edge distinguishing coloring. Proof.: Let \(G\) be a graph with size \(m\), and let \(\pi=(\mathcal{C}_{1},\mathcal{C}_{2},\ldots,\mathcal{C}_{n})\) be the color classes admitted by an edge-locating coloring \(c\) of \(G\). The result is immediate if \(n=m\). Assume that \(n<m\). For a contradiction, suppose that \(c\) is not an edge distinguishing coloring of \(G\). Thus, there exists a non-trivial automorphism \(f\) of \(G\) that preserves the coloring, with \(f(e_{a})=e_{b}\) for two edges \(e_{a}\) and \(e_{b}\) in \(\mathcal{C}_{1}\). Let \(e_{a}=aa^{\prime}\), \(e_{b}=bb^{\prime}\), \(f(a)=b\) and \(f(a^{\prime})=b^{\prime}\). Consider an arbitrary color \(i\) (\(1\leq i\leq n\)), and let \(d(a,\mathcal{C}_{i})=d(a,e_{i}^{a})\) and \(d(b,\mathcal{C}_{i})=d(b,e_{i}^{b})\), for edges \(e_{i}^{a}\) and \(e_{i}^{b}\) with color \(i\). We have \[d(a,e_{i}^{a})=d(f(a),f(e_{i}^{a}))=d(b,f(e_{i}^{a})) \tag{2}\] and \[d(b,e_{i}^{b})=d(f^{-1}(b),f^{-1}(e_{i}^{b}))=d(a,f^{-1}(e_{i}^{b})). \tag{3}\] Since \(d(a,\mathcal{C}_{i})\leq d(a,f^{-1}(e_{i}^{b}))\) and \(d(b,\mathcal{C}_{i})\leq d(b,f(e_{i}^{a}))\), (2) and (3) imply that \(d(a,\mathcal{C}_{i})=d(b,\mathcal{C}_{i})\). This means that \(c_{\pi}(a)=c_{\pi}(b)\), a contradiction. **Corollary 6.2**.: For any graph \(G\), 1. \(\chi^{\prime}_{D}(G)\leq\chi^{\prime}_{L}(G)\). 2. \(\chi^{\prime}_{D}(G)\leq\chi^{\prime}(G)+\dim_{E}(G)\). By Theorem 16 of [16], the equality in Corollary 6.2 (ii) is achieved if and only if \(G\) is a path graph, \(C_{4}\) or \(C_{6}\).
Also, Theorem 16 of [16] implies that \(\chi^{\prime}_{D}(G)=\chi^{\prime}_{L}(G)=k\) for \(k\in\{\Delta(G),\Delta(G)+1\}\). ## 7 Future Research As we have seen in the various sections, the edge-locating chromatic number is related to several well-known graph concepts. One of them is the chromatic index. Recall that \(\chi^{\prime}(G)\leq\chi^{\prime}_{L}(G)\) for a connected graph \(G\). Classifying the connected graphs \(G\) such that \(\chi^{\prime}(G)=\chi^{\prime}_{L}(G)\) can be valuable. Also, one can ask whether the chromatic index is independent of the edge-locating chromatic number. For this purpose, one can look for a graph whose chromatic index is \(m\) and whose edge-locating chromatic number is \(n\), for any integers \(m\) and \(n\) with \(m\leq n\). We think such graphs exist. For \(k\geq 2\), let \(T_{k}\) be the perfect binary tree with root \(a\), such that \(deg(a)=2\), the other non-pendant vertices have degree \(3\), and all pendant vertices are at distance \(k\) from \(a\). By König's Theorem [11, Theorem 10.8], the chromatic index of \(T_{k}\) is \(3\), for any \(k\geq 2\). But as \(k\) increases, the edge-locating chromatic number of \(T_{k}\) also increases. If we find the edge-locating chromatic number of \(T_{k}\) and let \(G\) be the graph obtained by joining the root of \(T_{k}\) to a star graph, the question is answered. We end the paper with the following problems. **Problem 7.1**.: Prove or disprove that for any connected graph \(G\) of order \(n\), \(\chi^{\prime}_{L}(G)\leq\chi^{\prime}_{L}(K_{n})\). **Problem 7.2**.: Characterize the class \(\Psi\) of connected graphs such that \(G\in\Psi\) if and only if \(\chi^{\prime}_{D}(G)=\chi^{\prime}_{L}(G)=k\) for \(k\in\{\Delta(G),\Delta(G)+1\}\). **Problem 7.3**.: For a connected graph \(G\), is there a significant relationship between \(\chi_{L}(G)\) and \(\chi^{\prime}_{L}(G)\)? ## Acknowledgment The research has been supported by the 2023 PPMI research grant, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Indonesia.
2305.19096
Noisy voter models in switching environments
We study the stationary states of variants of the noisy voter model, subject to fluctuating parameters or external environments. Specifically, we consider scenarios in which the herding-to-noise ratio switches randomly and on different time scales between two values. We show that this can lead to a phase in which polarised and heterogeneous states exist. Secondly, we analyse a population of noisy voters subject to groups of external influencers, and show how multi-peak stationary distributions emerge. Our work is based on a combination of individual-based simulations, analytical approximations in terms of a piecewise-deterministic Markov process (PDMP), and corrections to this process capturing intrinsic stochasticity in the linear-noise approximation. We also propose a numerical scheme to obtain the stationary distribution of PDMPs with three environmental states and linear velocity fields.
Annalisa Caligiuri, Tobias Galla
2023-05-30T14:59:21Z
http://arxiv.org/abs/2305.19096v2
# Noisy voter models in switching environments ###### Abstract We study the stationary states of variants of the noisy voter model, subject to fluctuating parameters or external environments. Specifically, we consider scenarios in which the herding-to-noise ratio switches randomly and on different time scales between two values. We show that this can lead to a phase in which polarised and heterogeneous states exist. Secondly, we analyse a population of noisy voters subject to groups of external influencers, and show how multi-peak stationary distributions emerge. Our work is based on a combination of individual-based simulations, analytical approximations in terms of a piecewise-deterministic Markov process (PDMP), and corrections to this process capturing intrinsic stochasticity in the linear-noise approximation. We also propose a numerical scheme to obtain the stationary distribution of PDMPs with three environmental states and linear velocity fields. ## I Introduction The voter model (VM) [1; 2] is a model of interacting individuals, and can be used to describe, among other phenomena, the competition of opinions in a population. In the simplest setting, every agent in the population can have opinion \(A\) or opinion \(B\). The individuals form an interaction network; this can be a complete graph, or the different agents can have limited sets of nearest neighbours. The interaction is an imitation process: an agent is selected at random, and adopts the opinion of a neighbour, selected randomly as well. Provided the interaction network consists of one single connected component, this model has two absorbing states, in which all agents have the same opinion (all \(A\), or all \(B\)). These states are referred to as _consensus_ states. The voter model in this simple form was first proposed by probabilists [2], and has found widespread applications, including in the modelling of opinion dynamics, language competition, and population genetics [3; 4; 5; 6; 7; 8; 9]. The VM has also generated significant interest in statistical physics, with particular focus on its coarsening dynamics [10; 11], field theoretic descriptions, and different types of phase transition and universality [12; 13]. So-called 'noisy' voter models (nVM) are variations of the original model. The term 'noisy' is used to indicate that, in addition to the imitation process, agents can also change opinion state spontaneously. Models of this type have been used to describe people choosing among restaurants, or ants selecting one of two paths towards a source of food [14; 15]. The nVM has no absorbing states, and shows a finite-size phase transition [14; 16; 17]. When the noise is stronger than the herding mechanism, the steady-state distribution is unimodal and the system displays coexistence of the two opinions. If the noise is below a threshold (set by the herding rate and the size of the population), then the stationary distribution of agents across the two opinions is bimodal. The system spends most of its time near one of the consensus states, with occasional switches from one side of phase space to the other. The models mentioned so far describe homogeneous populations in which all agents are subject to the same update rules. In [18; 19] agents that never change opinion were introduced. These are referred to as _zealots_. The effect of zealots on the VM is studied for example in [20; 21; 22; 19].
The presence of zealots can destroy the symmetry of the steady-state distribution, and the population can become biased towards the opinion of the majority of zealots. Other, related mechanisms include the introduction of mass media [23], or personal information [24]. The overall purpose of the present work is to study the effects of (i) time-dependence of the imitation dynamics, and (ii) time-dependent external influence on VMs. More specifically, with regards to (i), we study variants of the nVM in which the ratio of the noise and herding rates switches randomly between two different values. There are thus periods in which the ordering effect of herding is strong compared to the disordering effect of spontaneous opinion changes, and other periods in which the disordering effects dominate. In terms of statistical physics this falls into a class of population dynamics subject to environmental fluctuations, studied for example in [25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. We also note recent work on VMs in fluctuating environments [35], where a three-state constrained VM under fluctuating influence is studied. Further, we refer to [36], where the authors study a VM which switches between phases with and without noise. With regards to (ii), we introduce groups of agents who are inert to the herding mechanism (akin to zealots), but who can switch opinion states randomly from time to time. We will refer to these as _influencers_. This term is to be understood broadly; in particular, we do not restrict the notion of influencers to individual human actors. Instead, the term captures different types of external influences on the population of conventional VM agents, including media, advertising, social networks etc., or indeed new information, facts or events that arise and drive opinions in a population (e.g., a political scandal that comes to light). One main feature of our model is that the effect of the influencers is not static; instead, it fluctuates in time. The objective of this work is thus to understand if fluctuations of the relative noise rate or of external influences affect the formation of consensus. At the centre of this is the question of how demographic noise (due to the finiteness of the population), decision noise (random opinion changes), and external randomness interact. To address these questions, we use a number of different approaches from statistical physics. In the limit of infinite populations, and thus discarding demographic randomness, the system reduces to a so-called piecewise-deterministic Markov process (PDMP) [37; 38; 39; 40]. The stationary distribution of such a process can be obtained analytically for the case of two environmental states, see for example [40]. As a by-product of our work we develop a numerical scheme to obtain the stationary state of models with three or more environmental states. Advancing the method of [28], we also compute corrections to the infinite-population limit. This can be used to approximate the stationary distribution of the system with large but finite populations. Separately, analytical progress is also possible in the adiabatic limit of a fast-switching environment [30]. The opposite extreme, very slow environmental changes, can also be addressed analytically. The remainder of the paper is structured as follows. In Section II we define the model, including in particular the dynamics of the environment.
Section III contains a description of analytical approaches for very fast or very slow environmental dynamics and, separately, for the large-population limit. In Section IV we study a VM with fluctuating noise parameter. We obtain analytical results for fast and slow switching, and we present simulation results for intermediate environmental time scales. Section V focuses on the model with fluctuating influencers. We present our conclusions and a brief outlook in Section VI. ## II Model definitions and methods ### Model definitions We consider a finite population of \(N\) individuals. At any given time, each individual can be in one of two states, which we label as \(A\) and \(B\). We write \(i\) for the number of individuals in state \(A\); the number of individuals in state \(B\) is then \(N-i\). The composition of the population evolves in continuous time via reactions that each convert an individual of type \(A\) into type \(B\), or vice versa. An individual can change state through three different mechanisms: (i) they can interact with another individual and copy its state; (ii) they can change state spontaneously; or (iii) they can interact with an influencer and thus change opinion. The model operates on a fully connected graph, that is, any one of the \(N\) individuals can copy the state of any other individual in item (i). Similarly, the interaction with the influencers is also all-to-all, in the sense that in item (iii) any influencer can, in principle, affect any of the \(N\) individuals in the population. In order to model processes (i) and (ii) we follow the conventions of the existing literature on the nVM [14; 17; 41]. The external influence [process (iii)] is represented by 'forces' driving the individuals towards one of the opinion states. We model these forces as a group of size \(\alpha N\) (with \(\alpha\geq 0\) a model parameter). We reiterate that influencers are not necessarily to be thought of as individuals; there is therefore no strict need to limit \(\alpha N\) to integer values. Instead, \(\alpha\) characterises the total strength of all influencers, relative to that of the \(N\) agents in the population. Not all influencers need to act towards the same state (\(A\) or \(B\)). Instead, at any one time a fraction \(z\) of the \(\alpha N\) influencers acts towards \(A\), and a fraction \(1-z\) acts in the direction of opinion \(B\). Naturally, \(z\) is restricted to the interval \([0,1]\). We assume that the fraction \(z\) fluctuates in time. More precisely, and to allow for a compact notation, we think of the population dynamics as subject to an external environment, which can take states \(\sigma=0,1,\ldots,S-1\). This environment determines the fraction \(z\) of influencers acting in the direction of opinion \(A\) (that is, \(z\) is a function of \(\sigma\)), and it can also affect the noise rate in the dynamics. We will now describe this in detail. The per capita rates, in environment \(\sigma\), for an agent in state \(A\) to change to state \(B\) and for the reverse process respectively are given by \[\pi_{A\to B,\sigma}(i) = a_{\sigma}+h\left[\frac{N-i}{(1+\alpha)N}+\frac{\alpha N(1-z_{\sigma})}{(1+\alpha)N}\right],\] \[\pi_{B\to A,\sigma}(i) = a_{\sigma}+h\left[\frac{i}{(1+\alpha)N}+\frac{\alpha Nz_{\sigma}}{(1+\alpha)N}\right]. \tag{1}\] The quantity \(a_{\sigma}\) is the rate of spontaneous opinion changes. We assume that this parameter can take different values in the different environmental states, as indicated by the subscript \(\sigma\).
The coefficient \(h\) is what is sometimes called a _herding parameter_, and indicates how easily individuals are influenced by the opinions of other individuals, including external influencers. From the above expressions it is clear that only the ratio of the noise and the herding parameters is relevant for the stationary state. We can therefore set \(h=1\) throughout. This amounts to fixing the time scales of the processes in Eq. (1). For the time being we will keep the value of \(h\) general though, as this allows us to track the origin of different terms in the dynamics. The square brackets in the rates represent processes (i) and (iii) described above. A focal individual chooses an interaction partner either from the population of agents, or from the set of \(\alpha N\) external influencers, and then adopts the opinion of this interaction partner. A change of the composition of the population occurs only if the interaction partner is in the opinion state opposite to that of the focal individual. The expression \((1+\alpha)N\) in the denominator in Eq. (1) is the total number of possible interaction partners, hence \((N-i)/[(1+\alpha)N]\) is the probability that the interaction partner is an individual from the population and in opinion state \(B\). Similarly, \(\alpha N(1-z_{\sigma})/[(1+\alpha)N]\) is the probability that the interaction partner is an external influencer promoting opinion \(B\). The expressions in Eq. (1) are per capita rates. The total rates of converting individuals of type \(B\) to type \(A\), or vice versa, are then \[T^{+}_{i,\sigma} = (N-i)\pi_{B\to A,\sigma}(i),\] \[T^{-}_{i,\sigma} = i\pi_{A\to B,\sigma}(i). \tag{2}\] These are the rates with which transitions \(i\to i+1\) and \(i\to i-1\) occur in the population if the environment is in state \(\sigma\). It remains to specify the dynamics of the environmental state. We assume that the environment undergoes a Markovian process governed by rates \(\lambda\mu_{\sigma\rightarrow\sigma^{\prime}}(i)\). The \(\mu_{\sigma\rightarrow\sigma^{\prime}}(i)\) are the elements of a stochastic matrix (with \(\sum_{\sigma^{\prime}}\mu_{\sigma\rightarrow\sigma^{\prime}}(i)=1\) for all \(\sigma\)). We set \(\mu_{\sigma\rightarrow\sigma}(i)=0\). In the present work the rates \(\mu_{\sigma\rightarrow\sigma^{\prime}}\) do not depend on the state of the population, \(i\). However, to develop the general formalism we will allow for such a dependence in principle whenever possible. The parameter \(\lambda\) controls the time scale of the environmental dynamics relative to that of the changes within the population. We thus refer to the scenario \(\lambda\to 0\) as the 'slow-switching' limit, and to situations in which \(\lambda\rightarrow\infty\) as 'fast-switching'. ### Master equation We write \(P(i,\sigma,t)\) for the probability to find the system in state \((i,\sigma)\) at time \(t\), that is the probability to have \(i\) individuals of opinion \(A\) in the population, and the environment in state \(\sigma\). The time dependence of \(P\) is omitted below to make the notation more compact. We then have the following master equation \[\frac{d}{dt}P(i,\sigma)=(E-1)\left[T^{-}_{i,\sigma}P(i,\sigma)\right]+(E^{-1}-1)\left[T^{+}_{i,\sigma}P(i,\sigma)\right]+\lambda\sum_{\sigma^{\prime}}[\mu_{\sigma^{\prime}\rightarrow\sigma}(i)P(i,\sigma^{\prime})-\mu_{\sigma\rightarrow\sigma^{\prime}}(i)P(i,\sigma)], \tag{3}\] where we have defined the raising operator \(E\), acting on functions of \(i\) as \(Ef(i)=f(i+1)\). Its inverse is \(E^{-1}\), i.e., we have \(E^{-1}f(i)=f(i-1)\).
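To make the model definition concrete, the following is a minimal Gillespie-style simulation of the rates in Eqs. (1)-(2), with a two-state environment switching at rate \(\lambda\) (i.e., \(\mu_{0\to 1}=\mu_{1\to 0}=1\)). This is our own illustrative sketch, not the authors' code; parameter values are placeholders, and the state is recorded once per event rather than at regular times.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(N, a, h=1.0, alpha=0.0, z=(0.0, 1.0), lam=0.1, T=2000.0):
    """Sample i/N for the model of Eqs. (1)-(2). a = (a_0, a_1)."""
    i, sigma, t, samples = N // 2, 0, 0.0, []
    while t < T:
        # Total rates T^+ and T^- in environment sigma, cf. Eqs. (1)-(2)
        up = (N - i) * (a[sigma] + h * (i + alpha * N * z[sigma]) / ((1 + alpha) * N))
        down = i * (a[sigma] + h * ((N - i) + alpha * N * (1 - z[sigma])) / ((1 + alpha) * N))
        rates = np.array([up, down, lam])      # third entry: environment flip
        total = rates.sum()
        t += rng.exponential(1.0 / total)      # Gillespie waiting time
        event = rng.choice(3, p=rates / total)
        if event == 0:
            i += 1
        elif event == 1:
            i -= 1
        else:
            sigma = 1 - sigma
        samples.append(i / N)
    return np.array(samples)

x = simulate(N=50, a=(0.02, 0.05), lam=0.02)
print(x.mean(), x.std())
```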
## III Theoretical analysis ### Fast-switching limit In the limit of very fast environmental switching (\(\lambda\rightarrow\infty\)) we can, for the purposes of the dynamics in the population, assume that the environmental process is at stationarity. We write \(\rho^{*}_{\sigma}(i)\) for this stationary distribution. This distribution fulfills the relations \[\sum_{\sigma^{\prime}}\left[\mu_{\sigma^{\prime}\rightarrow\sigma}\rho^{*}_{\sigma^{\prime}}(i)-\mu_{\sigma\rightarrow\sigma^{\prime}}\rho^{*}_{\sigma}(i)\right]=0 \tag{4}\] for all \(\sigma\). Following [29], the dynamics of the population in the fast-switching limit is governed by effective rates \[\overline{T}^{\pm}_{i}\equiv\sum_{\sigma}\rho^{*}_{\sigma}(i)T^{\pm}_{i,\sigma}. \tag{5}\] For our system these effective rates are \[\overline{T}^{+}_{i} = \left[\overline{a}+h\frac{i}{(1+\alpha)N}+\frac{\alpha Nh\overline{z}}{(1+\alpha)N}\right](N-i)\] \[\overline{T}^{-}_{i} = \left[\overline{a}+h\frac{N-i}{(1+\alpha)N}+\frac{\alpha Nh(1-\overline{z})}{(1+\alpha)N}\right]i, \tag{6}\] where we have written \[\overline{f}=\sum_{\sigma}\rho^{*}_{\sigma}(i)f_{\sigma}(i). \tag{7}\] We have suppressed the potential \(i\)-dependence of objects of this type. If model parameters are such that \(\overline{a}\neq 0\) then there are no absorbing states for this effective birth-death process. The stationary distribution is given by (see e.g. [20]) \[\overline{P}^{*}_{i}=\frac{\prod_{k=1}^{i}\overline{\gamma}_{i-k}}{1+\sum_{\ell=1}^{N}\prod_{k=1}^{\ell}\overline{\gamma}_{\ell-k}}, \tag{8}\] where \(\overline{\gamma}_{i}=\overline{T}^{+}_{i}/\overline{T}^{-}_{i+1}\). ### Slow-switching limit In the slow-switching scenario, and assuming that the switching rates \(\mu_{\sigma\rightarrow\sigma^{\prime}}\) are not functions of \(i\), the stationary distribution is given by the weighted sum of the stationary distributions \(P^{*}(i|\sigma)\) for the system in fixed environments \(\sigma\in\{0,1\}\). These distributions in turn are obtained from relations analogous to that in Eq. (8), but for fixed environment, and therefore with rates \(T^{\pm}_{i,\sigma}\) instead of \(\overline{T}^{\pm}_{i}\). We then have \[P^{*}(i)=\sum_{\sigma}\rho^{*}_{\sigma}P^{*}(i|\sigma). \tag{9}\]
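The product formula in Eq. (8) is straightforward to evaluate numerically. Below is a small sketch (our own, using log-space products for numerical stability) that computes the fast-switching distribution of Eq. (8) and the slow-switching mixture of Eq. (9), here for the \(\alpha=0\) rates used later in Sec. IV; the parameter values are placeholders.

```python
import numpy as np

def stationary(Tp, Tm, N):
    """Birth-death stationary distribution: P_i / P_{i-1} = Tp(i-1) / Tm(i)."""
    logP = np.zeros(N + 1)
    for i in range(1, N + 1):
        logP[i] = logP[i - 1] + np.log(Tp(i - 1)) - np.log(Tm(i))
    P = np.exp(logP - logP.max())
    return P / P.sum()

N, h, a0, a1 = 40, 1.0, 0.02, 0.05
rates = lambda a: (lambda i: (a + h * i / N) * (N - i),      # T^+
                   lambda i: (a + h * (N - i) / N) * i)      # T^-

fast = stationary(*rates((a0 + a1) / 2), N)                  # Eq. (8), a-bar
slow = 0.5 * stationary(*rates(a0), N) + 0.5 * stationary(*rates(a1), N)  # Eq. (9)
print(fast[:3], slow[:3])
```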
### Rate equations and piecewise deterministic Markov process #### iii.3.1 Piecewise deterministic Markov process in the limit of infinite populations In the limit of an infinite population the stochasticity within the population becomes irrelevant and a deterministic dynamics emerges between switches of the environmental state. This results in a piecewise deterministic Markov process (PDMP), see for example [28] and references therein. Writing \(\phi=i/N\) and \(\mathcal{T}_{\sigma}^{\pm}(i/N)=T_{i,\sigma}^{\pm}/N\), and taking the limit \(N\to\infty\), the deterministic evolution between changes of the environment is governed by \[\dot{\phi}=\mathcal{T}_{\sigma}^{+}(\phi)-\mathcal{T}_{\sigma}^{-}(\phi). \tag{10}\] For our model, this can be written as \[\dot{\phi}=v_{\sigma}(\phi), \tag{11}\] with \[v_{\sigma}(\phi)\equiv a_{\sigma}(1-2\phi)+\frac{h\alpha}{1+\alpha}(z_{\sigma}-\phi). \tag{12}\] As before, the environment \(\sigma\) follows the process defined by the rates \(\lambda\mu_{\sigma\to\sigma^{\prime}}\). The different terms in Eq. (12), valid in a fixed environment \(\sigma\), can be interpreted as follows. The first term, \(a_{\sigma}(1-2\phi)\), drives the population towards a state with \(\phi=1/2\), i.e., equal proportions of individuals in opinions \(A\) and \(B\) respectively. This term describes random opinion changes, with equal rate from \(A\) to \(B\) or vice versa. If this were the only process in an infinite population, then a state with \(\phi=1/2\) would eventually result in any fixed environment with \(a_{\sigma}>0\). The second term on the right-hand side of Eq. (12) describes the effects of the external influencers. The fraction of influencers promoting opinion \(A\) is \(z_{\sigma}\), and a fraction \(1-z_{\sigma}\) promotes opinion \(B\). The net result of these external forces is a drive towards the state \(\phi=z_{\sigma}\). The strength of this pull is governed by the herding parameter \(h\) and by the ratio \(\alpha/(1+\alpha)\), describing the weight of external influencers (of which there are \(\alpha N\)) among all partners a given individual can interact with (\(N\) individuals in the population plus \(\alpha N\) external influencers). If \(h\alpha\gg(1+\alpha)a_{\sigma}\) then the external forces dominate the dynamics of the population, and the noise term proportional to \(a_{\sigma}\) becomes irrelevant. We further note that the interaction among individuals in the population has no effect on the deterministic dynamics in Eq. (12) [42; 17]. This is a well-known characteristic of the VM, and a consequence of the fact that, in an interaction of two individuals of types \(A\) and \(B\) respectively, the process of individual \(A\) copying opinion \(B\) is exactly as likely as the reverse. As a final remark, we note that the dynamics in Eq. (12) has a single attractive fixed point, given by \[\phi_{\sigma}^{*}=\frac{a_{\sigma}+h\frac{\alpha}{1+\alpha}z_{\sigma}}{2a_{\sigma}+h\frac{\alpha}{1+\alpha}}. \tag{13}\] We always have \(\phi_{\sigma}^{*}\in[0,1]\). The fixed point \(\phi_{\sigma}^{*}\) is located at the extreme values \(0\) or \(1\) only if \(a_{\sigma}=0\), \(\alpha>0\), and \(z_{\sigma}\in\{0,1\}\). That is, for the unique fixed point to be at \(0\) or \(1\), there must not be any spontaneous opinion changes, there must be a non-zero set of influencers, and all influencers must act in the same direction. Further, most of our paper excludes cases in which two different environmental states lead to the same fixed point, i.e., we assume that \(\phi_{\sigma}^{*}\neq\phi_{\sigma^{\prime}}^{*}\) for \(\sigma\neq\sigma^{\prime}\) and \(\alpha,h\neq 0\). Without loss of generality we can then assume that the environmental states \(\sigma=0,\ldots,S-1\) are labelled such that \(\phi_{0}^{*}<\phi_{1}^{*}<\cdots<\phi_{S-1}^{*}\). The dynamics of the PDMP is then restricted to the interval \((\phi_{0}^{*},\phi_{S-1}^{*})\), where \(\phi_{0}^{*}\) is the left-most fixed point, and \(\phi_{S-1}^{*}\) is the right-most fixed point on the \(\phi\)-axis.
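A few lines suffice to evaluate the velocity field of Eq. (12) and its fixed point, Eq. (13). The sketch below is our own and simply checks that \(v_{\sigma}\) vanishes at \(\phi_{\sigma}^{*}\); parameter values are placeholders.

```python
def v(phi, a, z, alpha, h=1.0):
    # PDMP velocity of Eq. (12)
    return a * (1 - 2 * phi) + h * alpha / (1 + alpha) * (z - phi)

def phi_star(a, z, alpha, h=1.0):
    # Fixed point of Eq. (13)
    c = h * alpha / (1 + alpha)
    return (a + c * z) / (2 * a + c)

p = phi_star(0.01, 1.0, 0.5)        # environment pushing towards opinion A
print(p, v(p, 0.01, 1.0, 0.5))      # velocity is ~0 at the fixed point
```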
#### iii.3.2 Stationary distribution The PDMP defined in Sec. III.3, governed by Eq. (11) and the dynamics of the environmental process, can be described by the following Liouville-master equation for the probability \(\Pi(\phi,\sigma)\) to find the system in state \((\phi,\sigma)\), \[\frac{d}{dt}\Pi(\phi,\sigma)=-\frac{\partial}{\partial\phi}\left[v_{\sigma}(\phi)\Pi(\phi,\sigma)\right]+\lambda\sum_{\sigma^{\prime}}[\mu_{\sigma^{\prime}\to\sigma}(\phi)\Pi(\phi,\sigma^{\prime})-\mu_{\sigma\to\sigma^{\prime}}(\phi)\Pi(\phi,\sigma)]. \tag{14}\] In slight abuse of notation we have written \(\mu_{\sigma\to\sigma^{\prime}}(\phi)\) for the transition rates of the environmental process if the population is in state \(\phi\). The stationary state of the PDMP is defined by \(\frac{d}{dt}\Pi(\phi,\sigma)=0\) for all \(\phi,\sigma\). In this state we have \[\frac{\partial}{\partial\phi}\left[v_{\sigma}(\phi)\Pi(\phi,\sigma)\right] = \lambda\sum_{\sigma^{\prime}}\left[\mu_{\sigma^{\prime}\to\sigma}(\phi)\Pi(\phi,\sigma^{\prime})-\mu_{\sigma\to\sigma^{\prime}}(\phi)\Pi(\phi,\sigma)\right]. \tag{15}\] #### iii.3.3 Special case of two environmental states If there are only two environmental states, \(\sigma\in\{0,1\}\), then the stationary state can be found explicitly [28; 40], and takes the following form, \[\Pi(\phi,0) = \frac{\mathcal{N}}{-v_{0}(\phi)}g(\phi),\] \[\Pi(\phi,1) = \frac{\mathcal{N}}{v_{1}(\phi)}g(\phi), \tag{16}\] where \(\phi\in(\phi_{0}^{*},\phi_{1}^{*})\), and \[g(\phi)=\exp\left[-\lambda\int^{\phi}du\left(\frac{\mu_{0\to 1}(u)}{v_{0}(u)}+\frac{\mu_{1\to 0}(u)}{v_{1}(u)}\right)\right]. \tag{17}\] We note that \(v_{0}(\phi)<0\) and \(v_{1}(\phi)>0\) for \(\phi\in(\phi_{0}^{*},\phi_{1}^{*})\). The constant \(\mathcal{N}\) in Eq. (16) is determined by normalisation, \(\int_{\phi_{0}^{*}}^{\phi_{1}^{*}}du\left[\Pi(u,0)+\Pi(u,1)\right]=1\).
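For the two-state case the stationary density of Eqs. (16)-(17) can be evaluated directly. The following sketch (ours) computes \(g(\phi)\) by numerical quadrature for constant switching rates \(\mu_{0\to 1}=\mu_{1\to 0}=1\) and the linear velocities of Eq. (12) with \(h=1\), \(z_{0}=0\), \(z_{1}=1\); the lower limit of the integral in Eq. (17) only affects the overall normalisation, so we anchor it at the midpoint.

```python
import numpy as np
from scipy.integrate import quad

a, alpha, h, lam = 0.01, 0.5, 1.0, 0.2
c = h * alpha / (1 + alpha)
v = lambda phi, z: a * (1 - 2 * phi) + c * (z - phi)       # Eq. (12)
p0, p1 = a / (2 * a + c), (a + c) / (2 * a + c)            # phi*_0, phi*_1

def g(phi):
    # Eq. (17), integrated from the midpoint of (phi*_0, phi*_1)
    integrand = lambda u: lam * (1.0 / v(u, 0.0) + 1.0 / v(u, 1.0))
    val, _ = quad(integrand, 0.5 * (p0 + p1), phi)
    return np.exp(-val)

phi = 0.6                                                  # interior point
Pi0 = g(phi) / (-v(phi, 0.0))                              # Eq. (16), up to N
Pi1 = g(phi) / v(phi, 1.0)
print(p0, p1, Pi0, Pi1)
```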
#### iii.3.4 Systems with more than two environmental states For systems with three or more environmental states we do not know of any method to find the stationary distribution of the resulting PDMP analytically. However, it is possible to integrate the system in Eq. (15) numerically. To deal with singularities in Eq. (15) at the fixed points \(\phi_{\sigma}^{*}\) one can divide the interval \(\phi_{0}^{*}<\phi<\phi_{S-1}^{*}\) into \(S-1\) subintervals \(\phi_{\sigma}^{*}<\phi<\phi_{\sigma+1}^{*}\) (\(\sigma=0,\ldots,S-2\)), and perform a numerical integration in each of these intervals. One then needs to ensure continuity of all functions \(\Gamma_{\sigma}(\phi)=v_{\sigma}(\phi)\Pi(\phi,\sigma)\) at the boundaries. Further details can be found in Appendix A. ### Leading-order effects of noise The PDMP description retains the environmental noise, but discards all intrinsic stochasticity at fixed environmental state. This approach is formally valid in the limit of infinite populations, \(N\to\infty\). The effects of noise within the population can be studied to leading order by an expansion of the master equation (3) in powers of \(1/N\). This follows the lines of [28]. To leading order the expansion produces the PDMP, and to sub-leading order a dynamics described by a set of 'piecewise' stochastic differential equations is obtained [28; 29; 43]. More precisely, these are of the form \(\dot{x}=v_{\sigma}(x)+\sqrt{w_{\sigma}(x)/N}\eta(t)\), where \(\eta\) is zero-average Gaussian white noise of unit amplitude (i.e., \(\langle\eta(t)\eta(t^{\prime})\rangle=\delta(t-t^{\prime})\)). The functions \(w_{\sigma}(x)\) are given by [28] \[w_{\sigma}(x)=\mathcal{T}_{\sigma}^{+}(x)+\mathcal{T}_{\sigma}^{-}(x). \tag{18}\] As before, the environmental state undergoes the Markov process defined by the rates \(\lambda\mu_{\sigma\to\sigma^{\prime}}\). As shown in [28], further progress can then be made using a linear-noise approximation. To this end, one writes \(i/N=\phi+N^{-1/2}\xi\), where \(\phi\) is the trajectory of the PDMP for a given realisation of the environmental dynamics [i.e., \(\dot{\phi}=v_{\sigma}(\phi)\)]. Expanding to linear order in \(\xi\) one then finds \[\dot{\xi}(t)=v_{\sigma}^{\prime}(\phi)\xi+\sqrt{w_{\sigma}(\phi)}\zeta(t) \tag{19}\] with white Gaussian noise \(\zeta(t)\), and where \(v_{\sigma}^{\prime}(\phi)=dv_{\sigma}(\phi)/d\phi\). The stationary distribution of the original system in Eq. (3) can be approximated by the following expression, \[\Pi(x) = \sum_{\sigma}\int\,d\phi\,d\xi\,\left[\Pi(\phi,\sigma)\Pi(\xi|\phi)\,\delta(x-\phi-N^{-1/2}\xi)\right]. \tag{20}\] Here, \(\Pi(\phi,\sigma)\) is the stationary distribution of the PDMP, and \(\Pi(\xi|\phi)=[2\pi s^{2}(\phi)]^{-1/2}\exp\left[-\xi^{2}/[2s^{2}(\phi)]\right]\) is a Gaussian distribution with mean zero and variance given by \[s^{2}(\phi)=-\frac{1}{2}\frac{\sum_{\sigma}\Pi(\sigma|\phi)w_{\sigma}(\phi)}{\sum_{\sigma}\Pi(\sigma|\phi)v_{\sigma}^{\prime}(\phi)}. \tag{21}\] This relation was derived for systems with two environmental states in [28], but holds more generally as described in more detail in Appendix B.
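Eq. (20) amounts to convolving the PDMP density with a Gaussian of variance \(s^{2}(\phi)/N\). A sketch (ours) of this convolution on a grid is given below; the density and variance profiles passed in are placeholders, and the grid must stay away from points where \(s^{2}\) vanishes.

```python
import numpy as np

def lna_density(phi_grid, Pi_pdmp, s2, N):
    """Smooth a PDMP density with Gaussians of variance s2/N, cf. Eq. (20)."""
    dphi = phi_grid[1] - phi_grid[0]
    x = phi_grid[:, None]
    phi = phi_grid[None, :]
    var = s2[None, :] / N
    kernel = np.exp(-(x - phi) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    out = (kernel * Pi_pdmp[None, :]).sum(axis=1) * dphi
    return out / (out.sum() * dphi)            # renormalise on the grid

# Placeholder inputs: a flat PDMP density and a variance profile ~ phi(1-phi)
grid = np.linspace(0.05, 0.95, 181)
s2 = grid * (1 - grid) * 2.0                   # cf. the form of Eq. (33)
Pi = np.where(np.abs(grid - 0.5) < 0.4, 1.0, 0.0)
Pi /= Pi.sum() * (grid[1] - grid[0])
print(lna_density(grid, Pi, s2, N=200)[:3])
```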
## IV Noisy voter model with switching noise parameter ### Setup In this section, we will examine the simple case of \(\alpha=0\), i.e., the system is not affected by any influencers. The rates in environmental state \(\sigma\) are then \[T_{i,\sigma}^{+} = \left(a_{\sigma}+h\frac{i}{N}\right)(N-i),\] \[T_{i,\sigma}^{-} = \left(a_{\sigma}+h\frac{N-i}{N}\right)i. \tag{22}\] Despite the absence of influencers the system operates within a switching environment, as the noise parameter \(a_{\sigma}\) fluctuates in time. We study the case of two environmental states \(\sigma=0,1\). We label the states such that \(a_{0}<a_{1}\). The rates for the environmental switches in our analysis are assumed not to depend on the population state \(i\). Therefore, the stationary distribution for the environmental state \(\sigma\) is simply \[\rho_{0}^{*}=\frac{\mu_{1\to 0}}{\mu_{1\to 0}+\mu_{0\to 1}},\ \ \rho_{1}^{*}=\frac{\mu_{0\to 1}}{\mu_{1\to 0}+\mu_{0\to 1}}. \tag{23}\] Throughout our analysis we assume \(\mu_{0\to 1}=\mu_{1\to 0}\), and consequently we have \(\rho_{0}^{*}=\rho_{1}^{*}=1/2\). We will first investigate the slow-switching and fast-switching limits. The total rate for events in the population in environment \(\sigma\) is \(T_{i,\sigma}^{+}+T_{i,\sigma}^{-}=a_{\sigma}N+2hi(N-i)/N\), and therefore takes values between \(a_{\sigma}N\) and \((a_{\sigma}+h/2)N\). The environment can therefore be considered slow when \(\lambda\ll a_{0}N\). Similarly, the environment is fast relative to the population when \(\lambda\gg N(a_{1}+h/2)\). ### Slow switching limit We show the stationary distribution in the slow-switching limit in Fig. 1, comparing theoretical predictions with simulation results for different values of the population size \(N\). We observe three different shapes. For small populations (\(N=15\) in the figure) the distribution is bimodal, with two maxima at the consensus states. When the population is large (\(N=55\)) we find a unimodal shape; the population is mostly in states in which both opinions coexist in similar proportions. So far, this is similar to what one would expect in the conventional two-state VM, namely a transition from a bimodal shape in small populations to a unimodal shape in large populations [17]. However, in the present model we find an additional phase with trimodal distributions for intermediate population sizes (\(N=40\) in Fig. 1). The distribution has two maxima at the extremes, and an additional maximum in the center. This state is characterized by alternating periods of coexistence of both opinions and periods of polarization. This is illustrated by the time series in Fig. 2. Broadly speaking, this type of behaviour represents a scenario in which public opinion is characterized by a mixture of two views, but where events may occur that temporarily increase the weight of herding relative to that of noise, and thus polarise opinions. ### Fast switching limit In the limit of fast environmental switching we have effective transition rates \[\overline{T}_{i}^{+} = \left(\overline{a}+h\frac{i}{N}\right)(N-i),\] \[\overline{T}_{i}^{-} = \left(\overline{a}+h\frac{N-i}{N}\right)i. \tag{24}\] This describes a conventional noisy VM [17], with noise parameter \(\overline{a}\) and herding parameter \(h\). The stationary distribution is bimodal if \(N<h/\overline{a}\), and unimodal otherwise, as shown in Fig. 3. The transition between these two regimes occurs without an intermediate trimodal shape. ### Simulations for intermediate switching rates When the time scales of the population dynamics and of the environmental switching are comparable, an analytical characterisation is not easily available. Nonetheless, we can conduct simulations, varying the value of \(\lambda\) to interpolate between the slow-switching regime of Sec. IV.2 and the fast-switching regime of Sec. IV.3. Figure 4 shows the resulting phase diagram in the \((\lambda,N)\)-plane, at fixed values of the remaining model parameters. For slow switching (low values of \(\lambda\)), the stationary distribution exhibits three different shapes (bimodal, trimodal, and unimodal) as \(N\) increases. For faster environmental dynamics (higher values of \(\lambda\)), the trimodal shape disappears, resulting in the well-known finite-size transition between unimodal and bimodal states in an nVM with an effective noise parameter. Figure 1: **Voter model with slow-switching noise rate.** Stationary distribution from simulations (symbols) and from theory [lines, from Eq. (9)]. Model parameters are \(a_{0}=0.02\), \(a_{1}=0.05\), \(h=1\) and \(\lambda=0.02\). In the inset, we highlight the new trimodal shape (\(N=35\)). Each distribution is from \(10^{6}\) entries sampled every 50 units of time, after an initial transient of 1000 units of time. Figure 3: **Voter model with fast-switching noise rate.** Stationary distribution from simulations (symbols), and from theory [lines, Eq. (8), with noise parameter \(\overline{a}=(a_{0}+a_{1})/2\)]. Model parameters: \(a_{0}=0.02\), \(a_{1}=0.05\), \(h=1\) and \(\lambda=100\). Each distribution is from \(5\times 10^{6}\) samples; the time between subsequent samples is \(\Delta t=0.01\), after a transient of 500 units of time. Figure 2: **Time series of the fraction of agents in state \(A\) from a simulation of the voter model with switching noise parameter.** Shaded segments indicate a high noise rate, white background a low noise rate. Model parameters are \(a_{0}=0.001\), \(a_{1}=0.1\), \(h=1\), \(\lambda=0.001\) and \(N=30\). ## V Noisy voter model with switching influencers In this section we focus on the impact of fluctuating groups of influencers on the nVM. The state of the influencers plays the role of the external environment. We assume that \(a_{\sigma}\equiv a\) and \(h_{\sigma}\equiv 1\) across environmental states. We begin by examining the two-state scenario, which allows us to obtain an explicit solution for the stationary distribution of the model. If there are more than two environmental states we resort to numerical integration to solve Eq. (15).
### Two states of the group of influencers We consider a model with two environmental states, in which all influencers form one group of total strength \(\alpha N\). At any one time, they act coherently either in favour of opinion \(A\) or of opinion \(B\). As before we write \(\sigma\in\{0,1\}\) for the two environmental states, and \(\lambda\mu_{0\to 1}\) and \(\lambda\mu_{1\to 0}\) for the switching rates. We have \(z_{0}=0\) and \(z_{1}=1\). The stationary state of the environmental dynamics is given by Eq. (23). #### v.1.1 Fast-switching limit We first consider the fast-switching limit \(\lambda\to\infty\). The effective rates in Eq. (6) then become \[\overline{T}^{+}_{i} = \left[a^{+}+\frac{i}{(1+\alpha)N}\right](N-i),\] \[\overline{T}^{-}_{i} = \left[a^{-}+\frac{N-i}{(1+\alpha)N}\right]i, \tag{25}\] where we have introduced \[a^{+} \equiv a+\frac{\alpha\overline{z}}{1+\alpha}, \tag{26}\] \[a^{-} \equiv a+\frac{\alpha(1-\overline{z})}{1+\alpha}, \tag{27}\] with \(\overline{z}=\rho_{0}^{*}z_{0}+\rho_{1}^{*}z_{1}\) [see also Eq. (7)]. We note that Eqs. (25) are also valid for an arbitrary number of environmental states (with the definition \(\overline{z}=\sum_{\sigma}\rho_{\sigma}^{*}(i)z_{\sigma}\), and so long as \(h=1\) and \(a_{\sigma}\equiv a\)). Eqs. (25) are recognised as the transition rates of a potentially asymmetric noisy voter model (asymmetry here refers to setups with \(a^{+}\neq a^{-}\)). For \(\overline{z}=1/2\) one has a symmetric noisy voter model with effective herding rate \(1/(1+\alpha)\) and with noise parameter \(a^{+}=a^{-}=a+\alpha/[2(1+\alpha)]\). A finite-size transition between unimodal and bimodal states occurs in the nVM when the ratio of the noise parameter to the herding parameter is \(1/N\) [42; 17; 44]. This leads to \[N_{c}=\frac{1}{a(1+\alpha)+\alpha/2}. \tag{28}\] Simulation results verifying this are shown in Fig. 5. The total weight of influencers in the model is the equivalent of \(\alpha N\) normal agents. For a given value of \(\alpha\) this means that the weight of influencers is less than that of a single normal agent when there are fewer than \(N_{1}\equiv 1/\alpha\) normal agents (\(N<1/\alpha\)). In such situations one cannot think of influencers as discrete agents. We now briefly consider the asymmetric case, \(\overline{z}\neq 1/2\). In this case, the stationary distribution is no longer symmetric [i.e., the distribution will not fulfill \(P^{*}(i)=P^{*}(N-i)\) for all \(i\)]. We therefore study the shape of the distribution near the left and right ends of its domain separately. As parameters are varied, the 'slope' of the distribution near the left end changes when \(P(i=0)=P(i=1)\). This is the case if and only if \(\overline{T}^{+}_{0}=\overline{T}^{-}_{1}\). This in turn leads to \[a(1+\alpha)(N-1)+\alpha\left[N-\overline{z}(N+1)\right]-\frac{N-1}{N}=0. \tag{29}\] For given \(a,\alpha\) and \(\overline{z}\) we denote the physically relevant solution of this equation by \(N_{c}^{\text{left}}\). An analogous equation is obtained from setting \(\overline{T}^{-}_{N}=\overline{T}^{+}_{N-1}\), \[a(1+\alpha)(N-1)+\alpha\left[\overline{z}(N+1)-1\right]-\frac{N-1}{N}=0. \tag{30}\] We denote the solution of this equation by \(N_{c}^{\rm right}\). Figure 4: **Phase diagram for the voter model with switching noise parameter.** The coloured shading indicates the shape of the stationary distribution as found in simulations. The red lines on the left and right show the phase boundaries in the limits of slow switching [left, found from evaluating the expression in Eq. (9)] and fast switching [right, from Eq. (8)]. For each pair of values of \(N\) and \(\lambda\), we obtain the stationary distribution \(P^{*}(i)\) (\(i=0,\dots,N\)). Using the expected symmetry \(P^{*}(i)=P^{*}(N-i)\), we classify a distribution as unimodal when \(P^{*}(0)<P^{*}(1)\), as trimodal when \(P^{*}(0)>P^{*}(1)\) and when there is a local maximum in the interval \([N/2-1,N/2+1]\), and as bimodal otherwise. Model parameters are \(a_{0}=0.02\), \(a_{1}=0.05\), \(h=1\). The time between subsequent samples is \(\Delta t=1/\lambda\); for each distribution we take \(10^{5}-10^{7}\) samples after a transient of 500 units of time.
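The classification rule quoted in the caption of Fig. 4 can be stated compactly in code. The helper below is our own rendering of that rule, applied to an array \(P^{*}_{0},\dots,P^{*}_{N}\); the test array is synthetic and only for illustration.

```python
def classify(P):
    """'unimodal' if P(0) < P(1); 'trimodal' if P(0) > P(1) and there is a
    local maximum near N/2; 'bimodal' otherwise (rule from the Fig. 4 caption)."""
    N = len(P) - 1
    if P[0] < P[1]:
        return "unimodal"
    window = range(max(1, N // 2 - 1), min(N - 1, N // 2 + 1) + 1)
    if any(P[i] >= P[i - 1] and P[i] >= P[i + 1] for i in window):
        return "trimodal"
    return "bimodal"

print(classify([0.3, 0.1, 0.05, 0.1, 0.05, 0.1, 0.3]))  # -> 'trimodal'
```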
The resulting behaviour of the asymmetric model is illustrated in Fig. 6 (a). In the example shown we have \(\overline{z}=0.85\), so that influencers tend to favour opinion \(A\). In this setup one finds \(N_{c}^{\rm left}<N_{c}^{\rm right}\). For relatively small populations (\(N<N_{c}^{\rm left}\)) the stationary distribution is bimodal, but with a higher peak at \(x=1\) than at \(x=0\). As \(N\) is increased, the left edge of the distribution (near \(x=0\)) first changes slope, and a distribution which is strictly increasing in \(x\) results for \(N_{c}^{\rm left}<N<N_{c}^{\rm right}\). Finally, when \(N>N_{c}^{\rm right}\) the distribution is unimodal, but with its maximum closer to \(x=1\) than to \(x=0\). Fig. 6(b) shows the resulting phase diagram in the \((\alpha,N)\) plane, indicating the transitions between a bimodal phase in small populations, a phase with a strictly increasing functional shape for the stationary distribution at intermediate population sizes, and finally a unimodal phase for large populations. #### v.1.2 Limit of large populations In the limit \(N\to\infty\) the internal noise in the population becomes irrelevant, and a PDMP results. The velocities in the two environments are given in Eq. (12). Using the expressions in Sec. III.3.3, the stationary distribution for the model with two environmental states can be obtained for any choice of the rates \(\lambda\mu_{0\to 1}\) and \(\lambda\mu_{1\to 0}\). We here restrict the discussion to the case \(\mu_{0\to 1}=\mu_{1\to 0}=1\), but keep the time scale separation \(\lambda\) general. We then find \[\Pi^{*}(\phi)=\mathcal{C}\left[(\phi-\phi_{0}^{*})(\phi_{1}^{*}-\phi)\right]^{\lambda/\lambda_{c}-1}, \tag{31}\] where \(\mathcal{C}\) is a normalisation constant, and where \[\lambda_{c}=2a+\frac{\alpha}{1+\alpha}. \tag{32}\] The fixed points \(\phi_{0}^{*}\) and \(\phi_{1}^{*}\) are obtained from Eq. (13). The stationary distribution becomes singular at \(\phi=\phi_{0}^{*}\) and \(\phi=\phi_{1}^{*}\) respectively for \(\lambda<\lambda_{c}\). An example is shown in Fig. 7. For \(\lambda<\lambda_{c}\) the distribution is bimodal, as shown in panel (a). For \(\lambda=\lambda_{c}\) the distribution is mostly flat [panel (b)], and for \(\lambda>\lambda_{c}\) a unimodal state results [panel (c)]. These results can be understood from the form of the flow fields \(v_{\sigma}(\phi)=a(1-2\phi)+\alpha(z_{\sigma}-\phi)/(1+\alpha)\) obtained from Eq. (12). In each environment \(\sigma\), the variable \(\phi\) moves towards the fixed point \(\phi_{\sigma}^{*}\) on a characteristic time scale given by \([2a+\alpha/(1+\alpha)]^{-1}\). The inverse of this time scale sets the value \(\lambda_{c}\) for the switching rate, separating the unimodal and bimodal regimes.
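The closed form of Eq. (31) makes the transition at \(\lambda_{c}\) explicit: the exponent \(\lambda/\lambda_{c}-1\) changes sign. A small sketch (ours), with placeholder parameter values matching those of Fig. 7:

```python
import numpy as np

a, alpha, h = 0.01, 0.5, 1.0
lam_c = 2 * a + alpha / (1 + alpha)                  # Eq. (32), ~0.353 here
c = h * alpha / (1 + alpha)
p0, p1 = a / (2 * a + c), (a + c) / (2 * a + c)      # fixed points, Eq. (13)

def Pi_star(phi, lam):
    # Eq. (31), unnormalised
    return ((phi - p0) * (p1 - phi)) ** (lam / lam_c - 1.0)

phi = np.linspace(p0 + 1e-4, p1 - 1e-4, 5)
print(lam_c, Pi_star(phi, 0.2))   # lam < lam_c: density diverges at the edges
```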
Thus, for \(\lambda<\lambda_{c}\) the environmental switching is slower than the relaxation of the population in any fixed environment. This relaxation can therefore proceed before the next switch occurs, and hence probability accumulates near the fixed points. The distribution of \(\phi\) is bimodal, and if inspected at a given time, the population is likely to be found near the consensus state favoured by the influencers in the environmental state at that time. Figure 5: **Transition between bimodal and unimodal stationary distributions in the model with two external states and fast-switching influencers.** Panel (a): Stationary distributions for different values of \(N\) and \(\alpha=0.02\). For \(N=5\) the stationary distribution is bimodal; for \(N=54\) and \(N=104\) it is unimodal. Panel (b): Location of the phase transition, \(N_{c}(\alpha)\), as a function of the weight \(\alpha\) of the influencers. The prediction from Eq. (28) is shown as a solid line, markers are from simulations. Below \(N_{c}(\alpha)\) the stationary distribution has a bimodal shape, above it is unimodal. The dashed line shows \(N_{1}=1/\alpha\). Below this line the total weight of influencers is less than that of one normal agent. Model parameters are \(\lambda\mu_{0\to 1}=\lambda\mu_{1\to 0}=50\), \(z_{0}=1-z_{1}=0\), \(a=0.01\). The time between subsequent samples is \(\Delta t=0.1\); we take \(10^{6}\) samples after a transient of one unit of time. If on the other hand \(\lambda>\lambda_{c}\) then the environment switches quickly, before the population can approach either fixed point. The system frequently reverses its direction of motion, and the most likely states of the variable \(\phi\) are those in the interior of the interval from \(0\) to \(1\). As a result, the stationary distribution is peaked in the middle (unimodal). Both opinion states are typically found in the population at any given time. The resulting phase diagram is shown in Fig. 8. The system is in the unimodal state above the phase line, and in the bimodal state below the line. #### v.1.3 Lowest-order correction to the PDMP For the model with \(\mu_{0\to 1}=\mu_{1\to 0}=1\) and \(z_{0}=1-z_{1}=0\) we find from Eq. (21) \[s^{2}(\phi)=\phi(1-\phi)\left(\frac{h}{(1+\alpha)\lambda_{c}}+1\right). \tag{33}\] This can be used to approximate the stationary distribution following [28]. An illustration is shown in Fig. 7, where the red lines show the resulting predictions for a model with \(N=200\) agents. Intrinsic noise in the model with finite populations smooths the distribution compared to that in the PDMP limit, but the main characteristics of being bimodal or unimodal are preserved. Nevertheless, the finite size of the population results in a notable alteration in the bimodal phase. With intrinsic noise the regions \(x<\phi_{0}^{*}\) and \(x>\phi_{1}^{*}\) become populated. These parts of phase space are not accessible by the PDMP. Thus, intrinsic stochasticity enhances the polarization of the population. ### Three states of the group of influencers In this section the group of influencers switches among three states \(\sigma=0,1,2\). As before we assume that there is a state in which all influencers support opinion \(B\) (\(z_{0}=0\)), and another in which all influencers favour opinion \(A\) (\(z_{2}=1\)). In the intermediate state, \(\sigma=1\), we assume that a fraction \(\delta\) of influencers supports \(A\), and a fraction \(1-\delta\) acts in favour of \(B\). Thus, \(z_{1}=\delta\). Switches between these three states are taken to occur in a Markov process as follows, \[0\;\xrightleftharpoons[\lambda/2]{\lambda}\;1\;\xrightleftharpoons[\lambda]{\lambda/2}\;2. \tag{34}\] Thus, the environment switches out of state \(0\) and into state \(1\) with rate \(\lambda\), and similarly for switches \(2\to 1\). The total rate of leaving state \(\sigma=1\) is also \(\lambda\), split equally between transitions to states \(0\) and \(2\), respectively.
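Because the velocities of Eq. (12) are linear, the PDMP can be simulated without time discretisation: between switches, \(\phi\) relaxes exponentially towards \(\phi_{\sigma}^{*}\) at rate \(\lambda_{c}\). The following sketch (ours) uses this for the three-state environment of Eq. (34); note that, as a simplification, it records \(\phi\) at switch times only, and the parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
a, alpha, lam = 0.01, 0.5, 0.2
c = alpha / (1 + alpha)                       # h = 1
lam_c = 2 * a + c                             # relaxation rate, Eq. (32)
z = [0.0, 0.8, 1.0]                           # z_1 = delta = 0.8, as in Fig. 9
phi_star = [(a + c * zz) / lam_c for zz in z] # fixed points, Eq. (13)

def jump(s):
    # Eq. (34): states 0 and 2 go to 1; state 1 goes to 0 or 2 with equal prob.
    return 1 if s in (0, 2) else rng.choice([0, 2])

phi, sigma, samples = 0.5, 1, []
for _ in range(100000):
    dt = rng.exponential(1.0 / lam)           # every state leaves at total rate lam
    # exact exponential relaxation towards phi*_sigma between switches
    phi = phi_star[sigma] + (phi - phi_star[sigma]) * np.exp(-lam_c * dt)
    sigma = jump(sigma)
    samples.append(phi)

print(np.histogram(samples, bins=10, range=(0, 1))[0])
```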
We first discuss the model in the PDMP limit, that is, for infinite populations, \(N\rightarrow\infty\). The stationary state is then to be obtained from Eq. (15). In the present setup this can be reduced to a system of two coupled ODEs (see Appendix A), but we are unable to obtain an analytical solution. However, as also explained in Appendix A, one can proceed numerically. It is useful to note that the presence of three environmental states does not affect the relaxation time scale in any fixed environment. This is due to our assumption \(a_{\sigma}\equiv a\) in all three states. Therefore, \(\lambda_{c}\) continues to be given by the expression in Eq. (32). Figure 6: **Model with asymmetric influencers in the fast-switching limit.** Panel (a) shows the shapes of the stationary distribution for \(x\) in a model with two environmental states, \(\overline{z}=0.85\) and fast switching, for different sizes of the population. Remaining model parameters are \(a=0.01\), \(\alpha=0.02\). Markers are from simulations, lines from the analytical theory in the fast-switching limit. The time between subsequent samples in simulations is \(\Delta t=0.1\); we take \(10^{6}\) samples after a transient of one unit of time. Panel (b) shows \(N_{c}^{\rm left}\) and \(N_{c}^{\rm right}\) from Eqs. (29) and (30) respectively (lines). Markers are from simulations. For \(N<N_{c}^{\rm left}\) the distribution is bimodal and asymmetric, in the area between the lines it is strictly increasing in \(x\), and for \(N>N_{c}^{\rm right}\) the distribution has a unimodal asymmetric shape. Results are shown in Fig. 9. We first focus on the black dashed lines showing the stationary distribution in the PDMP limit. When environmental switching is slower than the relaxation in the population [\(\lambda<\lambda_{c}\), shown in panel (a)] the distribution has three sharp singularities, positioned at the fixed-point values \(\phi_{0}^{*},\phi_{1}^{*}\) and \(\phi_{2}^{*}\) obtained from Eq. (13) (we attribute minor numerical deviations to discretisation effects). For \(\lambda>\lambda_{c}\) on the other hand [panel (c)], the distribution is unimodal and asymmetric, with its peak at \(\phi_{1}^{*}\). In panel (b), where \(\lambda=\lambda_{c}\), the distribution also has a single maximum at \(\phi=\phi_{1}^{*}\). In contrast with panel (c) though, the stationary density in the PDMP limit (black dashed line) remains non-zero at \(\phi=\phi_{0}^{*}\) and \(\phi_{2}^{*}\) respectively. In panel (c) the density tends to zero at the boundaries. In Fig. 9 we also show results from the theory capturing the leading-order corrections in \(1/N\) (solid lines). As can be seen, intrinsic noise does not manifestly change the overall structure of the stationary distribution. Its main effect is to smooth out the singularities, and as expected there is now a non-zero probability of finding the system in the intervals \(i/N\in[0,\phi_{0}^{*}]\) and \(i/N\in[\phi_{2}^{*},1]\) respectively. These intervals are (by construction) unattainable by the PDMP.
Figure 8: **Phase diagram for the model with two states for the group of influencers.** The dashed line is \(\lambda_{c}\) obtained in the PDMP limit [Eq. (32)]. It separates a phase in which the stationary distribution is bimodal (below the line) from one in which the distribution is unimodal (above the line). Green asterisks are from simulations of the individual-based model with \(a=0.01\) and \(N=500\). Blue dots indicate the phase boundary obtained from the theory which takes into account leading-order corrections to the PDMP [Eq. (20)]. Model parameters are \(z_{0}=1-z_{1}=0\), \(\mu_{0\to 1}=\mu_{1\to 0}=1\), \(a=0.01\). Figure 7: **Stationary distribution of the model with influencers switching between two states.** The black dashed lines in each panel are from Eq. (31) (PDMP limit), solid red lines are from the numerical integration of Eq. (20), capturing leading-order corrections to the PDMP limit. The shaded histograms are from simulations of the full model. In all panels \(a=0.01\), \(\alpha=0.5\), \(N=200\) and \(\mu_{0\to 1}=\mu_{1\to 0}=1\). The switching rates are (a) \(\lambda=0.2\), (b) \(\lambda=\lambda_{c}\approx 0.35\) and (c) \(\lambda=0.7\), where \(\lambda_{c}\) is obtained from Eq. (32). The time between subsequent samples is \(\Delta t=5\); for each distribution we take \(10^{6}\) samples after a transient of 50 units of time. ### Multiple states for groups of influencers We now focus on systems in which there are more than three states for the environment of influencers. The numerical solution of Eq. (15) then becomes more complex, and we hence focus on direct simulations of the original individual-based model, and of the limiting PDMP respectively. #### v.3.1 Five environmental states We first focus on a generalisation of the system in Eq. (34) to five environmental states, \[0\;\xrightleftharpoons[\lambda/2]{\lambda}\;1\;\xrightleftharpoons[\lambda/2]{\lambda/2}\;2\;\xrightleftharpoons[\lambda/2]{\lambda/2}\;3\;\xrightleftharpoons[\lambda]{\lambda/2}\;4. \tag{35}\] We set \(z_{\sigma}=\sigma/4\) for \(\sigma=0,1,\ldots,4\). Thus in state \(\sigma=0\) all influencers promote opinion \(B\), and for \(\sigma=4\) the external force is fully in the direction of opinion \(A\). State \(1\) is partially biased towards \(B\); in state \(\sigma=2\) there is no net force by the influencers in either direction, and \(\sigma=3\) represents a state with partial bias towards opinion \(A\). In Fig. 10 we show stationary distributions from simulations of the full model for \(N=200\) and with different choices of the switching rate \(\lambda\) (shaded histograms). We also show the stationary distributions from simulations of the PDMP (dashed lines). In panel (a) we choose \(\lambda<\lambda_{c}\), i.e., the population relaxes quickly compared to the time between switches of the environment. We observe five singularities in the stationary distribution of the PDMP, located at the different \(\phi_{\sigma}^{*}\). As before, intrinsic noise smooths these peaks. Panel (b) shows the case \(\lambda=\lambda_{c}\); we then find three peaks in the stationary distribution of the PDMP. These maxima are also discernible in the stationary distribution of the full model, but the intrinsic noise smears the distribution out, so that the maxima are less pronounced. Increasing the rate of influencer switching further [panel (c)], the number of maxima reduces to two, and finally in panel (d) the stationary state becomes unimodal. Figure 9: **Stationary distributions for the model with three states for the group of influencers.** In each panel the dashed line represents the PDMP limit, and is obtained from a numerical solution of Eq. (15). The solid lines are from the numerical integration of Eq. (20), capturing leading-order corrections in \(1/N\). The shaded histograms are from simulations of the full model. The dotted lines are the values of \(\phi_{0}^{*},\phi_{1}^{*}\) and \(\phi_{2}^{*}\) found from Eq. (13). The environmental switching rate is \(\lambda=0.2\) in panel (a), \(\lambda=\lambda_{c}\approx 0.35\) in (b), and \(\lambda=0.7\) in panel (c). In all panels \(a=0.01\), \(\alpha=0.5\), \(N=200\), \(z_{0}=0\), \(z_{1}=0.8\), \(z_{2}=1\), and \(\mu_{0\to 1}=\mu_{2\to 1}=1\) and \(\mu_{1\to 0}=\mu_{1\to 2}=1/2\). The time between subsequent samples is \(\Delta t=5\); for each distribution we take \(10^{6}\) samples, after a transient of 50 units of time.
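For the chain of Eq. (35), the stationary environmental weights follow directly from detailed balance; a minimal sketch (ours):

```python
# Detailed balance for the birth-death chain of Eq. (35):
# rho_s * r(s -> s+1) = rho_{s+1} * r(s+1 -> s), in units of lambda.
rates_up = [1.0, 0.5, 0.5, 0.5]   # 0->1, 1->2, 2->3, 3->4
rates_dn = [0.5, 0.5, 0.5, 1.0]   # 1->0, 2->1, 3->2, 4->3

rho = [1.0]
for up, dn in zip(rates_up, rates_dn):
    rho.append(rho[-1] * up / dn)
rho = [r / sum(rho) for r in rho]
print(rho)                        # -> [1/8, 2/8, 2/8, 2/8, 1/8]
```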
The positions of the maxima are shown in Fig. 11 for different values of the switching rate \(\lambda\). For small \(\lambda\) there are five maxima, located at the \(\phi_{\sigma}^{*}\). For intermediate switching rates, only three maxima remain, located at their initial positions \(\phi_{1}^{*},\phi_{2}^{*},\phi_{3}^{*}\). Next, the maximum at \(\phi_{2}^{*}\) disappears. Finally, the transition to only one maximum at large values of \(\lambda\) occurs by the gradual approach and eventual fusion of the two remaining maxima. #### v.3.2 Independent influencers Next, we consider the case of independent influencers. The influencers are all taken to have the same strength, and each influencer can act in favour of opinion \(A\), or of opinion \(B\). In the example in Fig. 12 there are \(\alpha N=20\) influencers, and hence \(\alpha N+1\) environmental states \(\sigma=0,1,\ldots,S-1=\alpha N\). In state \(\sigma\) there are \(\sigma\) influencers favouring \(A\), and \(S-1-\sigma\) influencers promoting \(B\). Thus, \(z_{\sigma}=\sigma/(S-1)\). In state \(\sigma\) there are \(\sigma\) influencers who can switch to promoting \(B\) instead of \(A\), and \(S-1-\sigma\) influencers who can change from favouring \(B\) to favouring \(A\). Thus, the rate of transitioning from state \(\sigma\) to state \(\sigma-1\) is proportional to \(\sigma\), and that of transitioning from \(\sigma\) to \(\sigma+1\) is proportional to \(S-1-\sigma\). We set \(\mu_{\sigma\rightarrow\sigma-1}=\sigma/(S-1)\) and \(\mu_{\sigma\rightarrow\sigma+1}=1-\sigma/(S-1)\). Keeping in mind the overall multiplying factor \(\lambda\), the environmental dynamics can then be summarised as \[0\;\xrightleftharpoons[\lambda z_{1}]{\lambda(1-z_{0})}\;\cdots\;\xrightleftharpoons[\lambda z_{\sigma}]{\lambda(1-z_{\sigma-1})}\;\sigma\;\xrightleftharpoons[\lambda z_{\sigma+1}]{\lambda(1-z_{\sigma})}\;\cdots\;\xrightleftharpoons[\lambda z_{S-1}]{\lambda(1-z_{S-2})}\;S-1. \tag{36}\] Effectively, this means that each one of the \(\alpha N\) individual influencers changes state with rate \(\lambda/(S-1)\). We note that the division by \(S-1\) is immaterial, as any constant factors can be absorbed into the overall multiplier \(\lambda\). The resulting stationary distributions in Fig. 12 show some of the behaviour seen in the previous example in Fig. 10. For slow environmental switching the distribution has multiple maxima in the PDMP approximation. The number of extrema decreases with increasing switching rate of the environment, and ultimately only a single maximum remains [panels (c) and (d)].
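The environmental process of Eq. (36) is itself a birth-death chain. The sketch below (ours) builds its generator and recovers, via detailed balance, the binomial stationary distribution concentrated on the balanced states, which underlies the comparison made in the next paragraphs.

```python
import numpy as np

def generator(S, lam=1.0):
    """Rate matrix Q for sigma = 0..S-1 with z_sigma = sigma/(S-1), Eq. (36)."""
    Q = np.zeros((S, S))
    for s in range(S):
        z = s / (S - 1)
        if s > 0:
            Q[s, s - 1] = lam * z            # one influencer switches A -> B
        if s < S - 1:
            Q[s, s + 1] = lam * (1 - z)      # one influencer switches B -> A
        Q[s, s] = -Q[s].sum()
    return Q

S = 21                                       # alpha N = 20 influencers
Q = generator(S)
rho = np.ones(S)
for s in range(S - 1):                       # detailed balance recursion
    rho[s + 1] = rho[s] * Q[s, s + 1] / Q[s + 1, s]
rho /= rho.sum()
print(rho.round(4))                          # ~ Binomial(20, 1/2) weights
```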
Carrying out simulations of the full model for a population of the same size (\(N=200\)) as in Fig. 10, we find in Fig. 12 that the stationary distribution is unimodal throughout. This is a consequence of the fact that the maxima of the PDMP for slow switching [Fig. 12(a)] are found relatively close to each other. Intrinsic noise therefore 'washes out' this structure much more easily than in Fig. 10(a), where the maxima for the PDMP are more separated. #### v.3.3 Details of the influencer dynamics matter To characterise the relation between the distributions in Figs. 12 and 10 further, we study an intermediate scenario. As in Fig. 12 we allow for \(\alpha N+1=21\) states of the environment, and we use \(z_{\sigma}=\sigma/(S-1)\). However, influencers no longer switch states independently from one another; instead, the environmental state is governed by a process more akin to that in Eq. (35). Specifically, we focus on \[0\;\xrightleftharpoons[\lambda/2]{\lambda}\;1\;\xrightleftharpoons[\lambda/2]{\lambda/2}\;\cdots\;\xrightleftharpoons[\lambda/2]{\lambda/2}\;\alpha N-1\;\xrightleftharpoons[\lambda]{\lambda/2}\;\alpha N. \tag{37}\] Figure 10: **Stationary distribution for the model with five environmental states.** The dashed lines in each panel are from numerical simulations of the PDMP, capturing the limit of infinite populations. Shaded histograms are from simulations of the full individual-based model with \(N=200\). The environmental dynamics is as in Eq. (35). The switching rates in panels (a)-(d) are \(\lambda=0.1\), \(\lambda=\lambda_{c}\approx 0.35\), \(\lambda=0.7\) and \(\lambda=2\) respectively. The time between samples is \(\Delta t=1\) in (a)-(c), and \(\Delta t=0.2\) for panel (d). We take \(10^{7}\) samples after a transient of 10 units of time.
We find that the model with switching herding-to noise ratio can be reduced to a standard nVM in the limit of very fast environments. One then observes the familiar finite-size transition between a unimodal stationary state for large populations, and a bimodal state for small populations. When the environmental process is much slower than the relaxation time scale of the voters an additional trimodal phase is found for intermediate population sizes. There are then periods in which the population of voters is polarised (this occurs when herding is strong). At other times (when herding is weak) both opinions co-exist. When influencers switch between two symmetric states (at constant herding and noise rates) we also find a transition between unimodal and bimodal states. In the limit of fast influencers the resulting phase diagram [Fig. 5 (b)] can again be understood via a mapping to a conventional nVM with an effective noise rate. For very large populations the transition can alternatively be studied in terms of a piecewise deterministic Markov process and corrections to it. The transition between unimodal and bimodal phases can then for example be observed as a function of the strength of influencers and the environmental switching rate [Fig. 8]. If the two states of the influencers are not symmetric, we find an additional phase in which the stationary distribution is monotonic [Fig. 6]. If there are more than two states for the external influencers the complexity of the stationary distribution of opinions also increases. For large populations (PDMP limit) we find stationary states with multiple sharp peaks when the influencer switching is slow. For higher switching rates the number of maxima generally reduces, and for very fast switching only a single peak remains, corresponding to coexistence of the two opinions. Intrinsic noise in finite populations washes out the sharp peaks seen for the PDMP, but the general trend tends to remain, there are multiple peaks for the distribution of opinions when the environment is slow, and gradually fewer peaks as influencers change states more often. We have demonstrated that the precise shape of the resulting stationary state and the location of the peaks depend on the detailed mechanics of the influencer process. Our work thus contributes to a research programme of continuously extending the basic mechanics of the voter model. In particular, it is aligned with other recent work on variants of the voter model with fluctuating environments [35; 36]. While the basic voter model can be understood as a crude and stylised characterisation of opinion dynamics, systematic statistical mechanics analyses and the addition of parameters and features has also con Figure 11: **Location of the maxima of the stationary distribution for the model with five environmental states (Fig. 10).**The figure shows the location of maxima in the stationary distribution of the PDMP for the model with five environmental states (Sec. V.3.1). The markers are from simulations of the PDMP, the lines indicate the fixed points \(\phi_{0}^{*},\ldots,\phi_{4}^{*}\). Except for \(\lambda\) parameters are as in Fig. 10. tributed to our understanding of stochastic processes at large. For example, the study of the initial voter model has led to a 'generalised voter' universality class [12; 13]. Here, we connect existing work on the noisy voter model with literature on individual-based systems in switching environments. 
We use established methods (such as the PDMP formalism) and more recent developments (the linear noise approximation for models with switching environments [28]) to characterise the stationary states of VMs subject to extrinsic fluctuations. In turn, our work is also a contribution to extending these methods. For example, there is no known method to calculate the stationary states of piecewise deterministic Markov processes with more than two environmental states. As a by-product of our work, we have presented a numerical scheme. This is not a replacement for an analytical solution, but it removes the need to carry out numerical simulations of the PDMP, at least in some circumstances.

Naturally, there is more work to do. The question of an analytical characterisation of stationary distributions for multi-state PDMPs remains, and the voter model (with its linear velocity fields) is a natural candidate for further study. Failing this, we wonder if the numerical method we have proposed for the model with three environmental states can be streamlined and implemented effectively for environments with more than three states. In terms of individual-based modelling of opinion dynamics (in the widest sense), a number of extensions of the model seem possible. For example, both the agents and the influencers could be placed on a network; presumably the location or connectivity of the influencers would then become relevant. A further line of future work concerns the extension to models with more than two opinion states. Finally, allowing for continuous external environments also appears to be worthwhile.

Figure 12: **Stationary distribution for the model with independent influencers.** The influencers follow the process in Eq. (36). We use \(z_{\sigma}=\sigma/(\alpha N)\) with \(\sigma=0,...,\alpha N\), with \(\alpha N=20\). Dashed lines in each panel are from numerical simulations of the PDMP, the shaded histogram is from simulations (\(N=200\)). Model parameters are \(a=0.01\), \(\alpha=0.1\). The environmental switching rate is (a) \(\lambda=0.05\), (b) \(\lambda=\lambda_{c}\approx 0.11\), (c) \(\lambda=1\) and (d) \(\lambda=20\). Time between subsequent samples is 20 units of time in (a) and (b), 2 units in (c), and 0.2 units of time in (d). For each distribution we take \(10^{7}\) samples after a transient of duration \(100/\lambda\).

_Acknowledgments._ We thank Yen Ting Lin for helpful discussions on the solution of PDMP for more than two environmental states, and Lucas Lacasa for useful comments on the work. AC acknowledges funding by the Maria de Maeztu Programme (MDM-2017-0711) and the AEI under the FPI programme. Partial financial support has been received from the Agencia Estatal de Investigacion and Fondo Europeo de Desarrollo Regional (FEDER, UE) under project APASOS (PID2021-122256NB-C21/PID2021-122256NB-C22), and the Maria de Maeztu project CEX2021-001164-M, funded by MCIN/AEI/10.13039/501100011033.

## Appendix A Algorithm to determine the stationary state of a PDMP with three environmental states

In this appendix we provide details of the algorithm used to solve Eq. (15) when the environment undergoes transitions between three states.
### General theory

Focusing on the PDMP framework with \(S\) environmental states we follow [28] and introduce currents
\[\begin{split}& J_{\sigma}(\phi)=\Pi(\phi,\sigma)v_{\sigma}(\phi)\\ &-\int_{\phi_{0}^{*}}^{\phi}\lambda\sum_{\eta}\left(\Pi(\phi^{\prime},\eta)\mu_{\eta\rightarrow\sigma}-\Pi(\phi^{\prime},\sigma)\mu_{\sigma\rightarrow\eta}\right)d\phi^{\prime}.\end{split} \tag{38}\]
The quantity \(J_{\sigma}(\phi)\) represents the net probability flux into or out of the interval \((\phi_{0}^{*},\phi)\) and environmental state \(\sigma\). This is illustrated in Fig. 14; the dotted box at the bottom left of the figure highlights the interval \((\phi_{0}^{*},\phi)\) at fixed environmental state \(\sigma=0\). The quantity \(J_{0}(\phi)\) is the flux out of this interval, due either to deterministic motion [following \(v_{0}(\phi)\)] or to switches of the environment. Further details can be found in [28]. The continuity equation for probability can be expressed as:
\[\partial_{t}\Pi(\phi,\sigma)=-\partial_{\phi}J_{\sigma}(\phi). \tag{39}\]

Figure 13: **Stationary distribution for the model with environmental dynamics as in Eq. (37).** Dashed lines are from numerical simulations of the PDMP process, shaded histograms are from simulations of the full model (\(N=200\)). In all the panels \(a=0.01\), \(\alpha=0.1\). The environmental switching rates are (a) \(\lambda=0.05\), (b) \(\lambda=\lambda_{c}\approx 0.11\), (c) \(\lambda=1\) and (d) \(\lambda=20\). We use \(z_{\sigma}=\sigma/(\alpha N)\) with \(\sigma=0,...,\alpha N\), where \(\alpha N=20\). Time between subsequent samples is 20 units of time in (a) and (b), two units of time in (c), and 0.2 units of time in (d). For each distribution we take \(10^{7}\) samples after a transient of length \(100/\lambda\).

In the stationary state we therefore have
\[\frac{d}{d\phi}\left(\Pi^{*}(\phi,\sigma)v_{\sigma}(\phi)\right)-\lambda\sum_{\eta}\left(\Pi^{*}(\phi,\eta)\mu_{\eta\to\sigma}-\Pi^{*}(\phi,\sigma)\mu_{\sigma\to\eta}\right)=0. \tag{40}\]
In the following calculations, we always focus on the stationary state. To keep the notation compact we will omit the asterisk. Stationarity implies that the total current vanishes, i.e.,
\[\sum_{\sigma}J_{\sigma}(\phi)=0 \tag{41}\]
for all \(\phi\). Defining
\[\Gamma_{\sigma}(\phi)=\Pi(\phi,\sigma)v_{\sigma}(\phi), \tag{42}\]
and using Eq. (38) this results in
\[\sum_{\sigma}\Gamma_{\sigma}(\phi)=0. \tag{43}\]
For any one system, we can therefore pick a particular environmental state \(\tau\), and express \(\Gamma_{\tau}(\phi)\) in terms of the \(\Gamma_{\sigma}(\phi)\), \(\sigma\neq\tau\),
\[\Gamma_{\tau}(\phi)=-\sum_{\sigma\neq\tau}\Gamma_{\sigma}(\phi). \tag{44}\]
We can then reduce Eq. (40) to the following set of \(S-1\) equations for the \(\Gamma_{\sigma}\) with \(\sigma\neq\tau\):
\[\frac{d}{d\phi}\Gamma_{\sigma}(\phi)+\frac{\Gamma_{\sigma}(\phi)}{v_{\sigma}(\phi)}\left(\lambda\sum_{\eta}\mu_{\sigma\to\eta}\right)-\lambda\sum_{\eta\neq\tau}\Gamma_{\eta}(\phi)\left(\frac{\mu_{\eta\to\sigma}}{v_{\eta}(\phi)}-\frac{\mu_{\tau\to\sigma}}{v_{\tau}(\phi)}\right)=0. \tag{45}\]

### Three states

We now consider the case of three environmental states, see Sec. V.2, and in particular Eq. (34). After elimination of \(\Gamma_{1}\), we can write Eq. (45) as
\[\frac{d}{d\phi}\underline{\Gamma}(\phi)=\underline{\underline{\Lambda}}(\phi)\underline{\Gamma}(\phi), \tag{46}\]
with \(\underline{\Gamma}(\phi)=\left[\Gamma_{0}(\phi),\Gamma_{2}(\phi)\right]^{T}\), where the superscript indicates transposition.
The \(2\times 2\) matrix \(\underline{\underline{\Lambda}}\) is given by
\[\underline{\underline{\Lambda}}(\phi)=-\frac{\lambda}{\lambda_{c}}\left(\begin{array}{cc}\frac{1}{2(\phi_{1}^{*}-\phi)}+\frac{1}{\phi_{0}^{*}-\phi}&\frac{1}{2(\phi_{1}^{*}-\phi)}\\ \frac{1}{2(\phi_{1}^{*}-\phi)}&\frac{1}{2(\phi_{1}^{*}-\phi)}+\frac{1}{\phi_{2}^{*}-\phi}\end{array}\right), \tag{47}\]
where \(\phi_{\sigma}^{*}\) is the fixed point of the limiting deterministic dynamics for fixed environment \(\sigma\) [see Eq. (13)]. The quantity \(\lambda_{c}\) is given in Eq. (32). The matrix \(\underline{\underline{\Lambda}}\) encodes the dynamics of an infinite population combined with a switching environment. We note the singularities at \(\phi_{0}^{*},\phi_{1}^{*}\) and \(\phi_{2}^{*}\).

Figure 14: Physical interpretation of the currents in Eq. (38) for three-state environmental switching as in Sec. V.2.

### Algorithm

We now outline the algorithm we use to solve Eq. (46) in the domain \(\phi\in(\phi_{0}^{*},\phi_{2}^{*})\). A graphical illustration can be found in Fig. 15. The boundary conditions for the solution will be detailed below. Due to the singularity of \(\underline{\underline{\Lambda}}\) at the internal fixed point \(\phi=\phi_{1}^{*}\), we divide the domain into two intervals, \((\phi_{0}^{*},\phi_{1}^{*})\) and \((\phi_{1}^{*},\phi_{2}^{*})\), and first obtain separate solutions on these two subdomains. These are then combined using the boundary conditions.

To numerically integrate Eq. (46) we discretise the \(\phi\)-axis into elements of size \(\Delta\phi\). Choosing initial conditions \(\Gamma_{0}(\phi_{0}^{*}+\Delta\phi)=a_{0}\) and \(\Gamma_{2}(\phi_{0}^{*}+\Delta\phi)=a_{2}\), we can then forward integrate Eq. (46) to obtain \(\underline{\Gamma}(\phi_{0}^{*}+2\Delta\phi),\underline{\Gamma}(\phi_{0}^{*}+3\Delta\phi),\ldots,\underline{\Gamma}(\phi_{1}^{*}-\Delta\phi)\). This numerical solution will depend on the choice of \(a_{0}\) and \(a_{2}\). Similarly (but independently) we choose final conditions \(\Gamma_{0}(\phi_{2}^{*}-\Delta\phi)=b_{0}\) and \(\Gamma_{2}(\phi_{2}^{*}-\Delta\phi)=b_{2}\) near the right edge of the domain \((\phi_{0}^{*},\phi_{2}^{*})\). We then backward integrate Eq. (46) to find \(\underline{\Gamma}(\phi_{2}^{*}-2\Delta\phi),\underline{\Gamma}(\phi_{2}^{*}-3\Delta\phi),\ldots,\underline{\Gamma}(\phi_{1}^{*}+\Delta\phi)\). This numerical solution in turn will depend on the choice of \(b_{0}\) and \(b_{2}\).

We now need to determine the right choice for the boundary conditions \(a_{0},a_{2}\), and \(b_{0},b_{2}\). We do this using the following properties of the stationary distribution:

1. _Overall normalisation._ Noting that Eq. (46) is linear in \(\underline{\Gamma}\), a multiplication of all of \(a_{0},a_{2},b_{0},b_{2}\) with a constant factor will simply re-scale the solution. We also recall that \(\Gamma_{1}=-(\Gamma_{0}+\Gamma_{2})\) so that \(\Gamma_{1}\) undergoes the same re-scaling. The \(\Gamma_{\sigma}\) in turn determine the stationary distribution \(\Pi(\phi,\sigma)\) [via Eq. (42)]. Overall normalisation requires \(\sum_{\sigma}\int d\phi\,\Pi(\phi,\sigma)=1\). This can be used to fix one of the coefficients \(a_{0},a_{2},b_{0},b_{2}\).
2. _Continuity of \(\Gamma_{0}\) and \(\Gamma_{2}\) at the interior fixed point \(\phi_{1}^{*}\)._ The velocity fields \(v_{0}(\phi)\) and \(v_{2}(\phi)\) show no singularity at \(\phi=\phi_{1}^{*}\).
We thus expect \(\Gamma_{0}\) and \(\Gamma_{2}\) to be continuous at \(\phi_{1}^{*}\). Within the discretisation this translates into
\[\begin{split}\Gamma_{0}(\phi_{1}^{*}-\Delta\phi)&=\Gamma_{0}(\phi_{1}^{*}+\Delta\phi),\\ \Gamma_{2}(\phi_{1}^{*}-\Delta\phi)&=\Gamma_{2}(\phi_{1}^{*}+\Delta\phi),\end{split} \tag{48}\]
up to corrections of order \(\Delta\phi\).
3. _No-flux condition at \(\phi_{1}^{*}\) in environment \(\sigma=1\)._ In environment \(\sigma=1\) the flow field is directed towards \(\phi_{1}^{*}\), both from below and from above. This means that
\[\begin{split}\Gamma_{1}(\phi_{1}^{*}-\Delta\phi)&\geq 0,\\ \Gamma_{1}(\phi_{1}^{*}+\Delta\phi)&\leq 0.\end{split} \tag{49}\]
At the same time, the relation \(\Gamma_{1}=-(\Gamma_{0}+\Gamma_{2})\) and the conditions in (48) imply that \(\Gamma_{1}(\phi_{1}^{*}-\Delta\phi)=\Gamma_{1}(\phi_{1}^{*}+\Delta\phi)\). Together with (49) this means \(\Gamma_{1}(\phi_{1}^{*}\pm\Delta\phi)=0\), and therefore \(\Gamma_{0}(\phi_{1}^{*}\pm\Delta\phi)=-\Gamma_{2}(\phi_{1}^{*}\pm\Delta\phi)\). Using again the conditions in (48), this can be written compactly as one single condition \(\Gamma_{0}(\phi_{1}^{*}-\Delta\phi)=-\Gamma_{2}(\phi_{1}^{*}+\Delta\phi)\), again to be understood as subject to corrections of order \(\Delta\phi\).

In order to impose these conditions we use a gradient-descent algorithm. Specifically, we find the coefficients \(a_{2},b_{0}\) and \(b_{2}\) such that the function \(|\Gamma_{2}(\phi_{1}^{*}-\Delta\phi)-\Gamma_{2}(\phi_{1}^{*}+\Delta\phi)|+|\Gamma_{0}(\phi_{1}^{*}-\Delta\phi)-\Gamma_{0}(\phi_{1}^{*}+\Delta\phi)|+|\Gamma_{0}(\phi_{1}^{*}-\Delta\phi)+\Gamma_{2}(\phi_{1}^{*}+\Delta\phi)|\) is minimised. The last step is then to adjust the remaining coefficient \(a_{0}\) such that the probability distribution is normalised [item (i) above]. The principles of the algorithm are summarised in Fig. 15.

## Appendix B Extension of the lowest-order approximation to more than two environments

In this appendix, we provide an explicit derivation of Eq. (21). This builds on Ref. [28], where a similar calculation is carried out for systems with two environmental states. For the purposes of this appendix, we assume that the environmental switching is independent of the state of the population; i.e., the \(\mu_{\sigma\rightarrow\sigma^{\prime}}\) do not depend on \(i\).

In the limit of large but finite population size \(N\) the master equation (3) can be expanded in powers of \(1/N\), following for example [28; 29]. Writing \(x=i/N\), and retaining leading and sub-leading orders, one obtains an equation of the type
\[\frac{d}{dt}\Pi(x,\sigma)=L_{\sigma}(x)\Pi(x,\sigma)+\lambda\sum_{\sigma^{\prime}}[\mu_{\sigma^{\prime}\rightarrow\sigma}\Pi(x,\sigma^{\prime})-\mu_{\sigma\rightarrow\sigma^{\prime}}\Pi(x,\sigma)], \tag{50}\]
with Fokker-Planck operators
\[L_{\sigma}(x)=-\partial_{x}v_{\sigma}(x)+\frac{\partial_{x}^{2}\omega_{\sigma}(x)}{2N}, \tag{51}\]
where
\[\omega_{\sigma}(x)=a+\frac{h}{1+\alpha}\left[\alpha\left(z_{\sigma}+\left(1-2z_{\sigma}\right)x\right)+2x(1-x)\right]. \tag{52}\]
Writing \(x(t)=\phi(t)+\frac{\xi}{\sqrt{N}}\) one then finds to leading order in the expansion
\[\dot{\phi}(t)=v_{\sigma}(\phi). \tag{53}\]
Additionally making the linear-noise approximation (LNA) [45], sub-leading corrections evolve in time as follows (see [28; 29] for details),
\[\dot{\xi}(t)=v_{\sigma}^{\prime}(\phi)\xi+\sqrt{\omega_{\sigma}(\phi)}\eta(t), \tag{54}\]
where \(\eta(t)\) is Gaussian white noise of zero mean and unit amplitude.
This is a Langevin equation, to be interpreted in the Ito sense. We note that the environment \(\sigma\) retains its time-dependence (via the switching process). Within this expansion and the LNA, the joint distribution for \(\phi,\xi\) and \(\sigma\), \(\Pi(\phi,\xi,\sigma)\), evolves in time as follows,
\[\begin{split}&\partial_{t}\Pi(\phi,\xi,\sigma,t)=-v^{\prime}_{\sigma}(\phi)\partial_{\xi}[\xi\Pi(\phi,\xi,\sigma,t)]\\ &-\partial_{\phi}[v_{\sigma}(\phi)\Pi(\phi,\xi,\sigma,t)]+\frac{\omega_{\sigma}(\phi)}{2}\partial_{\xi}^{2}[\Pi(\phi,\xi,\sigma,t)]\\ &+\sum_{\eta\neq\sigma}\lambda\left[\mu_{\eta\to\sigma}\Pi(\phi,\xi,\eta,t)-\mu_{\sigma\to\eta}\Pi(\phi,\xi,\sigma,t)\right].\end{split} \tag{55}\]
Focusing on the stationary distribution \(\Pi^{*}(\phi,\xi,\sigma)\), and writing \(\Pi^{*}(\phi,\xi,\sigma)=\Pi^{*}(\xi|\phi,\sigma)\Pi^{*}(\phi,\sigma)\), we find after summing over environmental states,
\[\begin{split}&\sum_{\sigma}\left\{\partial_{\phi}[v_{\sigma}(\phi)\Pi(\phi,\sigma)\Pi(\xi|\phi,\sigma)]\right.\\ &+v^{\prime}_{\sigma}(\phi)\Pi(\phi,\sigma)\partial_{\xi}[\xi\Pi(\xi|\phi,\sigma)]\\ &\left.-\Pi(\phi,\sigma)\frac{\omega_{\sigma}(\phi)}{2}\partial_{\xi}^{2}[\Pi(\xi|\phi,\sigma)]\right\}=0.\end{split} \tag{56}\]
We have omitted the asterisks to keep the notation compact. We stress that Eq. (56) and all remaining relations in this section refer to the stationary state. We follow [28] again, and make the assumption that instantaneous fluctuations about the PDMP trajectory do not depend on the environmental state, i.e., \(\Pi(\xi|\phi,\sigma)\simeq\Pi(\xi|\phi)\). We then have
\[\begin{split}&\partial_{\phi}\left[\Pi(\xi|\phi)\left(\sum_{\sigma}v_{\sigma}(\phi)\Pi(\phi,\sigma)\right)\right]\\ &+\left[\sum_{\sigma}v^{\prime}_{\sigma}(\phi)\Pi(\phi,\sigma)\right]\partial_{\xi}[\xi\Pi(\xi|\phi)]\\ &-\sum_{\sigma}\left[\Pi(\phi,\sigma)\frac{\omega_{\sigma}(\phi)}{2}\right]\partial_{\xi}^{2}[\Pi(\xi|\phi)]=0.\end{split} \tag{57}\]
We further know that \(\sum_{\sigma}v_{\sigma}(\phi)\Pi(\phi,\sigma)=0\), and \(\Pi(\phi,\sigma)=\Pi(\sigma|\phi)\Pi(\phi)\). Eq. (57) can thus be re-written as
\[\begin{split}0=&\sum_{\sigma}\left[v^{\prime}_{\sigma}(\phi)\Pi(\sigma|\phi)\right]\Pi(\phi)\partial_{\xi}[\xi\Pi(\xi|\phi)]\\ &-\sum_{\sigma}\left[\Pi(\sigma|\phi)\Pi(\phi)\frac{\omega_{\sigma}(\phi)}{2}\right]\partial_{\xi}^{2}[\Pi(\xi|\phi)],\end{split} \tag{58}\]
and subsequently as
\[\begin{split}&\sum_{\sigma}\left[v^{\prime}_{\sigma}(\phi)\Pi(\sigma|\phi)\right]\partial_{\xi}[\xi\Pi(\xi|\phi)]\\ &-\sum_{\sigma}\left[\Pi(\sigma|\phi)\frac{\omega_{\sigma}(\phi)}{2}\right]\partial_{\xi}^{2}[\Pi(\xi|\phi)]=0.\end{split} \tag{59}\]
Eq. (59) is a stationary Fokker-Planck equation. Its solution is a Gaussian distribution
\[\Pi(\xi|\phi)=\mathcal{A}\exp\left(\frac{\xi^{2}\sum_{\sigma}v^{\prime}_{\sigma}(\phi)\Pi(\sigma|\phi)}{\sum_{\sigma}\Pi(\sigma|\phi)\,\omega_{\sigma}(\phi)}\right), \tag{60}\]
with \(\mathcal{A}\) a normalisation constant. This distribution has mean \(0\) and variance
\[s^{2}(\phi)=-\frac{\sum_{\sigma}\Pi(\sigma|\phi)\omega_{\sigma}(\phi)}{2\sum_{\sigma}v^{\prime}_{\sigma}(\phi)\Pi(\sigma|\phi)}. \tag{61}\]
We note that this object is intrinsically non-negative in our model, given that \(v^{\prime}_{\sigma}(\phi)<0\) for all \(\phi\) and \(\sigma\). Using \(\Pi(\sigma|\phi)=\Pi(\phi,\sigma)/\Pi(\phi)\) in the numerator and in the denominator of Eq. (61), and cancelling the common factor \(\Pi(\phi)\), we find
\[s^{2}(\phi)=-\frac{\sum_{\sigma}\Pi(\phi,\sigma)\omega_{\sigma}(\phi)}{2\sum_{\sigma}v^{\prime}_{\sigma}(\phi)\Pi(\phi,\sigma)}. \tag{62}\]
For linear flow as in our model, \(v_{\sigma}(\phi)=\lambda_{c}(\phi_{\sigma}^{*}-\phi)\), we can further simplify and find the final result
\[s^{2}(\phi)=\frac{\sum_{\sigma}\Pi(\phi,\sigma)\omega_{\sigma}(\phi)}{2\lambda_{c}\sum_{\sigma}\Pi(\phi,\sigma)}. \tag{63}\]
The distributions \(\Pi(\phi,\sigma)\) are known analytically in the case of two environmental states [see Eq. (16)]. For the model with three environmental states we use the numerical method described in Appendix A.3.

Figure 15: **Illustration of the numerical algorithm used to obtain the stationary distribution of the PDMP for the model with three environmental states (Appendix A.3).** The flow in environment \(\sigma=0\) is directed towards \(\phi_{0}^{*}\) (filled circle on the left). In environment \(\sigma=1\) the deterministic flow is towards the internal fixed point \(\phi_{1}^{*}\) (filled circle in the centre), and in environment \(\sigma=2\) the system flows towards \(\phi_{2}^{*}\), shown as a filled circle on the right. Using the fact that \(\Gamma_{0}+\Gamma_{1}+\Gamma_{2}=0\), we eliminate \(\Gamma_{1}\) (greyed out in the figure). Eqs. (46) are then forward-integrated on the interval \((\phi_{0}^{*},\phi_{1}^{*})\), starting from an initial condition at \(\phi_{0}^{*}+\Delta\phi\) (green diamonds) to obtain final values at \(\phi_{1}^{*}-\Delta\phi\) (triangles). A similar backward-integration is performed starting from \(\phi_{2}^{*}-\Delta\phi\) (purple and pink diamonds), ending at \(\phi_{1}^{*}+\Delta\phi\) (triangles). As explained in the text, we impose that the numerical solution approximates the conditions \(\Gamma_{0}(\phi_{1}^{*}-\Delta\phi)=\Gamma_{0}(\phi_{1}^{*}+\Delta\phi)\) (green downward triangles), \(\Gamma_{2}(\phi_{1}^{*}-\Delta\phi)=\Gamma_{2}(\phi_{1}^{*}+\Delta\phi)\) (purple upward triangles), and \(\Gamma_{0}(\phi_{1}^{*}-\Delta\phi)=-\Gamma_{2}(\phi_{1}^{*}+\Delta\phi)\) (orange squares).
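To make the shooting procedure of Appendix A.3 concrete, here is a minimal Python sketch of the forward/backward integration and matching step, with a general-purpose minimiser standing in for the gradient-descent step. The fixed points \(\phi_{0}^{*},\phi_{1}^{*},\phi_{2}^{*}\) and the rates are illustrative placeholders, not the values implied by Eqs. (13) and (32).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

phi0, phi1, phi2 = 0.2, 0.5, 0.8     # assumed fixed points of the three flows
lam, lam_c = 0.5, 1.0                # assumed switching and relaxation rates
eps = 1e-3                           # offset Delta phi from the singular points

def Lam(phi):
    """The 2x2 matrix of Eq. (47); note the singularities at phi0, phi1, phi2."""
    f1 = 1.0 / (2.0 * (phi1 - phi))
    return -(lam / lam_c) * np.array([
        [f1 + 1.0 / (phi0 - phi), f1],
        [f1, f1 + 1.0 / (phi2 - phi)],
    ])

def integrate(g0, span):
    # solve_ivp integrates backwards automatically when span[0] > span[1]
    sol = solve_ivp(lambda p, g: Lam(p) @ g, span, g0, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

def mismatch(params, a0=1.0):
    a2, b0, b2 = params
    gL = integrate([a0, a2], (phi0 + eps, phi1 - eps))   # forward branch
    gR = integrate([b0, b2], (phi2 - eps, phi1 + eps))   # backward branch
    # continuity of Gamma0 and Gamma2, plus the compact no-flux condition
    return abs(gL[0] - gR[0]) + abs(gL[1] - gR[1]) + abs(gL[0] + gR[1])

res = minimize(mismatch, x0=[0.5, -1.0, -0.5], method="Nelder-Mead")
print("boundary coefficients:", res.x, "residual:", res.fun)
# The remaining coefficient a0 would then be fixed by normalising
# Pi(phi, sigma) = Gamma_sigma(phi) / v_sigma(phi) to unit total probability.
```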
2301.04256
Quantum backreaction for overspinning BTZ geometries
We examine the semiclassical backreaction of a conformally coupled scalar field on an overspinning BTZ geometry. This extends work done on a similar problem for $(2+1)$-dimensional AdS geometries of the BTZ family with $|M|>|J|$. The overspinning classical solutions correspond to $|M|<|J|$ and possess a naked singularity at $r=0$. Using the renormalized quantum stress-energy tensor for a conformally coupled scalar field on such a spacetime, we obtain the semiclassical Einstein equations, which we attempt to solve perturbatively. We show that the stress-energy tensor is non-renormalizable in this approach, and consequently the perturbative solution to the semiclassical equations in the overspinning case does not exist. This could be an indication that the naked singularity at the center of an overspinning geometry is of a more severe nature than the conical singularity found in the same family of BTZ geometries.
Olaf Baake, Jorge Zanelli
2023-01-11T00:57:23Z
http://arxiv.org/abs/2301.04256v1
# Quantum backreaction for overspinning BTZ geometries

###### Abstract

We examine the semiclassical backreaction of a conformally coupled scalar field on an overspinning BTZ geometry. This extends work done on a similar problem for \((2+1)\)-dimensional AdS geometries of the BTZ family with \(|M|>|J|\). The overspinning classical solutions correspond to \(|M|<|J|\) and possess a naked singularity at \(r=0\). Using the renormalized quantum stress-energy tensor for a conformally coupled scalar field on such a spacetime, we obtain the semiclassical Einstein equations, which we attempt to solve perturbatively. We show that the stress-energy tensor is non-renormalizable in this approach, and consequently the perturbative solution to the semiclassical equations in the overspinning case does not exist. This could be an indication that the naked singularity at the center of an overspinning geometry is of a more severe nature than the conical singularity found in the same family of BTZ geometries.

## 1 Introduction

Since the dawn of general relativity, many black hole solutions to Einstein's field equations have been found. All these black holes contain a spacetime singularity hidden by an event horizon. However, for some range of values of the integration constants (mass \(M\), angular momentum \(J\), electric charge \(Q\)) these solutions have no event horizon. Although paradoxical, these naked singularities are exact solutions to the classical equations of general relativity as well. In the vicinity of a naked singularity causality and other physical laws can be arbitrarily violated, which is why Roger Penrose suggested the existence of a (weak) cosmic censorship principle in nature [1], requiring singularities to be hidden behind an event horizon. In that case, an outside observer would be causally disconnected from the singularity. Classically, naked singularities cannot be ruled out on mathematical grounds, and it is difficult to prove that every possible collapse process leads to the formation of an event horizon. The fact that so far no naked singularities have been observed in the universe may be interpreted as an indication that, in the strong gravity regime near a singularity, quantum gravity effects dominate, eliminating singularities altogether, or at least making sure that a horizon forms around them.

The accumulation of experiments and observations that confirm the predictions of general relativity puts very tight constraints on possible theories incorporating both general relativity and quantum theory. Since both theories are so well established in their regimes, it is sensible to look for a common area where a semi-classical approach could be used to obtain a better understanding of the issues at hand. Calculating quantum effects on a curved background spacetime is notoriously difficult, but in (2+1)-dimensional AdS spacetime this problem becomes significantly simpler and still provides meaningful information to learn from. The Banados-Teitelboim-Zanelli (BTZ) black holes in (2+1)-dimensional AdS spacetime [2, 3], obtained for \(M\geq|J|\), are particularly interesting geometries in this respect, but these are not the only solutions of physical interest in this theory with the same global symmetries.
Locally constant curvature 2+1 spacetimes include, besides the BTZ black hole family, the self-dual Coussaert-Henneaux spacetimes [4], and the toroidal time-dependent geometries [5], with global isometry groups \(SO(2)\times\mathbb{R}\), \(SO(2)\times SO(2,1)\) and \(SO(2)\times SO(2)\), respectively. Recently, the quantum back reaction on the classical singularities was studied for several geometries, including static, rotating and extremal BTZ black holes, as well as for static and rotating conical naked singularities [6, 7, 8, 9]. The naked singularities considered in these papers are continuations of the BTZ spacetime to the case of negative mass [10]. The interesting aspect of this result is that the quantum fluctuations of a conformally coupled scalar field generate a non-vanishing stress-energy tensor that, through Einstein's equations, produces a back-reacted geometry with a horizon of order Planck length in radius. This dressing up of the naked singularity, turning it into a black hole, could be viewed as a mechanism that implements cosmic censorship. These results have also been confirmed by an alternative holographic approach in [11].

Here we are concerned with the overspinning BTZ spacetime, which occurs if the absolute value of the angular momentum is greater than that of the mass. This geometry is also endowed with a naked singularity at \(r=0\), as in the case of the conical singularity obtained for \(M\leq-|J|\). We show that the stress-energy tensor contains incurable divergences, making the perturbative ansatz to the semiclassical equations of motion ill-defined. While the equations of motion can still be formally integrated, the first order corrections to the metric functions would become large, further demonstrating the inapplicability of a perturbative approach to this type of geometry. This strongly suggests that the naked singularity of an overspinning geometry is of a more severe nature than the conical singularities appearing in the other BTZ geometries, so that they cannot be cured by a perturbative quantum censor.

## 2 Overspinning BTZ space-time

The rotating BTZ metric [2, 3] is given by
\[\mathrm{d}s^{2}=-\left(\frac{r^{2}}{l^{2}}-M\right)\mathrm{d}t^{2}-J\mathrm{d}t\mathrm{d}\theta+\left(\frac{r^{2}}{l^{2}}-M+\frac{J^{2}}{4r^{2}}\right)^{-1}\mathrm{d}r^{2}+r^{2}\mathrm{d}\theta^{2}, \tag{1}\]
where the coordinate ranges are: \(-\infty<t<\infty\), \(0<r<\infty\) and \(0\leq\theta<2\pi\), \(\Lambda=-l^{-2}\) is the cosmological constant, and \(M\) and \(J\) are mass and angular momentum respectively. This metric describes different spacetimes that can be classified by the values of \(M\) and \(J\), which determine the nature of the four roots of the equation \(g^{rr}=0\),
\[\lambda_{\pm}=\frac{l}{2}\left[\sqrt{M+\frac{J}{l}}\pm\sqrt{M-\frac{J}{l}}\right]\,. \tag{2}\]
These roots are real for \(M\geq|J|/l\) (black holes) and take complex values for \(M<|J|/l\) (naked singularities). The full classification is explained in detail in [3], but here we will consider the so-called overspinning geometry (\(|M|l<|J|\)). This geometry was examined in [12] through the study of classical geodesics around it. In particular, we will analyze the back reaction of the geometry to the presence of a conformally coupled quantum scalar field, following the steps in [6, 7, 8, 9], where the back reaction for conical naked singularities in the parameter range \(M\leq-|J|\) was studied.
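As a quick illustration of this classification, the sketch below evaluates the roots (2) for a few parameter choices and labels the geometry accordingly; the numerical values are arbitrary examples.

```python
import numpy as np

def classify_btz(M, J, l=1.0):
    """Roots of g^rr = 0 from Eq. (2) and the corresponding geometry type."""
    sp = np.sqrt(complex(M + J / l))
    sm = np.sqrt(complex(M - J / l))
    lam_p, lam_m = (l / 2) * (sp + sm), (l / 2) * (sp - sm)
    kind = "black hole" if M >= abs(J) / l else "naked singularity"
    if abs(M) * l < abs(J):
        kind = "overspinning naked singularity"
    return lam_p, lam_m, kind

print(classify_btz(M=1.0, J=0.5))    # real roots: rotating black hole
print(classify_btz(M=0.2, J=0.5))    # |M| l < |J|: overspinning geometry
print(classify_btz(M=-1.0, J=0.0))   # conical naked singularity
```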
The starting point of the analysis is the observation that the BTZ spacetimes (1) are quotients of the universal covering of anti-de Sitter space-time (CAdS\({}_{3}\)) by an appropriate Killing vector field [3]. The constant negative curvature spacetime AdS\({}_{3}\) is defined by a pseudosphere of radius \(l\) embedded in \(\mathbb{R}^{(2,2)}\) as \[\eta_{AB}X^{A}X^{B}=-\left(X^{0}\right)^{2}+\left(X^{1}\right)^{2}+\left(X^{2} \right)^{2}-\left(X^{3}\right)^{2}=-l^{2}\,. \tag{3}\] The metric reads \[\eta_{AB}dX^{A}dX^{B}=-\left(dX^{0}\right)^{2}+\left(dX^{1}\right)^{2}+\left( dX^{2}\right)^{2}-\left(dX^{3}\right)^{2}, \tag{4}\] where the embedding coordinates \(X^{A}\) must be specified as functions of \((t,r,\theta)\). As shown in [12], the overspinning geometry (1) with \(|M|<|J|\) corresponds to embedding coordinates given by \[X^{0}= \frac{l}{2}\sqrt{A+1}\cosh\left[a\left(t/l-\theta\right)\right] \left\{\cos\left[b\left(\theta+t/l\right)\right]-\sin\left[b\left(\theta+t/l \right)\right]\right\}\] \[+ \epsilon\frac{l}{2}\sqrt{A-1}\sinh\left[a\left(t/l-\theta\right) \right]\left\{\sin\left[b\left(\theta+t/l\right)\right]+\cos\left[b\left( \theta+t/l\right)\right]\right\}, \tag{5}\] \[X^{1}= \frac{l}{2}\sqrt{A+1}\sinh\left[a\left(t/l-\theta\right)\right] \left\{\cos\left[b\left(\theta+t/l\right)\right]-\sin\left[b\left(\theta+t/l \right)\right]\right\}\] \[+ \epsilon\frac{l}{2}\sqrt{A-1}\cosh\left[a\left(t/l-\theta\right) \right]\left\{\sin\left[b\left(\theta+t/l\right)\right]+\cos\left[b\left( \theta+t/l\right)\right]\right\},\] (6) \[X^{2}= \frac{l}{2}\sqrt{A+1}\sinh\left[a\left(t/l-\theta\right)\right] \left\{\sin\left[b\left(\theta+t/l\right)\right]+\cos\left[b\left(\theta+t/l \right)\right]\right\}\] \[- \epsilon\frac{l}{2}\sqrt{A-1}\cosh\left[a\left(t/l-\theta\right) \right]\left\{\cos\left[b\left(\theta+t/l\right)\right]-\sin\left[b\left( \theta+t/l\right)\right]\right\},\] (7) \[X^{3}= \frac{l}{2}\sqrt{A+1}\cosh\left[a\left(t/l-\theta\right)\right] \left\{\sin\left[b\left(\theta+t/l\right)\right]+\cos\left[b\left(\theta+t/l \right)\right]\right\}\] \[- \epsilon\frac{l}{2}\sqrt{A-1}\sinh\left[a\left(t/l-\theta\right) \right]\left\{\cos\left[b\left(\theta+t/l\right)\right]-\sin\left[b\left( \theta+t/l\right)\right]\right\}, \tag{8}\] where \[a=\frac{\sqrt{|J|/l+M}}{2},\qquad b=\frac{\sqrt{|J|/l-M}}{2},\qquad A=\frac{2 \sqrt{\frac{J^{2}}{4}+\frac{r^{4}}{l^{2}}-Mr^{2}}}{\sqrt{J^{2}-l^{2}M^{2}}}, \tag{9}\] with \(\epsilon=\text{sign}(M-r^{2}/l^{2})\). Note that both cases (\(\epsilon=\pm 1\)) lead to the same RSET, and hence to the same end results.1 Footnote 1: Without loss of generality, we will assume \(J>0\) for the rest of this work. The overspinning BTZ space-time is now obtained through identifications generated by a Killing field \(\xi\), which in this case given by [3, 12] \[\xi=-a(J_{01}-J_{23})+b(J_{03}-J_{12}), \tag{10}\] which can be written as \(\xi=\frac{1}{2}\omega^{AB}J_{AB}\), where the antisymmetric matrix \(\omega^{AB}\) characterizes the identification. The Killing field in matrix form reads \[\xi=\begin{pmatrix}0&-a&0&-b\\ -a&0&-b&0\\ 0&b&0&-a\\ b&0&-a&0\end{pmatrix}. 
\tag{11}\] The identification in the embedding space \(\mathbb{R}^{(2,2)}\) under the action of the Killing field is a mapping defined by the matrix, \(H(\xi)=e^{2\pi\xi}\), which takes the form \[H=\begin{pmatrix}C(a)c(b)&-S(a)c(b)&S(a)s(b)&-C(a)s(b)\\ -S(a)c(b)&C(a)c(b)&-C(a)s(b)&S(a)s(b)\\ -S(a)s(b)&C(a)s(b)&C(a)c(b)&-S(a)c(b)\\ C(a)s(b)&-S(a)s(b)&-S(a)c(b)&C(a)c(b)\end{pmatrix}\,, \tag{12}\] where \(C(a)\equiv\cosh(2\pi a)\), \(S(a)\equiv\sinh(2\pi a)\)\(c(b)\equiv\cos(2\pi b)\), and \(s(b)\equiv\sin(2\pi b)\). An important feature of the Killing vector (10) is that the boost and rotation generators \(K\equiv J_{01}-J_{23}\) and \(J\equiv J_{03}-J_{12}\) commute, \([K,J]=0\). Consequently, \(H=e^{2\pi\xi}\) can be factored as \(H=H_{a}\cdot H_{b}=H_{b}\cdot H_{a}\), where \(H_{a}=H|_{b=0}\) and \(H_{b}=H|_{a=0}\). Iterating the identification by \(H\) is equivalent to acting with \[H^{n}=\begin{pmatrix}C(na)c(nb)&-S(na)c(nb)&S(na)s(nb)&-C(na)s(nb)\\ -S(na)c(nb)&C(na)c(nb)&-C(na)s(nb)&S(na)s(nb)\\ -S(na)s(nb)&C(na)s(nb)&C(na)c(nb)&-S(na)c(nb)\\ C(na)s(nb)&-S(na)s(nb)&-S(na)c(nb)&C(na)c(nb)\end{pmatrix}=H_{a}^{n}\cdot H_{b}^{ n}\;. \tag{13}\] Quotienting a manifold by a rotation Killing vector requires the identification angle to be a rational fraction of \(2\pi\). Otherwise, each point is identified with infinitely many images which densely cover a circle, and the resulting image set would not be a smooth manifold [9]. This means that the coefficient \(b\) in (10) must be rational, namely, \[b=k/m, \tag{14}\] with \(k,m\) relative primes. No restrictions are necessary for \(a\), as boosts act transitively in a non-compact manner. Note that the \(m\)-th iteration produces a pure boost (and a rotation by \(2k\pi\), which is equivalent to the identity, \(H_{b}^{m}=\mathbb{1}\)). In fact, we can treat the rotated plane and the boosted plane separately by splitting the identification matrix as follows: consider writing \(n=qm+p\), where \(p\in\{0,1,\dots,m-1\}\), \(q\in\{0,1,\dots,\infty\}\) and \(m\) is some positive integer. Hence, the powers of \(H=H_{a}\cdot H_{b}\) can be arranged as follows \[\begin{array}{cccccccc}\mathbb{1}&H_{a}H_{b}&H_{a}^{2}H_{b}^{2}&H_{a}^{3}H _{b}^{3}&\dots&H_{a}^{m-1}H_{b}^{m-1}\\ H_{a}^{m}&H_{a}^{m+1}H_{b}&H_{a}^{m+2}H_{b}^{2}&H_{a}^{m+3}H_{b}^{3}&\dots&H_ {a}^{2m-1}H_{b}^{m-1}\\ H_{a}^{2m}&H_{a}^{2m+1}H_{b}&H_{a}^{2m+2}H_{b}^{2}&H_{a}^{2m+3}H_{b}^{3}&\dots&H _{a}^{3m-1}H_{b}^{m-1}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\end{array} \tag{15}\] Here each column corresponds to a fixed \(p\) and includes infinitely many boosts, while each row has a fixed \(q\) comprising a finite set of rotations. In this pattern, an interesting observation becomes apparent. First note that \(H_{a}\) is precisely the identification matrix of the rotating non-extremal BTZ black hole, and \(H_{b}\) the identification matrix of the rotating non-extremal naked singularity [9]. Now, using trigonometric identities, one can write in general, as can be seen in (15), \[H^{qm+p}=H_{a}^{qm}H_{a}^{p}H_{b}^{p}=H_{a\cdot m}^{q}H_{a}^{p}H_{b}^{p}, \tag{16}\] so that the \(p\)-th column reads \[H_{a}^{p}H_{b}^{p}\left\{\mathbb{1},H_{a\cdot m}^{1},H_{a\cdot m}^{2},H_{a \cdot m}^{3},\cdots\right\}. \tag{17}\] Or in other words, each column contains the powers of the identification matrix associated with the rotating non-extremal black hole, multiplied by some constant. 
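These algebraic properties are easy to verify numerically. The sketch below builds \(H\), \(H_{a}\) and \(H_{b}\) from the generator (11), checks the factorisation \(H=H_{a}H_{b}\) and \(H_{b}^{m}=\mathbb{1}\), and evaluates the chordal distance \(d_{n}\) of Eq. (22) to confirm that the \(r\)-dependence drops out whenever \(m\mid n\). The values of \(a\) and \(b\) are illustrative, with \(M\) and \(J\) obtained by inverting Eq. (9).

```python
import numpy as np
from scipy.linalg import expm

a = 0.4
k_, m_ = 1, 3
b = k_ / m_                      # rational, as required by Eq. (14)
l = 1.0
M = 2 * (a**2 - b**2)            # from 4a^2 = J/l + M and 4b^2 = J/l - M
J = 2 * l * (a**2 + b**2)        # overspinning: |M| l < |J|

def gen(aa, bb):
    """Killing vector of Eq. (11) in matrix form."""
    return np.array([[0, -aa, 0, -bb],
                     [-aa, 0, -bb, 0],
                     [0,  bb, 0, -aa],
                     [ bb, 0, -aa, 0]], dtype=float)

H  = expm(2 * np.pi * gen(a, b))
Ha = expm(2 * np.pi * gen(a, 0))   # pure boost
Hb = expm(2 * np.pi * gen(0, b))   # pure rotation

assert np.allclose(H, Ha @ Hb)                                   # [K, J] = 0
assert np.allclose(np.linalg.matrix_power(Hb, m_), np.eye(4))    # H_b^m = 1

def d_n(n, r):
    """Chordal distance of Eq. (22) between a point and its n-th image."""
    B = (l**2 * M - 2 * r**2) / (4 * a * b * l**2)
    return 2 * l**2 * (-1 + np.cosh(2*np.pi*a*n) * np.cos(2*np.pi*b*n)
                       - B * np.sinh(2*np.pi*a*n) * np.sin(2*np.pi*b*n))

# for n = m the sin(2 pi b n) factor vanishes and d_n is independent of r
assert np.isclose(d_n(m_, 0.5), d_n(m_, 5.0))
print(d_n(m_, 0.5), d_n(2, 0.5), d_n(2, 5.0))   # n = 2 still depends on r
```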
## 3 Renormalized stress tensor

To describe the quantum effects on the spacetime geometry, in particular the backreaction of the naked singularity to the presence of a quantum field, we consider the semi-classical Einstein equations
\[G_{\mu\nu}-l^{-2}g_{\mu\nu}=\kappa\left\langle T_{\mu\nu}\right\rangle, \tag{18}\]
where \(\left\langle T_{\mu\nu}\right\rangle\) is the renormalized expectation value of the quantum stress-energy tensor (RSET) of a conformally coupled scalar field [6, 7, 8, 9],
\[\kappa\left\langle T_{\mu\nu}(x)\right\rangle=\pi l_{P}\lim_{x^{\prime}\to x}\left(3\nabla_{\mu}^{x}\nabla_{\nu}^{x^{\prime}}-g_{\mu\nu}g^{\lambda\rho}\nabla_{\lambda}^{x}\nabla_{\rho}^{x^{\prime}}-\nabla_{\mu}^{x}\nabla_{\nu}^{x}-\frac{1}{4l^{2}}g_{\mu\nu}\right)G(x,x^{\prime})\,,\qquad l_{P}=\frac{\hbar\kappa}{8\pi}. \tag{19}\]
Using the method of images, the propagator \(G(x,x^{\prime})=\{\phi(x),\phi(x^{\prime})\}\) is the anti-commutator of the scalar field, which takes the form [13, 14, 15, 16, 17, 9]
\[G(x,x^{\prime})=\frac{1}{2\sqrt{2}\pi}\sum_{n\in I}\frac{\Theta(\sigma(x,H^{n}x^{\prime}))}{\sqrt{\sigma(x,H^{n}x^{\prime})}}, \tag{20}\]
where \(\sigma(x,x^{\prime})\) is the chordal distance connecting \(x\) and \(x^{\prime}\), which can be expressed in terms of the corresponding embedding coordinates in \(\mathbb{R}^{(2,2)}\) as
\[\sigma(x,x^{\prime})=\frac{1}{2}\left[-\left(X^{0}-X^{\prime 0}\right)^{2}+\left(X^{1}-X^{\prime 1}\right)^{2}+\left(X^{2}-X^{\prime 2}\right)^{2}-\left(X^{3}-X^{\prime 3}\right)^{2}\right]\,. \tag{21}\]
The Heaviside step function \(\Theta\) in (20) was introduced in [9] because \(\sigma(x,H^{n}x)\) can be negative in the rotating case. Calling \(d_{n}(x)\) the chordal distance between a spacetime point and its \(n\)th image,
\[d_{n}=2\sigma(x,H^{n}x)=2l^{2}\left[-1+\cosh(2\pi an)\cos(2\pi bn)-B(r)\sinh(2\pi an)\sin(2\pi bn)\right], \tag{22}\]
with
\[B(r)=\frac{l^{2}M-2r^{2}}{4abl^{2}}, \tag{23}\]
the RSET takes the form [13, 9]
\[\kappa\left\langle T_{\mu\nu}\right\rangle=\frac{3l_{\rm P}}{2}\sum_{n\in I\setminus\{0\}}\Theta(d_{n}(x))\left(S_{\mu\nu}^{n}-\frac{1}{3}g_{\mu\nu}g^{\lambda\rho}S_{\lambda\rho}^{n}\right), \tag{24}\]
with
\[S_{ab}^{n}=\frac{H_{ab}^{n}}{d_{n}^{3/2}}+\frac{3H_{ac}^{n}X^{c}H_{bd}^{-n}X^{d}-H_{ac}^{n}X^{c}H_{bd}^{n}X^{d}}{d_{n}^{5/2}}\,. \tag{25}\]
The set \(I\) in the sum (24) includes all distinct images. With the splitting (16) between boosts \((H_{a})\) and rotations \((H_{b})\), one must sum over different ranges for \(q\) and \(p\).

### Explicit form for \(\left\langle T^{\mu}{}_{\nu}\right\rangle\)

Note that for any rational value of \(b\) there are infinitely many values of \(n\) for which \(2bn\) is an integer; this occurs for \(p=0\), which implies \(bn=kq\), and consequently the last term in (22) vanishes, making the distance function \(d_{n}\) independent of \(r\). This causes an infinite number of terms in the sum (24) to diverge, signaling a breakdown of the perturbative approach. This can be seen in the non-vanishing components of the stress-energy tensor:
\[\kappa\left\langle T^{t}{}_{t}\right\rangle=\frac{l_{P}l^{2}}{8ab}\sum_{\begin{subarray}{c}n=1\\ m\nmid n\end{subarray}}^{\infty}\left(\frac{6\left(a^{2}+b^{2}\right)Bb_{n}-4ab\bar{b}_{n}+12B\bar{a}_{n}}{d_{n}^{5/2}}+\frac{\left[3\left(a^{2}-b^{2}\right)B-2ab\right]\left(\bar{c}_{n}-8\right)+\left[3(a^{2}-b^{2})+2abB\right]c_{n}e_{n}}{d_{n}^{5/2}}\right), \tag{26a}\]
\[\kappa\left\langle T^{t}{}_{\theta}\right\rangle=-\frac{3l_{P}l^{3}}{8ab}\sum_{\begin{subarray}{c}n=1\\ m\nmid n\end{subarray}}^{\infty}\frac{2\left[\left(a^{2}-b^{2}\right)B+4ab\right]b_{n}+4Ba_{n}+\left(a^{2}+b^{2}\right)\left[B\left(\bar{c}_{n}-8\right)+c_{n}e_{n}\right]}{d_{n}^{5/2}}, \tag{26b}\]
\[\kappa\left\langle T^{r}{}_{r}\right\rangle=l_{P}\sum_{\begin{subarray}{c}n=1\\ m\nmid n\end{subarray}}^{\infty}\frac{c_{n}}{d_{n}^{3/2}}, \tag{26c}\]
\[\kappa\left\langle T^{\theta}{}_{t}\right\rangle=\frac{3l_{P}l}{8ab}\sum_{\begin{subarray}{c}n=1\\ m\nmid n\end{subarray}}^{\infty}\frac{2\left[\left(a^{2}-b^{2}\right)B-4ab\right]b_{n}+4Ba_{n}+\left(a^{2}+b^{2}\right)\left[B\left(\bar{c}_{n}-8\right)+c_{n}e_{n}\right]}{d_{n}^{5/2}}, \tag{26d}\]
\[\kappa\left\langle T^{\theta}{}_{\theta}\right\rangle=-\kappa\left[\left\langle T^{t}{}_{t}\right\rangle+\left\langle T^{r}{}_{r}\right\rangle\right], \tag{26e}\]
where \(\sum\limits_{n}^{\prime}s_{n}\equiv\sum\limits_{n}\Theta(d_{n})s_{n}\), and
\[a_{n}=a^{2}\cos(4\pi bn)+b^{2}\cosh(4\pi an)\;,\qquad\bar{a}_{n}=a^{2}\cos(4\pi bn)-b^{2}\cosh(4\pi an), \tag{27a}\]
\[b_{n}=\cos(4\pi bn)-\cosh(4\pi an)\;,\qquad\bar{b}_{n}=\cos(4\pi bn)+\cosh(4\pi an), \tag{27b}\]
\[c_{n}=2\cosh(2\pi an)\cos(2\pi bn)+2\;,\qquad\bar{c}_{n}=2\cosh(4\pi an)\cos(4\pi bn)+2, \tag{27c}\]
\[e_{n}=4\sinh(2\pi an)\sin(2\pi bn). \tag{27d}\]
The presence of \(B(r)\) in the numerators of the \(\left\langle T^{\mu}{}_{\nu}\right\rangle\) components makes them grow as \(r^{2}\) at large distance. Hence, as the denominators are independent of \(r\) for \(n=qm\), these sums contain infinitely many asymptotically divergent terms. The problem is that renormalizing the stress-energy tensor using the Hadamard regularization scheme simply removes the single divergent term corresponding to \(n=0\) (or \(p=q=0\)) in the sum (20). However, we see that the stress-energy tensor has infinitely many divergent terms, for \(p=0\) and all possible values of \(q\). A "natural" scheme to avoid the problem would be to eliminate the \(b\)s that generate the issue, but this would mean eliminating all rational \(b\)s, contradicting (14). It is still possible in principle that, in spite of the divergences in \(\left\langle T^{\mu}{}_{\nu}\right\rangle\), they cancel out in the equations, yielding a finite result for the back-reacted metric. We will see next that such cancellation does not occur, so that the field equations do not allow for a perturbative solution.

### Backreacted metric

The backreacted geometry is expected to belong to the same family of spherically symmetric stationary BTZ metrics. It is therefore natural to assume the ansatz
\[\mathrm{d}s^{2}=-N(r)^{2}f(r)\mathrm{d}t^{2}+f(r)^{-1}\mathrm{d}r^{2}+r^{2}\left(\mathrm{d}\theta+k(r)\mathrm{d}t\right)^{2}\,. \tag{28}\]
Additionally, based on the previous results [9] we write
\[N(r)=N_{0}(r)+l_{P}N_{1}(r)+O(l_{P}^{2}), \tag{29}\]
\[f(r)=f_{0}(r)+l_{P}f_{1}(r)+O(l_{P}^{2}), \tag{30}\]
\[k(r)=k_{0}(r)+l_{P}k_{1}(r)+O(l_{P}^{2}). \tag{31}\]
The zeroth order equations describe the unperturbed situation and yield the BTZ metric,
\[N_{0}(r)=1,\qquad f_{0}(r)=\frac{r^{2}}{l^{2}}-M+\frac{J^{2}}{4r^{2}},\qquad k_{0}(r)=-\frac{J}{2r^{2}}. \tag{32}\]
The first order corrections in \(l_{P}\) of the field equations yield
\[N_{1}(r)=\frac{\kappa}{l_{P}}\int\mathrm{d}r\frac{r}{f_{0}(r)}\left(\left\langle T^{r}{}_{r}\right\rangle-\left\langle T^{t}{}_{t}\right\rangle-\frac{J}{2r^{2}}\left\langle T^{t}{}_{\theta}\right\rangle\right)+K_{1}, \tag{33}\]
\[f_{1}(r)=\int\mathrm{d}r\left[-2f_{0}(r)N_{1}^{\prime}(r)+\left(\frac{J^{2}}{r^{3}}-\frac{2M}{r}\right)N_{1}(r)+\frac{2}{r^{3}}\int\mathrm{d}r\left(2MrN_{1}(r)+\frac{\kappa}{l_{P}}r^{3}\left\langle T^{r}{}_{r}\right\rangle\right)\right]+\frac{K_{2}}{r^{2}}+K_{3}, \tag{34}\]
\[Jk_{1}(r)=-f_{1}(r)-2f_{0}(r)N_{1}(r)+2\int r\mathrm{d}r\left(\frac{2}{l^{2}}N_{1}(r)+\frac{\kappa}{l_{P}}\left\langle T^{r}{}_{r}\right\rangle\right)+K_{4}. \tag{35}\]
Here the integration constants must be chosen as \(K_{i}=0\) (\(i=1,2,3,4\)) so that the \(O(l_{P})\) metric corrections vanish for \(\left\langle T^{\mu}{}_{\nu}\right\rangle=0\). Even before integrating these expressions, it can be directly checked that the divergences of the stress-energy tensor do not cancel out, leading to unbounded results for \(N_{1}\), \(f_{1}\) and \(k_{1}\). Consequently, the perturbative ansatz (29)-(31) does not work, since the first order corrections cannot be shown to be small.

## 4 Summary

We have shown that a naked singularity of an overspinning BTZ geometry conformally coupled to a quantum scalar field does not lead to a renormalizable stress-energy tensor. This causes incurable infinities to appear in the equations of motion and in the purportedly perturbative solutions. This is contrary to the previously studied cases of conical singularities, where the quantum corrections of the conformally coupled scalar field yield a finite renormalized stress-energy tensor and the resulting back-reacted geometry acquires a horizon, which provides a mechanism that enforces cosmic censorship [6, 7, 8, 9]. Our result indicates that the overspinning geometry is plagued by a more severe form of naked singularity, inaccessible by a perturbative approach. Consequently, it is not possible to claim that the singularity may become dressed by perturbative quantum corrections.

Our result seems to indicate that coupling a conformal quantum scalar field to an overspinning geometry may cause the metric to be significantly different from the original BTZ metric. In any event, it is not possible to assert that quantum mechanics provides a cosmic censor in this case, as it does for the other types of naked singularities. It would be interesting to understand whether there is a more profound problem with this type of geometry, or if the strongly rotating behavior simply prevents the application of perturbative methods. Perhaps one way to approach this problem would be by numerical methods, hoping to get a better understanding of the nature of this particular type of singularity and to see whether this is purely a problem of the perturbative approach, or if there is a more fundamental issue with the overspinning singularity.

## Acknowledgements

We thank C. Martinez, M. Hassaine and Steen Ryom-Hansen for many enlightening discussions. OB is funded by the PhD scholarship of the University of Talca. This work has been partially funded by grant N\({}^{o}\) 1220862 from ANID/Fondecyt.
2308.08167
A Quantum Approximation Scheme for k-Means
We give a quantum approximation scheme (i.e., $(1 + \varepsilon)$-approximation for every $\varepsilon > 0$) for the classical $k$-means clustering problem in the QRAM model with a running time that has only polylogarithmic dependence on the number of data points. More specifically, given a dataset $V$ with $N$ points in $\mathbb{R}^d$ stored in QRAM data structure, our quantum algorithm runs in time $\tilde{O} \left( 2^{\tilde{O}(\frac{k}{\varepsilon})} \eta^2 d\right)$ and with high probability outputs a set $C$ of $k$ centers such that $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$. Here $C_{OPT}$ denotes the optimal $k$-centers, $cost(.)$ denotes the standard $k$-means cost function (i.e., the sum of the squared distance of points to the closest center), and $\eta$ is the aspect ratio (i.e., the ratio of maximum distance to minimum distance). This is the first quantum algorithm with a polylogarithmic running time that gives a provable approximation guarantee of $(1+\varepsilon)$ for the $k$-means problem. Also, unlike previous works on unsupervised learning, our quantum algorithm does not require quantum linear algebra subroutines and has a running time independent of parameters (e.g., condition number) that appear in such procedures.
Ragesh Jaiswal
2023-08-16T06:46:37Z
http://arxiv.org/abs/2308.08167v2
# A Quantum Approximation Scheme for \(k\)-Means

###### Abstract

We give a quantum approximation scheme (_i.e., \((1+\varepsilon)\)-approximation for every \(\varepsilon>0\)_) for the classical \(k\)-means clustering problem in the QRAM model with a running time that has only polylogarithmic dependence on the number of data points. More specifically, given a dataset \(V\) with \(N\) points in \(\mathbb{R}^{d}\) stored in QRAM data structure, our quantum algorithm runs in time \(\tilde{O}\left(2^{\tilde{O}(\frac{k}{\varepsilon})}\eta^{2}d\right)\) and with high probability outputs a set \(C\) of \(k\) centers such that \(\mathit{cost}(V,C)\leq(1+\varepsilon)\cdot\mathit{cost}(V,C_{OPT})\). Here \(C_{OPT}\) denotes the optimal \(k\) centers, \(\mathit{cost}(.)\) denotes the standard \(k\)-means cost function (_i.e., the sum of squared distances of points to the closest center_), and \(\eta\) is the aspect ratio (_i.e., the ratio of maximum distance to minimum distance_). This is the first quantum algorithm with a polylogarithmic running time that gives a provable approximation guarantee of \((1+\varepsilon)\) for the \(k\)-means problem. Also, unlike previous works on unsupervised learning, our quantum algorithm does not require quantum linear algebra subroutines and has a running time independent of parameters (e.g., condition number) that appear in such procedures.

## 1 Introduction

Data clustering and the \(k\)-means problem, in particular, have many applications in data processing. The \(k\)-means problem is defined as: given a set of \(N\) points \(V=\{v_{1},...,v_{N}\}\subset\mathbb{R}^{d}\), and a positive integer \(k\), find a set \(C\subset\mathbb{R}^{d}\) of \(k\) centers such that the cost function,
\[\Phi(V,C)\equiv\sum_{v\in V}\min_{c\in C}D^{2}(v,c),\]
is minimised. Here, \(D(v,c)\equiv\left\|v-c\right\|\) is the Euclidean distance between points \(v\) and \(c\). Partitioning the points based on the closest center in the center set \(C\) gives a natural clustering of the data points. Due to its applications in data processing, a lot of work goes into designing algorithms for it from both theoretical and practical standpoints. The \(k\)-means problem is known to be NP-hard, so it is unlikely to admit a polynomial time algorithm. Much research has been done on designing polynomial time _approximation_ algorithms for the \(k\)-means problem. However, the algorithm used in practice to solve \(k\)-means instances is a heuristic, popularly known as the \(k\)-means algorithm (_not to be confused with the \(k\)-means problem_). This heuristic, also known as Lloyd's iterations [10], iteratively improves the solution over several rounds. The heuristic starts with an arbitrarily chosen set of \(k\) centers. In every iteration, it (i) partitions the points based on the nearest center and (ii) updates the center set to the centroids of the \(k\) partitions. In the classical computational model, it is easy to see that every Lloyd iteration costs \(O(Nkd)\) time. This hill-climbing approach may get stuck in a local minimum or take a huge amount of time to converge, and hence does not give provable guarantees on the quality of the final solution or the running time. In practice, Lloyd's iterations are usually preceded by the \(k\)-means++ algorithm [1], a fast sampling-based approach for picking the initial \(k\) centers that also gives an approximation guarantee. So, Lloyd's iterations, preceded by the \(k\)-means++ algorithm, give the best of both worlds, theory and practice.
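For reference, the classical \(k\)-means++ seeding (\(D^{2}\)-sampling) just mentioned takes only a few lines; the following sketch is purely classical and illustrative, with no attempt at the quantum versions discussed below.

```python
import numpy as np

def kmeanspp_seeding(V, k, rng):
    """Pick k centers; each new center is D^2-sampled, i.e. chosen with
    probability proportional to squared distance to the nearest center so far."""
    centers = [V[rng.integers(len(V))]]            # first center: uniform
    d2 = np.sum((V - centers[0])**2, axis=1)
    for _ in range(k - 1):
        c = V[rng.choice(len(V), p=d2 / d2.sum())]
        centers.append(c)
        d2 = np.minimum(d2, np.sum((V - c)**2, axis=1))
    return np.array(centers)

def cost(V, C):
    """Standard k-means cost: sum of squared distances to the closest center."""
    return np.sum(np.min(((V[:, None, :] - C[None, :, :])**2).sum(-1), axis=1))

rng = np.random.default_rng(0)
V = np.vstack([rng.normal(mu, 0.3, size=(50, 2)) for mu in (0.0, 3.0, 6.0)])
C = kmeanspp_seeding(V, k=3, rng=rng)
print(cost(V, C))
```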
Hence, it is unsurprising that a lot of work has been done on these two algorithms. This ranges from efficiency improvements in specific settings to implementations in distributed and parallel models. With the quantum computing revolution imminent, it is natural to talk about quantum versions of these algorithms and quantum algorithms for the \(k\)-means problem in general. Early work on the \(k\)-means problem within the quantum setting involved efficiency gains from quantizing Lloyd's iterations. In particular, Aimeur, Brassard, and Gambs [1] gave an \(O(\frac{N^{3/2}}{\sqrt{k}})\) time algorithm for executing a single Lloyd's iteration for the Metric \(k\)-median clustering problem that is similar to the \(k\)-means problem. This was using the quantum minimum finding algorithm of Durr and Hoyer [1]. Using quantum distance estimation techniques assuming quantum data access, Lloyd, Mohseni, and Rebentrost [10] gave an \(O(kN\log d)\) time algorithm for the execution of a single Lloyd's iteration for the \(k\)-means problem. More recently, [10] gave an approximate quantization of the \(k\)-means++ method and Lloyd's iteration assuming _QRAM data structure_[11] access to the data. Interestingly, the running time has only polylogarithmic dependence on the size \(N\) of the dataset. The algorithm uses quantum linear algebra procedures, and hence there is dependence on certain parameters that appear in such procedures, such as the condition number \(\kappa(V)\). Since Lloyd's iterations do not give an approximation guarantee, its quantum version is also a heuristic without a provable approximation guarantee.1 Our work on the \(k\)-means problem builds upon the techniques developed in all the above and other works on quantum unsupervised learning to design algorithms with provable approximation guarantees. Specifically, we want to design an _approximation scheme_ for the \(k\)-means problem with a running time that has only a polylogarithmic dependence on the data size \(N\) as in the algorithm of [10]. An approximation scheme is an algorithm that, in addition to the dataset and \(k\), takes an error parameter \(\varepsilon>0\) as input and outputs a solution with a cost within \((1+\varepsilon)\) factor of the optimal. We do this by quantizing the highly parallel, sampling-based approximation scheme of [11]. The tradeoff in obtaining this fine-grained approximation is that the running time of our algorithm has an exponential dependence on \(k\) and error parameter \(\varepsilon\). In the classical setting, such algorithms are categorized as Fixed Parameter Approximation Schemes (fpt-AS). Such \((1+\varepsilon)\)-approximation algorithms can have exponential running time dependence on the _parameter_ (e.g., the number of clusters \(k\) in our setting). The practical motivation for studying Fixed-Parameter Tractability for computationally hard problems is that when the parameter is small (e.g., number of clusters \(k\sim 5\)), the running time is not prohibitively large. We state our main result as the following theorem, which we will prove in the remainder of the paper. Footnote 1: Even though [10] gives a quantum version of the \(k\)-means++ algorithm that has an \(O(\log k)\) approximation guarantee, the guarantee for the quantum version (which has errors) is not shown explicitly. Theorem 1.1 (Main Theorem): _Let \(0<\varepsilon<1/2\) be the error parameter. 
There is a quantum algorithm that, when given QRAM data structure access to a dataset \(V\in\mathbb{R}^{N\times d}\), runs in time \(\tilde{O}\left(2^{\tilde{O}(\frac{k}{\varepsilon})}d\eta^{2}\right)\) and outputs a \(k\) center set \(C\in\mathbb{R}^{k\times d}\) such that with high probability \(\Phi(V,C)\leq(1+\varepsilon)\cdot OPT\). Here, \(\eta\) is the aspect ratio, i.e., the ratio of the maximum to the minimum distance between two given points in \(V\).2_ Footnote 2: The \(\tilde{O}\) notation hides logarithmic factors in \(N\). The \(\tilde{O}\) in the exponent hides logarithmic factors in \(k\) and \(1/\varepsilon\). ### An approximation scheme in the classical setting We convert the \(D^{2}\)-sampling-based approximation scheme of [11] to a Quantum version. The approximation scheme is simple and highly parallel, which can be described in the following few lines: **Input**: Dataset \(V\), integer \(k>0\), and error \(\varepsilon>0\) **Output**: A center set \(C^{\prime}\) with \(\Phi(V,C^{\prime})\leq(1+\varepsilon)OPT\) 1. (_Constant approximation_) Find a center set \(C\) that is a constant factor approximate solution. An \((\alpha,\beta)\)_pseudo-approximate solution_, for constants \(\alpha,\beta\), also works. 2. (_\(D^{2}\)-sampling_) Pick a set \(T\) of \(poly(\frac{k}{\varepsilon})\) points independently from the dataset using \(D^{2}\)-sampling with respect to the center set \(C\). 3. (_All subsets_) Out of all \(k\)-tuples (\(S_{1},...,S_{k}\)) of (multi)subsets of \(T\cup\{\text{copies of points in }C\}\), each \(S_{i}\) of size \(O(\frac{1}{\varepsilon})\), return \((\mu(S_{1}),...,\mu(S_{k}))\) that gives the least \(k\)-means cost. Here, \(\mu(S_{i})\) denotes the centroid of points in \(S_{i}\). We will discuss the quantization of the above three steps of the approximation scheme of [11], thus obtaining a quantum approximation scheme. 3 Footnote 3: Steps (2) and (3) in the algorithm are within a loop for probability amplification. This loop is skipped in this high-level description for simplicity. 1. (Constant approximation)The first step requires finding a constant factor approximate solution for the \(k\)-means problem. Even though several constant factor approximation algorithms are known, we need one with a quantum counterpart that runs in time that is polylogarithmic in the input size \(N\). One such algorithm is the \(k\)-means++ seeding algorithm [1] that picks \(k\) centers in a sequence with the \(i^{th}\) center picked using \(D^{2}\)-sampling4 with respect to the previously chosen \((i-1)\) centers. [10] give an approximate quantum version of \(D^{2}\)-sampling. The approximation guarantee of the \(k\)-means++ algorithm is \(O(\log k)\) instead of the constant approximation required in the approximation scheme of [1]. It is known from the work of [1] that if the \(D^{2}\)-sampling in \(k\)-means++ is continued for \(2k\) steps instead of stopping after sampling \(k\) centers, then we obtain a center set of size \(2k\) that is a \((2,O(1))\)-pseudo approximate solution. This means that this \(2k\)-size center set has a \(k\)-means cost that is some constant times the optimal. Such a pseudo-approximate solution is sufficient for the approximation scheme of [1] to work. We show that the pseudo-approximation guarantee of [1] also holds when using the approximate quantum version of the \(D^{2}\)-sampling procedure. 
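A toy classical rendering of the three steps above may help fix ideas. The sketch below uses deliberately tiny pool and subset sizes (rather than the \(poly(\frac{k}{\varepsilon})\) pool prescribed by the scheme), since the enumeration in step 3 grows as \(\left(\frac{k}{\varepsilon}\right)^{\tilde{O}(\frac{k}{\varepsilon})}\); it illustrates the structure of the scheme, not its stated guarantees.

```python
import itertools
import numpy as np

def approx_scheme(V, k, C0, s=2, t=8, rng=None):
    """Step 2: D^2-sample a pool T w.r.t. a constant-factor solution C0.
    Step 3: try all k-tuples of size-s multisubsets of T plus copies of C0
    and keep the centroids with the least k-means cost.
    Here s and t are toy stand-ins for O(1/eps) and poly(k/eps)."""
    if rng is None:
        rng = np.random.default_rng(0)
    d2 = np.min(((V[:, None] - C0[None])**2).sum(-1), axis=1)
    T = V[rng.choice(len(V), size=t, p=d2 / d2.sum())]
    pool = np.vstack([T, np.repeat(C0, s, axis=0)])
    subsets = list(itertools.combinations_with_replacement(range(len(pool)), s))
    best_C, best_cost = None, np.inf
    for tup in itertools.product(subsets, repeat=k):        # exhaustive search
        C = np.array([pool[list(S)].mean(axis=0) for S in tup])
        c = np.sum(np.min(((V[:, None] - C[None])**2).sum(-1), axis=1))
        if c < best_cost:
            best_C, best_cost = C, c
    return best_C, best_cost

rng = np.random.default_rng(1)
V = np.vstack([rng.normal(mu, 0.2, size=(30, 2)) for mu in (0.0, 4.0)])
C0 = V[rng.choice(len(V), size=2)]           # stand-in for step 1
print(approx_scheme(V, k=2, C0=C0, rng=rng)[1])
```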
Footnote 4: \(D^{2}\)-sampling: Given a center set \(C\), \(D^{2}\)-sampling picks a datapoint with probability proportional to the squared distance of the point to the closest center in \(C\). 2. (\(D^{2}\)-sampling)The second step of [1] involves \(D^{2}\)-sampling, which we already discussed how to quantize. This is no different than the \(D^{2}\)-sampling involved in the \(k\)-means++ algorithm of the previous step. The sampling in this step is simpler since the center set \(C\) with respect to which the \(D^{2}\)-sampling is performed, does not change (as is the case with the \(k\)-means++ algorithm.) 3. (All subsets)Since the number of points sampled in the previous step is \(poly(\frac{k}{\varepsilon})\), we need to consider a list of \(\left(\frac{k}{\varepsilon}\right)^{\tilde{O}(\frac{k}{\varepsilon})}\) tuples of subsets, each giving a \(k\)-center set (_a tuple_\((S_{1},...,S_{k})\)_defines_\((\mu(S_{1}),...,\mu(S_{k}))\)). We need to compute the \(k\)-means cost for each \(k\) center sets in the list and then pick the one with the least cost. We give quantization of the above steps. 5 Footnote 5: Note that when picking the center set with the least cost, we can get quadratic improvement in the search for the best \(k\)-center set using quantum search. Given that the search space is of size \(\left(\frac{k}{\varepsilon}\right)^{\tilde{O}(\frac{k}{\varepsilon})}\), this results only in a constant factor improvement in the exponent. So, we leave out the quantum search from the discussion for simplicity. Note that the quantization of the classical steps of [1] will incur precision errors. So, we first need to ensure that the approximation guarantee of [1] is robust against small errors in distance estimates, \(D^{2}\)-sampling probabilities, and \(k\)-means cost estimates. We must carefully account for errors and ensure that the quantum algorithm retains the \((1+\varepsilon)\) approximation guarantee of the robust version of [1]. OrganizationWe begin the technical discussions in the next section by showing that the approximation scheme of [1] is robust against errors. We will also show the robustness of the \(k\)-means++ procedure. In the subsequent section, we give the quantization of the steps of [1]. First, we briefly discuss the related work. ### Related work We have already discussed past research works on quantum versions of the \(k\)-means algorithm (i.e., Lloyd's iterations). This includes [1], [10], and [11]. All these have been built using various quantum tools and techniques developed for various problems in quantum unsupervised learning, such as coherent amplitude and median estimation, distance estimation, minimum finding, etc. See [21] for examples of several such tools. Other directions on quantum \(k\)-means includes _adiabatic_ algorithms (e.g., [1]) and algorithms using the _QAOA_ framework (e.g., [1, 12]). However, these are without provable guarantees. A line of work has also suggested that quantum algorithms can outperform classical ones because of the QRAM data structure access. A more level playing field is to assume that a similar _sample and query_ data access is available in the classical setting. Under this assumption, several "dequantization" results for unsupervised machine learning algorithms have been given. This includes [14, 15, 16]. 
It will be interesting to see if similar dequantization is possible for the quantum algorithms presented in this work, since the main ingredient of both our algorithm and the dequantization results is length-squared sampling.

## 2 A Robust Approximation Scheme

We start the discussion with the \(D^{2}\)-sampling method. In particular, we would like to check the robustness of the approximation guarantee provided by the \(D^{2}\)-sampling method against errors in estimating the distances between points. We will show that the \(D^{2}\)-sampling method gives a constant pseudo-approximation even under sampling errors.

### Pseudoapproximation using \(D^{2}\)-sampling

Let the matrix \(V\in\mathbb{R}^{N\times d}\) denote the dataset, where row \(i\) contains the \(i^{th}\) data point \(v_{i}\in\mathbb{R}^{d}\). Let the matrix \(C\in\mathbb{R}^{t\times d}\) denote any \(t\)-center set, where row \(i\) contains the \(i^{th}\) center \(c_{i}\in\mathbb{R}^{d}\) out of the \(t\) centers. Sampling a data point using the \(D^{2}\) distribution w.r.t. (_short for with respect to_) a center set \(C\) means that the datapoint \(v_{i}\) gets sampled with probability proportional to the squared distance to its nearest center in the center set \(C\). More formally, data points are sampled using the distribution \(\left(\frac{D^{2}(v_{1},C)}{\sum_{j}D^{2}(v_{j},C)},...,\frac{D^{2}(v_{N},C)}{\sum_{j}D^{2}(v_{j},C)}\right)\), where \(D^{2}(v_{j},C)\equiv\min_{c\in C}D^{2}(v_{j},c)\). For the special case \(C=\emptyset\), \(D^{2}\)-sampling is the same as uniform sampling.

The \(k\)-means++ seeding algorithm starts with an empty center set \(C\) and, over \(k\) iterations, adds a center to \(C\) in every iteration by \(D^{2}\)-sampling w.r.t. the current center set \(C\). It is known from the result of [1] that this \(k\)-means++ algorithm gives an \(O(\log k)\) approximation in expectation. It is also known from the result of [1] that if \(2k\) centers are sampled instead of \(k\) (_i.e., the for-loop runs from \(1\) to \(2k\)_), the cost with respect to these \(2k\) centers is at most some constant times the optimal \(k\)-means cost. Such an algorithm is called a _pseudo-approximation_ algorithm, and it is sufficient for the approximation scheme of [1]. So, we will quantize the following constant factor pseudo-approximation algorithm.

```
Input: \((V,k)\)
\(C\leftarrow\{\}\)
for \(i = 1\) to \(2k\) do
    Pick \(c\) using \(D^{2}\)-sampling w.r.t. center set \(C\)
    \(C := C\cup\{c\}\)
end for
return \(C\)
```
**Algorithm 1** A pseudo-approximation algorithm based on \(D^{2}\)-sampling.

In the quantum simulation of the above sampling procedure, there will be small errors in the sampling probabilities in each iteration. We need to ensure that the constant approximation guarantee of the above procedure is robust against small errors in the sampling probabilities owing to errors in distance estimation. We will work with a relative error of \((1\pm\delta)\) for small \(\delta\). The following is a crucial lemma from [1] needed to show the pseudo-approximation property of Algorithm 1.

Lemma 1 (Lemma 3.2 in [1]): _Let \(A\) be an arbitrary optimal cluster, and let \(C\) be an arbitrary set of centers. Let \(c\) be a center chosen from \(A\) with \(D^{2}\)-sampling with respect to \(C\). Then \(\mathbf{E}[cost(A,C\cup\{c\})]\leq 8\cdot OPT(A)\)._

The above lemma is used as a black box in the analysis of Algorithm 1 in [1].
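For concreteness, the following is a minimal classical sketch of Algorithm 1 in Python (our own illustration; the function name and the NumPy-based representation are not from [1]). The quantum version replaces the exact squared distances below with the \((1\pm\delta)\)-accurate estimates \(\tilde{D}\) discussed in Section 3.

```python
import numpy as np

def d2_sampling_seeding(V, k, rng=None):
    """Classical sketch of Algorithm 1: pick 2k centers, each drawn with
    probability proportional to the squared distance to the current center
    set (D^2-sampling); w.r.t. the empty center set this is uniform sampling."""
    rng = np.random.default_rng(rng)
    N = V.shape[0]
    centers = []
    sq_dist = np.full(N, np.inf)  # squared distance to nearest chosen center
    for _ in range(2 * k):
        if not centers:
            i = rng.integers(N)
        else:
            i = rng.choice(N, p=sq_dist / sq_dist.sum())
        centers.append(V[i])
        sq_dist = np.minimum(sq_dist, ((V - V[i]) ** 2).sum(axis=1))
    return np.array(centers)
```

In the quantum setting, each iteration draws from a distribution that is only close to this one in relative terms, which is exactly the situation handled by the robustness lemma below.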
The following version of Lemma 1 holds for distance estimates with a relative error of \((1\pm\delta)\) and gives a constant factor approximation guarantee. Since Lemma 1 is used as a black box in the analysis of Algorithm 1, replacing this lemma with Lemma 2 also gives a constant factor approximation to the \(k\)-means objective. We will use the following notion of the closeness of two distance functions.

Definition 1: A distance function \(D_{1}\) is said to be \(\delta\)-close to distance function \(D_{2}\), denoted by \(D_{1}\sim_{\delta}D_{2}\), if for every pair of points \(x,y\in\mathbb{R}^{d}\), \(D_{1}(x,y)\in(1\pm\delta)\cdot D_{2}(x,y)\).6

Footnote 6: We use the notation that for positive reals \(P,Q\), \(P\in(1\pm\delta)\cdot Q\) if \((1-\delta)\cdot Q\leq P\leq(1+\delta)\cdot Q\).

**Lemma 2**.: _Let \(0<\delta\leq 1/2\). Let \(A\) be an arbitrary optimal cluster and \(C\) be an arbitrary set of centers. Let \(c\) be a center chosen from \(A\) with \(\tilde{D}^{2}\)-sampling with respect to \(C\), where \(\tilde{D}\sim_{\delta}D\). Then \(\mathbf{E}[cost(A,C\cup\{c\})]\leq 72\cdot OPT(A)\)._

Proof.: Let \(D(a)\) denote the distance of the point \(a\) from the nearest center in \(C\) and let \(\tilde{D}(a)\) denote the estimated distance. We have \(\tilde{D}(a)\in D(a)\cdot(1\pm\delta)\). The following expression gives the expectation:

\[\sum_{a_{0}\in A}\frac{\tilde{D}^{2}(a_{0})}{\sum_{a\in A}\tilde{D}^{2}(a)}\cdot\sum_{a^{\prime}\in A}\min\left(D^{2}(a^{\prime}),D^{2}(a^{\prime},a_{0})\right)\]

Note that for all \(a_{0},a\in A\), \(D(a_{0})\leq D(a)+D(a,a_{0})\). This gives \(\tilde{D}(a_{0})\leq\frac{1+\delta}{1-\delta}\cdot\tilde{D}(a)+(1+\delta)\cdot D(a_{0},a)\), which further gives \(\tilde{D}^{2}(a_{0})\leq 2\left(\frac{1+\delta}{1-\delta}\right)^{2}\cdot\tilde{D}^{2}(a)+2(1+\delta)^{2}\cdot D^{2}(a_{0},a)\) and \(\tilde{D}^{2}(a_{0})\leq\frac{2}{|A|}\left(\frac{1+\delta}{1-\delta}\right)^{2}\cdot\sum_{a\in A}\tilde{D}^{2}(a)+\frac{2}{|A|}(1+\delta)^{2}\cdot\sum_{a\in A}D^{2}(a_{0},a)\). We use this to obtain the following upper bound on the expectation \(\mathbf{E}[cost(A,C\cup\{c\})]\):

\[\begin{split}\mathbf{E}[cost(A,C\cup\{c\})]&\leq\sum_{a_{0}\in A}\frac{\tilde{D}^{2}(a_{0})}{\sum_{a\in A}\tilde{D}^{2}(a)}\cdot\sum_{a^{\prime}\in A}\min\left(D^{2}(a^{\prime}),D^{2}(a^{\prime},a_{0})\right)\\ &\leq\sum_{a_{0}\in A}\frac{\left(\frac{2}{|A|}\left(\frac{1+\delta}{1-\delta}\right)^{2}\sum_{a\in A}\tilde{D}^{2}(a)\right)}{\sum_{a\in A}\tilde{D}^{2}(a)}\cdot\sum_{a^{\prime}\in A}\min\left(D^{2}(a^{\prime}),D^{2}(a^{\prime},a_{0})\right)\\ &\qquad+\sum_{a_{0}\in A}\frac{\left(\frac{2}{|A|}(1+\delta)^{2}\sum_{a\in A}D^{2}(a_{0},a)\right)}{\sum_{a\in A}\tilde{D}^{2}(a)}\cdot\sum_{a^{\prime}\in A}\min\left(D^{2}(a^{\prime}),D^{2}(a^{\prime},a_{0})\right)\\ &\leq\sum_{a_{0}\in A}\sum_{a^{\prime}\in A}\frac{2}{|A|}\left(\frac{1+\delta}{1-\delta}\right)^{2}D^{2}(a^{\prime},a_{0})+\sum_{a_{0}\in A}\sum_{a\in A}\frac{2}{|A|}\left(\frac{1+\delta}{1-\delta}\right)^{2}D^{2}(a_{0},a)\\ &=\frac{4}{|A|}\left(\frac{1+\delta}{1-\delta}\right)^{2}\sum_{a_{0}\in A}\sum_{a\in A}D^{2}(a_{0},a)\\ &=8\left(\frac{1+\delta}{1-\delta}\right)^{2}OPT(A)\\ &\leq 72\cdot OPT(A).\end{split}\]

The second-to-last equality uses the standard fact that \(\sum_{a_{0}\in A}\sum_{a\in A}D^{2}(a_{0},a)=2|A|\cdot OPT(A)\), and the final inequality uses \(\delta\leq 1/2\), so that \(\left(\frac{1+\delta}{1-\delta}\right)^{2}\leq 9\). This completes the proof of the lemma.

We will use this lemma in the approximation scheme of [1].
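As a quick numerical sanity check of Lemma 2 (not part of the original analysis), one can estimate \(\mathbf{E}[cost(A,C\cup\{c\})]\) under \(\tilde{D}^{2}\)-sampling by simulation; the cluster, the center set, and the noise model below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 3))           # an "optimal cluster" A
C = rng.normal(loc=5.0, size=(4, 3))    # an arbitrary existing center set C
delta = 0.5                              # relative error of distance estimates

OPT_A = ((A - A.mean(axis=0)) ** 2).sum()                     # cost of A around its centroid
d2_C = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1).min(1)  # D^2(a, C) for each a in A

total, trials = 0.0, 2000
for _ in range(trials):
    # D~^2-sampling: each squared distance is off by a factor in [(1-d)^2, (1+d)^2].
    noisy = d2_C * rng.uniform((1 - delta) ** 2, (1 + delta) ** 2, size=len(A))
    c = A[rng.choice(len(A), p=noisy / noisy.sum())]
    total += np.minimum(d2_C, ((A - c) ** 2).sum(1)).sum()    # cost(A, C u {c})

print(f"E[cost] ~ {total / trials:.1f}  vs  72*OPT(A) = {72 * OPT_A:.1f}")
```

In such experiments the empirical expectation typically sits far below the worst-case \(72\cdot OPT(A)\) bound, which is what one expects from a loose but robust guarantee.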
This lemma may also be of independent interest, as it yields a quantum pseudo-approximation algorithm with a constant factor approximation guarantee that runs in time polylogarithmic in the data size and linear in \(k\) and \(d\). We will discuss this quantum algorithm in the next section.

### Approximation scheme of [1]

A high-level description of the approximation scheme of [1] was given in the introduction. We give a more detailed pseudocode in Algorithm 2. In addition to the input instance \((V,k)\) and error parameter \(\varepsilon\), the algorithm is also given a constant approximate solution \(C\), which is used for \(D^{2}\)-sampling. A pseudoapproximate solution \(C\) is sufficient for the analysis in [1]. The discussion from the previous subsection gives a robust algorithm that outputs a pseudoapproximate solution even under errors in distance estimates. So, the input requirement of Algorithm 2 can be met. Now, the main ingredient being \(D^{2}\)-sampling, we need to ensure that errors in distance estimates do not seriously impact the approximation analysis of Algorithm 2. We state the main theorem of [1] before giving the analogous statement for the modified algorithm where \(D\) is replaced with \(\tilde{D}\) that is \(\delta\)-close to \(D\).

```
 1: Input: \((V,k,\varepsilon,C)\), where \(V\) is the dataset, \(k>0\) is the number of
    clusters, \(\varepsilon>0\) is the error parameter, and \(C\) is a \(k\) center set
    that gives constant (pseudo)approximation.
 2: Output: A list \(\mathcal{L}\) of \(k\) center sets such that for at least one
    \(C^{\prime}\in\mathcal{L}\), \(\Phi(V,C^{\prime})\leq(1+\varepsilon)\cdot OPT\).
 3: Constants: \(\rho=O(\frac{k}{\varepsilon^{4}})\); \(\tau=O(\frac{1}{\varepsilon})\)
 4: \(\mathcal{L}\leftarrow\emptyset\); \(count\gets 1\)
 5: repeat
 6:     Sample a multi-set \(M\) of \(\rho k\) points from \(V\) using \(D^{2}\)-sampling wrt center set \(C\)
 7:     \(M\gets M\cup\{\tau k\) copies of each element in \(C\}\)
 8:     for all disjoint subsets \(S_{1},...,S_{k}\) of \(M\) such that \(\forall i,|S_{i}|=\tau\) do
 9:         \(\mathcal{L}\leftarrow\mathcal{L}\cup(\mu(S_{1}),...,\mu(S_{k}))\)
10:     end for
11:     \(count\)++
12: until \(count\geq 2^{k}\)
13: return \(\mathcal{L}\)
```
**Algorithm 2** Algorithm of [1]

Theorem 2.1 (Theorem 1 in [1]): _Let \(0<\varepsilon\leq 1/2\) be the error parameter, \(V\in\mathbb{R}^{N\times d}\) be the dataset, \(k\) be a positive integer, and let \(C\) be a constant approximate solution for dataset \(V\). Let \(\mathcal{L}\) be the list returned by Algorithm 2 on input \((V,k,\varepsilon,C)\) using the Euclidean distance function \(D\). Then with probability at least \(3/4\), \(\mathcal{L}\) contains a center set \(C^{\prime}\) such that \(\Phi(V,C^{\prime})\leq(1+\varepsilon)\cdot OPT\). Moreover, \(|\mathcal{L}|=\tilde{O}\left(2^{O(\frac{k}{\varepsilon})}\right)\) and the running time of the algorithm is \(O(Nd|\mathcal{L}|)\)._

We give the analogous theorem with access to the Euclidean distance function \(D\) replaced with a function \(\tilde{D}\) that is \(\delta\)-close to \(D\).

Theorem 2.2: _Let \(0<\varepsilon\leq\frac{1}{2}\) be the error parameter, \(0<\delta<1\) be the closeness parameter, \(V\in\mathbb{R}^{N\times d}\) be the dataset, \(k\) be a positive integer, and let \(C\) be a constant approximate solution for dataset \(V\). Let \(\mathcal{L}\) be the list returned by Algorithm 2 on input \((V,k,\varepsilon,C)\) using the distance function \(\tilde{D}\) that is \(\delta\)-close to the Euclidean distance function \(D\).
Then with probability at least \(3/4\), \(\mathcal{L}\) contains a center set \(C^{\prime}\) such that \(\Phi(V,C^{\prime})\leq(1+\varepsilon)\cdot OPT\). Moreover, \(|\mathcal{L}|=\tilde{O}\left(2^{\tilde{O}(\frac{k}{\varepsilon(1-\delta)})}\right)\) and the running time of the algorithm is \(O(Nd|\mathcal{L}|)\)._

The proof of the above theorem closely follows the proof of Theorem 2.1 in [1], in the same way that the proof of Lemma 2 that we saw earlier closely followed the proof of Lemma 1; the minor changes relate to using the approximate distance estimates \(\tilde{D}\) instead of the exact distances \(D\). The statement of Theorem 2.2 is not surprising in this light. Instead of repeating the entire proof of [1], we point out the one change in their argument caused by using \(\tilde{D}\) instead of \(D\) as the distance function. The analysis of [1] works by partitioning the points in any optimal cluster \(X_{j}\) into those that are close to \(C\) and those that are far. For the far points, it is shown that when doing \(D^{2}\)-sampling, a far point will be sampled with probability at least \(\gamma\) times the uniform sampling probability (see Lemma 21 in [1], which is a full version of [1]). It then argues that a reasonable size set of \(D^{2}\)-sampled points will contain a uniform sub-sample. A combination of the uniform sub-sample along with copies of points in \(C\) gives a good center for this optimal cluster \(X_{j}\). Replacing \(D\) with \(\tilde{D}\) decreases the value of \(\gamma\) by a multiplicative factor of \(\frac{(1-\delta)^{2}}{(1+\delta)^{2}}\geq(1-\delta)^{4}\). This means that the number of points sampled should increase by a factor of \(O(\frac{1}{1-\delta})\), and hence the list size increases to \(\tilde{O}\left(2^{\tilde{O}(\frac{k}{\varepsilon(1-\delta)})}\right)\). Note that when \(\delta\leq\frac{1}{2}\), the list size and running time retain the same form as in [1] (i.e., \(|\mathcal{L}|=\tilde{O}\left(2^{\tilde{O}(\frac{k}{\varepsilon})}\right)\) and time \(O(Nd|\mathcal{L}|)\)).

## 3 Quantum Algorithms

We will work under the assumption that the minimum distance between two data points is \(1\), which can be achieved by scaling. This makes the aspect ratio \(\eta\) simply the maximum distance between two data points. We will use \(i\) for an index into the rows of the data matrix \(V\in\mathbb{R}^{N\times d}\), and \(j\) for an index into the rows of the center matrix \(C\in\mathbb{R}^{k\times d}\). We would ideally like to design a quantum algorithm that performs the transformation:

\[\ket{i}\ket{j}\ket{0}\rightarrow\ket{i}\ket{j}\ket{D(v_{i},c_{j})}\]

Let us call the state on the right \(\ket{\Psi_{ideal}}\). This is an ideal quantum state for us since \(\ket{\Psi_{ideal}}\) helps to perform \(D^{2}\)-sampling and to find the \(k\)-means cost of a clustering, which are the main components of the approximation scheme of [1] that we intend to use. One caveat is that we will only be able to perform the following transformation (instead of the above-mentioned one):

\[\ket{i}\ket{j}\ket{0}\rightarrow\ket{i}\ket{j}\ket{\psi_{i,j}},\]

where \(\ket{\psi_{i,j}}\) is an approximation for \(\ket{\tilde{D}(v_{i},c_{j})}\) in a sense that we will make precise below. We will use \(\ket{\Psi_{real}}\) to denote the state \(\ket{i}\ket{j}\ket{\psi_{i,j}}\). This state is prepared using tools such as the _swap test_ followed by _coherent amplitude estimation_ and _median estimation_.
Since these tools and techniques are known from previous works [13, 11, 12], we summarise the discussion (see Sections 4.1 and 4.2 in [11]) in the following lemma.

Lemma 3 ([11] and [13]): _Assume for a data matrix \(V\in\mathbb{R}^{N\times d}\) and a center set matrix \(C\in\mathbb{R}^{t\times d}\) that the following unitaries: (i) \(\ket{i}\ket{0}\rightarrow\ket{i}\ket{v_{i}}\), (ii) \(\ket{j}\ket{0}\rightarrow\ket{j}\ket{c_{j}}\) can be performed in time \(T\) and the norms of the vectors are known. For any \(\Delta>0\), there is a quantum algorithm that in time \(O\left(\frac{T\log\frac{1}{\Delta}}{\varepsilon}\right)\) computes:_

\[\ket{i}\ket{j}\ket{0}\rightarrow\ket{i}\ket{j}\ket{\psi_{i,j}},\]

_where \(\ket{\psi_{i,j}}\) satisfies the following two conditions for every \(i\in[N]\) and \(j\in[t]\):_

1. \(\left\lVert\ket{\psi_{i,j}}-\ket{0^{\otimes\ell}}\ket{\tilde{D}(v_{i},c_{j})}\right\rVert\leq\Delta\), and
2. _For every_ \(i,j\), \(\tilde{D}(v_{i},c_{j})\in(1\pm\varepsilon)\cdot D(v_{i},c_{j})\)_._

In the subsequent discussions, we will use \(T\) as the time to access the _QRAM data structure_ [15], i.e., for the transitions \(\ket{i}\ket{0}\rightarrow\ket{i}\ket{v_{i}}\) and \(\ket{j}\ket{0}\rightarrow\ket{j}\ket{c_{j}}\) as given in the above lemma. This is known to be \(T=O(\log^{2}\left(Nd\right))\). Moreover, the time to update each entry in this data structure is also \(O(\log^{2}\left(Nd\right))\). This is the logarithmic factor that is hidden in the \(\tilde{O}\) notation. In the following subsections, we discuss the utilities of \(\ket{\Psi_{real}}\) for the various components of the approximation scheme of [1]. During these discussions, it will be easier to see the utility first with the ideal state \(\ket{\Psi_{ideal}}\) before the real state \(\ket{\Psi_{real}}\) that can actually be prepared. We will see how \(\ket{\Psi_{real}}\) is sufficient within a reasonable error bound.

### Finding distance to closest center

Let us see how we can estimate the distance of any point to its closest center in a center set \(C\) with \(t\leq k\) centers. We can use the transformation \(\ket{i}\ket{j}\ket{0}\rightarrow\ket{i}\ket{j}\ket{D(v_{i},c_{j})}\) to prepare the following state for any \(i\):

\[\ket{i}\ket{D(v_{i},c_{1})}\ket{D(v_{i},c_{2})}...\ket{D(v_{i},c_{t})}\]

We can then iteratively compare and swap pairs of registers to prepare the state \(\ket{i}\ket{\min_{j\in[t]}D(v_{i},c_{j})}\). If we apply the same procedure to \(\ket{i}\ket{\psi_{i,1}}...\ket{\psi_{i,t}}\), then with probability at least \((1-2\Delta)^{t}\), the resulting state will be \(\ket{i}\ket{\min_{j\in[t]}\tilde{D}(v_{i},c_{j})}\). So, the contents of the second register will be an estimate of the distance of the \(i^{th}\) point to its closest center in the center set \(C\). This further means that the following state can be prepared with probability at least \((1-2\Delta)^{Nt}\):7

Footnote 7: The state prepared is actually \(\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\ket{i}\left(\alpha\left|\min_{j\in[t]}\tilde{D}(v_{i},c_{j})\right\rangle+\beta\left|G\right\rangle\right)\) with \(\left|\alpha\right|^{2}\geq(1-2\Delta)^{Nt}\). However, instead of working with this state, subsequent discussions become much simpler if we assume that \(\ket{\Psi_{C}}\) is prepared with probability \(\left|\alpha\right|^{2}\).
\[\ket{\Psi_{C}}\equiv\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\ket{i}\left|\min_{j\in[t]}\tilde{D}(v_{i},c_{j})\right\rangle.\]

This quantum state can be used to find the approximate clustering cost of the center set \(C\), which we discuss in the following subsection. However, before we do that, let us summarise the main ideas of this subsection in the following lemma.

Lemma 4: _There is a quantum algorithm that, with probability at least \((1-2\Delta)^{Nt}\), prepares the quantum state \(|\Psi_{C}\rangle\) in time \(O\left(\frac{Tk\log\frac{1}{\Delta}}{\varepsilon}\right)\)._

### Computing cost of clustering

Suppose we want to compute the \(k\)-means cost, \(\Phi(V,C)\equiv\sum_{i=1}^{N}\min_{j\in[k]}D^{2}(v_{i},c_{j})\), of the clustering given by a \(k\) center set \(C\in\mathbb{R}^{k\times d}\). We can prepare \(m\) copies of the state \(|\Psi_{C}\rangle\) and then estimate the cost of clustering by measuring the \(m\) copies of this quantum state and summing the squares of the second registers. If \(m\) is sufficiently large, we obtain a close estimate of \(\Phi(V,C)\). To show this formally, we will use the following Hoeffding tail inequality.

Theorem 3.1 (Hoeffding bound): _Let \(X_{1},...,X_{m}\) be independent, bounded random variables such that \(X_{i}\in[a,b]\). Let \(S_{m}=X_{1}+...+X_{m}\). Then for any \(\theta>0\), we have:_

\[\mathbf{Pr}[|S_{m}-\mathbf{E}[S_{m}]|\geq\theta]\leq 2\cdot e^{\frac{-2\theta^{2}}{m(b-a)^{2}}}.\]

Let \(X_{1},...,X_{m}\) denote the squares of the measured values of the second register in \(|\Psi_{C}\rangle\). These are random values in the range \([1,\eta^{2}]\), where \(\eta=\max_{i,j}\tilde{D}(v_{i},v_{j})\in(1\pm\varepsilon)\cdot\max_{i,j}D(v_{i},v_{j})\). First, we note that the expectation of each of these random variables equals \(\frac{\tilde{\Phi}(V,C)}{N}\), where \(\tilde{\Phi}(V,C)\equiv\sum_{i=1}^{N}\min_{j\in[k]}\tilde{D}^{2}(v_{i},c_{j})\in(1\pm\varepsilon)^{2}\cdot\Phi(V,C)\). We define the variable \(S_{m}=X_{1}+X_{2}+...+X_{m}\) and apply the Hoeffding bound on these bounded random variables to get a concentration result that can then be used.

Lemma 5: _Let \(\alpha_{m}=S_{m}\cdot\frac{N}{m}\) and \(L>0\). If \(m=\frac{\eta^{2}\ln\left(10L\right)}{\varepsilon^{2}}\), then we have:_

\[\mathbf{Pr}[\alpha_{m}\in(1\pm\varepsilon)\cdot\tilde{\Phi}(V,C)]\geq 1-\frac{1}{5L}.\]

Proof: We know that \(\mathbf{E}[S_{m}]=\frac{m}{N}\cdot\tilde{\Phi}(V,C)\). From the Hoeffding tail inequality, we get the following:

\[\mathbf{Pr}[|S_{m}-\mathbf{E}[S_{m}]|\geq\varepsilon\cdot\mathbf{E}[S_{m}]]\leq 2\cdot e^{\frac{-2\varepsilon^{2}\mathbf{E}[S_{m}]^{2}}{m\eta^{2}}}=2\cdot e^{\frac{-2\varepsilon^{2}m}{\eta^{2}}\cdot\left(\frac{\tilde{\Phi}(V,C)}{N}\right)^{2}}\leq 2\cdot e^{-\ln\left(10L\right)}\leq\frac{1}{5L}.\]

This implies that:

\[\mathbf{Pr}[|\alpha_{m}-\tilde{\Phi}(V,C)|\geq\varepsilon\cdot\tilde{\Phi}(V,C)]\leq\frac{1}{5L}.\]

This completes the proof of the lemma.

So, conditioned on having \(m\) copies of the state \(|\Psi_{C}\rangle\), we get an estimate of the clustering cost \(\tilde{\Phi}(V,C)\) within a relative error of \((1\pm\varepsilon)\) with probability at least \((1-\frac{1}{5L})\). Removing the conditioning, we get the same with probability at least \((1-2\Delta)^{Nkm}\cdot(1-\frac{1}{5L})\). We want to use the above cost estimation technique to calculate the cost for a _list_ of center sets \(C_{1},...,C_{L}\), and then pick the center set from the list with the least cost. We must apply the union bound appropriately to do this with high probability.
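To make the estimator concrete, here is a classical stand-in for this measurement process (our own sketch, with an explicit \((1\pm\varepsilon)\) noise model playing the role of \(\tilde{D}\)); measuring one copy of \(|\Psi_{C}\rangle\) corresponds to drawing a uniformly random index together with its noisy nearest-center distance.

```python
import numpy as np

def estimate_cost(V, C, m, eps, rng=None):
    """Classical stand-in for measuring m copies of |Psi_C>: each measurement
    yields a uniform index i and an estimate of min_j D(v_i, c_j) with relative
    error eps. Per Lemma 5, alpha_m = (N/m) * sum of the squared measured
    distances concentrates around Phi~(V, C)."""
    rng = np.random.default_rng(rng)
    N = V.shape[0]
    idx = rng.integers(N, size=m)
    d = np.sqrt(((V[idx, None, :] - C[None, :, :]) ** 2).sum(-1).min(1))
    d_tilde = d * rng.uniform(1 - eps, 1 + eps, size=m)  # D~ in (1 +/- eps) D
    return (N / m) * (d_tilde ** 2).sum()
```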
We summarise these results in the following lemma. Let us first set some of the parameters with values that we will use to implement the approximation scheme of [1].

* \(L\) denotes the size of the list of \(k\)-center sets we will iterate over to find the one with the least cost. This quantity is bounded as \(L=\left(\frac{k}{\varepsilon}\right)^{O(\frac{k}{\varepsilon})}\).
* \(m\) is the number of copies of the state \(|\Psi_{C}\rangle\) made to estimate the cost of the center set \(C\). As given in Lemma 5, this is \(m=\frac{\eta^{2}\ln\left(10L\right)}{\varepsilon^{2}}\), where \(\eta=(1+\varepsilon)\cdot\max_{i,j}D(v_{i},v_{j})\).

**Lemma 6**.: _Let \(L=\left(\frac{k}{\varepsilon}\right)^{O(\frac{k}{\varepsilon})}\), \(m=\frac{\eta^{2}\ln{(10L)}}{\varepsilon^{2}}\), and \(\Delta=O\left(\frac{1}{NkmL}\right)\). Given a point set \(V\) and a list of center sets \(C_{1},...,C_{L}\) in the QRAM model, there is a quantum algorithm that runs in time \(\tilde{O}\left(2^{\tilde{O}(\frac{k}{\varepsilon})}T\eta^{2}\right)\) and outputs an index \(l\) such that \(\Phi(V,C_{l})\leq(1+\varepsilon)^{2}\min_{j\in[L]}\Phi(V,C_{j})\) with probability at least \(\frac{3}{5}\)._

Proof.: The algorithm estimates the cost of \(C_{1},...,C_{L}\) using \(m\) copies each of \(|\Psi_{C_{1}}\rangle,...,|\Psi_{C_{L}}\rangle\) and picks the index with the minimum value in time \(O\left(\frac{TkmL\log\frac{1}{\Delta}}{\varepsilon}\right)\). Plugging in the values of \(L\), \(m\), and \(\Delta\), we get the running time stated in the lemma. Let us bound the error probability of this procedure. By Lemma 4, the probability that we have the correct \(m\) copies each of \(|\Psi_{C_{1}}\rangle,...,|\Psi_{C_{L}}\rangle\) is at least \((1-2\Delta)^{NkmL}\). Conditioned on having \(|\Psi_{C_{1}}\rangle,...,|\Psi_{C_{L}}\rangle\), the probability that there exists an index \(j\in[L]\) where the estimate is off by more than a \((1\pm\varepsilon)^{2}\) factor is upper bounded by \(\frac{1}{5}\) by the union bound. So, the probability that the algorithm will find an index \(l\) such that \(\Phi(V,C_{l})>(1+\varepsilon)^{2}\min_{j\in[L]}\Phi(V,C_{j})\) is upper bounded by \(\left(1-(1-2\Delta)^{NkmL}\right)+\frac{1}{5}\). This probability is at most \(\frac{2}{5}\) since \(\Delta=O(\frac{1}{NkmL})\). This completes the proof of the lemma.

### \(D^{2}\)-sampling

\(D^{2}\)-sampling from the point set \(V\) with respect to a center set \(C\in\mathbb{R}^{t\times d}\) with \(t\) centers samples \(v_{i}\) with probability proportional to \(\min_{j\in[t]}D^{2}(v_{i},c_{j})\). Let us see how our state \(\ket{\Psi_{C}}=\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\ket{i}\left|\min_{j\in[t]}\tilde{D}(v_{i},c_{j})\right\rangle\) can be used to perform this sampling. If we can pull out the value of the second register as the amplitude, then measurement will give us something close to \(D^{2}\)-sampling. This is possible since we have an estimate of the clustering cost from the previous subsection. We can use controlled rotations on an ancilla qubit to prepare the state:

\[\ket{\Psi_{sample}}\equiv\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\ket{i}\left(\beta_{i}\ket{0}+\frac{1}{\sqrt{2}}\ket{1}\right),\]

where \(\beta_{i}=\frac{\min_{j\in[t]}\tilde{D}(v_{i},c_{j})}{\sqrt{2\cdot\tilde{\Phi}(V,C)}}\). So, the probability of measuring \((i,0)\) is proportional to \(\frac{\min_{j\in[t]}\tilde{D}^{2}(v_{i},c_{j})}{2\cdot\tilde{\Phi}(V,C)}\).
Since we do rejection sampling (ignoring the \((\cdot,1)\) outcomes), we end up sampling with a distribution where the probability of sampling \(i\) is \(\frac{\min_{j\in[t]}\tilde{D}^{2}(v_{i},c_{j})}{\tilde{\Phi}(V,C)}\in(1\pm\varepsilon)\cdot\frac{\min_{j\in[t]}D^{2}(v_{i},c_{j})}{\Phi(V,C)}\). This means that points get sampled with a probability close to the actual \(D^{2}\)-sampling probability. As we have mentioned earlier, this is sufficient for the approximation guarantees of [1] to hold. We summarise the observations of this section in the next lemma. We will need the following notion of the relative similarity of two distributions.

**Definition 2**.: _Let \(0<\varepsilon<1\). For two distributions \(D_{1}\) and \(D_{2}\) over a finite set \(X\), we say that \(D_{1}\sim_{\varepsilon}D_{2}\) if for every \(x\in X\), \(D_{1}(x)\in(1\pm\varepsilon)\cdot D_{2}(x)\)._

**Lemma 7**.: _Given a dataset \(V\in\mathbb{R}^{N\times d}\) and a center set \(C\in\mathbb{R}^{t\times d}\) in the QRAM model, there is a quantum algorithm that runs in time \(O\left(\frac{TkS\log\frac{1}{\Delta}}{\varepsilon}\right)\) and with probability at least \((1-2\Delta)^{NtS}\) outputs \(S\) independent samples with distribution \(Z\) such that \(Z\sim_{\varepsilon}D^{2}\), where \(D^{2}\) denotes the \(D^{2}\)-sampling distribution._

Proof.: The proof follows from Lemma 4 and the preceding discussion.

The above lemma says that for \(\Delta=O(\frac{1}{NkS})\), we obtain the required samples with high probability. We can now give the proof of Theorem 1, assembling the quantum tools of this section.

Proof (Proof of Theorem 1).: The first requirement for executing the algorithm of [1] is a constant pseudo-approximation algorithm using which we obtain the initial center set \(C\). By Lemma 2, we know that \(2k\) points sampled using \(\tilde{D}^{2}\)-sampling give such a center set. From Lemma 7, this can be done quantumly in time \(\tilde{O}(\frac{k^{2}d}{\varepsilon})\), which also includes the time \(O(kd\log^{2}{(kd)})\) to set up the QRAM data structure for all \(k\) iterations. The algorithm of [1] has an outer repeat loop for probability amplification. Within the outer loop, \(poly(\frac{k}{\varepsilon})\) points are \(D^{2}\)-sampled with respect to the center set \(C\) (line 6). This can again be done quantumly using Lemma 7 in time \(\tilde{O}(d(k/\varepsilon)^{O(1)})\). We can then classically process the point set \(M\) (see line 7 in Algorithm 2) and create the QRAM data structure for the list \(C_{1},...,C_{L}\) of \(k\)-center sets that correspond to all possible disjoint subsets of \(M\) (see line 8 in Algorithm 2). This takes time \(\tilde{O}(Lkd)\), where \(L=\left(\frac{k}{\varepsilon}\right)^{O(\frac{k}{\varepsilon})}\). Theorem 2.2 shows that at least one center set in the list gives a \((1+\varepsilon)\)-approximation. We use this fact in conjunction with Lemma 6 to get that the underlying quantum algorithm runs in time \(\tilde{O}(2^{\tilde{O}(\frac{k}{\varepsilon})}d\eta^{2})\) and with high probability outputs a center set \(C^{\prime}\) such that \(\Phi(V,C^{\prime})\leq(1+\varepsilon)^{3}\cdot OPT\).8

Footnote 8: We needed \((1+\varepsilon)\) but got \((1+\varepsilon)^{3}\) instead. However, this can be handled by running the algorithm with \(\varepsilon^{\prime}=\varepsilon/4\).
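Putting the components together, the following is a compact classical rendering of one round of the whole scheme (our own brute-force sketch with toy constants \(\rho\) and \(\tau\); the quantum algorithm replaces the sampling below with measurements of \(\ket{\Psi_{sample}}\) and the exact costs with the \((1\pm\varepsilon)\)-estimates of Lemma 6).

```python
import itertools
import numpy as np

def kmeans_cost(V, centers):
    """Phi(V, C): total squared distance of each point to its nearest center."""
    return ((V[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1).sum()

def one_round(V, k, C0, rho=4, tau=2, rng=None):
    """One round of Algorithm 2: D^2-sample rho*k points w.r.t. the seed set C0,
    add tau*k copies of each seed center, enumerate k disjoint tau-subsets of
    the resulting multi-set, and return the centroid tuple of least cost."""
    rng = np.random.default_rng(rng)
    d2 = ((V[:, None, :] - C0[None, :, :]) ** 2).sum(-1).min(1)
    M = np.vstack([V[rng.choice(len(V), size=rho * k, p=d2 / d2.sum())],
                   np.repeat(C0, tau * k, axis=0)])
    best, best_cost = None, np.inf
    for picks in itertools.permutations(range(len(M)), tau * k):
        centers = M[np.reshape(picks, (k, tau))].mean(axis=1)  # k centroids
        cost = kmeans_cost(V, centers)
        if cost < best_cost:
            best, best_cost = centers, cost
    return best, best_cost
```

The brute-force loop makes the \(\left(\frac{k}{\varepsilon}\right)^{O(\frac{k}{\varepsilon})}\) list size tangible: the quantum speedup is confined to the \(N\)-dependent parts (sampling and cost estimation), not to this enumeration.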
## 4 Discussion and Open Problems We give a quantum algorithm for the \(k\)-means problem with a provable approximation guarantee of \((1+\varepsilon)\) for arbitrary \(\varepsilon\) with a polylogarithmic running time dependence on the data size \(N\) and an exponential dependence on \(\frac{k}{\varepsilon}\). In the classical setting, there are FPT (fixed-parameter tractable) algorithms that have polynomial running time dependence on the input size \(N\) but are allowed to have exponential dependence on the _parameters_ (e.g. \(k\) in the \(k\)-means problem, which is typically a small number). In this paper, we witnessed a case where we were able to take such a classical FPT algorithm into the quantum setting and lower the dependency on \(N\) from linear in the classical setting [1] to polylogarithmic (this paper) while keeping the dependence on the parameters \((k,d,\varepsilon)\) intact. The aspect ratio \(\eta\) can be considered an additional parameter. It would be interesting to see if there are other problems where such quantization is possible. If so, discussing Quantum FPT (QFPT) algorithms with polylogarithmic dependence on the input size and possibly exponential dependence on the parameters would make sense. Another future direction is to check whether the _sample and query access_ defined by [19] is sufficient to obtain comparable results in the classical setting.
2308.10728
Synergies between interstellar dust and heliospheric science with an Interstellar Probe
We discuss the synergies between heliospheric and dust science, the open science questions, the technological endeavors and programmatic aspects that are important to maintain or develop in the decade to come. In particular, we illustrate how we can use interstellar dust in the solar system as a tracer for the (dynamic) heliosphere properties, and emphasize the fairly unexplored, but potentially important science question of the role of cosmic dust in heliospheric and astrospheric physics. We show that an Interstellar Probe mission with a dedicated dust suite would bring unprecedented advances to interstellar dust research, and can also contribute-through measuring dust - to heliospheric science. This can, in particular, be done well if we work in synergy with other missions inside the solar system, thereby using multiple vantage points in space to measure the dust as it `rolls' into the heliosphere. Such synergies between missions inside the solar system and far out are crucial for disentangling the spatially and temporally varying dust flow. Finally, we highlight the relevant instrumentation and its suitability for contributing to finding answers to the research questions.
Veerle J. Sterken, Silvan Hunziker, Kostas Dialynas, Jan Leitner, Maximilian Sommer, Ralf Srama, Lennart R. Baalmann, Aigen Li, Konstantin Herbst, André Galli, Pontus Brandt, My Riebe, Jack Baggaley, Michel Blanc, Andrej Czechowski, Frederic Effenberger, Brian Fields, Priscilla Frisch, Mihaly Horanyi, Hsiang-Wen Hsu, Nozair Khawaja, Harald Krüger, Bill S. Kurth, Niels F. W. Ligterink, Jeffrey L. Linsky, Casey Lisse, David Malaspina, Jesse A. Miller, Merav Opher, Andrew R. Poppe, Frank Postberg, Elena Provornikova, Seth Redfield, John Richardson, Michael Rowan-Robinson, Klaus Scherer, Mitchell M. Shen, Jon D. Slavin, Zoltan Sternovsky, Gunter Stober, Peter Strub, Jamey Szalay, Mario Trieloff
2023-08-21T13:53:41Z
http://arxiv.org/abs/2308.10728v1
# Synergies between interstellar dust and heliospheric science with an Interstellar Probe

###### Abstract

We discuss the synergies between heliospheric and dust science, the open science questions, the technological endeavors and programmatic aspects that are important to maintain or develop in the decade to come. In particular, we illustrate how we can use interstellar dust in the solar system as a tracer for the (dynamic) heliosphere properties, and emphasize the fairly unexplored, but potentially important science question of the role of cosmic dust in heliospheric and astrospheric physics. We show that an Interstellar Probe mission with a dedicated dust suite would bring unprecedented advances to interstellar dust research, and can also contribute -- through measuring dust -- to heliospheric science. This can, in particular, be done well if we work in synergy with other missions inside the solar system, thereby using multiple vantage points in space to measure the dust as it 'rolls' into the heliosphere. Such synergies between missions inside the solar system and far out are crucial for disentangling the spatially and temporally varying dust flow. Finally, we highlight the relevant instrumentation and its suitability for contributing to finding answers to the research questions.

keywords: cosmic dust - heliosphere - synergies - interstellar - instrumentation - space missions

Footnote †: This article has been accepted for publication in RASTI, published by Oxford University Press on behalf of the Royal Astronomical Society.

## 1 Introduction and background information

This paper discusses the synergies1 between heliospheric and dust science that can be harnessed with an interstellar probe, the open science questions, and pathways forward in the future, including the relevant instrumentation. We refer to Sterken et al. (2019) for a review of the current state of the art of interstellar dust research in the solar system (dynamics and composition, measurements and models). This paper was originally submitted to the Decadal Survey for Solar and Space Physics (Heliophysics) 2024-2033 on 7 Sept. 2022 and will be published in the Bulletin of the AAS (BAAS): Sterken et al. (2022) - doi pending. It is modified in this version and includes a discussion of the instrumentation necessary to answer the science questions. Two accompanying white papers were submitted for the decadal survey: Hsu et al. (2022), "Science opportunities enabled by in situ cosmic dust detection technology for heliophysics and beyond", and Poppe et al. (2022), "The interactions of Interstellar Dust with our Heliosphere". A third accompanying refereed paper (Hunziker et al. 2023, in prep.) will provide dust flux predictions in order to illustrate how dust measurements on the way out through the heliosphere may provide new constraints (i.e., the boundary conditions) for heliosphere models, in addition to the already existing magnetic field, plasma, Galactic Cosmic Ray (GCR) and other data from the Voyagers and other spacecraft.

Footnote 1: A synergy: the interaction or cooperation of two or more organizations, substances, or other agents to produce a combined effect greater than the sum of their separate effects.
[Oxford Languages dictionary]

### The solar system in the Local Interstellar Cloud

The Sun and planets move through the outer edges of the local interstellar cloud (LIC) and into the neighbouring G-cloud or a mixed region of the two clouds (Swaczyna et al., 2022) after a journey of nearly 60,000 years in the LIC (Linsky et al., 2022). The interstellar dust (ISD) in this diffuse cloud may have its origins in supernovae and atmospheres of cool stars, or may have recondensed in the interstellar medium after being shattered by supernova shocks. These particles cross the solar system due to its relative velocity with respect to the LIC (of about 26 km s\({}^{-1}\)). They can be measured in situ by dust detectors on spacecraft, and hereby provide unique ground truth information about their make-up and dynamics. This ground truth information is complementary to measurements of the dust by more classical astronomical methods like observations of extinction, scattering, and polarisation of starlight as well as dust thermal emission, and by observing the gas in comparison to a reference (the so-called "cosmic abundances", usually the solar composition), where the "missing component" in the gas phase hints at what must be locked up in the dust (Mathis et al., 1977; Draine, 2003; Draine & Li, 2007; Draine, 2009; Wang et al., 2015). Directly measuring these particles is of utmost importance for astrophysics and is also part of humanity's exploration of our local interstellar neighbourhood.

### Dynamics of interstellar dust in the heliosphere

The interstellar dust (ISD) size distribution extends from nanometers to several micrometers and decreases with increasing particle size (Figure 1). However, its mass distribution increases with particle size, as illustrated in Fig. 2 (see also, for example, Kruger et al. 2015, Fig. 6), and thus the largest ISD particles are the most important for determining the gas-to-dust mass ratio (R\({}_{\rm g/d}\)) in the LIC. The dynamics of the dust in the heliosphere depends on the particle size, optical properties, and on the space environment. This dependence on the space environment turns ISD into a very interesting tracer for the dynamic heliosphere.

**Micron-sized ISD** particles passing through the solar system are gravitationally dominated, may be uncoupled from the LIC, and could, in theory, come from any other direction than the heliosphere nose (note that interstellar meteoroids are still a controversial topic in the field, e.g., Hajdukova et al. (2020, 2023); Brown & Borovicka (2023)).

Figure 1: The interstellar dust size distribution from astronomical models and in situ data by Ulysses (from Sterken et al. 2022). The smallest interstellar dust particles are the most numerous. Distributions derived from astronomical observations were taken from Wang et al. (2015); Grün & Landgraf (2000); Weingartner & Draine (2001). Ulysses data are from Krüger et al. (2015).

Figure 2: The interstellar dust mass distribution from astronomical models and in situ data by Ulysses. Most of the mass is in the largest particles, which is important for the determination of the gas-to-dust mass ratio. Units on the vertical axis are chosen to be equal to the units in Wang et al. (2015). Distributions derived from astronomical observations were taken from Wang et al. (2015); Grün & Landgraf (2000); Weingartner & Draine (2001). Ulysses data are from Krüger et al. (2015).

**Mid-sized ISD** particles (ca.
0.1 - 0.6 \(\mu\)m radius) can reach the solar system depending on their size, optical properties, composition, and the phase of the solar cycle. Their dynamics in the solar system are governed by solar gravitation, by the solar radiation pressure force, and by Lorentz forces on the (charged) ISD passing through the magnetic fields of the solar wind plasma, which change with the 22-year solar cycle, leading to an alternating focussing and defocussing of the dust towards the solar equatorial plane during the solar minima. However, there is an additional (most likely time-dependent) mechanism of filtering in the heliosheath (Linde & Gombosi, 2000; Slavin et al., 2012; Sterken et al., 2015).

**Small ISD** particles (30-100 nm) are dominated by the Lorentz force and may partially reach the solar system during the solar focussing phase (e.g., 2029-2036), if the heliospheric boundary regions do not filter the particles out already upfront. The higher the charge-to-mass ratio of the dust, the more the particles move on complicated patterns (e.g., Figure 3, from Sterken et al. (2012)), which may cause 'waves' of higher dust densities to 'roll' into the heliosphere for specific particle sizes (Figure 4, from Hunziker et al. (2023)). The exact lower cut-off size and the time-dependence of the particles that can enter the solar system are not yet known, but Ulysses and Cassini have already measured ISD particles with radii between 50 and 100 nm (Altobelli et al., 2016; Kruger et al., 2015).

**Nanodust** (2-30 nm) cannot enter the heliosphere because it is coupled to the magnetic field lines of the very local interstellar medium (VLISM), and is diverted around the heliopause boundary (Linde & Gombosi, 2000; Slavin et al., 2012). These particles may also pile up at the heliopause (Slavin et al., 2012; Frisch et al., 2022). Polycyclic aromatic hydrocarbon (PAH) molecules are the smallest carbon nanodust particles in the interstellar medium. They are abundant and widespread in a wide variety of astrophysical regions (Li, 2020). Their presence (or absence) in the local interstellar cloud would provide useful insight into the nature and origin of interstellar PAHs. They are not expected to enter the solar system, since they would be deflected around the heliosphere. However, if PAHs of interstellar origin _are_ detected in the solar system or beyond, their origin (possibly through fragmentation of small carbon dust) would offer valuable insight into the composition and structure of interstellar carbon dust.

### Filtering of dust in the outer and inner boundary regions of the heliosphere

The filtering of interstellar dust in the heliosphere likely happens mainly at the heliosphere boundary regions (i.e., at the heliopause and in the heliosheath) and in the region closer to the Sun, because (1) the dust acquires the highest charges in the heliosheath (Kimura & Mann, 1998; Slavin et al., 2012), and (2) the azimuthal component of the interplanetary magnetic field causing the focusing and defocusing effects is the largest closer to the Sun. Flying a spacecraft through all these regions to measure all parameters simultaneously (magnetic field, plasma densities, dust charging, pickup ions, dust flux, velocity, and direction) would be of utmost value for understanding the mechanisms of the dynamics of the dust, the dust-plasma interaction, and the role of dust for heliosphere physics, in particular because this has never been done before.
We therefore need in-situ measurements of dust and plasma parameters in interplanetary space, at the termination shock, in the heliosheath, at the heliopause and - especially - beyond the heliopause, over the solar cycle, from a future Interstellar Probe. In situ ISD measurements with Ulysses up to 5.4 AU indeed contained a signature of the dynamic heliosphere (Landgraf et al., 2003; Strub et al., 2015; Sterken et al., 2015). These particles were measured for the first time in 1993, using an impact ionization dust detector (Grun et al., 1993). Ulysses monitored the ISD throughout the solar cycle for 16 years, giving us an impression of the fluxes and roughly the flow directionality of the dust. The dust flow direction changed, in particular in the latitudinal direction, around 2006 (Kruger et al., 2007), which may have been caused by the Lorentz force (Sterken et al., 2015). Simulations of ISD dynamics in the solar system (without heliospheric boundaries) would be piecewise compatible with the data if the larger dust particles are porous, or aggregates (hence, if they have a higher charge-to-mass ratio, Sterken et al. (2015)). A second, time-dependent mechanism of filtering in the heliosheath was suggested to be needed in order to explain the Ulysses data (Sterken et al., 2015). Time-dependent models of the heliosphere-dust interaction including a heliosheath are currently under development.

Figure 3: A complicated pattern of ISD trajectories for charge-to-mass ratio 12 C/kg, assuming they made it through the heliosheath (Sterken et al., 2012).

Figure 4: Higher dust densities ‘rolling’ into the heliosphere for Q/m = 11.1 C/kg (Hunziker et al., 2023).

### Compositional measurements of interstellar and interplanetary dust grains

#### 1.4.1 Circumstellar grains versus ISM dust versus solar system dust

Both model predictions (Zhukovska et al., 2016) and analysis of presolar dust grains from primitive meteorites (Hoppe et al., 2017) indicate that only about \(\sim\)3 % of the interstellar medium (ISM) and parent interstellar cloud dust of our Sun is circumstellar dust. Measurements of contemporary interstellar dust offer the opportunity to compare the inferred ratios at the time of formation of the Solar System with present-day data. However, this would require a sufficiently large number of investigated particles. Assuming a similar circumstellar dust/ISM dust ratio as obtained from modelling and at the time of Solar System formation, we would expect one circumstellar grain among \(\sim\)33 ISM grains, on average. From an astrophysical point of view, oxygen isotopes are best suited to identify circumstellar oxygen-rich grains (silicates and refractory oxides), while carbon isotopes are diagnostic for most C-rich species (Zinner, 2014). The oxygen and carbon isotopic compositions of the local ISM are not well constrained; there is information on the \({}^{18}\)O/\({}^{17}\)O ratio (e.g., Wouterloot et al., 2008), which differs from the Solar System value. Analyzing the oxygen and carbon isotopic compositions of a large number of interstellar dust (ISD) particles would help close this knowledge gap. Laboratory isotope measurements of extraterrestrial samples with high-spatial-resolution Secondary Ion Mass Spectrometry (SIMS), like NanoSIMS (Hoppe et al., 2013) or TOF-SIMS (Stephan, 2001), have to deal with a multitude of mass interferences, which often require high mass resolutions m/\(\Delta\)m of several thousand to be resolved.
Spacecraft-mounted impact-ionization TOF-MS instruments, on the other hand, achieve mass resolutions of \(\leq\)200, which is sufficient to resolve the different isotopes of C, O, Mg, Al, Si, and Ca, but not sufficient to resolve these isotopes from interfering compound ions. Nevertheless, due to the different ionization process, many compound ions responsible for hydride or oxide mass interferences (e.g., \({}^{24}\)Mg\({}^{1}\)H\({}^{+}\), \({}^{28}\)Si\({}^{16}\)O\({}^{+}\)) do not occur in relevant numbers, allowing the measurement of diagnostic isotopic ratios when the detection sensitivity of the element in question is sufficient. Limiting factors would be the concentration of the respective element in a given dust grain, and the impact velocity/energy, governing the ionization yield. The required impact velocities for different species are listed in Table 2. Besides oxygen and carbon (which are detected as O\({}^{-}\) and C\({}^{-}\)), further elements of interest would be Mg, Al, Si, and perhaps Ca, all forming positive ions, and present in the vast majority of O-rich circumstellar grains (silicates and refractory oxides). Mg and Si in circumstellar grains can also display isotopic anomalies, although not as pronounced as O and C. However, the Mg-, Si-, and Ca-isotopic compositions of interstellar dust are unknown, making such measurements even more valuable. Another electronegative element of interest would be sulfur, which is present in C-rich presolar grains (Hoppe et al., 2012). Sulfides have been identified around evolved stars (Hony et al., 2002). No presolar/circumstellar sulfides have been unambiguously identified so far, except for one signature in an impact crater on Stardust Al foils (Heck et al., 2012). None of the Cassini in situ measurements of 36 ISD particles (Altobelli et al., 2016) show evidence for sulfides, despite evidence that S is depleted from the gas phase in the ISM, where an abundance of \(\sim\)100 ppm has been inferred for primitive Solar System materials (Keller and Rahman, 2011). Thus, we would expect a certain amount of circumstellar sulfides in the interstellar dust population, if sulfides are able to escape destruction in the ISM - an important question that could be addressed by in situ dust measurements. The mass of S coincides with that of O\({}_{2}\), and sulfur measurements therefore require a higher mass resolution than 200.

#### 1.4.2 Galactic Chemical Evolution

Elemental - and especially isotopic - ratios of contemporary circumstellar and interstellar dust would greatly complement and enhance our knowledge of Galactic Chemical Evolution (GCE), i.e., the enrichment of the ISM and stars with heavier elements and heavier isotopic compositions over time. For certain elements, like Si and Mg, GCE-related correlations have been observed in presolar dust grains (e.g., Zinner et al., 2006; Hoppe et al., 2021), and general predictions have been made for these and other elements from model calculations (e.g., Timmes et al., 1995; Kobayashi et al., 2020). However, models and measured grain data do not always show good correlations; thus, information on isotopic ratios like \({}^{25}\)Mg/\({}^{24}\)Mg, \({}^{26}\)Mg/\({}^{24}\)Mg, \({}^{29}\)Si/\({}^{28}\)Si and \({}^{44}\)Ca/\({}^{40}\)Ca would establish another baseline and allow the study of potential heterogeneities of these isotopic systems in the local ISM, which is typically not covered by the models.
Similarly, \({}^{13}\)C/\({}^{12}\)C, \({}^{17}\)O/\({}^{16}\)O, \({}^{18}\)O/\({}^{16}\)O, \({}^{33}\)S/\({}^{32}\)S and \({}^{34}\)S/\({}^{32}\)S would yield important information, if electronegative elements can be measured by the respective instruments and if the ratios can be determined with sufficient precision.

#### 1.4.3 Interplanetary dust composition and diversity

Interplanetary Dust Particles (IDPs) collected in the stratosphere and subsequently analyzed at laboratories on Earth sample a mix of particles from dust-producing bodies. Dust from Jupiter Family Comets likely dominates (Nesvorny et al., 2010), while dust from the asteroid belt (Rietmeijer, 1996) and even the Kuiper belt has also been observed (Keller and Flynn, 2022). However, stratospheric IDPs are not an unbiased sample of the dust population, as their survival during atmospheric entry depends on entry speed and angle (Love and Brownlee, 1991). Measurements of the elemental compositions of IDPs in space would not suffer from this bias and therefore give an average composition of the zodiacal cloud. Further, compositional mapping over different orbital distances could potentially detect differences between dust from Jupiter Family Comets and from Kuiper Belt Objects. The major element composition (e.g., Mg/Si, Ca/Si, Fe/Si) of IDP particles is variable, and it is unclear if the heterogeneity occurs within or between parent bodies, since the origin of individual IDPs collected in the stratosphere is unknown (Bradley, 2014). Compositional mapping of IDPs in the solar system and targeted analyses of individual dust-producing bodies would answer that question. Interstellar Probe may cross cometary dust streams on its path towards interstellar space. Modelling of these streams (Soja et al., 2014) and suitable instrumentation to determine dust compositions and to constrain the dust dynamical properties may provide a statistically relevant dataset for a number of comets and for the sporadic dust background, with increasing distance to the Sun. In particular, supporting missions in the solar system may - with the current and near-future generation of instrumentation - be able to analyze IDPs from different well-known sources (e.g., Kruger et al., 2020; Sterken, 2023a,b). The Poppe (2016) model predicts that Jupiter-family comet grains dominate the interplanetary dust grain mass flux inside approximately 10 AU, Oort-Cloud cometary grains may dominate between 10 and 25 AU, and Edgeworth-Kuiper Belt grains are dominant outside 25 AU. Mapping the composition of dust over those regions with an interstellar probe, and if possible measuring oxygen isotopes (and other isotopic data), would be very valuable for gaining knowledge on the history of the solar system.

### Inner and outer source of Pickup Ions

Pickup ions (PUIs) were originally assumed to be formerly neutral (mainly H and He) particles of interstellar origin that were ionized in the heliosphere and picked up by the solar wind, where they accelerate to higher energies, presenting a cut-off at roughly twice the solar wind bulk speed. These interstellar PUIs enter the heliosphere in the same manner as ISD does (see Section 1.1 and Kallenbach et al., 2000, and references therein). For a review of in-situ detections of PUIs, see Zirnstein et al. (2022). Measurements from the Solar Wind Ion Composition Spectrometer (SWICS) on Ulysses (Geiss et al., 1995) discovered other species of PUIs (in particular C\({}^{+}\), O\({}^{+}\), N\({}^{+}\), etc.)
from an "inner source" near the Sun that had previously been hypothesized by Banks (1971). Two competing mechanisms of generation of these inner source PUIs were proposed: (1) solar wind particles are first embedded in and later released from dust grains close to the Sun (Gloeckler et al., 2000), and (2) energetic neutral atoms (ENAs) are created in the heliosheath and propagate close to the Sun, where they are ionised and picked up by the solar wind (Gruntman and Izmodenov, 2004). Schwadron and Gloeckler (2007) showed that the second mechanism is dominant for inner source PUIs. Szalay et al. (2021) confirmed from measurements by Parker Solar Probe (PSP) that submicron-sized dust grains do not have sufficient cross-sections to produce all inner source PUIs; however, because nanograins can become trapped close to the Sun, they may account for inner source PUIs via mechanism (1). Neither interstellar nor inner source PUIs can explain the presence of easily ionized atoms among the measured composition of anomalous cosmic rays (ACRs)in the outer solar system (Cummings et al., 2002, Schwadron and Gloeckler, 2007). Therefore, an additional "outer source" of PUIs was proposed: sputtered atoms from dust grains originating in the Edgeworth-Kuiper belt (EKB) are ionised and picked up by the solar wind (Schwadron et al., 2002). The dynamics of nanodust is expected to be similar to that of PUIs but the contribution of nanodust (heavy charged particles that may be multiply charged) to the physics and boundaries of our solar bubble has not currently been quantified. To date, we only have limited knowledge about the PUI distribution in interplanetary space from the New Horizons mission, from older missions/instruments like Ulysses/SWICS, and (in the heliosheath) from indirect measurements of PUIs by Cassini and IBEX that measure remotely sensed Energetic Neutral Atoms (ENAs) from 10 AU and 1 AU, respectively. Measurements from the Solar Wind Around Pluto by the (SWAP; McComas et al., 2008) instrument on the New Horizons mission showed that the interstellar PUIs are heated in the frame of the solar wind, before reaching the termination shock (McComas et al., 2021). Notably, once the Voyager missions crossed the termination shock (Decker et al., 2005, 2008; Stone et al., 2005, 2008) they identified that the majority of the shocked solar wind energy density went into heating the PUIs, whereas \(>\)15% was transferred to energetic ions, showing an unexpected charged particle spectrum inside the heliosheath (Dialynas et al., 2019, 2020, e.g.). Only \(\sim\)20% of the shocked solar wind energy density went into heating the downstream thermal plasma (Richardson et al., 2008). Consequently, PUIs are expected to play a substantial role in the pressure balance between the heliosheath and the Very Local Interstellar Medium (VLISM), but the Voyager missions could not measure PUIs. The analysis of a unique combination of all available _in situ_ ion and remotely sensed ENA measurements (Dialynas et al., 2020) over \(\sim\)10 eV to \(\sim\)344 MeV energies, showed that the heliosheath is a high plasma-\(\beta\) region (\(\beta\) is here the particle over the magnetic field pressure), where PUIs (primarily) and suprathermal particles (secondarily) dominate the pressure (see also review article by Dialynas et al., 2022). 
Understanding both the nanodust and PUI populations through direct in situ measurements from a future ISP mission will be instrumental for understanding the heliosphere's interactions with the Very Local Interstellar Medium (VLISM).

## 2 An interdisciplinary science case and its importance for a wider field

Here we summarize the most pressing science questions covering the fields of heliospheric (H) and dust science (D), and questions related to the heliosphere-dust interaction (HD). Addressing this broad spectrum of questions in depth is also important for the astrospheric community and for understanding our local interstellar neighborhood. In the following, we divide the questions according to the dust size, so that they can be linked more easily to the type of measurements and instrumentation that are needed. Apart from ISD, interplanetary (nano)dust may also play a role in these questions.

**Micron-sized ISD**: What is the gas-to-dust mass ratio in the ISM, and hence, what is the biggest size of ISD residing in the Interstellar Medium (ISM)?\({}^{(D)}\) Do large grains detected at Earth as (interstellar) meteors exist in the ISM?\({}^{(D)}\) Is any of the dust coming from a direction other than the heliosphere nose, and what does it imply for our current interstellar environment near the interface between the LIC and the G-cloud?\({}^{(D)}\) What is the composition and morphology of micron-sized ISD (porous, aggregate, compact?), and what implications are there for the formation of the dust and processes in the VLISM?\({}^{(HD)}\) What are the characteristics of Oort cloud dust, and what will the Kuiper belt dust reveal about its sources?\({}^{(D)}\)

**Submicron-sized ISD**: How do ISD dynamics depend on the heliosphere, and specifically, how does the heliosheath filter out these particles?\({}^{(HD)}\) What is the time-variable size and structure of the heliosphere (using dust measurements as additional boundary conditions for the heliosphere models)?\({}^{(H)}\) From which distance to the Sun can we measure carbonaceous ISD, and why has there been little evidence for it in detections so far?

**Nanodust ISD**: How much nanodust is filtered (time-dependently or permanently) at the heliopause and heliosheath?\({}^{(HD)}\) What role does the nanodust inside and outside of the heliopause/heliosheath play in heliospheric physics?\({}^{(HD)}\) Does nanodust pile up near the heliopause?\({}^{(HD)}\) Where does 'outgoing' (interplanetary) nanodust from the solar system and the ISD reside in the heliosphere; i.e., will they flow to the heliosphere flanks?\({}^{(HD)}\) Can it affect the heliosphere size and structure throughout the solar cycle?\({}^{(HD)}\) What are carbon nanodust species made of, and will we measure Polycyclic Aromatic Hydrocarbon (PAH) clusters outside of the heliopause?\({}^{(D)}\)

**All dust sizes**: How much charge does ISD acquire in different regions of the heliosphere, in particular in the heliosheath, and how does this charging depend on dust size, composition and local environment properties?\({}^{(HD)}\) Does dust - and what sizes of the dust - play a role in the pressure balance of the heliosphere?\({}^{(HD)}\) How does dust affect the production of pickup ions, and how does it depend on the solar cycle?
Do ISD or interplanetary dust particles (IDP) contribute to mass-loading of the solar wind?\({}^{(HD)}\) What are the different dust populations in the ISM, and what are their compositions, particle morphologies, and bulk densities?\({}^{(D)}\) How do they compare with astronomical measurements and cosmic abundances?\({}^{(D)}\) How much do they affect the plasma / heliosphere physics, and at which spatial scales?\({}^{(HD)}\) What species of carbonaceous ISD exist, and for which dust sizes and abundances?\({}^{(D)}\) How much of the ISD is likely recondensed material, and how much is pristine stardust?\({}^{(D)}\) How accurately does our current knowledge of elemental and isotopic composition, mostly derived from measurements of the solar nebula and galactic cosmic rays, reflect that of the galaxy/universe?\({}^{(D)}\) What is the role of the dust for astrospheres?\({}^{(HD)}\) What is the role of the dust in the history and habitability of the heliosphere?\({}^{(HD)}\)

**Importance**: Probing the heliosphere-dust interaction using modelling and in situ measurements is essential for understanding our own immediate interplanetary and interstellar environment. It is also a test-bed for understanding how other astrospheres work, as well as for unravelling the history of our own solar system and its interaction with various environments during its journey through the Galaxy. Tracers of this journey can now be found in deep-sea sediments, e.g., from supernovae (Miller et al., 2022) or perhaps from passing through denser clouds (Opher & Loeb, 2022). Dust from the VLISM is of particular astrophysical interest in light of recent near-Earth supernovae, from which debris is still falling on Earth today (Koll et al., 2019) and must arrive in the form of dust (Miller et al., 2022). Studying this dust is also important for galaxy evolution and the physics of the ISM (see Section 1.4).

## 3 Assessment of infrastructure, research strategy to answer these science questions, and technological development needs

### Dust measurements on an Interstellar Probe

First and foremost, a mission into interstellar space like the Interstellar Probe (McNutt et al., 2022; Brandt et al., 2022) with a dedicated dust detection suite on board would be optimal for compelling ISD and heliosphere research. Such an Interstellar Probe (ISP) would - for the first time - be able to measure the smallest ISD particles beyond the heliopause, which are blocked from entering the solar system. With such measurements, ISP would be entering unexplored scientific territory. These dust particles of a few to tens of nanometers are also orders of magnitude more numerous than the particles Ulysses could measure (see Figure 1). In addition, ISP could detect whether there really is a pile-up of particles near the heliopause. For the first time, we would be able to measure how, and up to what size, the particles follow the flow of the VLISM, which sizes can permeate the heliopause, and how far some particles can travel through the heliosheath. Such measurements, in combination with measurements of the local magnetic field, plasma properties, pickup ions, and the surface charge of dust particles larger than a few hundred nanometers, will help tremendously in understanding the heliosphere-dust interaction and the potential role of dust in heliosphere physics. Also, ISP moves fast (ca. 7-8 AU per year outward) into the stream of ISD (coming in at 5.5 AU per year).
The high speed results in higher fluxes (cf. detection rates) and enhanced detector sensitivity for the dust impacts, making the detection of tiny particles easier and allowing particles to be fully ionized for all compositional elements. Last but not least, ISP will fly for approximately 16 years, more than a solar cycle, while passing through interplanetary space, the termination shock, the heliosheath, up to the heliopause and beyond, making it an optimal mission for studying the heliosphere-dust coupling and for applying this knowledge to other astrospheres. Beyond the heliopause, the tiny dust, with gyroradii of only a few to 100 AU (for dust radii \(<\) 0.1 \(\mu\)m, see also Table 3), will help study the interstellar environment (magnetic field, plasma) and may detect local enhancements of smaller as well as bigger ISD. The strength of the mission lies in flying through all of these diverse regions with simultaneous magnetic field, dust, plasma and pickup ion measurements. No mission so far has flown a dedicated dust dynamics and composition suite into the heliosheath and the vast space beyond.

### Continuous observations and observations from different vantage points in space

The optimal way to disentangle the spatially and temporally variable dust dynamics in the heliosphere is by ensuring long-term monitoring of the dust flux (\(>\) 22 years) and by combining measurements from different vantage points in space. Hence, the science yield of an ISP mission would be greatly enhanced by simultaneous measurements inside the solar system by another mission, with a dust suite tailored to measuring dust dynamics (and composition) over an extended period of time. One example of such an observing capability in the ecliptic plane is a long-term dust suite on the Lunar Gateway (Wozniakiewicz et al., 2021; Sterken, 2023a), with the continuation of complementary dust measurements by IMAP (McComas et al., 2018). Examples of such missions out of the ecliptic could be the DOLPHIN(+) mission concept that was proposed to ESA in 2022 (Sterken, 2023b), the SunCHASER mission concept that includes a dust detection suite in its baseline (Posner et al., 2021), or a mission with a Ulysses-type orbit that is out of the ecliptic and perpendicular to the ISD stream. Missions with inclined orbits can in addition investigate the IDP-heliosphere interactions and the solar-cycle-dependent vertical structure of the zodiacal dust cloud. Such a dust suite could contain a Large Area Mass Spectrometer (Sternovsky et al., 2007; Srama et al., 2007) (or a combination with an impact ionization detector), equipped with one or several charge grids / a trajectory sensor, possibly augmented by a large-area polyvinylidene fluoride (PVDF) detector. An in-depth overview and discussion of possible ISP instrumentation is given in Section 4, while an overview of the main goals of the above-mentioned supporting missions is given in Table A1.

### Synergies between heliosphere and dust measurements, inclusion of 'serendipity instruments', and modelling

Simultaneous measurements with complementary instruments, i.e., for plasma and magnetic field properties and pickup ion detections, together with dust fluxes, velocities, directions and - if possible - dust surface charge, will yield particularly strong synergies between dust and heliospheric science.
The inclusion of 'serendipity dust instruments' that collect information on dust impacts, but were not originally designed for this purpose, will enlarge the pool of data available from different vantage points in space. Plasma wave instruments on various satellites, which pick up a sharp signal when a dust particle impacts the spacecraft, are very good examples of this. The Wind mission yielded a yearly recurring ISD signature in more than 25 years of plasma wave dust data, including a solar cycle variability (Malaspina et al., 2014; Malaspina and Wilson, 2016; Hervig et al., 2022). Voyager, too, has detected a few impacts (Gurnett et al., 1983). A challenge is that the operations and observations were not tailored to dust impacts; hence, retrieving the dust flux and direction is a difficult task. Information such as impact velocity, particle mass or particle charge is also missing. It is therefore difficult, within the solar system, to distinguish between IDP and ISD impacts with these types of instruments. A long-term dust monitoring mission with sufficiently large detector surfaces and dust trajectory, surface charge, and velocity sensing capabilities (and composition) would be a tremendous leap forward and a significant increment to this pool of data. In any case, Wind has fuel for another 50 years (Darling, 2019), IMAP (with a dust compositional analyser, without grid) could keep monitoring the compositions and fluxes of incoming ISD on a statistical basis, and the Gateway may be a good platform for long-term monitoring during the flight time of an Interstellar Probe. When such a data set (from multiple places and long-term) is combined with state-of-the-art computer modelling of the heliosphere-dust flow, the particle properties (e.g., the size distribution in the LIC) and the dynamical structure of the heliosphere can be retrieved by fitting a model of the heliosphere, including a time-variable heliosheath, to the pool of data. Figure 5 illustrates that even a simple model with only dust filtering in the solar system can already yield valuable information about filtering at the heliosheath if sufficient data are available. The model used is the IMEX model (Strub et al., 2019), and the predictions shown are for an interstellar probe. The ISD waves 'rolling' in can be seen as sharp increases in relative flux at different times for different particle sizes. An additional filtering at the heliosphere boundaries would alter this pattern. These fluxes are predicted along an ISP trajectory with a launch date in 2030, during the focusing phase of the solar cycle. Dust observations along the path of ISP at high impact velocities may be able to shed light on heliospheric filtering, through monitoring whether such patterns are present inside of the heliosphere, in addition to the direct measurements in the heliosheath. Similar investigations can be undertaken in the solar system.

### Ground-based facilities

An on-going calibration effort of different dust detectors with a dust accelerator is crucial for success. Since ISP moves very fast, calibrations with a dust accelerator are needed at high velocities and for dust particle analogs with different properties (e.g., lower bulk density dust analogs are important for measurements of \(\mu\)m-sized ISD (Hunziker et al., 2022)). New dust analogs need to be further developed, measurements with plasma wave instruments need to be better understood (e.g.,
Shen et al., 2021), and high-performance computing facilities are needed for the modelling. The dust accelerators at LASP (Univ. Colorado Boulder) and at IRS (Univ. Stuttgart) are indispensable tools for any space mission with a dust detector on board. Developments are underway at the University of Stuttgart for a linear staged accelerator (faster velocities), and at ETH Zurich and FU Berlin for the next generation of dust analogs. High-precision and high-power (\(>\) 100 kW pulse) ground-based radars are needed for interstellar meteor research (Hajdukova et al., 2020). The technological risk for these types of missions, instruments and ground-based facilities is relatively low, since most have been developed already or are based on heritage.

## 4 Available instrumentation and how it addresses the science objectives and questions

This paper does not propose a mission or an in-depth Science Traceability Matrix, but we discuss instrumentation that is available today or in the near future, its strengths and weaknesses with respect to different types of measurements, and how it can contribute to the science questions and science goals for an Interstellar Probe and supporting missions. We focus mostly on science questions related to ISD- (or IDP-) heliosphere interactions. Questions focusing on other fundamental physics aspects of the heliosphere, in part resulting from the Voyager 1, Voyager 2, Cassini, IBEX and New Horizons missions, are reviewed in Dialynas et al. (2023) and particularly in Brandt et al. (2022, 2023) and McNutt et al. (2022). The science questions in Section 2 can be summarized in the following science objectives (SO) (see also Table 1):

(SO1) Nature of our local interstellar medium environment
(SO2) Origins and processes of dust in the local interstellar medium
(SO3) Origins and processes of dust in or nearby the heliosphere
(SO4) Heliosphere structure, physics and dynamics

Figure 5: An example of how computer simulations of relative dust fluxes can teach us about the filtering at the heliosheath, when compared to spacecraft data for the respective dust sizes, from Hunziker et al. (2023).

### PVDF detectors

PVDF detectors employ a permanently polarized polymer film that generates a charge pulse upon particle impact. The penetration of the film causes a depolarization of the material, resulting in a measurable relocation of charge (see, e.g., Simpson & Tuzzolino, 1985). The shape and amplitude of this signal depend on the mass and impact speed of the dust particle. PVDF sensors are foil-type detectors, named after the polymer used as the sensing material (polyvinylidene fluoride). PVDF detectors have the advantage of being low-cost, low-resource, and fairly simple. They can be used to cover large areas (e.g., 0.54 m\({}^{2}\) onboard IKAROS, Hirai et al., 2014) and may even be integrated with a spacecraft's thermal insulation (e.g., onboard EQUULEUS, Funase et al., 2020). A student-project PVDF detector currently flies on the New Horizons mission, providing measurements from beyond 55 AU (Horanyi et al., 2008; Bernardoni et al., 2022). PVDF detectors are particularly useful for the micron-sized part of the dust size distribution. However, they cannot distinguish between impactor mass and impact speed, and they provide no information about impactor directionality beyond the pointing of the instrument, with a field of view of 180\({}^{\circ}\).
Due to their piezo- and pyroelectric properties, PVDF sensors can generate noise events induced by mechanical vibrations or thermal variations (Simpson & Tuzzolino, 1985; James et al., 2010). These can be mitigated to some degree by correlating events with spacecraft operational activities or by using shielded reference sensors to adjust the trigger threshold (Piquette et al., 2019). PVDF can contribute to the science questions (Section 2) related to large interstellar dust particles and (towards) interstellar (micro)meteoroids (Gregg & Wiegert, 2023; Hajdukova et al., 2019), provided that the spacecraft is outside of the solar system. For a spacecraft in orbit around the Sun, e.g. at Earth distance, the yearly modulation of the dust fluxes - due to the fact that ISD from the LIC comes mostly from one direction - provides some information about the ISD flux as well (e.g., Hervig et al., 2022; Malaspina et al., 2014). The inability to discriminate between ISD and IDP makes PVDF less useful for studies of ISD inside the solar system. Because PVDF is suitable for micron-sized ISD detections, owing to the large surface areas, it can contribute to a certain extent to the science questions concerning the gas-to-dust mass ratio in the ISM, finding 'big' dust grain populations, and supporting astronomical observations of interstellar dust in the micron-size regime and above.

### Impact ionisation detector

When a dust particle impacts the target of an impact ionisation detector (IID) at hypervelocity speeds, particle as well as target material are vaporized and partially or fully ionized (depending on speed). The charge of the generated ions and electrons is measured, and the impact speed and mass of the dust particle are estimated through calibrated signal rise times (e.g., Grun et al., 1992) and charge signal amplitudes (see, e.g., Friichtenicht & Slattery, 1963; Grun et al., 1992), respectively. Impact ionisation detectors achieve a high degree of reliability and sensitivity, with the ability to detect impactors of only tens of nanometers in size (or below, for fast speeds; see also Section 4.11). Considerable surface areas on the order of 0.1 m\({}^{2}\) can be accomplished (e.g., onboard Ulysses) despite relatively simple and lightweight designs. However, they obtain only limited information about the dynamics of the impactors. The directional constraint comes solely from the aperture design (consider, e.g., the FOV half angle of the Cosmic Dust Analyzer (CDA\({}^{2}\)) onboard Cassini of \(\sim 45^{\circ}\), Srama et al., 2004a). Impact velocity estimates based on the charge signal shape involve large uncertainties, on the order of a factor of 1.6-2 (Goller & Grun, 1989), which may be even higher for fluffy particles (Hunziker et al., 2022). Since the mass of the impactor is derived from the relation \(Q\sim m\,v^{\alpha}\) (with \(Q\) the measured charge after impact, \(m\) the impactor mass and \(v\) the impact velocity), the particle mass uncertainties are typically on the order of a factor of 10 (Goller & Grun, 1989). This is why reliable velocity information is of crucial importance for impact ionization instruments, in order to distinguish interstellar from interplanetary dust in the solar system, and for estimating the mass-frequency distribution of the interstellar dust in the ISM. Large statistics may nevertheless yield useful information about the dynamics of the dust (e.g., Strub et al., 2015; Sterken et al., 2015).
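To make this error propagation concrete, the sketch below inverts \(Q\sim m\,v^{\alpha}\): a velocity estimate off by a factor of 1.6-2 inflates the derived mass by that factor raised to the power \(\alpha\). The exponent \(\alpha=3.5\) and the calibration constant are assumed, representative values for illustration only.

```python
# Error propagation implied by Q ~ m * v**alpha: inverting for the mass raises
# any velocity error to the power alpha. The exponent is target- and
# instrument-dependent; alpha = 3.5 is an assumed, representative value here,
# and k is an unspecified calibration constant.
def mass_from_impact_charge(q: float, v_km_s: float,
                            alpha: float = 3.5, k: float = 1.0) -> float:
    """Invert Q = k * m * v**alpha for the impactor mass m."""
    return q / (k * v_km_s**alpha)

def mass_uncertainty_factor(v_factor: float, alpha: float = 3.5) -> float:
    """A velocity wrong by v_factor yields a mass wrong by v_factor**alpha."""
    return v_factor**alpha

for v_factor in (1.6, 2.0):
    print(f"v off by {v_factor}x -> mass off by ~{mass_uncertainty_factor(v_factor):.0f}x")
# 1.6x -> ~5x; 2.0x -> ~11x, i.e. on the order of the factor of 10 quoted above
```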
Adding a (segmented) charge-sensing grid or trajectory sensor, however, greatly enhances the science return (see Section 4.6).

Footnote 2: Note that CDA was a combined TOF-MS and IID with two charge-sensing grids.

Due to their sensitivity, impact ionisation detectors have been instrumental in the exploration of the smallest meteoroids, such as \(\beta\)-meteoroids (Wehry et al., 2004), nanodust (Section 4.11), and submicron ISD, particularly with regard to their abundance, but also their dynamics (Landgraf et al., 2003; Strub et al., 2015; Sterken et al., 2015). Since these detectors can detect dust in the few-nanometers to micrometer size range, they are also powerful for a larger number of science questions related to the ISD size distribution inside and outside of the heliosphere, the modulation of the ISD dynamics by the heliosphere, the gas-to-dust mass ratio, etc., comparable to the work done with the Ulysses mission dust data, provided there is a sufficiently large surface area. The fact that the instrument is sensitive to dust impacts of a few nanometers at relative speeds typical for ISP (ca. 55 km/s, Hunziker et al., 2023) makes it useful for nanodust studies as well (see also Section 4.11). Therefore, such instruments can contribute to questions about the dust distributions in and around the heliosphere, including a possible pile-up of ISD outside the heliopause, and the consequences for the physics of the heliospheric boundary regions. These dust measurements throughout the solar system may be used as an extra boundary condition for heliospheric models, but inside of the heliosphere it is challenging to discriminate ISD from IDP. Adding a segmented grid (see Sections 4.4 and 4.6) for better velocity determination would greatly augment the science return, both for distinguishing populations and for obtaining more precise estimates of the particle masses. Since the instrument is more sensitive than PVDF, it would also augment our current knowledge of dust in the Kuiper belt region (in particular if combined with a grid), after the first crude dust measurements in the region were taken by the Voyager mission using plasma wave antennas (Jaynes, 2023, pers. comm.) and by the New Horizons mission using a PVDF detector (Bernardoni et al., 2022). Although these fairly simple and well-established instruments can contribute to many science questions concerning populations, dynamics and, in particular, dust-heliosphere interaction and physics, they still lack the compositional information needed for many of the origins-, processes- and populations-related science questions. Compositional information can also help discriminate ISD from IDP. A combination with a time-of-flight mass spectrometer (Section 4.3) and/or a (segmented) grid can be flown (e.g., Cassini CDA).

### Time of flight mass spectrometer (TOF-MS)

Dust particles impacting at hypervelocity speeds on a time-of-flight (TOF) mass spectrometer are ionized via the impact ionization process. The plasma's ions are then accelerated by an electric field (e.g., 1000 V for Cassini CDA) that separates them according to their charge-to-mass ratios. The recording of their travel times from the impact target to an ion detector then yields the abundance of species with different charge-to-mass ratios within the impact plasma, from which information about the impactor composition can be derived.
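The principle can be illustrated with an idealized single-stage geometry: an ion accelerated through a potential \(U\) drifts over a field-free path \(L\) in a time \(t=L\sqrt{m/(2zeU)}\). In the sketch below, only the 1000 V figure is taken from the text; the drift length and the single-stage assumption are illustrative, and real instruments such as CDA use more complex ion optics.

```python
# Idealized single-stage time-of-flight sketch: an ion of mass m (in amu) and
# charge z*e is accelerated through a potential U and then drifts over a
# field-free path of length L, arriving after t = L * sqrt(m / (2 z e U)).
# U = 1000 V is the Cassini CDA figure quoted above; the drift length
# L = 0.2 m and the single-stage geometry are illustrative assumptions.
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass unit, kg

def flight_time_s(mass_amu: float, z: int = 1,
                  u_volts: float = 1000.0, drift_m: float = 0.2) -> float:
    """Field-free drift time of a fully accelerated ion."""
    m = mass_amu * AMU
    return drift_m * math.sqrt(m / (2.0 * z * E_CHARGE * u_volts))

for species, mass in [("H", 1.008), ("C", 12.011), ("Fe", 55.845)]:
    print(f"{species}: {flight_time_s(mass) * 1e6:.2f} microseconds")
# H: 0.46 us, C: 1.58 us, Fe: 3.40 us -- heavier ions arrive later, and the
# arrival-time histogram is the mass spectrum: m/z = 2 e U (t/L)**2.
```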
The key benefits of the TOF-MS are: (1) its ability to analyse the grain composition, and (2) that the recording of a mass spectrum functions as unequivocal proof of a true particle impact (as opposed to a noise event). Higher mass resolutions can be obtained with more sophisticated mass analyzer concepts ('ion optics') than with the linear TOF-MS:

* linear; \(m/dm\approx 30\); e.g. CDA (Srama et al., 2004a)
* reflectron; \(m/dm\approx 100-300\); e.g. DDA, SUDA (Kruger et al., 2019; Kempf, 2018)
* orbitrap; \(m/dm>10,000\); e.g. onboard SLAVIA (Zymak et al., 2023), yet to be tested in space

One limitation of the TOF-MS is that only either the plasma's cations or its anions can be fed into the mass analyser, depending on the polarity of the ion optics. So far, only cation mass analysers have been used in space missions, as cations are readily formed by most elements and molecules (Srama et al., 2009). Certain organic molecules, however, form anions rather than cations during impact ionisation (Hillier et al., 2014, 2018), suggesting the use of (switchable) dual-polarity ion optics in future instruments (as first employed in the upcoming SUDA instrument, Napoleoni et al., 2023). Two arguments especially support an anion analysis of the impact plasma of a dust particle: (1) oxygen can be measured with a much higher sensitivity (up to a factor of \(10^{5}\)), which would allow the determination of isotopic ratios. This is important for the sensitive detection of water ice, hydroxides, silicates and oxides, whereas the oxygen cation yield is strongly impact-speed dependent. (2) An anion mode would also allow a sensitive study of halogens, carbon, and species such as S, P, SO\({}_{4}\) and PO\({}_{4}\). This complements the sensitive detection of metals in the cation mode. In summary, for impacts with speeds above 30 km/s, the combination of cation and anion modes in a TOF-MS allows the sensitive detection of all elemental ions between 1 and 200 amu.

Isotopes help to identify elemental species, but are not trivial to measure. Measurements of isotopes at mass M require both a mass resolution higher than M and a high dynamic range in order to quantify small peaks in the vicinity of larger peaks. When the dynamic range reaches 1000 or better, the identification of isotopically anomalous interstellar dust grains of circumstellar origin becomes achievable, provided extensive calibration data are available. The current and former generations of impact ionization TOF-MS were not optimized for simultaneously high dynamic range and high mass resolution. Future instruments will employ improved electronics in order to extend the dynamic range.

Impact velocities also play a major role in TOF impact ionization spectrum analysis. At lower velocities, not all of the impactor constituents may become ionised. Table 2 shows the minimum impact velocities that are needed for the detection of the listed ion species (for the positive ion mode). Considering the flow speed of ISD of about 25 km/s, such velocities are met in most conceivable cases. In particular for ISP, moving into the nose direction of the heliosphere, all particles are expected to be fully ionized at relative speeds of ca. 55 km/s. However, for spacecraft in the solar system moving in the down-wind direction (along their heliocentric orbit or on a down-wind escape trajectory), the relative velocities of ISD may be insufficient for the complete ionisation of certain species.
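As a small illustration, the helper below encodes the minimum speeds of Table 2 (which follows), using the upper bound of each quoted range as a conservative cut, and returns which species' cation lines can be expected at a given impact speed.

```python
# Encodes the minimum impact speeds of Table 2 (positive ion mode), using the
# upper bound of each quoted range as a conservative detection cut.
V_MIN_KM_S = {
    "H": 10, "C": 12, "O": 16, "Na": 5, "K": 5, "Mg": 10,
    "Al": 10, "Si": 15, "Ca": 10, "Fe": 15, "Rh": 8, "S/O2": 20,
}

def detectable_species(impact_speed_km_s: float) -> list:
    """Species whose conservative minimum speed is met at the given impact speed."""
    return [s for s, v_min in V_MIN_KM_S.items() if impact_speed_km_s >= v_min]

print(detectable_species(12.0))  # down-wind geometry: O, Si, Fe, S/O2 missing
print(detectable_species(55.0))  # ~ISP relative speed: all tabulated species
```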
Compositional information is crucial for many of the science questions related to the origins and processes of ISD in the VLISM, to dust origins and processes in the solar system (e.g., the Kuiper belt, various comets), to charging mechanisms, to the generation of PUIs, etc. (see also Section 1.4), but it can also help to discriminate between different dust populations. Also, for such instruments, statistics are vital for the science results; hence the need for a large surface area. Large Area Mass Spectrometers have been developed with surface areas of 0.1 m\({}^{2}\) (Sternovsky et al., 2007; Srama et al., 2007). Speed information can be constrained within boundaries from the shapes of the peaks, from the occurrence of the peaks, or from molecular clusters. Particle masses can be constrained from the impact charge together with simulations of the ion optics and calibration data. However, a (segmented) grid or trajectory sensor would be of great added value.

Table 1: Summary of science objectives and types of science questions considered in this publication with strong dust-heliosphere synergies.

| Dust size regime | Science Objectives | Main type of questions |
| --- | --- | --- |
| \(\mu\)m-sized dust from the ISM | SO1, SO2 | ISM exploration, sources, processes, populations |
| Electromagnetically dominated (sub-micron) sizes | SO3, SO4 | Heliosphere-dust dynamics, dust as a tracer, heliosphere-dust interactions and processes, ISM dust dynamics |
| Nanodust sizes / macromolecules | SO4 | Influence of dust on heliosphere plasma, dust as a tracer in the heliosphere and the ISM, PAH in the ISM |

Table 2: Minimum impact velocities for measuring the composition of the tabulated species with impact ionization time-of-flight mass spectrometry (in positive ion mode). The Mg peak can appear even at lower speeds, around 3 km/s, but is then very small. The Si peak can also appear at 7 to 10 km/s, albeit also very small. The peaks of S and O\({}_{2}\) are difficult to distinguish.

| Species | V\({}_{\rm min}\) (km/s) |
| --- | --- |
| H | \(8-10\) |
| C | \(10-12\) |
| O | \(14-16\) |
| Na | \(2-5\) |
| K | \(2-5\) |
| Mg | \(5-10\) |
| Al | \(5-10\) |
| Si | \(10-15\) |
| Ca | \(5-10\) |
| Fe | \(10-15\) |
| Rh | \(8\) |
| S or O\({}_{2}\) | \(15-20\) |

#### 4.3.1 Plasma Wave Antennas

When a dust particle impacts the body of a spacecraft at hypervelocity speeds, it vaporises both itself and a fraction of the spacecraft surface. The recollected electrons and the induced charges of the escaping electrons and ions, measured by the plasma wave antennas, produce a distinct amplitude signal. Whether the impact speed and mass of the dust particle can be reconstructed from that signal is currently a topic of investigation (see, e.g., Shen et al., 2023, and references therein). The advantages of plasma wave antennas are the large sensitive area (essentially, the spacecraft body) and the science-at-no-extra-cost if a plasma wave antenna is already on board. However, this comes with the caveat that only limited directionality information can be derived (e.g., Malaspina et al., 2014; Pusack et al., 2021). Also, the signal is a function of mass, impact velocity, distance to the antenna (depending on the configuration), and spacecraft surface material. As a consequence, it is not yet possible to uniquely determine the mass, velocity and/or distance to the antenna of an impact.
A plasma wave antenna can detect many dust impacts per unit time compared to other dust instruments. This information can be heavily compressed into a low-bandwidth data product via on-board dust detection algorithms. Outside of the heliosphere, plasma wave antennas can be especially useful because large counting statistics (due to the large surface area of the spacecraft body) may yield important information on the distribution and populations of ISD in the ISM. Large counting statistics are especially useful for detecting larger, more sporadic dust grains. However, just as with PVDF detectors, only limited information on mass and velocity can be derived. Inside of the heliosphere, e.g. at Earth orbit, plasma wave antennas can infer information about the ISD variability with time through the modulation of the flux throughout the year and throughout the solar cycle (Hervig et al., 2022; Malaspina et al., 2014). Plasma wave and PVDF results could be compared with each other.

### Charge-sensing grid

Charge-sensing grids are grid electrodes that sense charged dust particles passing through, via the charge they induce in the electrode (e.g., see the upper elements in Fig. 6). In addition to measuring the particle charge, dynamical information such as entrance angles and speed may be estimated from the signal shape of the induced charge. Different configurations of charge-sensing entrance grids have been proposed. The Cassini Cosmic Dust Analyzer (CDA) used a serial electrode design with two canted grids that could yield speeds as well as incident angles (Auer et al., 2002). However, the large capacitance of the two grids restricted this design to relatively large grains with charges > 1 fC. A design employing segmented, lower-capacitance grids has been proposed by Li et al. (2014, 2015, 2017). Such a segmented design is awaiting in-flight demonstration on board the Destiny+ mission\({}^{3}\) (Arai et al., 2018) as part of the Destiny+ Dust Analyzer (DDA) (Simolka et al., 2022), with an anticipated detection threshold of 0.2 fC. This corresponds to a dust particle with a radius of 0.35 \(\mu\)m (in the solar system). Segmented charge-sensing grids are a good compromise between increased science output and instrument complexity. Since they are non-destructive, they are especially suited to be combined with destructive detector stages, so that even a single-plane charge-sensing grid can be used for time-of-flight impactor speed measurements (as done in DDA). The DDA system can determine the speed with ca. 15% accuracy and the mass with approximately 20% accuracy.

Footnote 3: Demonstration and Experiment of Space Technology for INterplanetary voYage Phaethon flyby and dUst Science, to be launched in 2024

Charge-sensing grids are well suited to study the abundance and dynamics of bigger (> 1 \(\mu\)m diameter) particles in the solar system. These particles are on the larger end of the dust particle size range that is still affected by electromagnetic forces in the solar system. Measuring or constraining their speed would increase the accuracy of the mass determination (\(Q\sim m\,v^{\alpha}\)), and constraining the speed and velocity vector to a certain extent would allow for a better discrimination between the sources of the dust particles (in particular ISD vs. IDP) in the solar system. Measuring their surface charge (with the grid), their mass (through an IID or TOF-MS) and the plasma parameters (with a plasma instrument) may even yield constraints on their bulk densities.
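These thresholds can be checked with a few lines of Python: a spherical grain at surface potential \(U\) carries \(Q=4\pi\varepsilon_{0}rU\), so a 0.2 fC threshold maps directly onto a minimum detectable radius. The sketch below is simplified and neglects small-particle and porosity effects, which are discussed next.

```python
# Check of the charge thresholds quoted above: a spherical grain at surface
# potential U carries Q = 4*pi*eps0*r*U. A simplified sketch that neglects
# small-particle and porosity effects.
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def grain_charge_fc(radius_um: float, potential_v: float) -> float:
    """Charge (fC) on a spherical grain of given radius and surface potential."""
    return 4.0 * math.pi * EPS0 * (radius_um * 1e-6) * potential_v * 1e15

def min_detectable_radius_um(threshold_fc: float, potential_v: float) -> float:
    """Smallest radius whose surface charge reaches the instrument threshold."""
    return threshold_fc * 1e-15 / (4.0 * math.pi * EPS0 * potential_v) * 1e6

print(grain_charge_fc(0.35, 5.0))      # ~0.2 fC: the DDA threshold case above
for u in (5, 6, 8, 12):                # solar system vs heliosheath potentials
    print(u, "V ->", round(min_detectable_radius_um(0.2, u), 2), "um")
# 5 V -> 0.36, 6 V -> 0.3, 8 V -> 0.22, 12 V -> 0.15 (the radii quoted below)
```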
In the heliosheath, the dust is expected to reach higher equilibrium potentials of ca. +6 to +12 V (Kimura and Mann, 1998) or +8 V (Slavin et al., 2012), as opposed to ca. +5 V in the solar system (see Table 3). At the 0.2 fC detection threshold, these potentials are equivalent to dust particle radii of ca. 0.3, 0.15 and 0.2 \(\mu\)m, respectively, which is well within the range of electromagnetically affected dust that reacts dynamically to the solar cycle through the time-variable heliospheric magnetic fields. In interstellar space, the dust particles are expected to have lower charges, corresponding to a ca. +0.5 V equilibrium surface potential (Grun and Svestka, 1996) (equivalent to a dust radius of ca. 3.5 \(\mu\)m at the detection threshold). However, it can be expected that micron-sized ISD may be porous (Westphal et al., 2014; Sterken et al., 2015). Assuming a compactness factor of ca. 1/3, such grains can be ca. three to four times more charged (Ma et al., 2013), i.e., they would be detectable by the grid from a radius of ca. 2 \(\mu\)m (detection threshold 0.2 fC). Adding a charge grid to the design of an IID or TOF-MS (Section 4.6) may thus yield important information on the dynamics of the mid-sized (sub-micron) particles in the heliosphere and, in particular, in the heliosheath, as well as help constrain the direction of motion of very large grains or micrometeoroids that may exist in the ISM, in addition to possibly constraining their bulk material densities. Science questions inside the heliosphere (in particular the heliosheath) related to the dynamics and mass distribution of submicron ISD would be easier to tackle if an IID or TOF-MS instrument included a (segmented) grid, with limited add-on complexity. Such instruments provide stronger constraints on dust directionality and velocity (hence, ISD-IDP discrimination), better mass constraints (using the velocity constraints), and useful information on the dust surface charge.

Figure 6: Schematic of the DDA instrument. Image credit: IRS/Univ. Stuttgart

### Trajectory sensor

The concept of the trajectory sensor involves two planes of position-sensitive charge sensors, from which the flight path of a particle may be accurately reconstructed. These position-sensitive charge sensors can be realized through a set of crisscrossed wire electrodes (Auer, 1975; Auer et al., 2008) or through a finely segmented grid (Li et al., 2014). The key advantage of a trajectory sensor is its accuracy. For instance, uncertainties of \(<1^{\circ}\) are reached in the design of Auer et al. (2008). The difficulty of this design lies in its complexity, as one charge-sensitive amplifier (CSA) is required for each wire or grid segment (e.g., 64 CSAs in the design of Auer et al., 2008). So far, such designs (see, e.g., Fig. 7) have only been demonstrated in the laboratory (Xie et al., 2011). One of the major motivations for using trajectory sensors, apart from dust surface charge measurements, is the dynamical differentiation between dust types (e.g., between interstellar and interplanetary dust, or cometary dust streams). Their use together with IID or TOF-MS detectors (see Section 4.6) would yield an improved mass determination through an accurate velocity determination (see also the discussion above for the grids).

### The Dust Telescope

The combination of a non-destructive trajectory sensor with an impact plasma mass spectrometer allows for the simultaneous analysis of dust particles' physical, chemical, and dynamical properties. This type of instrument has been nicknamed a 'dust telescope' (Grun et al., 2005; Srama et al., 2004b).
A simplified version of a dust telescope could consist of instruments such as CDA and DDA, with their less accurate, charge-grid-type dynamics-sensing detector stages. A laboratory model of a true dust telescope (i.e., with a high-accuracy trajectory sensor) has been implemented by Horanyi et al. (2019).

### Plasma instrument

The purpose of a plasma instrument (Plasma Subsystem, PLS; see also Table 5) is to measure the low-energy (eV to keV) particle distributions throughout the heliosphere, with the sensitivity to detect the very cold plasma populations in the VLISM and the dynamic range to measure the solar wind. Understanding the physics of the boundaries of our solar bubble, namely the termination shock (TS), the heliopause (HP) and the heliosheath, including the very local interstellar medium (VLISM) (e.g., Dialynas et al., 2022; Kleimann et al., 2022), requires the determination of the composition of the ions (and electrons) that are "frozen in" to the magnetic field, along with an accurate determination of their energy distributions and moments (temperatures, densities, velocities and pressures). Dust in the solar system is embedded in the solar wind plasma, and the relative abundance of dust populations throughout the heliosphere affects the interplay of outflowing solar wind plasma and inflowing interstellar material. Also, measuring plasma parameters along with the dust allows calculating and studying the dust charging process, and constraining the dust bulk densities (morphology) via the dust surface charge and mass measurements. Direct measurements of the distribution functions of both the interstellar cloud and dust cloud pickup ions, up to energies of a few keV/e, would be possible with a plasma detector with a geometry factor of \(\sim\)10\({}^{-3}\) cm\({}^{2}\) sr and a signal-to-noise ratio of \(>\)10 (McNutt et al., 2021).

### Pickup Ion instrument

As explained in Section 1.6 and throughout this manuscript, PUIs play a substantial role in the dynamics of our solar bubble, are very important indicators of the plasma processes throughout the heliosphere, i.e. from interplanetary space out to the heliopause (e.g., Zirnstein et al., 2022; Dialynas et al., 2022), and may be related to dust populations (Schwadron et al., 2002b). Despite the SWAP (McComas et al., 2008) and PEPSSI (McNutt et al., 2008) instruments on New Horizons being operational for many years, this spacecraft is not expected to make measurements at distances far beyond the termination shock. Also, its instruments were not designed to measure multiple and heavier species of PUIs (they are limited to hydrogen and helium), and there is limited directional information. Furthermore, the limited scientific payload of New Horizons (e.g., it does not carry a magnetometer) means that the PUI measurements it obtains cannot be set in context with simultaneous field or wave measurements. To understand the important physics of our heliosphere through the PUIs and their possible link to dust, a future ISP mission should include a detector with a fairly large geometry factor (\(>\)10\({}^{-3}\) cm\({}^{2}\) sr), a high dynamic range (10\({}^{-1}\) to 10\({}^{4}\) (cm\({}^{2}\) sr s keV)\({}^{-1}\)), and a combination of high time and energy resolution (\(\Delta\)E/E \(\leq\) 10%) that would resolve light and heavy ions and their charge states within the energy range of \(\sim\)0.5-78 keV/e (see McNutt et al., 2021).

Figure 7: Photo of a trajectory sensor. Image credit: MPI-K/Univ. Stuttgart.
### Magnetic field instrument

The Voyager missions' survey of the heliosphere showed that taking accurate magnetic field measurements from interplanetary space all the way to the VLISM is of paramount importance for addressing long-standing questions concerning the shape of the global heliosphere, its nature, its dynamics and its interactions with the VLISM. The relatively low resolution of the MAG experiments on the Voyagers demonstrated the necessity of obtaining magnetic field observations from a future Interstellar Probe mission in the nT range with pT resolution (see McNutt et al., 2021, and Table 5) to address questions concerning the role of plasma turbulence and magnetic reconnection throughout the heliosphere. A high dynamic range (\(\sim\)0.01-100 nT) would provide invaluable aid in determining the interaction of small dust grains with particles and fields throughout the heliosphere (e.g., CMEs) and, in particular, in the heliosheath.

### Neutral Mass Spectrometer

The primary science goal of the Neutral Mass Spectrometer (NMS) is to measure the chemical composition of the neutral gas along the spacecraft trajectory, employing two measurement techniques: an antechamber and a collection foil. The latter provides a higher sensitivity than the antechamber, but less frequent measurements. The technology readiness level and the longevity of NMS are backed up by, e.g., the Neutral Ion Mass Spectrometer NIM (Fohn et al., 2021) on the Jupiter Icy Moons Explorer (launched in 2023, nominal end of mission 2035). NMS may detect dust grains that happen to enter the antechamber or hit the collection foil. However, the collection area is rather small (on the order of cm\({}^{2}\)) compared to dedicated dust detectors, which implies low detection rates. The volatile component of any dust particle entering the antechamber can be measured by the NMS. When nanograins impact the collection foil, both their volatile and refractory species can be analyzed. Additional on-ground calibration at impact speeds representative of the Interstellar Probe will be needed (McNutt et al., 2021). Although NMS can be very valuable for the compositional analysis of nanodust and macromolecules in the VLISM, it does not provide impact rates, sizes or dynamical information about the nanodust that would be useful for further exploring the dust-heliosphere physics and the smallest populations of condensed matter in the VLISM. Tables 4 and 5 give an overview of the instrumentation discussed, with typical values for measurement ranges, power consumption, instrument mass and volume.

### Discussion on nanodust measurements

A fundamental question for in-situ instrumentation is what the lower detection limit in particle size is. Fortunately, the detection method of impact ionization is extremely sensitive to small particles as long as the impact speed exceeds a certain limit. Above impact speeds of approximately 30 km/s, the particle becomes fully ionized, providing enough ions to be detected with sensitive ion detectors. The measurement of nanodust with sensitive non-TOF detectors (_Galileo_, _Ulysses_, 1993) and with TOF-MS instruments (_Cassini_, GIOTTO) is well established and published. Utterback & Kissel (1990) identified particles of only \(5\times 10^{-22}\) kg during the flyby of comet Halley in 1986 with the mass spectrometer PIA/PUMA. The relative flyby speed was 78 km/s, and the smallest signals contained only 75 ions from the generated impact plasma. The instruments onboard _Galileo_ and _Ulysses_ detected the Jovian dust streams.
Models have shown that these particles reach 400 km/s, with typical particle sizes between 10 and 20 nm (Zook et al., 1996). These detectors used large target areas of up to \(0.1\,\mathrm{m}^{2}\), although their large targets and related large electrode capacitances led to a low sensitivity. Later, _Cassini_ characterized the Jovian and Saturnian dust stream particles, measuring the composition of fast and tiny grains: Saturnian stream particles typically have speeds between 100 km/s and 200 km/s and are usually smaller than Jovian stream particles (Kempf et al., 2005; Horanyi, 2000; Hsu et al., 2011). Another good example of measuring the composition of individual grains smaller than 50 nm at moderate impact speeds of approximately 30 km/s is the _Cassini_ CDA proximal orbit campaign, with its inner ring plane crossings in 2017 (Hsu et al., 2018). This demonstrated the high sensitivity of simple TOF-MS instruments using impact ionization. Dust spectrometers were not the only instruments able to detect nano-sized grains: Carpenter et al. (2007) demonstrated the measurement of nanometer-sized dust impacts with an instrument combining a thin foil and a multichannel plate (MCP) in Earth orbit onboard the ISS. To determine the lower mass threshold as a function of impact speed, one can use the calibration equations of Grun (1984), \(Q=6.3\times 10^{-4}\cdot m\cdot v^{5.6}\), or of Burchell et al. (1999), \(Q=0.096\cdot m\cdot v^{4.01}\) (with \(m\) in kg, \(v\) in km/s and \(Q\) in C).

Table 3: Approximate size, mass, charge, surface potential, and gyroradius, adapted from Grün & Svestka (1996), assuming spherical particles, a magnetic field strength of 1 nT in the interplanetary medium, 0.1 nT in the heliosheath, and 0.5 nT in the LISM, and a relative particle speed of 400 km/s in the interplanetary medium, 100 km/s in the heliosheath, and 5 km/s in the undisturbed LISM. Gyroradii are upper limits for particle motions perpendicular to the magnetic field. Surface potentials for 0.1 \(\mu\)m do not take into account the small-particle effect and may be larger in reality; for micron-sized particles a compactness of 33% was assumed, which increases the surface potential by a factor of about 4 (Ma et al., 2013).

**Interplanetary conditions**

| Radius (\(\mu\)m) | Density (g cm\({}^{-3}\)) | Mass (kg) | Surface potential (V) | Charge (fC) | Gyroradius (kAU) |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 2.5 | \(1\times 10^{-17}\) | +5 | 0.06 | 0.5 |
| 0.2 | 2.5 | \(8\times 10^{-17}\) | +5 | 0.1 | 2 |
| 0.5 | 2.5 | \(1\times 10^{-15}\) | +5 | 0.3 | 9 |
| 1 | 0.8 | \(3\times 10^{-15}\) | +20 | 2 | 4 |
| 5 | 0.8 | \(4\times 10^{-13}\) | +20 | 11 | 97 |

**Heliosheath conditions**

| Radius (\(\mu\)m) | Density (g cm\({}^{-3}\)) | Mass (kg) | Surface potential (V) | Charge (fC) | Gyroradius (kAU) |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 2.5 | \(1\times 10^{-17}\) | +8 | 0.09 | 0.7 |
| 0.2 | 2.5 | \(8\times 10^{-17}\) | +8 | 0.2 | 3 |
| 0.5 | 2.5 | \(1\times 10^{-15}\) | +8 | 0.4 | 17 |
| 1 | 0.8 | \(3\times 10^{-15}\) | +32 | 4 | 5 |
| 5 | 0.8 | \(4\times 10^{-13}\) | +32 | 18 | 150 |

**Local interstellar conditions**

| Radius (\(\mu\)m) | Density (g cm\({}^{-3}\)) | Mass (kg) | Surface potential (V) | Charge (fC) | Gyroradius (kAU) |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 2.5 | \(1\times 10^{-17}\) | +0.5 | 0.006 | 0.1 |
| 0.2 | 2.5 | \(8\times 10^{-17}\) | +0.5 | 0.01 | 0.5 |
| 0.5 | 2.5 | \(1\times 10^{-15}\) | +0.5 | 0.03 | 2 |
| 1 | 0.8 | \(3\times 10^{-15}\) | +2 | 0.2 | 1 |
| 5 | 0.8 | \(4\times 10^{-13}\) | +2 | 1 | 27 |
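The sketch below reproduces, under the stated assumptions, several of the numbers used in this section: the impact-charge calibrations (with \(m\) in kg, \(v\) in km/s and \(Q\) in C), the radius of a compact silicate grain of a given mass, and a gyroradius entry of Table 3.

```python
# Numerical checks of the figures in this section, under the stated assumptions:
# the impact-charge fits (m in kg, v in km/s, Q in C), the radius of a compact
# silicate sphere of a given mass, and the gyroradius entries of Table 3.
import math

AU_M = 1.495978707e11  # astronomical unit, m

def q_grun(m_kg: float, v_km_s: float) -> float:
    return 6.3e-4 * m_kg * v_km_s**5.6       # Grun (1984) calibration

def q_burchell(m_kg: float, v_km_s: float) -> float:
    return 0.096 * m_kg * v_km_s**4.01       # Burchell et al. (1999) calibration

def radius_nm(m_kg: float, density_g_cc: float = 2.5) -> float:
    """Radius of a compact sphere of the given mass and bulk density."""
    return (3.0 * m_kg / (4.0 * math.pi * density_g_cc * 1e3)) ** (1 / 3) * 1e9

def gyroradius_kau(m_kg: float, v_km_s: float, q_fc: float, b_nt: float) -> float:
    """Upper-limit gyroradius r_g = m v / (|q| B), expressed in units of 1000 AU."""
    return m_kg * v_km_s * 1e3 / (q_fc * 1e-15 * b_nt * 1e-9) / AU_M / 1e3

print(q_burchell(1e-21, 50.0))                 # ~6.2e-16 C (cf. the text below)
print(radius_nm(1e-21))                        # ~4.6 nm for this silicate grain
print(gyroradius_kau(1e-17, 400.0, 0.06, 1.0)) # ~0.45 kAU (Table 3, 0.1 um row)
```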
Using the Burchell equation gives a total impact plasma charge of \(Q=6.2\times 10^{-16}\,\mathrm{C}\) for a 50 km/s and \(10^{-21}\,\mathrm{kg}\) particle (4.6 nm radius, silicate). However, at such high speeds, it is a good approximation to assume full ionization of the projectile material. Furthermore, the plasma is dominated by ions from the target material (with or without surface contaminations). If the goal is not the detection of a particle but a careful compositional analysis or mass determination of the nano-meteoroid, one should consider only the dust particle material, and the typical equations giving \(Q/m\) for a given size and speed are not applicable. An estimate of the number of dust particle atoms therefore gives a more precise quantity for the relevant impact charge. We consider the sensitivity of an impact ionization detector with ion optics that focuses all generated ions towards an ion detector, which could be either an MCP or a multiplier. The target is normally a polished metal surface (gold, rhodium, iridium, palladium) onboard dust telescopes like SUDA (_Europa Clipper_) or DDA (DESTINY\({}^{+}\)). The loss factor from the target to the ion detector is assumed to be 50%. Both an MCP and a multiplier are sensitive enough to measure even single ions, but such high detector gains are not practical in space due to the high sensitivity to radiation. Therefore, the ion detector should operate with a reduced gain (determined empirically). We consider 100 ions within one peak of a mass spectrum to be a good number. We also assume five elements within the particle with equal ionization yields, with each element contributing at least 100 ions to a mass line in the spectrum. This means that, as a lower mass threshold, we can detect particles providing 500 ions at the ion detector, which corresponds to a particle of at least 1000 atoms. This marks a useful lower particle size and mass detection limit. For a bonding length of approximately 2 Å, such a particle would be no larger than 2 or 3 nm in diameter. However, if a particle contains fewer than five species, the detection of even smaller particles is not excluded. On the other hand, an impact ionization detector using a multiplier for ion detection can be adjusted to measure larger grains by reducing the detector gain by a factor of 100 or even 1000, by lowering the operating high voltage. This ensures a wide dynamic range, allowing mass spectra of nanodust as well as of micron-sized particles to be measured with the same instrument electronics. Changes in instrument gain every hour, day or month can be foreseen for this reason. This procedure was tested in flight with _Cassini_ CDA.

Table 4: Technical data of state-of-the-art dust instrumentation. The mass and velocity ranges of TOF-MS, grid and trajectory sensors can shift towards larger values depending on gain settings and some instrument adaptations.

| Dust parameter | PVDF | IID | TOF-MS | Grid | Traj.-Sens. | Plasma Wave |
| --- | --- | --- | --- | --- | --- | --- |
| Mass, kg (at 10 km/s) | \(>10^{-14}\) | \(>10^{-17}\) | \(10^{-18}\) to \(10^{-15}\) | \(>5\times 10^{-15}\) | \(>10^{-15}\) | \(>10^{-14}\) |
| Mass, kg (at 50 km/s) | \(>10^{-14}\) | \(>10^{-21}\) | \(10^{-22}\) to \(10^{-17}\) | \(>5\times 10^{-14}\) | \(>10^{-15}\) | \(>10^{-19}\) |
| Speed, km/s | \(1-100\) | \(2-100\) | \(2-100\) | \(2-50\) | \(2-50\) | \(4-100\) |
| FOV, sr | \(2\pi\) | \(1\pi\) | \(0.2\pi\) | \(1.5\pi\) | \(0.5\pi\) | \(4\pi\) |
| Sens. area, m\({}^{2}\) | variable, \(0.005-0.1\) | variable, \(0.05-0.1\) | variable, \(0.05-0.03\) | variable, \(0.005-0.1\) | variable, \(0.01-0.1\) | \((0.25-1)\) |
| Observables | \(E_{\rm kin}\) | \(v\), \(m\) | \(v\), \(m\), comp. | \(v\), \(m\), \(q\) | \(\vec{v}\), \(m\), \(q\), direc. | \(E_{\rm kin}\) |
| Advantages | low mass, low power, robust | reliable, nano-grain detection, high dynamic range | composition, high reliability, nano-grain detection | get \(v\), \(m\) and \(q\) with small errors | get \(\vec{v}\), \(m\), \(q\) with small errors | 2-in-1 instrument, large FOV |
| Disadvantages | calibration, not suited for inner solar system | errors in \(v\), \(m\) | complex instrument, HV needed, limited sens. area, limited FOV, errors in \(v\), \(m\) | sensitive to plasma, SNR for submicron grains | sensitive to plasma, many signal channels, power | sensitive to plasma, calibration, no composition, no directionality |
| Mass, kg | \(1-2.5\) | \(2-8\) | \(4-12\) | \(2-5\) | \(2-8\) | 37 (RPWS) |
| Power, W | \(1-2.5\) | \(3-7\) | \(5-15\) | \(3-6\) | \(5-10\) | 16 (RPWS) |
| Data vol., MB/d | \(<1\) | \(<1\) | \(1-100\) | \(1-2\) | \(1-100\) | \(2-1000\) |
| TRL | 9 | 9 | 9 | 9 | \(4-9\) | 9 |
| Cost, M€ | 5 | 5 | 10 | 5 | 6 | (12) |

Table 5: NMS, PLS, MAG and PUI specifications according to McNutt et al. (2021).

| | NMS | PLS | MAG | PUI |
| --- | --- | --- | --- | --- |
| Measurement range | 1-1000 amu (molecules) | \(<\)3 eV/e to 20 keV/e (\(\Delta\)E/E \(\leq\) 10%) | 0.01-100 nT (three components) | \(\sim\)0.5-78 keV/e |
| Power consumption | 11 W | 10 W | 5.7 W, including two survival heaters | 7 W |
| Mass | 10 kg | 8 kg | 0.6 kg for two fluxgates, 4.2 kg for 10-m boom | 5.5 kg |
| Volume | \(40\times 15\times 15\) cm\({}^{3}\) | – | – | – |

## 5 Conclusions

Many compelling science questions exist concerning the interaction of ISD (and IDP) with the heliosphere.
We highlight the synergies between the two sciences, and the tremendous progress we could make if a dedicated dust suite were to fly on an Interstellar Probe to measure dust properties together with the plasma, magnetic field, and pickup ions during its journey through all regions inside and outside of the heliosphere. The science yield would be increased even further by simultaneous measurements from other missions inside the solar system while ISP is on its journey. The science results may be crucial for understanding the physics and pressure balance of the heliosphere, and the pool of new dust measurements can be used as an extra boundary condition for heliosphere models to help reveal the time-dependent structure and size of the heliosphere. We describe the major advantages of the dust measurements on ISP, including being outside of the heliopause, where highly abundant nano-ISD resides, and flying at very high speeds against the flow of ISD - good for detecting dust. From a programmatic point of view, a mission like ISP with a dust detector is crucial, but there is also a need for optimized long-term monitoring of ISD dynamics parameters (and composition) with broad temporal and spatial coverage in the solar system. The currently existing dust and dust-heliosphere-related instruments each have their advantages and disadvantages for certain types of measurements to help answer the science questions currently posed. With these instruments, we can push forward the boundaries of our knowledge as described here. The topic of dust-heliosphere science is gaining a lot of traction in the community, and collaborations between the continents are important. The "new space" launcher industry is expected to allow for instruments with larger detection surfaces in optimized orbits. Finally, solving the science questions presented here will not only benefit dust science and heliosphere science; it will also foster broader synergistic cross-divisional science between heliophysics, astronomy, planetary science and astrobiology, addressing, for instance, the role of astrospheres in the habitability of planetary systems. Such cross-divisional science not only "crosses" the borders of divisions, but also augments science in each of them, thus meeting the exact definition of a true "synergy".

## Acknowledgements

Veerle J. Sterken, Lennart R. Baalmann and Silvan Hunziker received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement N\({}^{\circ}\) 851544 - ASTRODUST. This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. Konstantin Herbst acknowledges support from the German Research Foundation priority program SPP 1992 "Exploring the Diversity of Extrasolar Planets" through the project HE 8392/1-1. The authors would like to thank the anonymous referees for their helpful comments to improve the manuscript.

## Data Availability

Data in this paper can be made available upon request.
2305.18388
The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation
We study the problem of temporal-difference-based policy evaluation in reinforcement learning. In particular, we analyse the use of a distributional reinforcement learning algorithm, quantile temporal-difference learning (QTD), for this task. We reach the surprising conclusion that even if a practitioner has no interest in the return distribution beyond the mean, QTD (which learns predictions about the full distribution of returns) may offer performance superior to approaches such as classical TD learning, which predict only the mean return, even in the tabular setting.
Mark Rowland, Yunhao Tang, Clare Lyle, Rémi Munos, Marc G. Bellemare, Will Dabney
2023-05-28T10:52:46Z
http://arxiv.org/abs/2305.18388v1
# The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation

###### Abstract

We study the problem of temporal-difference-based policy evaluation in reinforcement learning. In particular, we analyse the use of a distributional reinforcement learning algorithm, quantile temporal-difference learning (QTD), for this task. We reach the surprising conclusion that even if a practitioner has no interest in the return distribution beyond the mean, QTD (which learns predictions about the full distribution of returns) may offer performance superior to approaches such as classical TD learning, which predict only the mean return, even in the tabular setting.

Machine Learning, Reinforcement Learning, Value Estimation

## 1 Introduction

Distributional approaches to reinforcement learning (RL) aim to learn the full probability distribution over the random returns an agent may encounter, rather than just the expectation of the random return (Morimura et al., 2010, 2013; Bellemare et al., 2017, 2023). These methods have seen recent empirical successes in domains such as stratospheric balloon navigation (Bellemare et al., 2020), simulated race car control (Wurman et al., 2022), and algorithm discovery (Fawzi et al., 2022), as well as forming a core component of many successful agents in common simulated reinforcement learning benchmarks (Bellemare et al., 2013; Machado et al., 2018; Bellemare et al., 2017; Dabney et al., 2018, 2019; Yang et al., 2019; Vieillard et al., 2020; Nguyen et al., 2021), often improving over agents that estimate only the expected return. Notably, the success of these distributional approaches has typically been observed in combination with deep neural networks, and it is commonly hypothesised that the benefits of the distributional approach stem from its interaction with non-linear function approximators such as deep neural networks (Bellemare et al., 2017; Imani and White, 2018; Dabney et al., 2021; Sun et al., 2022), rather than from statistical effects.

In this paper, however, we reach a surprising conclusion: even in the tabular setting, there are many scenarios where _quantile temporal-difference learning_ (QTD; Dabney et al., 2018), a distributional RL algorithm which aims to learn quantiles of the return distribution, can more accurately estimate the expected return than classical temporal-difference learning (TD; Sutton, 1984, 1988), which predicts only the expected return. To complement this core finding, we conduct a novel theoretical analysis to establish what kinds of value predictions QTD converges to, and crucially how this depends on the number of quantiles that the algorithm estimates. We also examine how both TD and QTD trade off between the variance of their updates and their expected progress towards their asymptotic predictions. We find that when estimating a sufficient number of quantiles, QTD is able to converge to value predictions close to the true value function \(V^{\pi}\), yet with individual updates that are guaranteed to be of bounded magnitude. These insights lead to several testable hypotheses, which we use to conduct a further empirical study to better characterise domains in which QTD offers superior performance to TD, and vice versa; we find several common trends. In particular, we find that in environments with significant stochasticity, QTD often performs better (in contrast, in (near-)deterministic environments, TD is clearly preferable), and that estimating a low number of quantiles may have adverse effects on the accuracy of QTD's predictions.
By investigating a variant of QTD, we also find evidence that estimation specifically of the return distribution may lead to useful variance-reduction properties. These findings have consequences for both theoreticians and practitioners. For the former, QTD represents a distinct fundamental approach to the problem of value prediction in RL, often with complementary performance to classical TD, and raises a range of open questions. For the latter, QTD for mean estimation can be considered as a plug-in alternative to TD, in tabular settings and beyond. ## 2 Background We consider a Markov decision process, specified by a finite state space \(\mathcal{X}\), action space \(\mathcal{A}\), joint transition probabilities \(P:\mathcal{X}\times\mathcal{A}\rightarrow\mathcal{P}(\mathcal{X}\times\mathbb{R})\) that specify for each \((x,a)\in\mathcal{X}\times\mathcal{A}\) a distribution over an immediate reward and next state, and a discount factor \(\gamma\in[0,1)\). When a policy \(\pi:\mathcal{X}\rightarrow\mathcal{P}(\mathcal{A})\) for selecting actions at each state is specified, each choice of an initial state \(x_{0}\in\mathcal{X}\) gives rise to a probability distribution over the trajectory \((X_{t},R_{t})_{t\geq 0}\), which we refer to as a Markov reward process (MRP). We introduce the notation \(\mathbb{P}_{x_{0}}^{\pi}\) for the distribution of the trajectory, and \(\mathbb{E}_{x_{0}}^{\pi}\) for the corresponding expectation operator. The discounted return (or simply, the return) obtained along the trajectory \((X_{t},R_{t})_{t\geq 0}\) is defined as \[\sum_{t\geq 0}\gamma^{t}R_{t}\,, \tag{1}\] and encodes the utility of the trajectory to an agent interacting with the environment; higher returns are better. The value function \(V^{\pi}:\mathcal{X}\rightarrow\mathbb{R}\) for a policy \(\pi\) is defined by \[V^{\pi}(x)=\mathbb{E}_{x}^{\pi}\left[\sum_{t\geq 0}\gamma^{t}R_{t}\right]\,,\] for each \(x\in\mathcal{X}\). That is, \(V^{\pi}(x)\) is the mean return encountered along trajectories beginning at the state \(x\). Estimating \(V^{\pi}\) from observed interactions with the environment is a fundamental problem in reinforcement learning, as it allows agents to both predict the effects of their actions, and to improve their policies. ### Temporal-difference learning Temporal-difference learning (TD; Sutton, 1984; 1988) is a family of algorithms that aim to learn an estimate of the value function \(V^{\pi}\). We focus here on the simplest variant, TD(0). This algorithm maintains an estimate \(V\) of the value function, and incrementally updates this estimate in response to experience in the environment. Specifically, on observing a transition \((x,r,x^{\prime})\) generated by \(\pi\), TD learning selects a learning rate \(\alpha\in(0,1]\), and performs the assignment \[V(x)\gets V(x)+\alpha(r+\gamma V(x^{\prime})-V(x))\,, \tag{2}\] to update \(V\). Under appropriate conditions, the estimate \(V\) converges to the true value function \(V^{\pi}\) with probability 1. The TD(0) update rule is a central method for tabular policy evaluation, and moreover, this update and its variants form a core component of many deep reinforcement learning agents, including value-based approaches such as DQN and its descendants (Mnih et al., 2015), as well as actor-critic approaches such as A3C and its descendants (Mnih et al., 2016). For conciseness, throughout the paper we use "TD" to refer to this classical temporal-difference learning algorithm. 
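To make the update rule concrete, here is a minimal sketch of tabular TD(0) in Python; the state encoding, learning rate, and example transition are our own illustrative assumptions rather than anything specified in the paper:

```python
import numpy as np

def td0_update(V, x, r, x_next, alpha, gamma):
    """One tabular TD(0) update, as in Equation (2)."""
    td_error = r + gamma * V[x_next] - V[x]  # temporal-difference error
    V[x] += alpha * td_error
    return V

# Hypothetical usage on a 3-state MRP with states {0, 1, 2}.
V = np.zeros(3)
V = td0_update(V, x=0, r=1.0, x_next=2, alpha=0.1, gamma=0.9)
```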
### Quantile temporal-difference learning In contrast to TD learning, which maintains an estimate of the expected return at each state, distributional RL algorithms (Bellemare et al., 2023) aim to predict the full probability distribution of the random return in Equation (1). Quantile temporal-difference learning (QTD; Dabney et al., 2018), in particular, maintains a _collection_ of predictions at each state, denoted \(((\theta(x,i))_{i=1}^{m}:x\in\mathcal{X})\), and has formed a core component of many deep reinforcement learning agents (Dabney et al., 2018; Yang et al., 2019; Bodnar et al., 2020; Bellemare et al., 2020; Wurman et al., 2022; Fawzi et al., 2022). The intention, in contrast to having \(V(x)\) directly approximate the mean return from \(x\) in TD learning, is to have \(\theta(x,i)\) approximate the \(\tau_{i}\)-quantile of the distribution of the random return in Equation (1), with \(\tau_{i}=\frac{2i-1}{2m}\), for \(i=1,\ldots,m\); see Figure 1 for an illustration. We write QTD(\(m\)) to denote the instantiation of QTD with \(m\) quantiles. In analogy with TD learning, upon observing a transition \((x,r,x^{\prime})\) generated by \(\pi\), QTD updates all estimates \((\theta(x,i))_{i=1}^{m}\) at state \(x\) by selecting a learning rate \(\alpha\geq 0\), and performing the assignments \[\theta(x,i)\leftarrow\theta(x,i)+\alpha\Big{(}\tau_{i}-\frac{1}{m}\sum_{j=1}^{m}\mathbbm{1}\left[r+\gamma\theta(x^{\prime},j)-\theta(x,i)<0\right]\Big{)}\,, \tag{3}\] for all \(i=1,\ldots,m\). This algorithm differs from TD in a few important ways. First, each prediction \(\theta(x,i)\) is updated differently, due to the presence of the parameter \(\tau_{i}\) in the update. Second, the update depends only on the _sign_ (not magnitude) of the temporal-difference errors appearing in Equation (3), meaning that the update magnitude is bounded, in contrast to those of TD. The form of the update itself is motivated through the quantile regression loss (Koenker and Bassett, 1978; Koenker, 2005); see Rowland et al. (2023) and Bellemare et al. (2023) for further background and theory regarding QTD, and Appendix A for an overview and discussion of computational considerations. Figure 1: Left: Example return distribution at a state \(x\) in light blue, with QTD quantile predictions \((\theta(x,i))_{i=1}^{5}\) (\(m=5\)). Exact predictions correspond to the quantiles indicated on the CDF below. Right: Illustration of computation of the update to \(\theta(x,i)\) (from black to blue marker) upon observing a transition \((x,r,x^{\prime})\). ## 3 Quantile temporal-difference learning for mean-return estimation Quantile temporal-difference learning estimates the return distribution at a state \(x\) with the discrete distribution supported at the learnt quantile values \((\theta(x,i))_{i=1}^{m}\): \[\sum_{i=1}^{m}\frac{1}{m}\delta_{\theta(x,i)}\,;\] see Figure 1. A natural estimator for value at state \(x\) is therefore obtained by extracting the mean of this approximate distribution, by averaging these quantiles: \[\frac{1}{m}\sum_{i=1}^{m}\theta(x,i)\,. \tag{4}\] This is the estimator of value typically used in applications combining QTD with deep reinforcement learning. 
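In the same spirit as the TD(0) sketch above, the QTD(\(m\)) update in Equation (3) can be written compactly; the array layout and the example call are illustrative assumptions:

```python
import numpy as np

def qtd_update(theta, x, r, x_next, alpha, gamma):
    """One QTD(m) update, as in Equation (3), applied to all quantiles at state x."""
    m = theta.shape[1]
    tau = (2 * np.arange(1, m + 1) - 1) / (2 * m)  # tau_i = (2i - 1) / (2m)
    # Snapshot the bootstrap targets first; this plays the role of theta' in
    # Algorithm 1 below, keeping the update correct even when x_next == x.
    targets = r + gamma * theta[x_next]
    frac_below = np.mean(targets[None, :] < theta[x][:, None], axis=1)
    theta[x] = theta[x] + alpha * (tau - frac_below)
    return theta

# Hypothetical usage: 3 states, m = 5 quantiles.
theta = np.zeros((3, 5))
theta = qtd_update(theta, x=0, r=1.0, x_next=2, alpha=0.5, gamma=0.9)
```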
The approach of averaging certain quantile estimators to approximate the mean of a distribution in fact dates back to at least the work of Daniell (1920) and Mosteller (1946), with Gastwirth (1966) observing that this approach to estimation provides competitive relative efficiency across a wide variety of distributions, including those with heavy tails, where the usual sample-average mean estimator can be inefficient. Thus, although not originally designed with this connection in mind, QTD naturally combines this approach to mean estimation with the notion of bootstrapping in reinforcement learning. Given the motivation above, we might conjecture that QTD provides an approach to value estimation that is effective across a wide range of environments, particularly those with heavy-tailed reward distributions. For concreteness, the QTD algorithm for value estimation is presented in Algorithm 1. The additional variables \(\theta^{\prime}\) are used to avoid issues when \(x_{t}^{\prime}=x_{t}\); in such cases, they ensure that the for-loop over the quantile index \(i\) can be performed in any order (or in parallel) without affecting the result of the algorithm. ``` 0: Initial quantile estimates \(((\theta(x,i))_{i=1}^{m}:x\in\mathcal{X})\), learning rate \(\alpha\), number of updates \(T\). 1:for\(t=1,\ldots,T\)do 2: Observe transition \((x_{t},r_{t},x_{t}^{\prime})\). 3:for\(i=1,\ldots,m\)do 4: Set \(\theta^{\prime}(x_{t},i)\leftarrow\theta(x_{t},i)+\alpha\Big{(}\tau_{i}-\frac{1}{m}\sum_{j=1}^{m}\mathbbm{1}\left[r_{t}+\gamma\theta(x_{t}^{\prime},j)-\theta(x_{t},i)<0\right]\Big{)}\) 5:endfor 6: Set \(\theta(x_{t},i)\leftarrow\theta^{\prime}(x_{t},i)\) for \(i=1,\ldots,m\) 7:endfor 8:return\((\frac{1}{m}\sum_{i=1}^{m}\theta(x,i):x\in\mathcal{X})\) ``` **Algorithm 1** QTD(\(m\)) for value estimation. As an initial comparison between TD and QTD for value estimation, we compare mean-squared error on a suite of nine simple MRPs. Full details for replication are provided in Appendix C, with crucial details for the comparisons given here. The structure of the MRPs is given by the Cartesian product of three levels of stochasticity in transition structure: * Deterministic cycle structure; * Sparse stochastic transition structure (sampled from a Garnet distribution; Archibald et al., 1995); * Dense stochastic transition structure (sampled from Dirichlet\((1,\ldots,1)\) distributions); together with three levels of stochasticity in reward structure: * Deterministic rewards; * Gaussian (variance 1) rewards; * Exponentially distributed (rate 1) rewards. We focus on the use of constant learning rates throughout training, as is commonly the case in practice, and sweep across a variety of learning rates for both methods. We run both TD and QTD (using 128 quantiles) with a variety of learning rates, and measure the mean-squared error to the true value function after 1,000 updates via online interaction with the environments. The results of the sweep over learning rates are displayed in Figure 2; in this experiment and all that follow, each run was repeated 1,000 times, and the (narrow) confidence bands displayed are obtained via a measurement of \(\pm 2\) times the empirical standard error. As expected, in the environments with the heaviest-tailed rewards, QTD obtains a lower mean-squared error than TD. Interestingly, this is also the case in environments with stochasticity only in the transition dynamics, and deterministic rewards. 
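A compact end-to-end sketch of a comparison in this style, on a single hypothetical 5-state MRP with dense Dirichlet transitions and Gaussian rewards; the MRP, learning rates, and horizon below are illustrative stand-ins for the full Appendix C protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, m, gamma, T = 5, 32, 0.9, 1000
P = rng.dirichlet(np.ones(n_states), size=n_states)  # dense stochastic transitions
r_mean = rng.normal(size=n_states)                   # mean reward per state

# True value function: V = (I - gamma * P)^{-1} r_mean.
V_true = np.linalg.solve(np.eye(n_states) - gamma * P, r_mean)

tau = (2 * np.arange(1, m + 1) - 1) / (2 * m)
V_td, theta = np.zeros(n_states), np.zeros((n_states, m))
alpha_td, alpha_qtd = 0.05, 0.5   # illustrative constant learning rates
x = 0
for _ in range(T):
    x_next = rng.choice(n_states, p=P[x])
    r = rng.normal(loc=r_mean[x])                    # Gaussian (variance 1) reward
    V_td[x] += alpha_td * (r + gamma * V_td[x_next] - V_td[x])
    targets = r + gamma * theta[x_next]              # snapshot, as in Algorithm 1
    theta[x] += alpha_qtd * (tau - np.mean(targets[None, :] < theta[x][:, None], axis=1))
    x = x_next

print("TD  MSE:", np.mean((V_td - V_true) ** 2))
print("QTD MSE:", np.mean((theta.mean(axis=1) - V_true) ** 2))
```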
To more easily visualise the extent of these improvements, and to check the robustness of this improvement to the number of updates undertaken by the algorithms, we plot the optimal MSE obtained by QTD as a proportion of that obtained by TD in Figure 3, as a function of the number of updates completed by each algorithm. This preliminary experiment has already yielded a perhaps surprising conclusion: _even in the tabular setting, QTD can provide more accurate estimates of expected returns than classical TD across a range of stochastic environments._ In addition to obtaining superior performance relative to TD when optimising over learning rates, Figure 2 also indicates that performance degradation due to a larger-than-optimal learning rate is considerably less severe with QTD than with TD in these environments. Importantly, however, we also note that for the deterministic environment in the suite, the performance of TD is far superior to QTD. In this case, the TD algorithm is able to very accurately approximate the value function, since the update in Equation (2) is essentially implementing exact asynchronous dynamic programming when \(\alpha=1\). The results above have shown that in some sense, QTD has a complementary performance profile to TD, viewed as algorithms for value estimation, and that the stochasticity of the environment is one important factor in determining the relative performance of QTD and TD. What else can be said about the performance of QTD in comparison with TD? We address this question in two ways. First, we develop the theory of QTD for mean estimation in Section 4, establishing asymptotic guarantees on the quality of the value predictions that the algorithm makes. Second, we conduct further empirical investigations in Section 5, aiming to develop a more nuanced understanding of the relative performance of QTD and TD in practice. ## 4 Theoretical analysis For the TD learning update rule in Section 2.1, it is known that under mild conditions on the reward distributions of the MRP, the learning rates, and the frequency with which each state \(x\) is updated, the predictions \(V\) converge to \(V^{\pi}\) with probability 1 (Watkins, 1989; Watkins and Dayan, 1992; Dayan, 1992; Dayan and Sejnowski, 1994; Tsitsiklis, 1994; Jaakkola et al., 1994; Bertsekas and Tsitsiklis, 1996). This core convergence result justifies the use of TD for policy evaluation. Rowland et al. (2023) show that under even milder conditions, QTD(\(m\)) also converges with probability 1. However, in general, the estimate of expected returns \(V_{m}^{\text{QTD}}\) extracted from a point of convergence \(\theta_{m}^{\text{QTD}}\) of QTD via Equation (4) is _not_ exactly equal to \(V^{\pi}\). Intuitively, this stems from the fact that the algorithm is not estimating the mean of the return distribution directly, but rather a finite collection of quantiles, and this information is insufficient to exactly reconstruct the mean of the distribution in question. 
Based on this observation, we bound the expected error of the (random) value estimator \(\hat{V}\) obtained from Algorithm 1 as follows: \[\mathbb{E}[\|\hat{V}-V^{\pi}\|]\leq\underbrace{\mathbb{E}[\|\hat{V}-V_{m}^{ \text{QTD}}\|]}_{\text{Finite-sample error}}+\underbrace{\|V_{m}^{\text{QTD}}-V ^{\pi}\|}_{\text{Fixed-point error}}\,. \tag{5}\] The precise norm is unimportant here; the main aim is to highlight the role played by fixed-point error and finite-sample error in the overall error incurred by the QTD value estimator. We now compare QTD and TD with each of these terms in mind, beginning with fixed-point error. Figure 3: Improvement of QTD(128) over TD in mean-squared error against number of updates for all environments in Figure 2. Figure 2: Mean-squared error against learning rate for TD (black) and QTD(128) (blue), on environments with Dirichlet transition structure (top), Garnet transition structure (middle), and deterministic cycle structure (bottom), and deterministic rewards (left), Gaussian rewards (centre), and exponential rewards (right). ### Fixed-point error As noted above, TD incurs zero fixed-point error, as its point of convergence is precisely \(V^{\pi}\). However, this is generally not true of QTD. Nevertheless, it is possible to bound the fixed-point error of QTD as a function of the number of quantiles estimated by QTD in many cases. The following result is a straightforward consequence of the fixed-point analysis of Rowland et al. (2023). Proofs of results stated in the main paper are provided in Appendix B. **Proposition 4.1**.: For an MRP with all reward distributions supported on \([R_{\text{min}},R_{\text{max}}]\), any convergence point \(\theta_{m}^{\text{QTD}}\) of QTD(\(m\)) with corresponding value function estimate \(V_{m}^{\text{QTD}}=(\frac{1}{m}\sum_{i=1}^{m}\theta_{m}^{\text{QTD}}(x,i):x \in\mathcal{X})\) satisfies \[\|V_{m}^{\text{QTD}}-V^{\pi}\|_{\infty}\leq\frac{R_{\text{max}}-R_{\text{min}} }{2m(1-\gamma)^{2}}\,.\] This guarantees that in the case of bounded reward distributions, we can ensure that the fixed points of QTD provide arbitrarily accurate value function estimates, as long as \(m\) is taken to be sufficiently large relative to the scale of the support of the reward distributions. **Remark 4.2**.: The form of this approximation error is easily interpreted; for a general distribution supported on \([R_{\text{min}}/(1-\gamma),R_{\text{max}}/(1-\gamma)]\) (as the return distribution at \(x\) is under the conditions of Proposition 4.1), with mean \(\mu\) and quantile function \(F^{-1}\), we have \[\mu=\int_{0}^{1}F^{-1}(\tau)\;\mathrm{d}\tau\approx\sum_{i=1}^{m}\frac{1}{m}F ^{-1}\Big{(}\frac{2i-1}{2m}\Big{)}\,. \tag{6}\] That is, estimating the mean with a finite number of quantiles can be understood as a midpoint-quadrature-based approximation to the true mean. From this point of view, a linear dependence of the error on the range \((R_{\text{max}}-R_{\text{min}})/(1-\gamma)\) of the integrand in Equation (6), and a dependence \(1/m\) on the number of quadrature points \(m\) are to be expected. The additional factor of \((1-\gamma)^{-1}\) in the bound stems from the fact that the estimate is obtained from the fixed point of a bootstrapping procedure, in which errors accumulate at each stage. Since only a finite number \(m\) of quantiles are estimated at each state, the remaining information about the return distributions is thrown away, and this results in an accumulation of error each time the update in Equation (3) is applied. 
This bears a relationship to the notion of Bellman closedness in distributional RL (Rowland et al., 2019; Bellemare et al., 2023), and is analogous to the compounding of error under linear function approximation (Tsitsiklis and Van Roy, 1997). We now develop this analysis further, obtaining results for environments with unbounded rewards. We state a bound for the important case of sub-Gaussian rewards below, which follows as a consequence of a much more general bound given by Proposition B.1. **Proposition 4.3**.: Consider an MRP with all reward distributions having means in \([R_{\text{min}},R_{\text{max}}]\), and all sub-Gaussian with parameter \(\sigma^{2}\), so that \(\mathbb{E}_{x}^{\pi}[\exp(\lambda(R-\mathbb{E}_{x}^{\pi}[R]))]\leq\exp(\lambda^{2}\sigma^{2}/2)\), for all \(\lambda\in\mathbb{R}\) and \(x\in\mathcal{X}\). Then for the value function estimate \(V_{m}^{\text{QTD}}\) obtained from any convergence point \(\theta_{m}^{\text{QTD}}\) of QTD(\(m\)) via Equation (4), we have \[\|V_{m}^{\text{QTD}}-V^{\pi}\|_{\infty}\leq\frac{1}{(1-\gamma)m}\left(\frac{R_{\text{max}}-R_{\text{min}}+2\sigma\sqrt{2\log(2m)}}{2(1-\gamma)}+\frac{\sigma}{\sqrt{2\log(2m)}}\right)\,.\] We also state a non-quantitative result applicable to any MDP for which the problem of mean return estimation is well defined. **Proposition 4.4**.: Consider an MDP with all reward distributions having finite mean. Then for the value function estimate \(V_{m}^{\text{QTD}}\) obtained from any convergence point \(\theta_{m}^{\text{QTD}}\) of QTD(\(m\)) via Equation (4), we have \(\|V_{m}^{\text{QTD}}-V^{\pi}\|_{\infty}\to 0\) as \(m\to\infty\). This analysis shows that even with unbounded reward distributions, the approximation error of the fixed points of QTD can still be made arbitrarily small by increasing \(m\), with a slightly slower rate (relative to the bounded-reward case) of \(O(m^{-1}\sqrt{\log(m)}(1-\gamma)^{-2})\) in the case of sub-Gaussian rewards; in general, the heavier the tails of the reward distributions, the slower the convergence may be. ### Expected updates and variance The analysis of the previous section shows that QTD (with a large enough number of quantiles) incurs low fixed-point error, but does not suggest how its finite-sample performance may compare to that of TD, and specifically in which kinds of environments it may outperform TD. To make progress on this question, we return to the other term in Inequality (5), and in particular consider how the updates of TD and QTD contribute to this quantity. We begin by considering the updates of TD. **TD update decomposition.** The right-hand side of the TD learning update in Equation (2) can be rewritten as \[V(x)+\alpha(r+\gamma V(x^{\prime})-V(x))=(1-\alpha)V(x)+\alpha\big{(}\underbrace{(T^{\pi}V)(x)}_{\text{Expected update}}+\underbrace{(r+\gamma V(x^{\prime})-(T^{\pi}V)(x))}_{\text{Mean-zero noise}}\big{)}\,,\] where \((T^{\pi}V)(x)=\mathbb{E}_{x}^{\pi}[R_{0}+\gamma V(X_{1})]\) is the classical dynamic programming operator. This decomposition is central to the analyses of TD cited above, and highlights that the learning rate \(\alpha\) balances two requirements: a large learning rate increases the expected update towards \((T^{\pi}V)(x)\), increasing the contraction towards the fixed point \(V^{\pi}\) of \(T^{\pi}\), but also amplifies the mean-zero noise. 
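As a quick numeric aside, anticipating the bounded QTD increments discussed below: the following hedged sketch (all constants illustrative) contrasts the size of a single TD increment with a single QTD-style increment as the scale of the current estimates and rewards grows.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, m = 0.9, 32
tau = (2 * np.arange(1, m + 1) - 1) / (2 * m)

for scale in (1.0, 100.0):
    V = scale                        # scalar toy value estimate at x
    theta = scale * np.ones(m)       # toy quantile estimates at x and x'
    r = rng.normal(loc=scale)        # reward on the same scale, unit variance
    td_step = r + gamma * V - V      # TD increment: grows with the scale
    targets = r + gamma * theta
    qtd_step = tau - np.mean(targets[None, :] < theta[:, None], axis=1)
    print(scale, abs(td_step), np.max(np.abs(qtd_step)))  # QTD increment <= 1
```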
Note also that the magnitude of the noise is potentially unbounded (if there are unbounded rewards, or if the value estimate \(V\) grows large), and that the distance of the expected update \((T^{\pi}V)(x)\) also grows in magnitude with \(V\). The key to obtaining good performance, and low finite-update error, from TD is therefore selecting a learning rate that balances the tension between these two considerations. These links between temporal-difference learning and dynamic programming are well understood (see e.g. Jaakkola et al. (1994); Tsitsiklis (1994); Bertsekas and Tsitsiklis (1996)), and this specific trade-off has been previously quantified under a variety of formalisms; see the work of Kearns and Singh (2000) for the _phased_ setting, and Even-Dar and Mansour (2003) for the synchronous and online settings. **QTD update decomposition.** In analogy, we can also decompose the QTD update in Equation (3) into an expected update and mean-zero noise; this approach is central to the convergence analysis of Rowland et al. (2023). In particular, the right-hand side of Equation (3) can be decomposed as follows: \[\alpha\Big{(}\tau_{i}-\frac{1}{m}\sum_{j=1}^{m}\mathbbm{1}[r+\gamma\theta(x^{\prime},j)<\theta(x,i)]\Big{)}=\alpha\Big{(}\underbrace{\tau_{i}-\mathbb{P}_{x}^{\pi}(\Delta_{iJ}(x,R_{0},X_{1})<0)}_{\text{Expected update}}+\underbrace{\mathbb{P}_{x}^{\pi}(\Delta_{iJ}(x,R_{0},X_{1})<0)-\frac{1}{m}\sum_{j=1}^{m}\mathbbm{1}[\Delta_{ij}(x,r,x^{\prime})<0]}_{\text{Mean-zero noise}}\Big{)}\,,\] where we write \(\Delta_{ij}(x,r,x^{\prime})=r+\gamma\theta(x^{\prime},j)-\theta(x,i)\), and \(J\) denotes an independent uniformly random index in \(\{1,\ldots,m\}\). Rowland et al. (2023) show that following these expected updates leads to the points of convergence for QTD, and we therefore have a similar tension to that described for TD, between an expected update that moves us towards the points of convergence, and noise that may perturb this progress. A central distinction between this decomposition for TD, and for QTD, is that in QTD both expected update and noise are bounded by 1, in stark contrast to the potentially unbounded terms in the TD update, which may grow in proportion with the value function norm \(\|V\|_{\infty}\). This suggests that QTD may tolerate higher step sizes than TD in stochastic environments, and also that as the level of stochasticity increases, due to higher-variance/heavier-tailed rewards, the performance of QTD may be more resilient than that of TD. Conversely, in near-deterministic environments, since the expected update magnitude of QTD is effectively independent of the magnitude of the update error, we may expect poorer performance than TD, which is able to make updates in proportion to the level of error. A summary of the comparison points highlighted between TD and QTD in this section is given in Table 1. ## 5 Further empirical analysis The theoretical analysis in the previous section has elucidated several salient differences between TD and QTD as policy evaluation algorithms; we now seek to compare these methods empirically, and test the predictions made in light of the analysis in the earlier sections. ### Heavy-tailed rewards As alluded to above, the sensitivity of TD updates to the magnitude of prediction errors makes it difficult to average out heavy-tailed noise, and we hypothesise that in cases of extremely heavy-tailed noise, QTD should strongly outperform TD. To this end, we extend the example environments from Figure 2, with \(t_{2}\)-distributed rewards; these are exceptionally heavy-tailed rewards, with infinite variance. 
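To reproduce this setting in an experimental sketch like the one above, only the reward sampler needs to change; a hypothetical drop-in replacement for the Gaussian sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_t2_reward(x, r_mean, rng=rng):
    """Student-t reward with 2 degrees of freedom: finite mean, infinite variance."""
    return r_mean[x] + rng.standard_t(df=2)
```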
The results of QTD and TD in these environments are displayed in Figure 4, with QTD providing substantial improvements in MSE. A plot of MSE against learning rates is provided in Appendix D.2. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline & Fixed-point & Update & Expected \\ & bias & variance & update magnitude \\ \hline TD & 0 & Unbounded* & \(\propto\) Bellman error \\ QTD & \(\widetilde{\mathcal{O}}(1/m)^{**}\) & \(\mathcal{O}(1)\) & \(\mathcal{O}(1)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Trade-offs made by TD and QTD along various axes. *In general, TD update variance may be unbounded, though there are certain situations in which it is not; see text for further discussion. **For sub-Gaussian reward distributions. \(\widetilde{\mathcal{O}}\) denotes the possible dropping of polylog factors in \(m\). Figure 4: Relative improvement of QTD(128) over TD in mean-squared error against number of updates for all transition structures in Figure 2, with \(t_{2}\)-distributed rewards. ### Low numbers of quantiles Propositions 4.1, 4.3, and 4.4 suggest that for low numbers of quantiles \(m\), the fixed-point bias of QTD may dominate the error decomposition described above, meaning that it may be outperformed by TD in certain environments. Figure 5 illustrates such a case, under the same experimental set-up as earlier in the section; MSE is poor with low values of \(m\) for all learning rates, and comparable performance to TD is recovered by increasing \(m\). What makes QTD(\(m\)) with \(m\) low behave so poorly in this example? It is precisely due to the fact that the average of a low number of quantiles in this domain is quite different from the mean, as mentioned in Remark 4.2. This serves to illustrate cases where large numbers of quantiles are necessary for accurate predictions, as the theory in Section 4 suggests. We also include results on the main suite of environments for QTD(1) and QTD(16) in Appendix D.3, for comparison with the results obtained for QTD(128) above. QTD(1) is outperformed by TD in several environments, as the experiment in Figure 5 suggests may be the case. On the other hand, the performance of QTD(16) is broadly in line with that of QTD(128); speaking pragmatically, we have found that using on the order of tens of quantiles is generally sufficient in practice. ### Varying reward scales Given our previous observations that TD outperforms QTD in deterministic environments, and that QTD tends to outperform TD in environments with significant stochasticity, we run an additional comparison to investigate the levels of stochasticity required to see benefits from QTD. In Figure 2, we see advantages to QTD in all environments with stochastic transition structure, but a clear difference in performance in passing from the environment with cycle transition structure and Gaussian reward (centre-bottom) to the same transition structure with deterministic rewards (bottom-left). In Figure 6, we plot the relative performance of QTD(128) and TD in the environment with cycle transition structure, and Gaussian rewards of varying levels of standard deviation. The results show that at low levels of reward noise, the performance of TD is far superior to QTD, as in the purely deterministic case, with the relative performance of QTD improving monotonically as a function of the standard deviation of the reward noise. 
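The low-\(m\) bias discussed in Section 5.2 can also be seen directly, without any learning, by averaging exact quantiles of a skewed distribution; in this sketch the lognormal is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.stats import lognorm

dist = lognorm(s=1.0)  # a right-skewed distribution; its mean is exp(1/2) ~ 1.649
for m in (1, 4, 16, 128):
    tau = (2 * np.arange(1, m + 1) - 1) / (2 * m)
    bias = np.mean(dist.ppf(tau)) - dist.mean()
    print(m, bias)  # the quantile average underestimates the mean; bias shrinks with m
```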
Figure 5: An example environment (left) where low values of \(m\) induce particularly high fixed-point bias; learning rate vs. mean-squared error for QTD(\(m\)) with varying \(m\) (right). Figure 6: Performance of QTD(128) relative to TD, with optimal learning rates, on the environment with deterministic cycle transition structure, and Gaussian rewards of varying standard deviation, indicated in the legend. ### An ablation: Pseudo-quantile temporal-difference learning We have motivated QTD theoretically as an effective algorithm for tabular policy evaluation, and we have also seen this borne out empirically. We have described its contrasting performance profile to TD, and noted its properties of (i) bounded-magnitude updates, and (ii) controllable fixed-point error. Taking a step back, a natural question to ask is: are there further nuances to the particular form of the QTD update which make it an effective algorithm? To investigate this question further, in this section we study a new algorithm for tabular policy evaluation, _pseudo-quantile temporal-difference learning_ (PQTD), which uses the same form of quantile updates as QTD, though does _not_ aim to learn quantiles of the return distribution. Our goal is to understand the role played by these two components of QTD in forming an effective policy evaluation algorithm. In particular, motivated by Achab's (2020) study of the one-step random return \(R+\gamma V^{\pi}(X^{\prime})\) (see also Achab and Neu (2021), Achab et al. (2022), and Achab et al. (2023)), PQTD aims to learn the quantiles of the distribution of these random variables, rather than those of the usual return distribution. The approach is presented in Algorithm 2. The distinction from QTD is that the targets in the quantile regression update are constructed from the mean-return estimate at the next state, rather than from the quantile estimates themselves; the learnt quantile estimates therefore reflect only the randomness resulting from a single step of environment interaction. This is also motivated by the approach of two-hot encoded categorical value learning in recent deep RL applications (Schrittwieser et al., 2020; Hessel et al., 2021; Hafner et al., 2023), which can be interpreted as a one-step version of categorical distributional RL (Bellemare et al., 2017). The results of running PQTD on the same suite of environments as reported in Figure 2 are given in Figure 7, with improvements at optimised learning rates across a range of numbers of updates displayed in Figure 8. Overall, similar behaviour is observed with PQTD as with QTD: larger learning rates than for TD are preferred, and the approach tends to work best in the presence of high environment stochasticity. However, the level of performance obtained is generally somewhat worse than QTD, and worse than TD in several stochastic environments too; this discrepancy provides a useful opportunity to understand the success of QTD better. In particular, considering the cycle environment with Gaussian reward noise, the fixed-point bias for both QTD and PQTD is in fact zero in this case; for readers familiar with distributional dynamic programming (Bellemare et al., 2023), intuitively this follows from symmetry of the one-step target distributions, meaning that the average of the learnt quantiles is equal to the mean. Our earlier decomposition of the error therefore suggests that the discrepancy between QTD and PQTD must arise from finite-sample error, and points to differences in the update variance between the algorithms. 
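As with the QTD sketch earlier, the PQTD update described in Algorithm 2 admits a short reconstruction from the description above; this is our hedged sketch, not the paper's code:

```python
import numpy as np

def pqtd_update(theta, x, r, x_next, alpha, gamma):
    """One PQTD(m) update: quantile-regression step towards the single one-step
    target r + gamma * mean(theta(x_next, .))."""
    m = theta.shape[1]
    tau = (2 * np.arange(1, m + 1) - 1) / (2 * m)
    target = r + gamma * np.mean(theta[x_next])  # scalar mean-return bootstrap
    # The indicator is 0 or 1, so each increment is alpha*tau_i or alpha*(tau_i - 1).
    theta[x] = theta[x] + alpha * (tau - (target < theta[x]).astype(float))
    return theta
```

Comparing the empirical spread of these increments with that of the QTD increments above gives one direct way to probe the variance difference just suggested.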
Our empirical observations concur with this conjecture, with PQTD updates often having variance several times greater than those of QTD around the points of convergence. The form of the update for both QTD and PQTD is also informative; the term multiplying the learning rate in Algorithm 2 can take on only the values \(\tau_{i}\) or \(\tau_{i}-1\), whereas the averaging that occurs in the corresponding QTD update allows for significantly lower-magnitude updates, and hence potentially lower variance. This finding suggests that a strength of QTD for value estimation is not only its bounded-magnitude updates, but the fact that the variance of these updates is often significantly better than the bounds alone suggest, and that specifically learning the distribution of the full return can have beneficial variance-reduction properties in temporal-difference learning. Figure 7: Mean-squared error against learning rate for TD (black) and PQTD(128) (blue). Figure 8: Relative improvement in MSE of PQTD over TD, with numbers of updates ranging from 0 to 10,000. ## 6 Related work **Mean estimation with quantiles.** The approach of estimating a location parameter by averaging quantiles dates back at least to Daniell (1920), who investigated the non-uniform averaging of order statistics to estimate a one-dimensional location parameter. Mosteller (1946) developed this line of work further, investigating the statistical properties of averages of quantile estimates in greater detail. Interestingly, several proposals for which quantile levels should be averaged were made in this work, including the levels \(\tau_{i}=\frac{2i-1}{2m}\) used by QTD, though without theoretical justification. Gastwirth (1966) also studied the efficiency of a mean estimator based on averaging of three specific quantiles for symmetric distributions with varying levels of heavy-tailedness. Huber (1964) proposed using smoothed versions of quantile losses for location estimation. See also Andrews et al. (1972) for a broader review of robust approaches to location estimation. Online estimation of quantiles via incremental algorithms also has a long history; quantile estimation (in the supervised learning setting) is one of the examples provided by Robbins and Monro (1951) in their work introducing the field of stochastic approximation. **Deep quantile temporal-difference learning.** In addition to the original QTD algorithm (Dabney et al., 2018), recent theoretical developments (Lheritier and Bondoux, 2022; Rowland et al., 2023), and extensions in the context of deep reinforcement learning (Dabney et al., 2018; Yang et al., 2019), several architectural innovations specifically exploiting neural network function approximation have been proposed to avoid the quantile-crossing problem when combining QTD with neural function approximation (Zhou et al., 2020; Luo et al., 2021; Theate et al., 2021). **Distributional reinforcement learning algorithms.** In this paper, we have focused on quantile temporal-difference learning, a particular instance of a distributional reinforcement learning algorithm. Other distributional reinforcement learning algorithms include categorical temporal-difference learning (CTD; Bellemare et al., 2017; Rowland et al., 2018), maximum-mean discrepancy-based methods (Nguyen et al., 2021), methods using distributional representations based on mixtures of Gaussians (Barth-Maron et al., 2018), and methods using Sinkhorn divergences (Sun et al., 2022). 
It is interesting to contrast the finding that QTD is a strong algorithm for tabular policy evaluation, with properties that _complement_ those of TD, with prior findings relating to CTD (Rowland et al., 2018; Lyle et al., 2019; Bellemare et al., 2023). In contrast to QTD, this prior work showed that in many circumstances, CTD behaves _identically_ to TD for mean estimation, and so offers no additional benefit, or complementary profile of performance. **Robust approaches to TD learning and optimisation.** A variety of approaches to robust and regularised variants of TD learning have been considered previously (Bossaerts et al., 2020; Lu and Giannakis, 2021; Meyer, 2021; Klima et al., 2019; Ghiassian et al., 2020; Liu et al., 2012; Manek and Kolter, 2022). Bounded updates naturally arise from the QTD learning algorithm; bounded gradients are also commonly encountered in deep learning as a heuristic approach to stabilising optimisation through clipping (Mikolov, 2012; Pascanu et al., 2013), as well as in fundamental optimisation algorithms (Riedmiller and Braun, 1993). ## 7 Conclusion In this paper, we have shown that QTD can be viewed as a fundamental algorithm for policy evaluation, with complementary properties to the classical approach to temporal-difference learning. The theoretical and empirical analysis, as well as the introduction and study of the related algorithm PQTD, has given indications as to which kinds of environments we might expect one approach to improve over the other. We emphasise that these findings are of course not exhaustive, and we expect there to be significant value in further empirical investigation of QTD as a tabular policy evaluation algorithm, as well as analysis of variants incorporating aspects such as multi-step returns, off-policy corrections, and function approximation, all of which interact in various ways with the complementary trade-offs made by TD and QTD between fixed-point error, variance and expected update magnitude (White and White, 2016; Mahmood et al., 2017; Rowland et al., 2020). Precise finite-sample bounds on performance are also a natural direction for future work. These findings are also pertinent to the overarching questions as to where exactly the benefits of distributional RL stem from. Common hypotheses have often focused on the interaction between distributional predictions and non-linear function approximation, with mechanisms such as improved representation learning, prevention of rank collapse, and improved loss landscapes being proposed. This work highlights that even in risk-neutral tabular settings, there are benefits to taking a distributional approach to reinforcement learning, and opens up several directions of research to understand the role of distributional RL as a core technique in reinforcement learning. Historically, distributional RL algorithms have often been evaluated in (near-)deterministic environments; this paper also supports the idea that by evaluating algorithms on a wider range of environments, we may obtain a more nuanced view of the strengths and weaknesses of the algorithms at play. Above all, this paper aims to show that distributional reinforcement learning has a fundamental role in developing algorithms that complement our existing approaches to core tasks such as policy evaluation; to estimate the mean, it can pay to estimate the full distribution. ## Acknowledgements We thank David Abel for detailed comments on an earlier draft, and the reviewers & area chair for their helpful comments on the paper. 
The experiments in this paper were undertaken using the Python 3 language, and made use of the NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), and Matplotlib (Hunter, 2007) libraries.
2306.01389
Spectral gaps and Fourier dimension for self-conformal sets with overlaps
We prove a uniform spectral gap for complex transfer operators near the critical line associated to overlapping $C^2$ iterated function systems on the real line satisfying a Uniform Non-Integrability (UNI) condition. Our work extends that of Naud (2005) on spectral gaps for nonlinear Cantor sets to allow overlaps. The proof builds a new method to reduce the problem of the lack of Markov structure to average contraction of products of random Dolgopyat operators. This approach is inspired by a disintegration technique developed by Algom, the first author and Shmerkin in the study of normal numbers. As a consequence of the method of the second author and Stevens, our spectral gap result implies that the Fourier transform of any non-atomic self-conformal measure decays to zero at a polynomial rate for any $C^{2}$ iterated function system satisfying UNI. This latter result leads to Fractal Uncertainty Principles with arbitrary overlaps.
Simon Baker, Tuomas Sahlsten
2023-06-02T09:20:56Z
http://arxiv.org/abs/2306.01389v1
# Spectral gaps and Fourier dimension for self-conformal sets with overlaps ###### Abstract. We prove a uniform spectral gap for complex transfer operators near the critical line associated to _overlapping_ \(C^{2}\) iterated function systems on the real line satisfying a Uniform Non-Integrability (UNI) condition. Our work extends that of Naud (2005) on spectral gaps for nonlinear Cantor sets to allow overlaps. The proof builds a new method to reduce the problem of the lack of Markov structure to average contraction of products of _random_ Dolgopyat operators. This approach is inspired by a disintegration technique developed by Algom, the first author and Shmerkin in the study of normal numbers. As a consequence of the method of the second author and Stevens, our spectral gap result implies that the Fourier transform of any non-atomic self-conformal measure decays to zero at a polynomial rate for any \(C^{2}\) iterated function system satisfying UNI. This latter result leads to Fractal Uncertainty Principles with arbitrary overlaps. S.B. is supported by an EPSRC New Investigator Award (EP/W003880/1). T.S. is supported by the Academy of Finland via the project _Quantum chaos of large and many body systems_, grant Nos. 347365, 353738. (e.g. [29, 56, 7, 30]). These advances in overlapping IFSs have almost exclusively focused on linear systems of self-similar or self-affine type, and the case of truly non-linear systems remains elusive except for parametrised families using the transversality method [60]. This is because in the case of systems with non-linearity many of the methods in the linear case (e.g. using convolution methods) do not transfer over, and one has to restrict the class of maps, e.g. as in the work of Hochman and Solomyak [31]. The general \(C^{2}\) overlapping IFS theory is still largely unexplored. In this article we will make fundamental steps towards understanding the dynamics and geometry of overlapping \(C^{2}\) IFSs by proving a new spectral gap theorem for complex transfer operators associated to nonlinear \(C^{2}\) IFSs with arbitrary overlaps. Adapting ideas from the non-overlapping case, we then apply this new spectral gap theorem to prove new Fourier dimension bounds and Fractal Uncertainty Principles [22, 21, 9, 10] for systems with arbitrary overlaps. Our setting is an iterated function system \(\Phi=\{\varphi_{a}:a\in\mathbf{A}\}\) consisting of a finite number of \(C^{2}\) contractions on an interval \(I:=[0,1]\), see e.g. Figure 1. Then for a probability vector \((p_{a})_{a\in\mathbf{A}}\) and \(s=r+ib\in\mathbb{C}\), we associate a complex transfer operator \(\mathcal{L}_{s}:C^{1}(\mathbb{R})\to C^{1}(\mathbb{R})\) defined by \[\mathcal{L}_{s}f(x):=\sum_{a\in\mathbf{A}}p_{a}|\varphi_{a}^{\prime}(x)|^{s}f(\varphi_{a}(x)),\quad x\in\mathbb{R}. \tag{1.1}\] Such transfer operators arise naturally in the study of overlapping self-conformal measures of \(\Phi\), which are eigenmeasures \(\mu=\mathcal{L}_{0}\mu\). If the IFS \(\Phi\) is sufficiently separated (e.g. 
it satisfies the Strong Separation Condition) and the branches satisfy the _Uniform Non-Integrability_ (UNI) condition introduced by Chernov [13] and Dolgopyat [16], that is, there exists \(c_{1},c_{2}>0\) such that for all \(n\) sufficiently large, there exists \(\mathbf{a},\mathbf{b}\in\mathbf{A}^{n}\) such that the compositions \(\varphi_{\mathbf{a}}=\varphi_{a_{1}}\circ\cdots\circ\varphi_{a_{n}}\) and \(\varphi_{\mathbf{b}}=\varphi_{b_{1}}\circ\cdots\circ\varphi_{b_{n}}\) satisfy \[c_{1}\leq\left|\frac{\varphi_{\mathbf{a}}^{\prime\prime}(x)}{\varphi_{\mathbf{a}}^{\prime}(x)}-\frac{\varphi_{\mathbf{b}}^{\prime\prime}(x)}{\varphi_{\mathbf{b}}^{\prime}(x)}\right|\leq c_{2},\quad\text{for all }x\in K, \tag{1.2}\] then it goes back to the work of Naud [50] and Stoyanov [62], who adapted _Dolgopyat's method_ [16], that the transfer operators in (1.1) have a _spectral gap_ on \(C^{1,b}(\mathbb{R})\) for \(|b|\) large enough and \(|r|\) small enough. Here \(K\) is the attractor of \(\Phi\), i.e. the unique non-empty compact set satisfying \(K=\cup_{a\in\mathbf{A}}\varphi_{a}(K)\), and \(C^{1,b}(\mathbb{R})\) is the Banach space of \(C^{1}\) functions on \(\mathbb{R}\) with the norm \[\|f\|_{b}:=\|f\|_{\infty}+|b|^{-1}\|f^{\prime}\|_{\infty}.\] The spectral gap one obtains has useful applications to many problems, for instance scattering resonances [50] and exponential mixing of Anosov flows [62] such as the Teichmuller flow [5]. The UNI condition is satisfied by many examples of roof functions. Informally it is saying that the IFS is uniformly far from being a linear IFS. It is implied by the _total non-linearity_ of the inverse branches \(\varphi_{a}\), that is, \(\Phi\) not being \(C^{2}\) conjugated to a linear system, see [3, Claims 2.12, 2.13] and [4, Claim 2.2] for a proof. In the special case where the inverse branches are analytic, UNI is implied by \(\Phi\) not being conjugated to a self-similar iterated function system. The UNI condition can also be replaced with weaker conditions, such as the _non-local integrability property_ (NLI) [50] or the weaker _local non-integrability condition_ (LNIC) in higher dimensions [62]. If we introduce overlaps into the IFS \(\Phi\), the methods of Dolgopyat [16], Naud [50] and Stoyanov [62] that rely on the Markov partition do not apply. At the same time, most of the advances for overlapping IFSs have focused on linear IFSs, so it is unclear how the methods from linear IFSs would benefit the nonlinear case. When we have some true non-linearity in the system manifesting in the UNI condition, having spectral gaps for transfer operators could still be possible, but we need to overcome the overlapping structure with new ideas. In this work we indeed prove a uniform spectral gap for the operators (1.1) without any conditions on the overlaps: **Theorem 1.1**.: _Let \(\Phi=\{\varphi_{a}:a\in\mathbf{A}\}\) be a non-trivial uniformly contracting \(C^{2}\) iterated function system satisfying the UNI condition (1.2). 
Then there exists \(0<\varrho_{0}<1\) such that for \(s=r+ib\in\mathbb{C}\) with \(|r|\) sufficiently small and \(|b|\) sufficiently large, the operator \(\mathcal{L}_{s}\) satisfies for all \(n\in\mathbb{N}\) and \(f\in C^{1}(\mathbb{R})\):_ \[\|\mathcal{L}_{s}^{n}f\|_{b}\lesssim\varrho_{0}^{n}|b|^{1/2}\|f\|_{b}.\] _Thus there exists \(0<\delta<1\) such that for all \(|r|\) sufficiently small and \(|b|\) sufficiently large, the spectral radius satisfies_ \[\varrho(\mathcal{L}_{s})\leq 1-\delta.\] The novelty of this result comes from the way in which we overcome the arbitrary overlapping structure in the IFS. Our idea is to decompose the operator \(\mathcal{L}_{s}^{n}\) into a sum of _random_ compositions of transfer operators defined using a non-overlapping sub-IFS with nonlinearity. We then bound the norm of these compositions by using a family of _random_ Dolgopyat operators defined using the nonlinear sub-IFS. This decomposition idea is inspired by the work of Algom, the first author and Shmerkin [1], who disintegrated self-similar measures to study normal numbers. The way we build our decomposition suggests that our methods could be used in the study of IFSs that only contract on average [69, 68]. Furthermore, as the theory of \(C^{2}\) IFSs with overlaps is still in virgin territory, we believe the method we have built to prove Theorem 1.1 will provide an important step towards understanding the behaviour of \(C^{2}\) IFSs with overlaps. _Remark 1.2_.: We note that in the work [4] done simultaneously and independently of ours, Algom, Hertz and Wang also obtained a similar spectral gap theorem in the overlapping case, see [4, Theorem 2.8]. They used this spectral gap theorem to prove an exponential decay rate in a renewal theorem leading to a Fourier decay theorem similar to the one we have in the next section (Theorem 1.4). We do not apply Theorem 1.1 to prove a renewal theorem, but instead use it to prove a non-concentration estimate that with a sum-product bound leads to a Fourier decay theorem. Interestingly, Algom, Hertz and Wang need the full strength of their Theorem 2.8 to prove their Fourier decay theorem. They need to consider \(r<0\) to prove their renewal theorem, whereas to prove our non-concentration estimate we just need to consider the case where \(r=0\). _Remark 1.3_.: Theorem 1.1 is formulated in terms of Bernoulli measures and their pushforwards. It seems likely that both proofs could be generalised to cover Markov measures and their pushforwards. The main obstacle that would need to be overcome is understanding how the Markov structure would interact with the partition of our IFS (see Proposition 3.1 for details of this partition). In particular, given an element \(w\) in our partition, because we are now working with a Markov measure, the elements of the partition that may follow \(w\) will now depend upon \(w\). This in turn means that when we decompose our transfer operator as a sum of random transfer operators (see Lemma 3.2), we would have to be more careful about which random transfer operators are allowed. If there is an element of our partition that is allowed to follow itself and satisfies an appropriate version of the UNI condition, then this element will play the role of \(w^{*}\) in the statement of Proposition 3.1, and our arguments should still work with only minor changes. Furthermore, it is natural to wonder whether the proof of Theorem 1.1 could be adapted to cover Gibbs measures and their pushforwards as in [62]. 
The authors expect that this is possible. However, our method for decomposing the transfer operator does not work for these more general measures, and so further ideas are needed. ### Fourier decay in overlapping \(C^{2}\) IFSs Next, we want to move to an application of Theorem 1.1, and in particular to the Fourier transforms of fractal measures. The study of Fourier transforms of fractal measures and their high-frequency asymptotics was historically initiated by questions on uniqueness of trigonometric series, metric number theory, Fourier multipliers and maximal operators defined by fractal measures (see e.g. [36, 37] for a historical overview). There were works on weaker average decay for Fourier transforms of self-similar measures by R. Strichartz [63] and M. Tsujii [64], and various works on specific constructions such as Fourier transforms of Bernoulli convolutions originating in Erdos' work [23], random measure constructions e.g. using the Brownian motion [26, 27], and constructions in Diophantine approximation [40]. These works suggested that some form of pseudo-randomness of the underlying dynamical system should lead to the decay of the Fourier transform. This principle was verified in the article [39] by Jordan and the second author in the case of equilibrium states for the Gauss map where nonlinearity manifested in the distribution of the continuants of the continued fraction expansions, and then also in the subsequent article by Bourgain and Dyatlov [9] on limit sets of Fuchsian groups. This latter article was motivated by proving a Fractal Uncertainty Principle to gain new information on scattering resonances in quantum chaos. Various thermodynamic, renewal theoretic and additive combinatoric techniques have been built which enable a systematic study of Fourier transforms of fractal measures using e.g. the underlying nonlinearity of the system. Since [39, 9], there has been a surge of activity in this topic in dynamics, metric number theory and fractal geometry to characterise measures with Fourier decay, such as for self-similar and self-affine iterated function systems [36, 37, 61, 11, 67, 54], self-conformal systems [55, 2], hyperbolic dynamical systems [43, 44, 45, 70], fractal measures arising from random processes such as Brownian motion and Liouville quantum gravity [26, 27, 24, 57]. The method used in [55] by the second author and Stevens to prove polynomial Fourier decay for certain self-conformal measures in dimension \(1\) was based upon the thermodynamic formalism method introduced in [39] combined with a corollary of a sum-product theorem as in [9]. The required non-concentration assumption for the sum-product bound was verified using a spectral gap theorem for complex transfer operators due to Stoyanov [62]. Independently, Algom-Hertz-Wang [2] proved that the Rajchman property holds for self-conformal measures with weaker assumptions but without polynomial Fourier decay. Similar ideas have been generalised by Leclerc [45] to hyperbolic attractors. These works leave open the possibility of overlaps and how the result would work in higher dimensions. There is motivation to study such problems, especially due to the need to generalise Fractal Uncertainty Principles [9, 10] to more general fractals arising from Anosov flows in order to optimise essential spectral gap bounds in _variable_ negatively curved manifolds to study quantum scattering problems in this situation, see Section 1.3 for more discussion. 
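The quantity \(|\widehat{\mu}(\xi)|\) studied here is easy to probe empirically even in the overlapping nonlinear setting; the following Monte Carlo sketch uses a toy overlapping \(C^{2}\) IFS, with all numerical choices purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy overlapping nonlinear IFS on [0, 1] (illustrative choice): the two image
# intervals [0, 0.55] and [0.45, 0.9] overlap.
phis = [lambda x: 0.5 * x + 0.05 * x**2,
        lambda x: 0.45 + 0.5 * x - 0.05 * x**2]
p0 = 0.5  # weight of the first map

def sample_mu(n_points, depth=40):
    """Approximate samples from the self-conformal measure: push 0 through
    `depth` random maps (the order of composition does not matter in
    distribution, since the chosen indices are i.i.d.)."""
    xs = np.zeros(n_points)
    for _ in range(depth):
        use_first = rng.random(n_points) < p0
        xs = np.where(use_first, phis[0](xs), phis[1](xs))
    return xs

xs = sample_mu(200_000)
for xi in (10.0, 100.0, 1000.0):
    mu_hat = np.mean(np.exp(-2j * np.pi * xi * xs))
    # Monte Carlo noise is ~ n^{-1/2}, so very small values of |mu_hat| are
    # resolved only down to that floor.
    print(xi, abs(mu_hat))
```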
In the higher dimensional case there have to be restrictions on the non-concentration of the measures on hyperplanes, see e.g. self-affine systems [37] and the recent work by Khalil [41] on exponential mixing of the geodesic flow on a geometrically finite locally symmetric space of negative curvature with respect to the Bowen-Margulis-Sullivan measure, where Fourier decay results are studied under non-concentration on hyperplanes. In the overlapping _self-similar_ case, it is possible to obtain logarithmic Fourier decay [36, 11, 67], but the renewal theoretic method uses the Cauchy-Schwarz inequality in a way so that the non-concentration coming purely from derivatives is not strong enough to establish polynomial Fourier decay. Thanks to the spectral gap Theorem 1.1, we will handle what happens when there are overlaps for self-conformal measures in the \(C^{2}\) category: **Theorem 1.4**.: _Let \(\mu\) be a non-atomic self-conformal measure associated to a \(C^{2}\) iterated function system \(\Phi\) on \(\mathbb{R}\) satisfying the UNI condition (1.2). Then there exists \(\alpha>0\) such that_ \[|\widehat{\mu}(\xi)|\lesssim|\xi|^{-\alpha}\] _for all \(\xi\in\mathbb{R}\) with \(|\xi|>1\), where \(\widehat{\mu}(\xi):=\int e^{-2\pi i\xi x}\,d\mu(x)\)._ The way we approach bounding the Fourier transform of the measure \(\mu\) in Theorem 1.4 is based on iterating the self-conformal property of \(\mu\) so that we can write an upper bound for \(|\widehat{\mu}(\xi)|^{2}\) that consists of an exponential sum over the regular blocks of words plus an error term that is small due to the large deviation bounds. This part is fundamentally the same as in [55], which in turn was based upon combining the large deviation approach of Jordan and the second author [39] with the multiscale block decomposition by Bourgain and Dyatlov [9]. To then control this exponential sum, we use, as in [9, 55], a sum-product bound due to Bourgain [8] that requires us to check a non-concentration hypothesis for the derivatives of the IFS. The main novelty comes in the proof of this non-concentration property, which needs the new spectral gap Theorem 1.1 for the transfer operators \(\mathcal{L}_{s}\) with \(\operatorname{Re}(s)=0\) and \(\operatorname{Im}(s)=c\xi\) for a suitable constant \(c\in\mathbb{R}\). Thus we do not need information on \(\mathcal{L}_{s}\) outside of the critical line, and in fact here we only need an \(L^{\infty}\) norm bound for \(\mathcal{L}_{s}^{n}f\) instead of a bound with respect to the \(b\)-norm. We note however that the proof of the \(L^{\infty}\) norm bound at the critical line is the most non-trivial part of the proof of Theorem 1.1. ### Fractal Uncertainty Principles and overlaps Finally, motivated by the work of Bourgain and Dyatlov [9, 10], we discuss an application of Theorem 1.4 to _Fractal Uncertainty Principles_ in quantum chaos, in particular, providing new examples where Markov structure can be avoided. 
Fractal Uncertainty Principles (FUPs) are a recently developed tool in harmonic analysis, which states that no function can be localised in both position and frequency near a fractal set, or more precisely: we say sets \(X,Y\subset\mathbb{R}^{d}\) satisfy a _Fractal Uncertainty Principle_ at the scale \(h>0\) with exponent \(\beta>0\) and constant \(C>0\) if for all \(f\in L^{2}(\mathbb{R}^{d})\) we have \[\{\xi\in\mathbb{R}^{d}:\widehat{f}(\xi)\neq 0\}\subset h^{-1}Y\qquad\Rightarrow\qquad\|f\|_{L^{2}(X)}\leq Ch^{\beta}\|f\|_{L^{2}(\mathbb{R}^{d})},\] where \(\widehat{f}(\xi):=\int_{\mathbb{R}^{d}}e^{-2\pi ix\cdot\xi}f(x)\,dx\), \(\xi\in\mathbb{R}^{d}.\) When applied to \(h\)-neighbourhoods \(X\) and \(Y\) of fractals arising from hyperbolic dynamics, FUP has led to powerful applications in quantum chaos such as in bounding the essential spectral gaps and the \(L^{2}\) mass of eigenfunctions of the Laplacian in open sets, and new control and observability theorems of PDEs [18, 20]. By a result of Bourgain and Dyatlov [10], _porosity_ (or Ahlfors-David regularity) of the sets \(X\) and \(Y\) in an interval of scales \([h,1]\) is enough to establish _some_ exponent \(\beta>0\) in the FUP, but quantifications, especially for sets of dimension less than \(1/2\) where additive combinatorics methods are used (e.g. by Dyatlov and Zahl [21] and Cladek and Tao [14]), require more structure from the fractal such as nonlinearity or curvature assumptions [9]. If we consider systems _without_ porosity such as non-injective hyperbolic skew products with overlapping fibres [49] or parabolic systems [46] where holes may not appear uniformly at all scales, it would be interesting to see if FUP could be applied in such more general contexts. This could potentially have utility in quantum chaos related to such systems. In the following, we will consider FUP for sets \(X\) and \(Y\) that arise as neighbourhoods of fractals in \(\mathbb{R}^{d}\) potentially without any porosity, but instead satisfy a Fourier decay condition and a mild Frostman regularity condition that is still possible even with arbitrary overlaps. We say a measure \(\mu\) on \(\mathbb{R}^{d}\) is \((C^{-},\delta^{-},C^{+},\delta^{+},h)\)-_Frostman_ if: 1. For \(r\in[h,1]\) and \(x\in\mathbb{R}^{d}\) we have \(\mu(B(x,r))\leq C^{+}r^{\delta^{+}}\); 2. For \(r\in[h,1]\) and \(x\in\operatorname{spt}\mu\) we have \(\mu(B(x,r))\geq C^{-}r^{\delta^{-}}\). Here \(\operatorname{spt}\mu\) is the support of the measure \(\mu\). Note that all non-atomic self-conformal measures even with overlaps are \((C^{-},\delta^{-},C^{+},\delta^{+},h)\)-Frostman for all small enough \(h\), see e.g. [25, Proposition 2.2], which has the same proof in the self-conformal case using bounded distortions. The way Fourier decay connects to FUP can be observed in the following statement; it has a similar proof to that given in [9] in the special case of limit sets of Fuchsian groups, but we extend it to ensure only the weaker Frostman condition is applied. **Proposition 1.5**.: _For \(j=1,2\), suppose \(K_{j}=\operatorname{spt}\mu_{j}\subset\mathbb{R}^{d}\) are supports of \((C_{j}^{-},\delta_{j}^{-},C_{j}^{+},\delta_{j}^{+},h)\)-Frostman measures. Assume also that for some \(0<\alpha\leq\delta_{2}^{+}/2\) we have:_ \[|\widehat{\mu_{2}}(\xi)|\lesssim|\xi|^{-\alpha},\quad|\xi|\leq\operatorname{diam}(K_{2})h^{-1}.\] _Let \(X=K_{1}+B(0,h)\) and \(Y=K_{2}+B(0,h)\). 
_Then any \(f\in L^{2}(\mathbb{R}^{d})\) satisfies_

\[\{\xi\in\mathbb{R}^{d}:\widehat{f}(\xi)\neq 0\}\subset h^{-1}Y\qquad\Rightarrow\qquad\|f\|_{L^{2}(X)}\lesssim_{C_{1}^{-},C_{2}^{\pm},\delta_{1}^{-},\delta_{2}^{-}}h^{\frac{d}{2}-\frac{\delta_{1}^{-}}{2}-\frac{\delta_{2}^{-}}{2}+\frac{\alpha}{4}}\|f\|_{L^{2}(\mathbb{R}^{d})}.\]

We can now combine this with Theorem 1.4 to obtain a wide class of non-porous and overlapping fractals, such as overlapping self-conformal sets, satisfying FUP:

**Theorem 1.6**.: _Let \(K_{1},K_{2}\subset\mathbb{R}\) be any non-trivial self-conformal sets for \(C^{2}\) IFSs. Assume that the IFS associated to \(K_{2}\) satisfies the UNI condition (1.2). Then there exists \(\alpha>0\) depending only on the IFS associated to \(K_{2}\) such that FUP holds at the scale \(h>0\) for \(X=K_{1}+B(0,h)\) and \(Y=K_{2}+B(0,h)\) with \(\beta=\frac{1}{2}-\frac{\delta_{1}^{-}}{2}-\frac{\delta_{2}^{-}}{2}+\frac{\alpha}{4}\), where_

\[\delta_{j}^{-}=\max\{\overline{\dim_{\mathrm{H}}}\,\mu:\mu\text{ is a self-conformal measure on }K_{j}\},\]

_for \(\overline{\dim_{\mathrm{H}}}\,\mu=\operatorname{ess}\sup_{x\in\mathbb{R}}\limsup_{r\to 0}\log\mu(B(x,r))/\log r\)._

Most of the proof of the Fourier decay theorem (Theorem 1.4) that implies the Fractal Uncertainty Principle also applies in higher dimensions. In higher dimensions we would need a replacement for the sum-product bound, and a projective one by Li [33] would be natural here. However, proving the projective non-concentration for the derivatives would require an assumption on avoiding concentration on hyperplanes. For example, in higher dimensions FUP cannot hold even for all porous sets, as seen e.g. from the neighbourhoods of lines \(X_{h}=\mathbb{R}\times[-h,h]\) and \(Y_{h}=[-h,h]\times\mathbb{R}\). In two dimensions, it is possible to obtain a Fractal Uncertainty Principle by using the Fourier decay of the Patterson-Sullivan measure, as in the case of Fuchsian groups [9] and in the work of Li-Naud-Pan [35]; see also Leclerc's recent work [43] involving twisted transfer operators and bunched attractors [45]. Moreover, adapting Dolgopyat's method, which also lies at the heart of proving a spectral gap for complex transfer operators [50, 62], Backus-Leng-Tao [6] proved a Fractal Uncertainty Principle for limit sets of Kleinian groups in \(\mathbb{H}^{d}\) with exponent \(d/2-\dim_{\mathrm{H}}F+\varepsilon\), where \(\dim_{\mathrm{H}}F\) is the Hausdorff dimension of the limit set. This generalised the 1D approach of Dyatlov and Jin [19], who obtained a similar result using Dolgopyat's method. A generalisation of Theorem 1.6 to higher dimensions applied to subshifts of finite type would allow for a higher dimensional Fractal Uncertainty Principle with a similar exponent as Backus, Leng and Tao. We also make note of a remarkable recent work of Cohen [15] proving a higher dimensional Fractal Uncertainty Principle for line porous fractals without any dimension assumptions. We believe a UNI assumption for all directions, similar to the one considered in [5] for the exponential mixing of the Teichmüller flow, would provide a suitable analogue where the results of this paper would hold, see also [41]. We plan to investigate this in a future work.

### Organisation of the article

In Section 2 we go through the basic symbolic notation we need. In Section 3 we prove the new spectral gap theorem. Then in Section 4 we prove Theorem 1.4 on Fourier decay using the spectral gap of transfer operators.
Finally, in Section 5 we give the Fractal Uncertainty Principle argument in \(\mathbb{R}^{d}\).

### Notation

We collect here some notational conventions that we will adopt throughout this article. Given two real valued functions \(f,g\) defined on a set \(S\), we write \(f\lesssim g\) if there exists a constant \(c>0\) such that \(f(x)\leq cg(x)\) for all \(x\in S\). We write \(f\sim g\) if \(f\lesssim g\) and \(g\lesssim f\). We will also on occasion write \(f=\mathcal{O}(g)\) to mean the same thing as \(f\lesssim g\).

## 2. Symbolic notation for \(C^{2}\) IFSs

Let \(\{\varphi_{a}:I\to I\}_{a\in\mathbf{A}}\), \(I=[0,1]\), \(\mathbf{A}\) finite, be a \(C^{2}\) iterated function system (IFS) acting on \(\mathbb{R}\) satisfying the following properties:

1. _Uniform contraction_: There exist \(1<\gamma<\gamma_{1}\) such that for all \(x\in I\) and \(n\in\mathbb{N}\), if \((a_{1},\dots,a_{n})\in\mathbf{A}^{n}\) then \[\gamma_{1}^{-n}\lesssim\left|(\varphi_{a_{1}}\circ\dots\circ\varphi_{a_{n}})^{\prime}(x)\right|\lesssim\gamma^{-n}.\]
2. _Bounded distortions_: There exists \(B\geq 1\) such that for all \(a\in\mathbf{A}\) and all \(x,y\in I\) we have \[\frac{|\varphi_{a}^{\prime}(x)|}{|\varphi_{a}^{\prime}(y)|}\leq B.\]
3. _Non-trivial_: The unique non-empty compact set \(K\) satisfying \[K=\bigcup_{a\in\mathbf{A}}\varphi_{a}(K)\] is not a singleton.
4. _Uniform Non-Integrability_ (UNI): We say that \(\Phi\) satisfies the uniform non-integrability condition if there exist \(c_{1},c_{2}>0\) such that for all \(n\) sufficiently large, there exist \(\mathbf{a},\mathbf{b}\in\mathbf{A}^{n}\) such that the compositions \(\varphi_{\mathbf{a}}=\varphi_{a_{1}}\circ\cdots\circ\varphi_{a_{n}}\) and \(\varphi_{\mathbf{b}}=\varphi_{b_{1}}\circ\cdots\circ\varphi_{b_{n}}\) satisfy \[c_{1}\leq\left|\frac{\varphi_{\mathbf{a}}^{\prime\prime}(x)}{\varphi_{\mathbf{a}}^{\prime}(x)}-\frac{\varphi_{\mathbf{b}}^{\prime\prime}(x)}{\varphi_{\mathbf{b}}^{\prime}(x)}\right|\leq c_{2},\quad\text{for all }x\in K.\]

Given a probability vector \(\mathbf{p}=(p_{a})_{a\in\mathbf{A}}\) (\(0<p_{a}<1\) and \(\sum_{a\in\mathbf{A}}p_{a}=1\)), there exists a unique Borel probability measure \(\mu_{\mathbf{p}}\) satisfying

\[\mu_{\mathbf{p}}=\sum_{a\in\mathbf{A}}p_{a}\,\mu_{\mathbf{p}}\circ\varphi_{a}^{-1}.\]

The measure \(\mu_{\mathbf{p}}\) is called a _self-conformal measure_. When the choice of \(\mathbf{p}\) is implicit we will simply denote \(\mu_{\mathbf{p}}\) by \(\mu\). We now take the opportunity to introduce some tree notation. We let \(\mathbf{A}^{*}=\cup_{n=1}^{\infty}\mathbf{A}^{n}\) denote the set of finite words over the alphabet \(\mathbf{A}\). Given \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbf{A}^{*}\) we let

\[\varphi_{\mathbf{a}}=\varphi_{a_{1}}\circ\cdots\circ\varphi_{a_{n}}\qquad\text{and}\qquad p_{\mathbf{a}}=\prod_{i=1}^{n}p_{a_{i}}.\]

We also let

\[[\mathbf{a}]:=\{\mathbf{b}\in\mathbf{A}^{\mathbb{N}}:b_{i}=a_{i}\text{ for }1\leq i\leq n\}\]

denote the cylinder set associated to \(\mathbf{a}.\) We let \(\pi:\mathbf{A}^{\mathbb{N}}\to K\) be the usual projection map given by

\[\pi(\mathbf{a})=\lim_{n\to\infty}(\varphi_{a_{1}}\circ\cdots\circ\varphi_{a_{n}})(0).\]

Given a probability vector \(\mathbf{p}=(p_{a})_{a\in\mathbf{A}}\) we denote by \(m_{\mathbf{p}}:=\mathbf{p}^{\mathbb{N}}\) the product measure on \(\mathbf{A}^{\mathbb{N}}\). The measures \(m_{\mathbf{p}}\) and \(\mu_{\mathbf{p}}\) are connected via the equation \(\mu_{\mathbf{p}}=\pi m_{\mathbf{p}}\), where \(\pi m_{\mathbf{p}}\) denotes the pushforward of \(m_{\mathbf{p}}\) under \(\pi\).
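As a simple worked consequence of these definitions (recorded here for orientation; it is the identity implicitly iterated in the Fourier decay argument of Section 4): applying the self-conformal relation \(n\) times gives

\[\mu_{\mathbf{p}}=\sum_{\mathbf{a}\in\mathbf{A}^{n}}p_{\mathbf{a}}\,\mu_{\mathbf{p}}\circ\varphi_{\mathbf{a}}^{-1},\]

and consequently, for every \(\xi\in\mathbb{R}\),

\[\widehat{\mu_{\mathbf{p}}}(\xi)=\sum_{\mathbf{a}\in\mathbf{A}^{n}}p_{\mathbf{a}}\int e^{-2\pi i\xi\varphi_{\mathbf{a}}(x)}\,d\mu_{\mathbf{p}}(x).\]

Note that this identity holds regardless of any separation between the images \(\varphi_{\mathbf{a}}(I)\); this is what allows the arguments below to proceed in the presence of overlaps.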
## 3. Proof of the spectral gap theorem

We cannot directly apply the argument of Naud [50] due to the potential overlaps coming from the IFS. To overcome this issue we use a disintegration argument due to Algom, the first author and Shmerkin [1]. There the authors showed that one could disintegrate a self-similar measure \(\mu\) into measures that look like self-similar measures for well separated IFSs. We employ a similar idea; however, instead of disintegrating the measure \(\mu\), we in effect "disintegrate" the transfer operator and introduce a class of _random_ Dolgopyat operators.

### Partitioning the IFS and random transfer operators

The following proposition guarantees the existence of a useful partition of our IFS. Roughly speaking, this partition splits our IFS into non-trivial sub-IFSs, each of which is well separated. Moreover, there exists one special sub-IFS that satisfies a suitable uniform non-integrability condition.

**Proposition 3.1**.: _Let \(\{\varphi_{a}\}_{a\in\mathbf{A}}\) be a non-trivial IFS satisfying the UNI condition. Then there exist \(N\in\mathbb{N}\) and \(w^{*},w_{1},\ldots,w_{m}\subset\mathbf{A}^{N}\) such that the following properties are satisfied:_

1. \(w^{*}\cup w_{1}\cup\cdots\cup w_{m}=\mathbf{A}^{N}.\) _Moreover, this union is disjoint._
2. \(\sharp w_{i}\in\{2,3\}\) _for_ \(1\leq i\leq m\)_._
3. _For any \(1\leq i\leq m\), for distinct \(\mathbf{a},\mathbf{b}\in w_{i}\) we have_ \[\varphi_{\mathbf{a}}(I)\cap\varphi_{\mathbf{b}}(I)=\varnothing.\]
4. \(w^{*}=\{\alpha_{1},\alpha_{2}\}\) _and these words satisfy:_
   1. \[\varphi_{\alpha_{1}}(I)\cap\varphi_{\alpha_{2}}(I)=\varnothing.\]
   2. _There exist \(c_{1},c_{2},\delta>0\) such that for all \(x\in\{x:d(x,K)<\delta\}\) and \(l\in\mathbb{N}\) we have_ \[c_{1}\leq\left|\frac{\varphi_{\alpha_{1}^{l}}^{\prime\prime}(x)}{\varphi_{\alpha_{1}^{l}}^{\prime}(x)}-\frac{\varphi_{\alpha_{2}^{l}}^{\prime\prime}(x)}{\varphi_{\alpha_{2}^{l}}^{\prime}(x)}\right|\leq c_{2}.\]
   3. \(p_{\alpha_{1}}=p_{\alpha_{2}}.\)

Proof.: We begin our proof by remarking that for any \(\mathbf{a}\in\mathbf{A}^{*}\) we have

\[\frac{d}{dx}\log|\varphi_{\mathbf{a}}^{\prime}(x)|=\frac{\varphi_{\mathbf{a}}^{\prime\prime}(x)}{\varphi_{\mathbf{a}}^{\prime}(x)}. \tag{3.1}\]

Therefore the UNI condition (1.2) is equivalent to the following: there exist \(c_{1},c_{2}>0\) such that for all \(M\) sufficiently large, there exist \(\mathbf{a},\mathbf{b}\in\mathbf{A}^{M}\) such that

\[c_{1}\leq\left|\frac{d}{dx}(\log|\varphi_{\mathbf{a}}^{\prime}(x)|-\log|\varphi_{\mathbf{b}}^{\prime}(x)|)\right|\leq c_{2},\qquad\text{for all }x\in K. \tag{3.2}\]
What makes (3.2) easier to work with is the following useful identity, which follows from two applications of the chain rule: for any \(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}\in\mathbf{A}^{*}\) and \(x\in I\) we have

\[\frac{d}{dx}(\log|\varphi_{\mathbf{ac}}^{\prime}(x)|-\log|\varphi_{\mathbf{bd}}^{\prime}(x)|)=\frac{d}{dx}(\log|\varphi_{\mathbf{c}}^{\prime}(x)|-\log|\varphi_{\mathbf{d}}^{\prime}(x)|)+\varphi_{\mathbf{c}}^{\prime}(x)\left(\frac{d}{dx}\log|\varphi_{\mathbf{a}}^{\prime}|\right)(\varphi_{\mathbf{c}}(x))-\varphi_{\mathbf{d}}^{\prime}(x)\left(\frac{d}{dx}\log|\varphi_{\mathbf{b}}^{\prime}|\right)(\varphi_{\mathbf{d}}(x)).\]

Using this identity and appealing to a bounded distortion argument, it can be shown that there exists \(C>0\) such that for any \(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}\in\mathbf{A}^{*}\) and \(x\in I\) we have

\[\left|\frac{d}{dx}(\log|\varphi_{\mathbf{ac}}^{\prime}(x)|-\log|\varphi_{\mathbf{bd}}^{\prime}(x)|)-\frac{d}{dx}(\log|\varphi_{\mathbf{c}}^{\prime}(x)|-\log|\varphi_{\mathbf{d}}^{\prime}(x)|)\right|\leq C\gamma^{-\min\{|\mathbf{c}|,|\mathbf{d}|\}}. \tag{3.3}\]

By our non-triviality assumption, there exist \(L\in\mathbb{N}\) and \(\mathbf{c},\mathbf{d}\in\mathbf{A}^{L}\) such that \(\varphi_{\mathbf{c}}(I)\cap\varphi_{\mathbf{d}}(I)=\varnothing\). We let \(\alpha_{1}=\mathbf{dbca}\) and \(\alpha_{2}=\mathbf{cadb}\), where \(\mathbf{a},\mathbf{b}\in\mathbf{A}^{M}\) satisfy (3.2). We immediately have that \(\varphi_{\alpha_{1}}(I)\cap\varphi_{\alpha_{2}}(I)=\varnothing\) and \(p_{\alpha_{1}}=p_{\alpha_{2}}\). Moreover, by (3.3) it follows that for any \(M\) sufficiently large we have

\[c_{1}/2\leq\left|\frac{d}{dx}(\log|\varphi_{\alpha_{1}}^{\prime}(x)|-\log|\varphi_{\alpha_{2}}^{\prime}(x)|)\right|\leq 2c_{2}\]

for all \(x\in K\). It now follows by a continuity argument that for all \(M\) sufficiently large, there exists \(\delta_{M}>0\) such that

\[c_{1}/3\leq\left|\frac{d}{dx}(\log|\varphi^{\prime}_{\alpha_{1}}(x)|-\log|\varphi^{\prime}_{\alpha_{2}}(x)|)\right|\leq 3c_{2}\]

for all \(x\in\{x:d(x,K)<\delta_{M}\}.\) We can now appeal to (3.3) again to assert that in fact for any \(M\) sufficiently large, for any \(l\geq 1\) we have

\[c_{1}/4\leq\left|\frac{d}{dx}(\log|\varphi^{\prime}_{\alpha_{1}^{l}}(x)|-\log|\varphi^{\prime}_{\alpha_{2}^{l}}(x)|)\right|\leq 4c_{2} \tag{3.4}\]

for all \(x\in\{x:d(x,K)<\delta_{M}\}.\) Let \(N=2(M+L)\). Summarising the above, we have shown that for any \(M\) sufficiently large, there exists \(w^{*}=\{\alpha_{1},\alpha_{2}\}\subset\mathbf{A}^{N}\) such that properties 4a, 4b, and 4c hold. Property 4b holds because of (3.1) and (3.4). It remains to show that for \(M\) sufficiently large we can construct \(w_{1},\ldots,w_{m}\subset\mathbf{A}^{N}\setminus w^{*}\) so that properties 1, 2 and 3 are satisfied.

Let \(\mu_{uni}\) be the self-conformal measure corresponding to the uniform probability vector \(\mathbf{p}=(\sharp\mathbf{A}^{-1})_{\mathbf{a}\in\mathbf{A}}\). Adapting an argument of Feng and Lau [25] to the setting of self-conformal measures, there exist \(C>0\) and \(\alpha>0\) such that

\[\mu_{uni}(B(x,r))\leq Cr^{\alpha} \tag{3.5}\]

for all \(x\in\mathbb{R}\). Since our IFS is uniformly contracting, there exist \(\gamma>1\) and \(C_{1}>0\) such that

\[|\varphi_{\mathbf{a}}(I)|\leq C_{1}\gamma^{-N}\]

for any \(\mathbf{a}\in\mathbf{A}^{N}\).
Using this inequality together with (3.5) and the fact that \(p_{\mathbf{a}}=\sharp\mathbf{A}^{-N}\) for all \(\mathbf{a}\in\mathbf{A}^{N}\) yields

\[\sharp\left\{\mathbf{a}\in\mathbf{A}^{N}:\varphi_{\mathbf{a}}(I)\cap B(x,C_{1}\gamma^{-N})\neq\varnothing\right\}\leq 3CC_{1}^{\alpha}\sharp\mathbf{A}^{N}\gamma^{-N\alpha} \tag{3.6}\]

for any \(x\in\mathbb{R}\). Now let \(\{\mathbf{a}_{i}\}_{i=1}^{\sharp\mathbf{A}^{N}-2}\) be an enumeration of the elements of \(\mathbf{A}^{N}\setminus w^{*}\) such that if \(i<j\) then the left endpoint of \(\varphi_{\mathbf{a}_{i}}(I)\) lies to the left of the left endpoint of \(\varphi_{\mathbf{a}_{j}}(I)\). We also let

\[T_{N}:=\lfloor 3CC_{1}^{\alpha}\sharp\mathbf{A}^{N}\gamma^{-N\alpha}\rfloor+1. \tag{3.7}\]

The significance of the parameter \(T_{N}\) is that if \(j\geq i+T_{N}\) then

\[\varphi_{\mathbf{a}_{i}}(I)\cap\varphi_{\mathbf{a}_{j}}(I)=\varnothing.\]

This fact follows from (3.6). For each \(0\leq j\leq\lfloor\frac{\sharp\mathbf{A}^{N}-2}{2T_{N}}\rfloor-1\) and \(1\leq k\leq T_{N}\) we let

\[\tilde{w}_{j,k}:=\{\mathbf{a}_{k+2jT_{N}},\mathbf{a}_{k+2jT_{N}+T_{N}}\}.\]

Notice that for any \(0\leq j\leq\lfloor\frac{\sharp\mathbf{A}^{N}-2}{2T_{N}}\rfloor-1\) and \(1\leq k\leq T_{N}\), because the subscripts of \(\mathbf{a}_{k+2jT_{N}}\) and \(\mathbf{a}_{k+2jT_{N}+T_{N}}\) differ by \(T_{N}\), we have

\[\varphi_{\mathbf{a}_{k+2jT_{N}}}(I)\cap\varphi_{\mathbf{a}_{k+2jT_{N}+T_{N}}}(I)=\varnothing.\]

Moreover, \(\tilde{w}_{j,k}\cap\tilde{w}_{j^{\prime},k^{\prime}}=\varnothing\) for \((j,k)\neq(j^{\prime},k^{\prime})\). Our proof is almost complete; it remains to allocate those elements of \(\{\mathbf{a}_{i}\}_{i=2T_{N}\lfloor\frac{\sharp\mathbf{A}^{N}-2}{2T_{N}}\rfloor+1}^{\sharp\mathbf{A}^{N}-2}\) to appropriate subsets of \(\mathbf{A}^{N}\). The cardinality of \(\{\mathbf{a}_{i}\}_{i=2T_{N}\lfloor\frac{\sharp\mathbf{A}^{N}-2}{2T_{N}}\rfloor+1}^{\sharp\mathbf{A}^{N}-2}\) is at most \(2T_{N}\). Therefore to each \(2T_{N}\lfloor\frac{\sharp\mathbf{A}^{N}-2}{2T_{N}}\rfloor+1\leq i\leq\sharp\mathbf{A}^{N}-2\) we can associate a unique pair \((j_{i},k_{i})\) satisfying \(0\leq j_{i}\leq 1\) and \(1\leq k_{i}\leq T_{N}.\) Notice that the largest subscript for a word \(\mathbf{a}_{i}\) contained in \(\tilde{w}_{j,k}\) for some \(0\leq j\leq 1\) and \(1\leq k\leq T_{N}\) is \(4T_{N}\). It follows from (3.7) that for \(M\) sufficiently large, for any \(i\) satisfying \(2T_{N}\lfloor\frac{\sharp\mathbf{A}^{N}-2}{2T_{N}}\rfloor+1\leq i\leq\sharp\mathbf{A}^{N}-2\) we have

\[i-4T_{N}\geq 2T_{N}\lfloor\frac{\sharp\mathbf{A}^{N}-2}{2T_{N}}\rfloor+1-4T_{N}\geq T_{N}.\]

Therefore

\[\varphi_{\mathbf{a}_{i}}(I)\cap\varphi_{\mathbf{a}^{\prime}}(I)=\varnothing\]

for any \(i\) satisfying \(2T_{N}\lfloor\frac{\sharp\mathbf{A}^{N}-2}{2T_{N}}\rfloor+1\leq i\leq\sharp\mathbf{A}^{N}-2\) and \(\mathbf{a}^{\prime}\in\tilde{w}_{j_{i},k_{i}}\). To each such \(i\) we associate the subset \(\{\mathbf{a}_{i}\}\cup\tilde{w}_{j_{i},k_{i}}\). Taking \(w_{1},\ldots,w_{m}\) to be the subsets \(\{\mathbf{a}_{i}\}\cup\tilde{w}_{j_{i},k_{i}}\) together with the remaining unchanged \(\tilde{w}_{j,k}\), we see now that properties 1, 2, and 3 are satisfied. This completes our proof.
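It may help to note the scale of the objects this construction produces (a remark with illustrative numbers; the implied constants are suppressed). By (3.7) we have \(T_{N}\lesssim\sharp\mathbf{A}^{N}\gamma^{-N\alpha}\), so the proportion of words of \(\mathbf{A}^{N}\) requiring the special allocation at the end of the proof is \(\mathcal{O}(\gamma^{-N\alpha})\). For instance, if \(\sharp\mathbf{A}=2\), \(\gamma=2\) and \(\alpha=1/2\), then out of the \(2^{N}\) words all but \(\mathcal{O}(2^{N/2})\) are already grouped into the well separated pairs \(\tilde{w}_{j,k}\).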
Proposition 3.1 now allows us to make a significant simplification in our proof of Theorem 1.1. To prove Theorem 1.1 it suffices to show that the same bound holds for \(\mathcal{L}_{s}^{N}\) for some large \(N\in\mathbb{N}\). Now using the fact that \(\mathcal{L}_{s}^{N}\) coincides with the transfer operator corresponding to the IFS \(\{\varphi_{\mathbf{a}}\}_{\mathbf{a}\in\mathbf{A}^{N}}\), we can apply Proposition 3.1 to assert that without loss of generality our original IFS \(\{\varphi_{a}\}_{a\in\mathbf{A}}\) is such that there exist \(w^{*},w_{1},\ldots,w_{m}\subset\mathbf{A}\) satisfying the following properties:

1. \(w^{*}\cup w_{1}\cup\cdots\cup w_{m}=\mathbf{A}.\) Moreover, this union is disjoint.
2. \(\sharp w_{i}\in\{2,3\}\) for \(1\leq i\leq m\).
3. For any \(1\leq i\leq m\), for distinct \(a,b\in w_{i}\) we have \[\varphi_{a}(I)\cap\varphi_{b}(I)=\varnothing.\]
4. \(w^{*}=\{\alpha_{1},\alpha_{2}\}\) and these words satisfy:
   1. \[\varphi_{\alpha_{1}}(I)\cap\varphi_{\alpha_{2}}(I)=\varnothing.\]
   2. There exist \(c_{1},c_{2},\delta>0\) such that for all \(x\in\{x:d(x,K)<\delta\}\) and \(l\in\mathbb{N}\) we have \[c_{1}\leq\left|\frac{\varphi_{\alpha_{1}^{l}}^{\prime\prime}(x)}{\varphi_{\alpha_{1}^{l}}^{\prime}(x)}-\frac{\varphi_{\alpha_{2}^{l}}^{\prime\prime}(x)}{\varphi_{\alpha_{2}^{l}}^{\prime}(x)}\right|\leq c_{2}.\]
   3. \(p_{\alpha_{1}}=p_{\alpha_{2}}\).

We now introduce some more notation to complement this partition of \(\mathbf{A}\). We let

\[\Omega:=\{w^{*},w_{1},\ldots,w_{m}\}\quad\text{ and }\quad\Omega^{*}:=\cup_{n=1}^{\infty}\Omega^{n}.\]

Moreover, for \(w\in\Omega\) and a finite word \(w_{1}\ldots w_{n}\in\Omega^{*}\) we let

\[q_{w}=\sum_{a\in w}p_{a}\qquad\text{ and }\qquad q_{w_{1}\ldots w_{n}}=\prod_{i=1}^{n}q_{w_{i}}.\]

For each \(w\in\Omega\) and \(a\in w\) we let

\[p_{a,w}=\frac{p_{a}}{q_{w}}.\]

We finish this discussion of \(\Omega\) and its properties by introducing some further notation. We let \(K^{*}\) be the unique non-empty compact set satisfying

\[K^{*}:=\varphi_{\alpha_{1}}(K^{*})\cup\varphi_{\alpha_{2}}(K^{*}),\]

i.e. \(K^{*}\) is the self-conformal set for the IFS \(\{\varphi_{\alpha_{1}},\varphi_{\alpha_{2}}\}\). We also let \(\mu_{*}\) be the unique Borel probability measure satisfying

\[\mu_{*}=\frac{\mu_{*}\circ\varphi_{\alpha_{1}}^{-1}}{2}+\frac{\mu_{*}\circ\varphi_{\alpha_{2}}^{-1}}{2}.\]

For any word \(w=w_{1}\ldots w_{n}\in\Omega^{*}\) we let

\[K_{w}:=\bigcup_{\begin{subarray}{c}\mathbf{a}\in\mathbf{A}^{n}\\ a_{i}\in w_{i},\,1\leq i\leq n\end{subarray}}\varphi_{\mathbf{a}}(K^{*})\]

and

\[\mu_{w}:=\sum_{\begin{subarray}{c}\mathbf{a}\in\mathbf{A}^{n}\\ a_{i}\in w_{i},\,1\leq i\leq n\end{subarray}}p_{\mathbf{a},w}\cdot\mu_{*}\circ\varphi_{\mathbf{a}}^{-1},\]

where \(p_{\mathbf{a},w}:=\prod_{i=1}^{n}p_{a_{i},w_{i}}\). Notice that for any word \(w\in\Omega^{*}\) the measure \(\mu_{w}\) is a probability measure supported on \(K_{w}\). These measures will play a similar role to that of stationary measures when one considers compositions of a single transfer operator.

### A spectral gap for random transfer operators

The purpose of this section is to prove Theorem 1.1, whose statement we now recall.

**Theorem 3.1**.: _Let \(\Phi=\{\varphi_{a}:a\in\mathbf{A}\}\) be a non-trivial uniformly contracting \(C^{2}\) iterated function system satisfying the UNI condition (1.2)._
_Then there exists \(0<\varrho_{0}<1\) such that for \(s=r+ib\in\mathbb{C}\) with \(|r|\) sufficiently small and \(|b|\) sufficiently large, the operator \(\mathcal{L}_{s}\) satisfies for all \(n\in\mathbb{N}\) and \(f\in C^{1}(\mathbb{R})\):_

\[\|\mathcal{L}_{s}^{n}f\|_{b}\lesssim\varrho_{0}^{n}|b|^{1/2}\|f\|_{b}.\]

_Thus there exists \(0<\delta<1\) such that for all \(|r|\) sufficiently small and \(|b|\) sufficiently large, the spectral radius satisfies_

\[\varrho(\mathcal{L}_{s})\leq 1-\delta.\]

In the proof of Theorem 3.1 the contraction comes from the imaginary part \(ib\) of \(s=r+ib\), while the real part can in the worst case (especially if \(r<0\)) cause expansion. To control this, we use the assumption that \(|r|\) is small: its effect is absorbed by slightly weakening the contraction rate, using the uniform contraction of the maps \(\varphi_{a}\). Recall that there exist \(1<\gamma<\gamma_{1}\) such that for all \(x\in I\) and \(n\in\mathbb{N}\), if \(\mathbf{a}\in\mathbf{A}^{n}\) then

\[\gamma_{1}^{-n}\lesssim|\varphi_{\mathbf{a}}^{\prime}(x)|\lesssim\gamma^{-n}.\]

Thus there exists \(c_{0}>0\) such that for any \(r\in\mathbb{R}\) and \(n\in\mathbb{N}\) we have

\[\sup_{x\in I}\sup_{\mathbf{a}\in\mathbf{A}^{n}}|\varphi_{\mathbf{a}}^{\prime}(x)|^{r}\leq c_{0}\gamma_{1}^{n|r|}. \tag{3.8}\]

The following proposition is the first step towards proving Theorem 3.1.

**Proposition 3.3**.: _There exist \(N\in\mathbb{N}\) and \(\varrho\in(0,1)\) such that if \(w_{1}\ldots w_{N\lfloor n/N\rfloor}\) satisfies_

\[\sharp\{0\leq i\leq\lfloor n/N\rfloor-1:w_{iN+1}\ldots w_{(i+1)N}=(w^{*})^{N}\}\geq cn/N\]

_for some \(c>0\), then for any \(s=r+ib\) with small enough \(|r|\) and large enough \(|b|\) we have_

\[\int_{K_{\tilde{w}}}|\mathcal{L}_{s,w_{N\lfloor n/N\rfloor}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|^{2}\,d\mu_{\tilde{w}}\leq\varrho^{cn/N}\|f\|_{b}^{2}\]

_for any word \(\tilde{w}\)._

Let \(N\) be as in the statement of Proposition 3.3. We say that a word \(w_{1}\ldots w_{n}\in\Omega^{n}\) is good if

\[\sharp\{0\leq i\leq\lfloor n/2N\rfloor-1:w_{iN+1}\ldots w_{(i+1)N}=(w^{*})^{N}\}\geq\frac{(p_{\alpha_{1}}+p_{\alpha_{2}})^{N}n}{5N}.\]

Similarly, we say that \(w_{1}\ldots w_{n}\in\Omega^{n}\) is bad if it fails to be good. The significance of the bound \(\frac{(p_{\alpha_{1}}+p_{\alpha_{2}})^{N}n}{5N}\) is that it is strictly less than \((p_{\alpha_{1}}+p_{\alpha_{2}})^{N}\left\lfloor\frac{n}{2N}\right\rfloor,\) which is the expected number of \(0\leq i\leq\lfloor n/2N\rfloor-1\) satisfying \(w_{iN+1}\ldots w_{(i+1)N}=(w^{*})^{N}\). Thus we can use large deviation bounds, see for instance Hoeffding [32], to conclude that there exists \(\varrho_{2}\in(0,1)\) such that

\[\sum_{\text{bad }w_{1}\ldots w_{n}}q_{w_{1}\ldots w_{n}}\lesssim\varrho_{2}^{n/N}. \tag{3.9}\]

This observation together with the following theorem is what allows us to prove Theorem 3.1.

**Theorem 3.2**.: _There exists \(\varrho_{1}\in(0,1)\) such that for \(s=r+ib\) with \(|r|\) sufficiently small and \(|b|\) sufficiently large, for all \(n\in\mathbb{N}\), for all good words \(w_{1}\ldots w_{n}\) and all \(f\in C^{1}\), we have_

\[\|\mathcal{L}_{s,w_{n}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)\|_{\infty}\lesssim\varrho_{1}^{n}|b|^{1/2}\|f\|_{b}.\]

We now show how Theorem 3.1 follows from Theorem 3.2.
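Before doing so, we record a sketch of the large deviation estimate (3.9), since it is used immediately below; the constants here are one possible (non-optimised) choice. Under the weights \(q_{w_{1}\ldots w_{n}}\), the blocks \(w_{iN+1}\ldots w_{(i+1)N}\), \(0\leq i\leq\lfloor n/2N\rfloor-1\), are independent, and each equals \((w^{*})^{N}\) with probability \(q_{w^{*}}^{N}=(p_{\alpha_{1}}+p_{\alpha_{2}})^{N}\). Writing \(S\) for the number of such blocks and \(m=\lfloor n/2N\rfloor\), a word is bad precisely when \(S<q_{w^{*}}^{N}n/(5N)\), a threshold lying below \(\mathbb{E}S-\tfrac{1}{2}q_{w^{*}}^{N}m\) once \(n\geq 10N\). Hoeffding's inequality for sums of \(m\) independent \(\{0,1\}\)-valued variables then gives, for \(n\geq 10N\),

\[\sum_{\text{bad }w_{1}\ldots w_{n}}q_{w_{1}\ldots w_{n}}\leq\mathbb{P}\Big{(}S-\mathbb{E}S\leq-\tfrac{1}{2}q_{w^{*}}^{N}m\Big{)}\leq e^{-q_{w^{*}}^{2N}m/2}\lesssim\varrho_{2}^{n/N},\qquad\varrho_{2}:=e^{-q_{w^{*}}^{2N}/8},\]

with the finitely many smaller \(n\) absorbed into the implied constant.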
Proof of Theorem 3.1.: By Lemma 3.2, (3.8), and Theorem 3.2, for \(|r|\) sufficiently small and \(|b|\) sufficiently large, for all \(n\in\mathbb{N}\) and \(f\in C^{1}(\mathbb{R})\) we have

\[\|\mathcal{L}_{s}^{n}f\|_{\infty} \leq\sum_{\text{good }w_{1}\ldots w_{n}}q_{w}\cdot\|\mathcal{L}_{s,w_{n}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)\|_{\infty}+\sum_{\text{bad }w_{1}\ldots w_{n}}q_{w}\cdot\|\mathcal{L}_{s,w_{n}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)\|_{\infty}\]
\[\lesssim\sum_{\text{good }w_{1}\ldots w_{n}}q_{w}\varrho_{1}^{n}|b|^{1/2}\|f\|_{b}+\gamma_{1}^{n|r|}\sum_{\text{bad }w_{1}\ldots w_{n}}q_{w}\cdot\|f\|_{b}\]
\[\lesssim\varrho_{1}^{n}|b|^{1/2}\|f\|_{b}+\gamma_{1}^{n|r|}\varrho_{2}^{n/N}\|f\|_{b}\]
\[\lesssim\max\{\varrho_{1},\gamma_{1}^{|r|}\varrho_{2}^{1/N}\}^{n}|b|^{1/2}\|f\|_{b}.\]

Here \(\varrho_{2}\) is as in (3.9). For \(|r|\) sufficiently small we have \(\max\{\varrho_{1},\gamma_{1}^{|r|}\varrho_{2}^{1/N}\}\leq\max\{\varrho_{1},\varrho_{2}^{1/2N}\}\). Therefore, taking \(\varrho_{3}:=\max\{\varrho_{1},\varrho_{2}^{1/2N}\}\), we have

\[\|\mathcal{L}_{s}^{n}f\|_{\infty}\lesssim\varrho_{3}^{n}|b|^{1/2}\|f\|_{b}\]

for all \(|r|\) sufficiently small. To get the \(\|\cdot\|_{b}\) bound, we also need to bound the derivative term \(\|(\mathcal{L}_{s}^{n}f)^{\prime}\|_{\infty}\). For this purpose, set \(m:=\lfloor n/2\rfloor.\) We have for any \(x\in I\) that:

\[|\mathcal{L}_{s}^{n}f(x)|=|\mathcal{L}_{s}^{(n-m)}(\mathcal{L}_{s}^{m}f)(x)|.\]

Moreover, by uniform contraction and bounded distortions we have the bound

\[\|(\mathcal{L}_{s}^{m}f)^{\prime}\|_{\infty}\lesssim|b|\gamma_{1}^{m|r|}\|f\|_{\infty}+\gamma^{-m}\gamma_{1}^{m|r|}\|f^{\prime}\|_{\infty}\lesssim|b|\|f\|_{b}(\gamma_{1}^{m|r|}+\gamma^{-m}\gamma_{1}^{m|r|}).\]

Thus, combining the above with what we proved earlier, we obtain:

\[\|(\mathcal{L}_{s}^{n}f)^{\prime}\|_{\infty} \lesssim|b|\gamma_{1}^{(n-m)|r|}\|\mathcal{L}_{s}^{m}f\|_{\infty}+\gamma^{-(n-m)}\gamma_{1}^{(n-m)|r|}\|(\mathcal{L}_{s}^{m}f)^{\prime}\|_{\infty}\]
\[\lesssim|b|\gamma_{1}^{(n-m)|r|}\varrho_{3}^{m}|b|^{1/2}\|f\|_{b}+\gamma^{-(n-m)}\gamma_{1}^{(n-m)|r|}|b|\|f\|_{b}(\gamma_{1}^{m|r|}+\gamma^{-m}\gamma_{1}^{m|r|})\]
\[\lesssim\max\{\gamma_{1}^{|r|/2}\varrho_{3}^{1/2},\gamma^{-1/2}\gamma_{1}^{|r|},\gamma^{-1}\gamma_{1}^{|r|}\}^{n}|b|^{3/2}\|f\|_{b}.\]

In the final line we have used that \(m=\lfloor n/2\rfloor.\) For \(|r|\) sufficiently small we have the bound \(\max\{\gamma_{1}^{|r|/2}\varrho_{3}^{1/2},\gamma^{-1/2}\gamma_{1}^{|r|},\gamma^{-1}\gamma_{1}^{|r|}\}\leq\max\{\varrho_{3}^{1/4},\gamma^{-1/4},\gamma^{-1/2}\}.\) Therefore, taking \(\varrho_{4}:=\max\{\varrho_{3}^{1/4},\gamma^{-1/4},\gamma^{-1/2}\},\) we have

\[\|(\mathcal{L}_{s}^{n}f)^{\prime}\|_{\infty}\lesssim\varrho_{4}^{n}|b|^{3/2}\|f\|_{b}\]

for all \(|r|\) sufficiently small. Theorem 1.1 now follows by choosing \(\varrho_{0}=\max\{\varrho_{3},\varrho_{4}\}\in(0,1).\)
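For completeness, let us assemble the two estimates into the claimed \(b\)-norm bound (a short verification; we use the convention \(\|g\|_{b}=\|g\|_{\infty}+|b|^{-1}\|g^{\prime}\|_{\infty}\), which is the normalisation of the \(b\)-norm we assume was fixed earlier in the paper):

\[\|\mathcal{L}_{s}^{n}f\|_{b}=\|\mathcal{L}_{s}^{n}f\|_{\infty}+|b|^{-1}\|(\mathcal{L}_{s}^{n}f)^{\prime}\|_{\infty}\lesssim\varrho_{3}^{n}|b|^{1/2}\|f\|_{b}+\varrho_{4}^{n}|b|^{1/2}\|f\|_{b}\lesssim\varrho_{0}^{n}|b|^{1/2}\|f\|_{b}.\]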
We will now explain why Theorem 3.2 follows from Proposition 3.3.

Proof of Theorem 3.2.: For any \(x\in[0,1]\), using the definition of the transfer operator we have:

\[|\mathcal{L}_{s,w_{n}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)(x)|^{2}\leq\mathcal{L}_{r,w_{n}}\circ\cdots\circ\mathcal{L}_{r,w_{N\lfloor n/2N\rfloor+1}}(|\mathcal{L}_{s,w_{N\lfloor n/2N\rfloor}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|^{2})(x).\]

Therefore, applying Proposition 3.3, we have the following for \(|r|\) sufficiently small and \(|b|\) sufficiently large:

\[\mathcal{L}_{r,w_{n}}\circ\cdots\circ\mathcal{L}_{r,w_{N\lfloor n/2N\rfloor+1}}(|\mathcal{L}_{s,w_{N\lfloor n/2N\rfloor}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|^{2})(x)\]
\[\lesssim\gamma_{1}^{|r|n/2}\mathcal{L}_{w_{n}}\circ\cdots\circ\mathcal{L}_{w_{N\lfloor n/2N\rfloor+1}}(|\mathcal{L}_{s,w_{N\lfloor n/2N\rfloor}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|^{2})(x)\]
\[=\gamma_{1}^{|r|n/2}\int_{K_{w_{N\lfloor n/2N\rfloor+1}\cdots w_{n}}}|\mathcal{L}_{s,w_{N\lfloor n/2N\rfloor}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|^{2}\,d\mu_{w_{N\lfloor n/2N\rfloor+1}\cdots w_{n}}\]
\[\quad+O(\gamma_{1}^{|r|n/2}\gamma^{-n/2}\|(|\mathcal{L}_{s,w_{N\lfloor n/2N\rfloor}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|^{2})^{\prime}\|_{\infty})\]
\[\leq\gamma_{1}^{|r|n/2}\varrho^{cn/N}\|f\|_{b}^{2}+O(\gamma_{1}^{|r|n/2}\gamma^{-n/2}\|(|\mathcal{L}_{s,w_{N\lfloor n/2N\rfloor}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|^{2})^{\prime}\|_{\infty}).\]

In the above we have taken

\[c=\frac{(p_{\alpha_{1}}+p_{\alpha_{2}})^{N}}{5}\]

and used the fact that \(w_{1}\ldots w_{n}\) is good. Furthermore, we always have the bound

\[\|(|\mathcal{L}_{s,w_{N\lfloor n/2N\rfloor}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|^{2})^{\prime}\|_{\infty}\lesssim\gamma_{1}^{|r|n/2}|b|\|f\|_{b}^{2}.\]

Applying this bound in the above, we see that

\[|\mathcal{L}_{s,w_{n}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)(x)|^{2}\lesssim\max\{\gamma_{1}^{|r|}\varrho^{c/N},\gamma_{1}^{|r|}\gamma^{-1/2}\}^{n}|b|\|f\|_{b}^{2}\]

for all \(x\in I\). For \(|r|\) sufficiently small we have \(\max\{\gamma_{1}^{|r|}\varrho^{c/N},\gamma_{1}^{|r|}\gamma^{-1/2}\}\leq\max\{\varrho^{c/2N},\gamma^{-1/4}\}.\) Therefore, taking \(\varrho_{1}=\max\{\varrho^{c/2N},\gamma^{-1/4}\}^{1/2},\) we have

\[\|\mathcal{L}_{s,w_{n}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)\|_{\infty}\lesssim\varrho_{1}^{n}|b|^{1/2}\|f\|_{b}\]

for all \(|r|\) sufficiently small and \(|b|\) sufficiently large. This completes our proof.

The missing piece in our argument is a proof of Proposition 3.3. We do this in the next section by constructing suitable Dolgopyat [17] type random operators.

### Reduction to Dolgopyat type random operators

Our purpose now is to show how Proposition 3.3 follows from the following crucial lemma. This lemma gives a construction of certain _random Dolgopyat operators_ (which we define formally later in its proof).

**Lemma 3.4** (Construction of random Dolgopyat operators).: _There exist \(N\in\mathbb{N}\), \(A>1\) and \(\varrho=\varrho(w^{*})\in(0,1)\), such that for all \(s=r+ib\) with \(|r|\) sufficiently small and \(|b|\) sufficiently large, for any \(w^{\prime}\in\bigcup_{n=1}^{\infty}\Omega^{n}\) there exists a finite set of bounded operators \((\mathcal{N}_{s}^{J})_{J\in\mathcal{E}_{s}}\) on \(C^{1}(I)\) satisfying the following properties:_

1.
_The cone_

\[C_{A|b|}=\{f\in C^{1}(I):f>0,\ |f^{\prime}(x)|\leq A|b|f(x)\}\]

_is stable under \(\mathcal{N}_{s}^{J}\) for all \(J\in\mathcal{E}_{s}\), that is, if \(H\in C_{A|b|}\) and \(J\in\mathcal{E}_{s}\), then_

\[|\mathcal{N}_{s}^{J}(H)^{\prime}(x)|\leq A|b|\mathcal{N}_{s}^{J}(H)(x)\]

_for all \(x\in I\)._

2. _For all \(H\in C_{A|b|}\) and \(J\in\mathcal{E}_{s}\),_

\[\int_{K_{w^{\prime}}}|\mathcal{N}_{s}^{J}H|^{2}\,d\mu_{w^{\prime}}\leq\varrho\int_{K_{(w^{*})^{N}w^{\prime}}}|H|^{2}\,d\mu_{(w^{*})^{N}w^{\prime}}.\]

3. _Given \(H\in C_{A|b|}\) and \(f\in C^{1}(I)\) such that \(|f|\leq H\) and \(|f^{\prime}|\leq A|b|H\), there exists \(J\in\mathcal{E}_{s}\) such that_

\[|\mathcal{L}_{s,w^{*}}^{N}f|\leq\mathcal{N}_{s}^{J}(H)\]

_and_

\[|(\mathcal{L}_{s,w^{*}}^{N}f)^{\prime}|\leq A|b|\mathcal{N}_{s}^{J}(H).\]

Assuming Lemma 3.4, we now focus on proving Proposition 3.3. Our proof of this proposition depends upon a careful choice of random Dolgopyat operators. Our analysis naturally falls into two cases, according to whether we observe the word \((w^{*})^{N}\) or not.

Proof of Proposition 3.3.: Let \(N\) be as in Lemma 3.4 and let \(w_{1}\ldots w_{N\lfloor n/N\rfloor}\) satisfy

\[\sharp\{0\leq i\leq\lfloor n/N\rfloor-1:w_{iN+1}\ldots w_{(i+1)N}=(w^{*})^{N}\}\geq cn/N\]

for some \(c>0\). Let \(s=r+ib\) be such that \(|r|\) is sufficiently small and \(|b|\) is sufficiently large, so that Lemma 3.4 applies. We inductively choose a sequence of operators \(\tilde{\mathcal{N}}_{0},\ldots,\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-1}\) as follows:

1. Consider the block \(w_{1}\ldots w_{N}\). If \(w_{1}\ldots w_{N}\neq(w^{*})^{N}\) then we let

\[\tilde{\mathcal{N}}_{0}=\mathcal{L}_{r,w_{N}}\circ\cdots\circ\mathcal{L}_{r,w_{1}}.\]

If \(w_{1}\ldots w_{N}=(w^{*})^{N}\) then we let

\[\tilde{\mathcal{N}}_{0}:=\mathcal{N}_{s}^{J},\]

where \(\mathcal{N}_{s}^{J}\) is the random Dolgopyat operator coming from Lemma 3.4 for the choice of the word

\[w^{\prime}=w_{N+1}\ldots w_{N\lfloor n/N\rfloor}\tilde{w}\]

and where \(H\) is the _constant_ function

\[H:=H_{0}=\|f\|_{b}\mathbf{1}.\]

2. Assume we have made choices of \(\tilde{\mathcal{N}}_{k}\) for \(0\leq k\leq\ell-1\). If \(w_{\ell N+1}\ldots w_{(\ell+1)N}\neq(w^{*})^{N}\) then we let

\[\tilde{\mathcal{N}}_{\ell}:=\mathcal{L}_{r,w_{(\ell+1)N}}\circ\cdots\circ\mathcal{L}_{r,w_{\ell N+1}}.\]

If \(w_{\ell N+1}\ldots w_{(\ell+1)N}=(w^{*})^{N}\), then we take our operator to be the random Dolgopyat operator given by Lemma 3.4 with the choice of the word

\[w^{\prime}:=w_{(\ell+1)N+1}\ldots w_{N\lfloor n/N\rfloor}\tilde{w}\]

and the choice of the function

\[H:=H_{\ell}=\tilde{\mathcal{N}}_{\ell-1}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(\|f\|_{b}\mathbf{1}).\]

We repeat this process until we have defined \(\tilde{\mathcal{N}}_{0},\ldots,\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-1}\). These operators will allow us to bound the \(L^{2}\) norms appearing in the statement of Proposition 3.3. The first step towards achieving this bound is to observe the following inequality:

\[|\mathcal{L}_{s,w_{N\lfloor n/N\rfloor}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|\leq\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-1}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H). \tag{3.10}\]

We omit the full proof of (3.10); it relies upon a simple inductive argument that makes use of property 3 of Lemma 3.4, as sketched below.
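To indicate the induction behind (3.10) (a sketch; we again use the convention \(\|f\|_{b}=\|f\|_{\infty}+|b|^{-1}\|f^{\prime}\|_{\infty}\) for the \(b\)-norm): one maintains, for \(0\leq\ell\leq\lfloor n/N\rfloor\), the pair of bounds

\[|\mathcal{L}_{s,w_{\ell N}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f)|\leq\tilde{\mathcal{N}}_{\ell-1}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H_{0})\quad\text{and}\quad|(\mathcal{L}_{s,w_{\ell N}}\circ\cdots\circ\mathcal{L}_{s,w_{1}}(f))^{\prime}|\leq A|b|\,\tilde{\mathcal{N}}_{\ell-1}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H_{0}).\]

For \(\ell=0\) these read \(|f|\leq H_{0}\) and \(|f^{\prime}|\leq A|b|H_{0}\), which hold since \(H_{0}=\|f\|_{b}\mathbf{1}\) and \(A>1\). If the \(\ell\)-th block equals \((w^{*})^{N}\), the inductive step is precisely property 3 of Lemma 3.4 applied with \(H=H_{\ell}\) (one checks along the way that \(H_{\ell}\in C_{A|b|}\), using property 1 and its analogue for the unperturbed operators). If the block is not \((w^{*})^{N}\), the step instead follows from the pointwise bound \(|\mathcal{L}_{s,w}g|\leq\mathcal{L}_{r,w}|g|\) together with the corresponding direct derivative estimate, valid for \(N\) large and \(|r|\) small.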
By (3.10), it is enough to bound the integral

\[\int_{K_{\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-1}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{\tilde{w}}.\]

Let

\[D=\{0\leq\ell\leq\lfloor n/N\rfloor-1:\tilde{\mathcal{N}}_{\ell}=\mathcal{N}_{s}^{J}\text{ for some }J\},\]

that is, \(D\) is the set of subscripts where \(\tilde{\mathcal{N}}_{\ell}\) has been chosen to be an operator coming from Lemma 3.4. Now the idea is that along the blocks corresponding to \(\ell\in D\) we will see decay due to Lemma 3.4, and for the other blocks we can control the expansion using (3.8).

Let us look at the last block of length \(N\). If \(\lfloor n/N\rfloor-1\notin D\), then by the Cauchy-Schwarz inequality, (3.8) and the definition of the unperturbed transfer operator, we have the bound

\[\int_{K_{\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-1}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{\tilde{w}}\leq\gamma_{1}^{2|r|N}\int_{K_{w_{N\lfloor n/N\rfloor-N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-2}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{w_{N\lfloor n/N\rfloor-N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}}.\]

If \(\lfloor n/N\rfloor-1\in D\), we bound it instead by

\[\varrho\int_{K_{w_{N\lfloor n/N\rfloor-N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-2}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{w_{N\lfloor n/N\rfloor-N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}},\]

which is possible by property 2 from Lemma 3.4. We then continue this process at the next stage. We bound

\[\int_{K_{w_{N\lfloor n/N\rfloor-N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-2}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{w_{N\lfloor n/N\rfloor-N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}}\]

when \(\lfloor n/N\rfloor-2\notin D\) by

\[\gamma_{1}^{2|r|N}\int_{K_{w_{N\lfloor n/N\rfloor-2N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-3}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{w_{N\lfloor n/N\rfloor-2N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}},\]

and if \(\lfloor n/N\rfloor-2\in D\) we use Lemma 3.4 to bound it by

\[\varrho\int_{K_{w_{N\lfloor n/N\rfloor-2N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-3}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{w_{N\lfloor n/N\rfloor-2N+1}\cdots w_{N\lfloor n/N\rfloor}\tilde{w}}.\]

We repeat this process until we have exhausted all of our operators \(\tilde{\mathcal{N}}_{0},\ldots,\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-1}\). Importantly, we will see a \(\varrho\) contraction every time \(\ell\in D\), and a \(\gamma_{1}^{2|r|N}\) expansion when \(\ell\notin D\).
At the same time, \(\sharp D\geq cn/N\), so we arrive at

\[\int_{K_{\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-1}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{\tilde{w}}\leq\gamma_{1}^{2|r|N\lfloor n/N\rfloor}\varrho^{cn/N}\int_{K_{w_{1}\ldots w_{N\lfloor n/N\rfloor}\tilde{w}}}|H|^{2}\,d\mu_{w_{1}\ldots w_{N\lfloor n/N\rfloor}\tilde{w}},\]

and as we chose \(H=H_{0}=\|f\|_{b}\mathbf{1}\) and \(\mu_{w_{1}\ldots w_{N\lfloor n/N\rfloor}\tilde{w}}\) is a probability measure, we have

\[\int_{K_{\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-1}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{\tilde{w}}\leq\gamma_{1}^{2|r|N\lfloor n/N\rfloor}\varrho^{cn/N}\|f\|_{b}^{2}.\]

Taking \(|r|\) sufficiently small that \(\gamma_{1}^{2|r|N}<\varrho^{-c/2}\) we obtain

\[\int_{K_{\tilde{w}}}|\tilde{\mathcal{N}}_{\lfloor n/N\rfloor-1}\circ\cdots\circ\tilde{\mathcal{N}}_{0}(H)|^{2}\,d\mu_{\tilde{w}}\leq\varrho^{cn/2N}\|f\|_{b}^{2}.\]

This completes our proof.

Thus we are just left with proving Lemma 3.4 by constructing the Dolgopyat-type operators.

### Construction of the Dolgopyat operators (proof of Lemma 3.4)

The starting point for the construction of the Dolgopyat operators is to build a kind of "tree structure" using the Cantor sets \(K_{w}\) and various parameters, which will eventually depend on the probability vector \((p_{a})_{a\in\mathbf{A}}\), the IFS, the partition of our IFS and the imaginary part \(b\) of the complex number \(s\).

**Proposition 3.5** (Tree structure).: _There exist constants \(A_{1}^{\prime},A_{1}>0\) and \(A_{2}>0\) such that for all \(\varepsilon\) sufficiently small, for any \(w\in\cup_{n=1}^{\infty}\Omega^{n}\) there exists a finite collection \((V_{j})_{1\leq j\leq q}\) of closed intervals ordered from left to right such that:_

1. \(I\subseteq\bigcup_{j}V_{j}\)_, and_ \[\operatorname{int}V_{i}\cap\operatorname{int}V_{j}=\varnothing\] _for_ \(i\neq j\)_._
2. _For all \(1\leq j\leq q\), we have_ \[\varepsilon A_{1}^{\prime}\leq|V_{j}|\leq\varepsilon A_{1}.\]
3. _For all \(1\leq j\leq q\) such that \(V_{j}\cap K_{w}\neq\varnothing\), either_ \[V_{j-1}\cap K_{w}\neq\varnothing\quad\text{and}\quad V_{j+1}\cap K_{w}\neq\varnothing;\] _or_ \[V_{j-2}\cap K_{w}\neq\varnothing\quad\text{and}\quad V_{j-1}\cap K_{w}\neq\varnothing;\] _or_ \[V_{j+1}\cap K_{w}\neq\varnothing\quad\text{and}\quad V_{j+2}\cap K_{w}\neq\varnothing.\]
4. _For all \(1\leq j\leq q\) such that \(V_{j}\cap K_{w}\neq\varnothing\) we have_ \[\operatorname{dist}(\partial V_{j},K_{w})\geq A_{2}|V_{j}|.\]

Proof.: Fix \(w\in\cup_{n=1}^{\infty}\Omega^{n}\). We define \(\pi_{w}:w\times(w^{*})^{\mathbb{N}}\to K_{w}\) according to the rule

\[\pi_{w}(\mathbf{a})=\lim_{n\to\infty}\varphi_{a_{1}}\circ\cdots\circ\varphi_{a_{n}}(0).\]

Recall that our separation assumption means that for any \(w\in\Omega\), for distinct \(a,b\in w\) we have \(\varphi_{a}(I)\cap\varphi_{b}(I)=\varnothing.\) This fact implies that \(\pi_{w}\) is a continuous bijection from \(w\times(w^{*})^{\mathbb{N}}\) to \(K_{w}\). For any \(\varepsilon>0\) sufficiently small and \(\mathbf{a}\in w\times(w^{*})^{\mathbb{N}}\), we define the \(\varepsilon\)-cutoff of \(\mathbf{a}\) to be the unique prefix \(a_{1}\ldots a_{M}\) of \(\mathbf{a}\) satisfying the following:

\[\operatorname{diam}(\varphi_{a_{1}\ldots a_{M}}(I))<\varepsilon\qquad\text{and}\qquad\operatorname{diam}(\varphi_{a_{1}\ldots a_{M-1}}(I))\geq\varepsilon.\]

We let \(\Sigma_{\varepsilon}\) denote the set of \(\varepsilon\)-cutoff words.
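For orientation, here is what the cutoff looks like in the simplest (affine) model, used purely to illustrate the definition and ignoring the UNI requirement: for \(\varphi_{0}(x)=x/3\) and \(\varphi_{1}(x)=x/3+2/3\) we have \(\operatorname{diam}(\varphi_{a_{1}\ldots a_{M}}(I))=3^{-M}\), so every \(\mathbf{a}\) has \(\varepsilon\)-cutoff equal to its prefix of length approximately \(\log(1/\varepsilon)/\log 3\). In general the cutoff length depends on \(\mathbf{a}\), but by uniform contraction it is always comparable to \(\log(1/\varepsilon)\), and \(\operatorname{diam}(\varphi_{\mathbf{a}}(I))\sim\varepsilon\) for every \(\mathbf{a}\in\Sigma_{\varepsilon}\), since removing the last letter increases the diameter by at most a bounded factor.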
To each \(\mathbf{a}\in\Sigma_{\varepsilon}\) we associate the \(w\)-cylinder set

\[[\mathbf{a}]_{w}:=\left\{\mathbf{b}\in w\times(w^{*})^{\mathbb{N}}:b_{1}\ldots b_{|\mathbf{a}|}=\mathbf{a}\right\},\]

and let

\[K_{\mathbf{a},w}:=\pi_{w}([\mathbf{a}]_{w}).\]

We have \(\cup_{\mathbf{a}\in\Sigma_{\varepsilon}}K_{\mathbf{a},w}=K_{w}.\) The sets \(K_{\mathbf{a},w}\) will be the tools we use to construct the intervals \((V_{j})\). Our first step is to derive a separation bound for these sets. Let \(\mathbf{a},\mathbf{a}^{\prime}\in\Sigma_{\varepsilon}\) be distinct and let \(|\mathbf{a}\wedge\mathbf{a}^{\prime}|=\inf\{k:a_{k}\neq a_{k}^{\prime}\}\). Then

\[d(K_{\mathbf{a},w},K_{\mathbf{a}^{\prime},w})\geq d(\varphi_{a_{1}\ldots a_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|}}(I),\varphi_{a^{\prime}_{1}\ldots a^{\prime}_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|}}(I))\geq\inf_{x\in I}\{|\varphi_{a_{1}\ldots a_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|-1}}^{\prime}(x)|\}\cdot d(\varphi_{a_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|}}(I),\varphi_{a^{\prime}_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|}}(I)).\]

The distance \(d(\varphi_{a_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|}}(I),\varphi_{a^{\prime}_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|}}(I))\) is bounded below by a constant that only depends upon the partition of our IFS. Moreover, by a bounded distortion argument, we know that

\[\inf_{x\in I}|\varphi_{a_{1}\ldots a_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|-1}}^{\prime}(x)|\sim\operatorname{diam}(\varphi_{a_{1}\ldots a_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|-1}}(I)).\]

Since \(a_{1}\ldots a_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|-1}\) is a prefix of an \(\varepsilon\)-cutoff word, we know that \(\operatorname{diam}(\varphi_{a_{1}\ldots a_{|\mathbf{a}\wedge\mathbf{a}^{\prime}|-1}}(I))\geq\varepsilon.\) It therefore follows from the above that

\[d(K_{\mathbf{a},w},K_{\mathbf{a}^{\prime},w})\gtrsim\varepsilon, \tag{3.11}\]

where the underlying constant depends only upon the partition of our IFS.

For each word \(\mathbf{a}\in\Sigma_{\varepsilon}\), by definition there exist \(w_{\mathbf{a},1},w_{\mathbf{a},2}\in\Omega\) such that \([\mathbf{a}]_{w}=\cup_{\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}}[\mathbf{a}\mathbf{b}]_{w}\). Moreover, the following properties hold for each \(\mathbf{a}\in\Sigma_{\varepsilon}\):

1. \(K_{\mathbf{a},w}=\cup_{\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}}K_{\mathbf{a}\mathbf{b},w}\).
2. \(\operatorname{diam}(K_{\mathbf{a}\mathbf{b},w})\sim\varepsilon\) for each \(\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}\).
3. \(d(K_{\mathbf{a}\mathbf{b},w},K_{\mathbf{a}\mathbf{b}^{\prime},w})\sim\varepsilon\) for distinct \(\mathbf{b},\mathbf{b}^{\prime}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}\).
4. \(\sharp w_{\mathbf{a},1}\times w_{\mathbf{a},2}\geq 4\).

Crucially, the underlying constants in the above items only depend upon the partition of our IFS. Item 1 holds by definition. Item 2 follows from the fact that \(\mathbf{a}\) is an \(\varepsilon\)-cutoff word and \(\operatorname{diam}(K_{\mathbf{a}\mathbf{b},w})\sim\operatorname{diam}(K_{\mathbf{a},w})\). The implicit lower bound in item 3 follows from the same reasoning as that given above to show \(d(K_{\mathbf{a},w},K_{\mathbf{a}^{\prime},w})\gtrsim\varepsilon\). The implicit upper bound follows since \(K_{\mathbf{a}\mathbf{b},w},K_{\mathbf{a}\mathbf{b}^{\prime},w}\subset K_{\mathbf{a},w}\) and \(\operatorname{diam}(K_{\mathbf{a},w})\sim\varepsilon\) for any \(\varepsilon\)-cutoff word.
The final bound, item 4, follows because each element of \(\Omega\) is a set containing either two or three elements by Proposition 3.1. We now use the sets \(\{K_{\mathbf{a}\mathbf{b},w}\}_{\mathbf{a}\in\Sigma_{\varepsilon},\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}}\) to construct the intervals \((V_{j})\). It follows from (3.11), item 1, and item 3 that

\[d(K_{\mathbf{a}\mathbf{b},w},K_{\mathbf{a}^{\prime}\mathbf{b}^{\prime},w})\gtrsim\varepsilon \tag{3.12}\]

when \(\mathbf{a},\mathbf{a}^{\prime}\in\Sigma_{\varepsilon}\) are distinct or \(\mathbf{b},\mathbf{b}^{\prime}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}\) are distinct. Now using (3.12) and item 2, we can associate to each \(K_{\mathbf{a}\mathbf{b},w}\) a closed interval \(V_{\mathbf{a}\mathbf{b},w}\) so that the following properties are satisfied:

a. \(K_{\mathbf{a}\mathbf{b},w}\subset V_{\mathbf{a}\mathbf{b},w}\) for each \(\mathbf{a}\in\Sigma_{\varepsilon}\) and \(\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}\).
b. \(\operatorname{int}V_{\mathbf{a}\mathbf{b},w}\cap\operatorname{int}V_{\mathbf{a}^{\prime}\mathbf{b}^{\prime},w}=\varnothing\) when \(\mathbf{a},\mathbf{a}^{\prime}\in\Sigma_{\varepsilon}\) are distinct or \(\mathbf{b},\mathbf{b}^{\prime}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}\) are distinct.
c. \(d(K_{w},\partial V_{\mathbf{a}\mathbf{b},w})\gtrsim\varepsilon\) for each \(\mathbf{a}\in\Sigma_{\varepsilon}\) and \(\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}\).
d. \(\operatorname{diam}(V_{\mathbf{a}\mathbf{b},w})\sim\varepsilon\) for each \(\mathbf{a}\in\Sigma_{\varepsilon}\) and \(\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}\).
e. If \(\mathbf{a},\mathbf{a}^{\prime}\in\Sigma_{\varepsilon}\) are distinct then \(d(V_{\mathbf{a}\mathbf{b},w},V_{\mathbf{a}^{\prime}\mathbf{b}^{\prime},w})\gtrsim\varepsilon\) for all \(\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}\) and \(\mathbf{b}^{\prime}\in w_{\mathbf{a}^{\prime},1}\times w_{\mathbf{a}^{\prime},2}\).
f. For a fixed \(\mathbf{a}\in\Sigma_{\varepsilon}\), successive \(V_{\mathbf{a}\mathbf{b},w}\) share a common endpoint.

We emphasise that each of the implicit constants in the above depends only upon the partition of our IFS. The intervals \(\{V_{\mathbf{a}\mathbf{b},w}\}\) satisfy properties 2, 3 and 4 of our proposition. Properties 2 and 4 follow from items c and d. Property 3 follows from items 4 and f. It remains to address property 1. By items a and f, for each \(\mathbf{a}\in\Sigma_{\varepsilon}\) we have the inclusion \(\operatorname{Conv}(K_{\mathbf{a},w})\subset\cup_{\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}}V_{\mathbf{a}\mathbf{b},w}\). Moreover, \(\{V_{\mathbf{a}\mathbf{b},w}\}\) satisfies the second part of property 1 by item b. It suffices therefore to introduce additional closed intervals to fill the gaps between the sets \(\cup_{\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}}V_{\mathbf{a}\mathbf{b},w}\) so that the first part of property 1 is satisfied, and so that the second part of this property and property 2 still hold. By item e we have \(d(\cup V_{\mathbf{a}\mathbf{b}},\cup V_{\mathbf{a}^{\prime}\mathbf{b}^{\prime}})\gtrsim\varepsilon\) for distinct \(\mathbf{a},\mathbf{a}^{\prime}\in\Sigma_{\varepsilon}\).
Therefore we can introduce finitely many closed intervals \(\{V_{l}\}\), each satisfying \(\operatorname{diam}(V_{l})\sim\varepsilon\), whose union fills the gaps between the sets \(\cup_{\mathbf{b}\in w_{\mathbf{a},1}\times w_{\mathbf{a},2}}V_{\mathbf{a}\mathbf{b},w}.\) Moreover, we can insist that successive elements of \(\{V_{l}\}\) only intersect at their endpoints, if at all. Taking \(\{V_{j}\}=\{V_{l}\}\cup\{V_{\mathbf{a}\mathbf{b},w}\}\), we see that this collection satisfies property 1, and properties 2, 3, and 4 still hold.

Let \(w\in\cup_{n=1}^{\infty}\Omega^{n}\) be fixed; this word plays the role of \(w^{\prime}\) from Lemma 3.4. We now also fix a collection of intervals \((V_{j})\) so that Proposition 3.5 is satisfied for \(\varepsilon=\frac{\varepsilon^{\prime}}{|b|}\), with \(\varepsilon^{\prime}\) and \(\frac{1}{|b|}\) both sufficiently small. For \(i\in\{1,2\}\) and \(1\leq j\leq q\) we let \(Z_{j}^{i}=\varphi_{\alpha_{i}^{N}}(V_{j})\). Properties 2 and 4 in Proposition 3.5 imply \(\operatorname{dist}(K_{w}\cap V_{j},\partial V_{j})\geq A_{2}A_{1}^{\prime}\frac{\varepsilon^{\prime}}{|b|}\) whenever \(K_{w}\cap V_{j}\neq\varnothing\). Hence, for all \(j\) such that \(K_{w}\cap V_{j}\neq\varnothing\), there exists a \(C^{1}\) cutoff function \(\chi_{j}\) on \(I\) such that \(0\leq\chi_{j}\leq 1\), \(\chi_{j}\equiv 1\) on the convex hull of \(K_{w}\cap V_{j}\), and \(\chi_{j}\equiv 0\) outside of \(V_{j}\). Moreover, we can assume

\[\|\chi_{j}^{\prime}\|_{\infty}\leq A_{3}\frac{|b|}{\varepsilon^{\prime}} \tag{3.13}\]

for some constant \(A_{3}\) depending upon the preceding constants. Given \(s=r+ib\), the set \(\mathcal{J}_{s}\) is defined by

\[\mathcal{J}_{s}=\left\{(i,j):i=1,2\text{ and }1\leq j\leq q\text{ with }V_{j}\cap K_{w}\neq\varnothing\right\}.\]

Note that by construction \(\mathcal{J}_{s}\) actually only depends on \(b\), but the random Dolgopyat operators \(\mathcal{N}_{s}^{J}\) are defined using the real part \(r\) as well:

**Definition 3.6** (Random Dolgopyat operators \(\mathcal{N}_{s}^{J}\)).: Let \(s=r+ib\) for \(|b|\) sufficiently large. Fix \(\theta\in(0,1)\), which we will eventually pick to be sufficiently small. Given non-empty \(J\subset\mathcal{J}_{s}\), define a function \(\chi_{J}\in C^{1}(I)\) by

\[\chi_{J}(x):=\begin{cases}1-\theta\chi_{j}(\varphi_{\alpha_{i}^{N}}^{-1}(x))&\text{ if }x\in Z_{j}^{i}\text{ for }(i,j)\in J,\\ 1&\text{ otherwise.}\end{cases}\]

The _random Dolgopyat operator_ \(\mathcal{N}_{s}^{J}\) is defined on \(C^{1}(I)\) by

\[\mathcal{N}_{s}^{J}(f):=\mathcal{L}_{r,w^{*}}^{N}(\chi_{J}f).\]
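A quick observation about Definition 3.6, recorded for orientation (it is not used verbatim later): since \(0\leq\chi_{J}\leq 1\) everywhere and \(\chi_{J}\geq 1-\theta\) by construction, for every \(f\geq 0\) we have

\[(1-\theta)\,\mathcal{L}_{r,w^{*}}^{N}(f)\leq\mathcal{N}_{s}^{J}(f)\leq\mathcal{L}_{r,w^{*}}^{N}(f).\]

Thus \(\mathcal{N}_{s}^{J}\) is a mild damping of the unperturbed operator, localised to the regions \(Z_{j}^{i}\), \((i,j)\in J\); the work below consists of showing that this damping is quantitatively felt (property 2 of Lemma 3.4) while the cone \(C_{A|b|}\) is respected (property 1).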
We now set out to prove that these operators satisfy the properties given in Lemma 3.4. We will begin with property 1 of this lemma.

Proof of property 1 of Lemma 3.4.: We start by showing that for suitable constants \(A,N\) and \(\theta\) the cone \(C_{A|b|}\) is stable under \(\mathcal{N}_{s}^{J}\). Given \(H\in C_{A|b|}\), assuming \(|r|<1\), for all \(x\in I\) we have

\[|\mathcal{N}_{s}^{J}(H)^{\prime}(x)| =|\mathcal{L}_{r,w^{*}}^{N}(\chi_{J}H)^{\prime}(x)|\]
\[\leq\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}\Big{|}\frac{\varphi_{\mathbf{a}}^{\prime\prime}(x)}{\varphi_{\mathbf{a}}^{\prime}(x)}\Big{|}|\varphi_{\mathbf{a}}^{\prime}(x)|^{r}\chi_{J}(\varphi_{\mathbf{a}}(x))H(\varphi_{\mathbf{a}}(x))+\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi_{\mathbf{a}}^{\prime}(x)|^{r}\Big{(}|(\chi_{J}\circ\varphi_{\mathbf{a}})^{\prime}(x)|H(\varphi_{\mathbf{a}}(x))+\chi_{J}(\varphi_{\mathbf{a}}(x))|(H\circ\varphi_{\mathbf{a}})^{\prime}(x)|\Big{)}.\]

By a bounded distortion argument, there exists \(C_{0}>0\) such that for all \(N\in\mathbb{N}\), \(x\in I\) and \(\mathbf{a}\in(w^{*})^{N}\), we have

\[\Big{|}\frac{\varphi_{\mathbf{a}}^{\prime\prime}(x)}{\varphi_{\mathbf{a}}^{\prime}(x)}\Big{|}\leq C_{0}.\]

If \((\chi_{J}\circ\varphi_{\mathbf{a}})^{\prime}(x)\neq 0\), then there exists \((i,j)\in J\) such that \(\mathbf{a}=\alpha_{i}^{N}\) and, locally around \(x\),

\[\chi_{J}\circ\varphi_{\mathbf{a}}=1-\theta(\chi_{j}\circ\varphi_{\alpha_{i}^{N}}^{-1}\circ\varphi_{\alpha_{i}^{N}})=1-\theta\chi_{j}.\]

Differentiating this latter expression and using (3.13) we obtain

\[|(\chi_{J}\circ\varphi_{\mathbf{a}})^{\prime}|\leq\theta A_{3}\frac{|b|}{\varepsilon^{\prime}}.\]

Moreover,

\[|(H\circ\varphi_{\mathbf{a}})^{\prime}(x)|=|H^{\prime}(\varphi_{\mathbf{a}}(x))\varphi_{\mathbf{a}}^{\prime}(x)|\leq A|b|\gamma^{-N}H(\varphi_{\mathbf{a}}(x)),\]

where in the last inequality we used that \(H\in C_{A|b|}\). Combining the inequalities above and using that \(H>0\), we have

\[|\mathcal{N}_{s}^{J}(H)^{\prime}(x)|\leq C_{0}\mathcal{N}_{s}^{J}(H)(x)+A_{3}\theta\frac{|b|}{\varepsilon^{\prime}}\mathcal{L}_{r,w^{*}}^{N}(H)(x)+A|b|\gamma^{-N}\mathcal{N}_{s}^{J}(H)(x).\]

Note that \(H=\chi_{J}H/\chi_{J}\leq\frac{1}{1-\theta}\chi_{J}H.\) Using this inequality, we see that for all \(A>2C_{0}+4A_{3}\), \(\theta<\min\{\varepsilon^{\prime},1/2\}\), and \(N\) sufficiently large so that \(\gamma^{-N}<1/2\), we have

\[|\mathcal{N}_{s}^{J}(H)^{\prime}(x)|\leq\left(C_{0}+\frac{A_{3}\theta}{(1-\theta)\varepsilon^{\prime}}+A\gamma^{-N}\right)|b|\mathcal{N}_{s}^{J}(H)(x)\leq A|b|\mathcal{N}_{s}^{J}(H)(x).\]

So the cone \(C_{A|b|}\) is stable under \(\mathcal{N}_{s}^{J}\). We have therefore established property 1 from Lemma 3.4.

We now turn our attention to property 2 from Lemma 3.4.

**Definition 3.7** (Dense subset).: We say that \(J\subset\mathcal{J}_{s}\) is _dense_ if for all \(1\leq j\leq q\) such that \(V_{j}\cap K_{w}\neq\varnothing\), there exists \(1\leq j^{\prime}\leq q\) with \((i,j^{\prime})\in J\) for some \(i\in\{1,2\}\) such that \(|j^{\prime}-j|\leq 2\).

Let \(J\) be a dense subset; we denote by \(W_{J}\) the subset of \(K_{w}\) defined by

\[W_{J}=\{x\in K_{w}:\exists(i,j)\in J:x\in V_{j}\}.\]

The following uniform doubling property of the random measures \(\mu_{w}\) will prove to be useful.

**Lemma 3.8**.: _There exists \(C>0\) such that for any \(w\in\cup_{n=1}^{\infty}\Omega^{n}\) we have_

\[\mu_{w}(B(x,2R))\leq C\mu_{w}(B(x,R))\]

_for all \(x\in K_{w}\) and \(R>0\)._

Proof.: Let \(x\in K_{w}\) and \(R>0\). Recalling the notation used in the proof of Proposition 3.5, we let \(a_{1}\dots a_{n}\) be the unique shortest word such that \(K_{a_{1}\dots a_{n},w}\subset B(x,R)\) and \(x\in K_{a_{1}\dots a_{n},w}\). Then \(\mu_{w}(B(x,R))\geq\mu_{w}(K_{a_{1}\dots a_{n},w})\).
By Proposition 3.1 we know that for any \(w\in\Omega\), for any distinct \(a,b\in w\) we have \(\varphi_{a}(I)\cap\varphi_{b}(I)=\varnothing\). It follows from this separation property that there exists \(l\in\mathbb{N}\), depending only upon the partition of our IFS, such that \(B(x,2R)\cap K_{w}\subset K_{a_{1}\dots a_{n-l},w}\). Therefore \(\mu_{w}(B(x,2R))\leq\mu_{w}(K_{a_{1}\dots a_{n-l},w}).\) Combining this bound with our previous inequality yields

\[\frac{\mu_{w}(B(x,2R))}{\mu_{w}(B(x,R))}\leq\frac{\mu_{w}(K_{a_{1}\dots a_{n-l},w})}{\mu_{w}(K_{a_{1}\dots a_{n},w})}.\]

Crucially, this latter term can be bounded above by a constant that only depends upon the partition of our IFS and the underlying probability vector \(\mathbf{p}\). This completes our proof.

**Lemma 3.9**.: _Let \(J\) be a dense subset and \(H\in C_{A|b|}\). Then there exists a constant \(\tilde{\varepsilon}>0\), depending upon \(\varepsilon^{\prime}\), the doubling constant from Lemma 3.8, and the partition of our IFS, such that_

\[\int_{W_{J}}H\,d\mu_{w}\geq\tilde{\varepsilon}\int_{K_{w}}H\,d\mu_{w}.\]

Proof.: Let \(\mathcal{G}\) denote the set of indices \(k\in\{1,\dots,q\}\) such that \(V_{k}\cap K_{w}\neq\varnothing\). Given \(k\in\mathcal{G}\), by the density of \(J\) there exists an index \(j(k)\) with \((i,j(k))\in J\) for some \(i\in\{1,2\}\) such that \(|j(k)-k|\leq 2\). By choosing such a \(j(k)\) for all \(k\in\mathcal{G}\) we get a map \(j:\mathcal{G}\to\{1,\dots,q\}\). Notice that for all \(j^{\prime}\in\{1,\dots,q\}\) the set \(j^{-1}(j^{\prime})\) contains at most \(5\) elements. For all \(k\in\mathcal{G}\), we choose an arbitrary \(u_{k}\in K_{w}\cap V_{k}\). We have \(V_{j(k)}\subset B(u_{k},R)\) and \(V_{k}\subset B(u_{k},R)\) for

\[R=3A_{1}\frac{\varepsilon^{\prime}}{|b|}.\]

Here we have used property 2 from Proposition 3.5. By property 4 of Proposition 3.5, we also have \(B(v_{k},R^{\prime})\subset V_{j(k)}\) where

\[R^{\prime}=\frac{1}{2}A_{2}A_{1}^{\prime}\frac{\varepsilon^{\prime}}{|b|}\]

and \(v_{k}\in K_{w}\cap V_{j(k)}\) is such that

\[\operatorname{dist}(v_{k},\partial V_{j(k)})=\operatorname{dist}(K_{w}\cap V_{j(k)},\partial V_{j(k)}).\]

Let \(H\in C_{A|b|}\). We have

\[\int_{K_{w}}H\,d\mu_{w}=\sum_{k\in\mathcal{G}}\int_{V_{k}}H\,d\mu_{w}\leq\sum_{k\in\mathcal{G}}\int_{B(u_{k},R)}H\,d\mu_{w}\leq\sum_{k\in\mathcal{G}}(\max_{B(u_{k},R)}H)\,\mu_{w}(B(u_{k},R)).\]

For our choice of \(R\) and \(R^{\prime}\) we have

\[B(v_{k},R^{\prime})\subset B(u_{k},R)\subset B(v_{k},2R).\]

Therefore, using Lemma 3.8, it follows that there exists \(C^{\prime}>0\) depending only upon \(A^{\prime}_{1},A_{1}\) and \(A_{2}\) such that

\[\mu_{w}(B(u_{k},R))\leq\mu_{w}(B(v_{k},2R))\leq C^{\prime}\mu_{w}(B(v_{k},R^{\prime}))\leq C^{\prime}\mu_{w}(V_{j(k)}).\]

Now using the fact that

\[e^{-A|b||x-y|}\leq\frac{H(x)}{H(y)}\leq e^{A|b||x-y|}\]

for all \(x,y\in I\), we deduce

\[\int_{K_{w}}H\,d\mu_{w} \leq C^{\prime}\sum_{k\in\mathcal{G}}e^{2A|b|R}(\min_{V_{j(k)}}H)\,\mu_{w}(V_{j(k)})\]
\[\leq C^{\prime}e^{6AA_{1}\varepsilon^{\prime}}\sum_{k\in\mathcal{G}}\int_{V_{j(k)}}H\,d\mu_{w}\]
\[\leq 5C^{\prime}e^{6AA_{1}\varepsilon^{\prime}}\sum_{j:\exists i\,s.t.\,(i,j)\in J}\int_{V_{j}}H\,d\mu_{w}\]
\[=5C^{\prime}e^{6AA_{1}\varepsilon^{\prime}}\int_{W_{J}}H\,d\mu_{w}.\]

Taking \(\tilde{\varepsilon}=(5C^{\prime}e^{6AA_{1}\varepsilon^{\prime}})^{-1}\) completes our proof.

We define \(\mathcal{E}_{s}\) to be the collection of those subsets \(J\subset\mathcal{J}_{s}\) such that \(J\) is dense.
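Note that \(\mathcal{E}_{s}\neq\varnothing\); indeed, \(\mathcal{J}_{s}\) itself is dense, since in Definition 3.7 one may always take \(j^{\prime}=j\). The role of density is exactly to feed Lemma 3.9: it guarantees that the set \(W_{J}\), on which the damping of Definition 3.6 is active, carries a definite proportion \(\tilde{\varepsilon}\) of the mass of any cone function \(H\), and this is what produces the uniform contraction factor in the following proposition.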
**Proposition 3.10**.: _There exists \(0<\varrho<1\) such that for all \(w\in\cup_{n=1}^{\infty}\Omega^{n}\), \(|r|\) sufficiently small and \(|b|\) sufficiently large, for all \(H\in C_{A|b|}\) and for all \(J\in\mathcal{E}_{s}\) we have_

\[\int_{K_{w}}|\mathcal{N}_{s}^{J}(H)|^{2}\,d\mu_{w}\leq\varrho\int_{K_{(w^{*})^{N}w}}H^{2}\,d\mu_{(w^{*})^{N}w}.\]

Proof.: Let \(H\in C_{A|b|}\) and \(J\in\mathcal{E}_{s}\). For all \(x\in I\), we have by the Cauchy-Schwarz inequality and the bound \(\chi_{J}^{2}\leq\chi_{J}\),

\[(\mathcal{N}_{s}^{J}(H))^{2}(x) =\left(\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi_{\mathbf{a}}^{\prime}(x)|^{r}\chi_{J}(\varphi_{\mathbf{a}}(x))H(\varphi_{\mathbf{a}}(x))\right)^{2}\]
\[\leq\left(\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi_{\mathbf{a}}^{\prime}(x)|^{2r}\chi_{J}^{2}(\varphi_{\mathbf{a}}(x))\right)\left(\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}H^{2}(\varphi_{\mathbf{a}}(x))\right)\]
\[\leq\left(\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi_{\mathbf{a}}^{\prime}(x)|^{2r}\chi_{J}(\varphi_{\mathbf{a}}(x))\right)\mathcal{L}_{w^{*}}^{N}(H^{2})(x).\]

For all \(x\in W_{J}\), for a well chosen \(i\) we have \(\chi_{J}(\varphi_{\alpha_{i}^{N}}(x))=1-\theta\). Therefore for \(x\in W_{J}\) we have

\[\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi_{\mathbf{a}}^{\prime}(x)|^{2r}\chi_{J}(\varphi_{\mathbf{a}}(x)) \leq\sum_{\mathbf{a}\neq\alpha_{i}^{N}}\frac{1}{2^{N}}|\varphi_{\mathbf{a}}^{\prime}(x)|^{2r}\chi_{J}(\varphi_{\mathbf{a}}(x))+\frac{(1-\theta)}{2^{N}}|\varphi_{\alpha_{i}^{N}}^{\prime}(x)|^{2r}\]
\[\leq\sum_{\mathbf{a}\neq\alpha_{i}^{N}}\frac{1}{2^{N}}\gamma_{1}^{|2r|N}+\frac{(1-\theta)}{2^{N}}\gamma_{1}^{|2r|N}\]
\[\leq\gamma_{1}^{|2r|N}\left(1-\frac{\theta}{2^{N}}\right).\]

In the penultimate inequality we have used (3.8). It can similarly be shown that for \(x\notin W_{J}\) we have

\[\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi_{\mathbf{a}}^{\prime}(x)|^{2r}\chi_{J}(\varphi_{\mathbf{a}}(x))\leq\gamma_{1}^{|2r|N}.\]

Now

\[\int_{K_{w}}(\mathcal{N}_{s}^{J}(H))^{2}\,d\mu_{w}=\int_{W_{J}}(\mathcal{N}_{s}^{J}(H))^{2}\,d\mu_{w}+\int_{K_{w}\setminus W_{J}}(\mathcal{N}_{s}^{J}(H))^{2}\,d\mu_{w}.\]

Applying the inequalities above yields

\[\int_{K_{w}}(\mathcal{N}_{s}^{J}(H))^{2}\,d\mu_{w} \leq\gamma_{1}^{|2r|N}\left(1-\frac{\theta}{2^{N}}\right)\int_{W_{J}}\mathcal{L}_{w^{*}}^{N}(H^{2})\,d\mu_{w}+\gamma_{1}^{|2r|N}\int_{K_{w}\setminus W_{J}}\mathcal{L}_{w^{*}}^{N}(H^{2})\,d\mu_{w}\]
\[=\gamma_{1}^{|2r|N}\int_{K_{w}}\mathcal{L}_{w^{*}}^{N}(H^{2})\,d\mu_{w}-\frac{\gamma_{1}^{|2r|N}\theta}{2^{N}}\int_{W_{J}}\mathcal{L}_{w^{*}}^{N}(H^{2})\,d\mu_{w}.\]

Applying Lemma 3.9 to \(\mathcal{L}_{w^{*}}^{N}(H^{2})\), which is possible as \(\mathcal{L}_{w^{*}}^{N}(H^{2})\in C_{3A|b|/4}\subset C_{A|b|}\) when \(N\) is sufficiently large, we have

\[\int_{K_{w}}(\mathcal{N}_{s}^{J}(H))^{2}\,d\mu_{w}\leq\gamma_{1}^{|2r|N}\left(1-\frac{\tilde{\varepsilon}\theta}{2^{N}}\right)\int_{K_{w}}\mathcal{L}_{w^{*}}^{N}(H^{2})\,d\mu_{w}.\]

Our proof now follows by taking \(\varrho\in(0,1)\) so that \(\gamma_{1}^{|2r|N}\left(1-\frac{\tilde{\varepsilon}\theta}{2^{N}}\right)\leq\varrho\) for all \(|r|\) sufficiently small, and using that

\[\int_{K_{w}}\mathcal{L}_{w^{*}}^{N}(H^{2})\,d\mu_{w}=\int_{K_{(w^{*})^{N}w}}H^{2}\,d\mu_{(w^{*})^{N}w}.\]

This proposition establishes property 2 of Lemma 3.4. We now turn our attention to the first part of property 3 of Lemma 3.4. This is where the nonlinearity of the IFS will manifest itself.
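As a heuristic for why nonlinearity is essential at this point (a remark only; it is not used in the proofs): if the maps \(\varphi_{\alpha_{1}},\varphi_{\alpha_{2}}\) were affine, then \(|\varphi_{\alpha_{i}^{l}}^{\prime}|\) would be constant on \(I\), so the phases \(x\mapsto|\varphi_{\alpha_{i}^{l}}^{\prime}(x)|^{ib}\) appearing in the perturbed transfer operator would not oscillate in \(x\) at all, and no cancellation between the two branches could be extracted; correspondingly, the quantity inside the absolute value in (3.14) below would vanish identically, since \(\varphi_{\alpha_{i}^{l}}^{\prime\prime}\equiv 0\). Condition (3.14) quantifies the failure of this degeneracy uniformly along the iterates \(\alpha_{1}^{l},\alpha_{2}^{l}\).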
We recall here the nonlinearity property of the IFS \(w^{*}\) that follows from the discussion following Proposition 3.1: There exists \(c_{1},c_{2},\delta>0\) such that for all \(x\in\{x:d(x,K)<\delta\}\) and \(l\in\mathbb{N}\) we have \[c_{1}\leq\left|\frac{\varphi_{\alpha_{1}^{\prime}}^{\prime\prime}(x)}{\varphi_ {\alpha_{1}^{\prime}}^{\prime}(x)}-\frac{\varphi_{\alpha_{2}^{\prime}}^{ \prime\prime}(x)}{\varphi_{\alpha_{2}^{\prime}}^{\prime}(x)}\right|\leq c_{2}. \tag{3.14}\] Equipped with the nonlinearity UNI condition (3.14), we can now prove: **Lemma 3.11**.: _Let \(s=r+ib\), \(H\in C_{A|b|}\), \(f\in C^{1}(I)\) be such that \(|f|\leq H\) and \(|f^{\prime}|\leq A|b|H\). Define the functions \(\Theta_{i}:I\to[0,\infty)\) for \(i=1,2\), by_ \[\Theta_{1}(x):=\frac{\left||\varphi_{\alpha_{1}^{N}}^{\prime}(x)|^{s}f( \varphi_{\alpha_{1}^{N}}(x))+|\varphi_{\alpha_{2}^{N}}^{\prime}(x)|^{s}f( \varphi_{\alpha_{2}^{N}}(x))|\right|}{(1-2\theta)|\varphi_{\alpha_{1}^{N}}^{ \prime}(x)|^{r}H(\varphi_{\alpha_{1}^{N}}(x))+|\varphi_{\alpha_{2}^{N}}^{ \prime}(x)|^{r}H(\varphi_{\alpha_{2}^{N}}(x))}\] _and_ \[\Theta_{2}(x):=\frac{\left||\varphi_{\alpha_{1}^{N}}^{\prime}(x)|^{s}f( \varphi_{\alpha_{1}^{N}}(x))+|\varphi_{\alpha_{2}^{N}}(x)|^{s}f(\varphi_{ \alpha_{2}^{N}}(x))|\right|}{|\varphi_{\alpha_{1}^{N}}^{\prime}(x)|^{r}H( \varphi_{\alpha_{1}^{N}}(x))+(1-2\theta)|\varphi_{\alpha_{2}^{N}}^{\prime}(x )|^{r}H(\varphi_{\alpha_{2}^{N}}(x))}\] _Then for \(\theta\), \(|r|\) and \(\varepsilon^{\prime}\) sufficiently small, for any \(w\in\Omega^{*}\) for all \(j\) such that \(V_{j}\cap K_{w}\neq\varnothing\), there exists \(j^{\prime}\) with \(|j^{\prime}-j|\leq 2\), \(V_{j^{\prime}}\cap K_{w}\neq\varnothing\) and \(i\in\{1,2\}\) such that for all \(x\in V_{j^{\prime}}\), we have_ \[\Theta_{i}(x)\leq 1.\] This important lemma relies upon the following lemmas that are taken directly from Naud's paper [50, Lemma 5.11 and Lemma 5.12]: **Lemma 3.12**.: _Let \(Z\subset I\) be an interval with \(|Z|\leq\frac{c}{|b|}\). Let \(H\in C_{A|b|}\) and \(f\in C^{1}(I)\) satisfy \(|f|\leq H\) and \(|f^{\prime}|\leq A|b|H\). Then for \(c\) sufficiently small, we have either \(|f(u)|\leq\frac{3}{4}H(u)\) for all \(u\in Z\), or \(|f(u)|\geq\frac{1}{4}H(u)\) for all \(u\in Z\)._ **Lemma 3.13**.: _Let \(z_{1},z_{2}\neq 0\) be two complex numbers such that \(\left|\frac{z_{1}}{z_{2}}\right|\leq L\) and \(2\pi-\varepsilon\geq|arg(z_{1})-arg(z_{2})|\geq\varepsilon>0\). Then there exists \(0<\delta(L,\varepsilon)<1\) such that_ \[|z_{1}+z_{2}|\leq(1-\delta)|z_{1}|+|z_{2}|.\] Proof of Lemma 3.11.: Let \(\varepsilon^{\prime}\) be sufficiently small so that Lemma 3.12 holds for all \(Z=Z_{j}^{i}\). We assume that \(0<\theta<1/8\). We have \(|Z_{j}^{i}|\leq|V_{j}|\cdot\gamma^{-N}\) so we can always assume \(|Z_{j}^{i}|\leq|V_{j}|\). Let \(V_{j},V_{j+1},V_{j+2}\) be a triple of intervals each with non-empty intersection with \(K_{w}\). Let \(\widehat{V}_{j}=V_{j}\cup V_{j+1}\cup V_{j+2}\). We assume that \(\varepsilon^{\prime}\) is sufficiently small so that \(\widehat{V}_{j}\subset\{x:d(x,K)<\delta\}\), and therefore (3.14) applies to elements in \(\widehat{V}_{j}\). Two cases occur. If there exists \(j^{\prime}\in\{j,j+1,j+2\}\) such that \(|f(u)|\leq\frac{3}{4}H(u)\) for all \(u\in Z_{j^{\prime}}^{i}\) for some \(i\in\{1,2\}\), then \(\Theta_{i}(x)\leq 1\) for all \(x\in V_{j^{\prime}}\) (here we are using that \(\theta<1/8\)). 
If this is not the case, then by Lemma 3.12 we have for all \(j^{\prime}\in\{j,j+1,j+2\}\), for all \(i\in\{1,2\}\) and for all \(u\in Z_{j^{\prime}}^{i}\), \[|f(u)|\geq\frac{1}{4}H(u). \tag{3.15}\] We now set out to apply Lemma 3.13 to complete our proof. For \(x\in\widehat{V}_{j}\), we set \[z_{1}(x)=|\varphi_{\alpha_{1}^{N}}^{\prime}(x)|^{s}f(\varphi_{\alpha_{1}^{N}}( x))\text{ and }z_{2}(x)=|\varphi_{\alpha_{2}^{N}}^{\prime}(x)|^{s}f(\varphi_{\alpha_{2}^{N}}(x)).\] We claim that given \(j^{\prime}\in\{j,j+1,j+2\}\), we have either \(\left|\frac{z_{1}(x)}{z_{2}(x)}\right|\leq M\) for all \(x\in V_{j^{\prime}}\) or \(\left|\frac{z_{2}(x)}{z_{1}(x)}\right|\leq M\) for all \(x\in V_{j^{\prime}}\) for some \(M>0\). Using (3.15), our assumptions on \(f\) and (3.8), we see that for all \(x\in V_{j^{\prime}}\) we have \[\gamma^{-|2r|N}\frac{H(\varphi_{\alpha_{1}^{N}}(x))}{4H(\varphi_{\alpha_{2}^{N}}(x ))}\leq\left|\frac{z_{1}(x)}{z_{2}(x)}\right|\leq\gamma^{|2r|N}\frac{4H(\varphi_ {\alpha_{1}^{N}}(x))}{H(\varphi_{\alpha_{2}^{N}}(x))}.\] Taking \(|r|\) sufficiently small so that \(|r|<1/2\), we see that the above immediately implies \[\gamma^{-N}\frac{H(\varphi_{\alpha_{1}^{N}}(x))}{4H(\varphi_{\alpha_{2}^{N}}(x ))}\leq\left|\frac{z_{1}(x)}{z_{2}(x)}\right|\leq\gamma^{N}\frac{4H(\varphi_{ \alpha_{1}^{N}}(x))}{H(\varphi_{\alpha_{2}^{N}}(x))}.\] If there exists \(x_{0}\in V_{j^{\prime}}\) such that \[\frac{H(\varphi_{\alpha_{1}^{N}}(x_{0})))}{H(\varphi_{\alpha_{2}^{N}}(x_{0}) )}\leq 1, \tag{3.16}\] then for all \(x\in V_{j^{\prime}}\) we have \[\frac{H(\varphi_{\alpha_{1}^{N}}(x))}{H(\varphi_{\alpha_{2}^{N}}(x))}\leq \frac{e^{AA_{1}e^{\prime}}H(\varphi_{\alpha_{1}^{N}}(x_{0})))}{e^{-AA_{1}e^{ \prime}}H(\varphi_{\alpha_{2}^{N}}(x_{0}))}\leq e^{2AA_{1}e^{\prime}}.\] Here we are using property 2 from Proposition 3.5, the fact \(|Z_{j}^{i}|\leq|V_{j}|\), and the inequality \[e^{-A|b||x-y|}\leq\frac{H(x)}{H(y)}\leq e^{A|b||x-y|}\] for all \(x,y\in I\). Therefore if (3.16) holds for some \(x_{0}\in V_{j^{\prime}}\) then \[\left|\frac{z_{1}(x)}{z_{2}(x)}\right|\leq 4\gamma^{N}e^{2AA_{1}e^{\prime}}=:M.\] If \[\frac{H(\varphi_{\alpha_{1}^{N}}(x)))}{H(\varphi_{\alpha_{2}^{N}}(x))}\geq 1\] for all \(x\in V_{j^{\prime}}\), then it can similarly be shown that \[\left|\frac{z_{2}(x)}{z_{1}(x)}\right|\leq 4\gamma^{N}e^{2AA_{1}e^{\prime}}.\] This completes our proof of the claim. We now try to control the variations of the arguments of \(z_{1}\) and \(z_{2}\). Since \(|z_{i}(x)|\geq\frac{\gamma^{-|r|N}}{4}H(\varphi_{\alpha_{i}^{N}}(x))>0\) for all \(x\in\widehat{V}_{j}\) and \(i=1,2\), there exist two \(C^{1}\) functions \(L_{i}:\widehat{V}_{j}\to\mathbb{C}\) such that for \(i=1,2\) we have \(L_{i}^{\prime}(x)=\frac{z_{i}^{\prime}(x)}{z_{i}(x)}\) and \(e^{L_{i}(x)}=z_{i}(x)\) for all \(x\in\widehat{V}_{j}\)2. Footnote 2: Details on how to construct the \(L_{i}\) are given in [50]. 
Let \[\Phi(x)=Im(L_{1}(x))-Im(L_{2}(x)).\] Taking derivatives, for all \(x\in\widehat{V}_{j}\) we get \[\Phi^{\prime}(x) =Im\left(\frac{z_{1}^{\prime}(x)}{z_{1}(x)}-\frac{z_{2}^{\prime}( x)}{z_{2}(x)}\right)\] \[=b\left(\frac{\varphi_{\alpha_{1}^{N}}^{\prime\prime}(x)}{\varphi_ {\alpha_{1}^{N}}^{\prime}(x)}-\frac{\varphi_{\alpha_{2}^{N}}^{\prime\prime}(x)} {\varphi_{\alpha_{2}^{N}}^{\prime}(x)}\right)+Im\left(\frac{(f\circ\varphi_ {\alpha_{1}^{N}})^{\prime}(x)}{f(\varphi_{\alpha_{1}^{N}}(x))}-\frac{(f\circ \varphi_{\alpha_{2}^{N}})^{\prime}(x)}{f(\varphi_{\alpha_{2}^{N}}(x))}\right).\] Using (3.15) and our assumptions on \(f\) we have \[\left|\frac{(f\circ\varphi_{\alpha_{1}^{N}})^{\prime}(x)}{f(\varphi_{\alpha_{1}^{ N}}(x))}-\frac{(f\circ\varphi_{\alpha_{2}^{N}})^{\prime}(x)}{f(\varphi_{\alpha_{2}^{ N}}(x))}\right|\leq 8A|b|\gamma^{-N}.\] Recall that \(\widehat{V}_{j}\subset\{x:d(x,K)<\delta\}\) where \(\delta\) is as in the statement of (3.14). Hence by the UNI condition (3.14), for all \(x\in\widehat{V}_{j}\) we have \[c_{1}-8A\gamma^{-N}\leq\frac{|\Phi^{\prime}(x)|}{|b|}\leq c_{2}+8A\gamma^{-N}.\] For \(x\in V_{j}\) and \(x^{\prime}\in V_{j+2}\), we now have by Proposition 3.5 and the mean value theorem that \[\left(c_{1}-8A\gamma^{-N}\right)A_{1}^{\prime}\varepsilon^{\prime}\leq|\Phi(x )-\Phi(x^{\prime})|\leq\left(c_{2}+8A\gamma^{-N}\right)3A_{1}\varepsilon^{ \prime}.\] By choosing \(N\) large enough, we see that there exists \(B_{1},B_{2}\) independent of \(x,x^{\prime}\) and \(|b|\) such that \[B_{1}\varepsilon^{\prime}\leq|\Phi(x)-\Phi(x^{\prime})|\leq B_{2}\varepsilon^ {\prime}.\] We now choose \(\varepsilon^{\prime}\) such that \((B_{2}+B_{1}/2)\varepsilon^{\prime}\leq\pi\) and set \(\varepsilon=B_{1}\frac{\varepsilon^{\prime}}{4}\). Suppose now that there exists \(x\in V_{j}\) and \(x^{\prime}\in V_{j+2}\) such that both \[\Phi(x),\Phi(x^{\prime})\in\cup_{k\in\mathbb{Z}}[2k\pi-\varepsilon,2k\pi+ \varepsilon].\] Since \(|\Phi(x)-\Phi(x)|\leq B_{2}\varepsilon^{\prime}\), we cannot have \[\Phi(x)\in[2k_{1}\pi-\varepsilon,2k_{1}\pi+\varepsilon]\text{ and }\Phi(x^{ \prime})\in[2k_{2}\pi-\varepsilon,2k_{2}\pi+\varepsilon])\] with \(k_{1}\neq k_{2}\). As in that case we would have \[B_{2}\varepsilon^{\prime}\geq|\Phi(x)-\Phi(x^{\prime})|\geq 2\pi-2\varepsilon=2 \pi-B_{1}\varepsilon^{\prime}/2,\] which is not possible by our choice of \(\varepsilon^{\prime}\). Therefore we have \[B_{1}\varepsilon^{\prime}\leq|\Phi(x)-\Phi(x^{\prime})|\leq 2\varepsilon=B_{1} \varepsilon^{\prime}/2\] which is again a contradiction. Therefore there exists \(j^{\prime}\in\{j,j+2\}\) such that for all \(x\in V_{j^{\prime}}\), \(d(\Phi(x),2\pi\mathbb{Z}))>\varepsilon.\) Because \(e^{i(\Phi(x))}=e^{i(arg(z_{1})-arg(z_{2}))}\), the hypothesis of Lemma 3.13 are satisfied. We get either for all \(x\in V_{j^{\prime}}\) \[|z_{1}(x)+z_{2}(x)|\leq(1-\delta(M,\varepsilon))|z_{1}(x)|+|z_{2}(x)|,\] or for all \(x\in V_{j^{\prime}}\) \[|z_{1}(x)+z_{2}(x)|\leq(1-\delta(M,\varepsilon))|z_{2}(x)|+|z_{1}(x)|,\] depending on whether \(\left|\frac{z_{1}(x)}{z_{2}(x)}\right|\leq M\) or \(\left|\frac{z_{2}(x)}{z_{1}(x)}\right|\leq M\). By choosing \(0<\theta<\frac{1}{2}\delta(M,\varepsilon)\) we have \(\Theta_{i}(x)\leq 1\) for some \(i\in\{1,2\}\) for all \(x\in V_{j^{\prime}}\). Now we can prove the first part of property 3 from Lemma 3.4. Proof of the first part of the property 3 of Lemma 3.4.: Fix \(w\in\cup_{n=1}^{\infty}\Omega^{n}\). 
We assume that the constants have been chosen so that property 1, property 2, and Lemma 3.11 are satisfied. Let \(f\in C^{1}(I)\) and \(H\in C_{A|b|}\) with \(|f|\leq H\) and \(|f^{\prime}|\leq A|b|H\). We must show that there exists a dense subset \(J\in\mathcal{E}\) such that \[|\mathcal{L}_{s,w^{*}}^{N}(f)|\leq\mathcal{N}_{s}^{J}(H).\] Let \(J\) be the set of indexes \((i,j)\) such that \(\Theta_{i}(x)\leq 1\) for all \(x\in V_{j}\). Lemma 3.11 tells us that \(J\) is dense. Let \(x\in I\). If \(x\notin\operatorname{int}V_{j}\) for any \(j\) such that \((i,j)\in J\) for some \(i\in\{1,2\}\), then \(\chi_{J}(\varphi_{\mathbf{a}}(x))=1\) for all \(\mathbf{a}\in(w^{*})^{N}\). This is because \(\varphi_{\mathbf{a}}(x)\in Z_{j}^{i}\) if and only if \(\mathbf{a}=\alpha_{i}^{N}\) and \(x\in V_{j}\) for some \((i,j)\in J\). Therefore for \(x\notin\operatorname{int}V_{j}\) we have \[|\mathcal{L}_{s,w^{*}}^{N}(f)(x)| \leq\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi_{ \mathbf{a}}^{\prime}(x)|^{r}H(\varphi_{\mathbf{a}}(x))\] \[=\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi_{ \mathbf{a}}^{\prime}(x)|^{r}\chi_{J}(\varphi_{\mathbf{a}}(x))H(\varphi_{ \mathbf{a}}(x))\] \[=\mathcal{N}_{s}^{J}(H)(x).\] If \(x\in\operatorname{int}V_{j}\) for some \(j\) for which there exists \(i\in\{1,2\}\) such that \((i,j)\in J,\) then we apply the following argument. 1. If \((1,j)\in J\) and \((2,j)\notin J\), then \(\chi_{J}(\varphi_{\mathbf{a}}(x))=1\) for all \(\mathbf{a}\in(w^{*})^{N}\) such that \(a\neq\alpha_{1}^{N}\). Now using the fact that \(\Theta_{1}(x)\leq 1\), we get \[|\mathcal{L}_{s,w^{*}}^{N}(f)(x)| \leq\sum_{\begin{subarray}{c}\mathbf{a}\in(w^{*})^{N}\\ \mathbf{a}\neq\alpha_{1}^{N},\alpha_{2}^{N}\end{subarray}}\frac{1}{2^{N}}| \varphi_{\mathbf{a}}^{\prime}(x)|^{r}H(\varphi_{\mathbf{a}}(x))\] \[\qquad+\frac{(1-2\theta)}{2^{N}}|\varphi_{\alpha_{1}^{N}}^{\prime }(x)|^{r}H(\varphi_{\alpha_{1}^{N}}(x))+\frac{1}{2^{N}}|\varphi_{\alpha_{2}^{N} }^{\prime}(x)|^{r}H(\varphi_{\alpha_{2}^{N}}(x))\] \[\leq\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi_{ \mathbf{a}}(x)|^{r}\chi_{J}(\varphi_{\mathbf{a}}(x))H(\varphi_{\mathbf{a}}(x))\] \[=\mathcal{N}_{s}^{J}(H)(x).\] The case \((2,j)\in J\) and \((1,j)\notin J\) is symmetric. 2. If \((1,j)\in J\) and \((2,j)\in J\), then \(\chi_{J}(\varphi_{\mathbf{a}}(x))=1\) for all \(\mathbf{a}\notin\{\alpha_{1}^{N},\alpha_{2}^{N}\}\). In addition \(\Theta_{1}(x)\leq 1\) and \(\Theta_{2}(x)\leq 1\). Combining these two inequalities we deduce \[\left|\varphi_{\alpha_{1}^{N}}^{\prime}(x)|^{s}f(\varphi_{\alpha _{1}^{N}}(x))+|\varphi_{\alpha_{2}^{N}}^{\prime}(x)|^{s}f(\varphi_{\alpha_{2}^ {N}}(x))\right|\] \[\leq (1-\theta)|\varphi_{\alpha_{1}^{N}}^{\prime}(x)|^{r}H(\varphi_{ \alpha_{1}^{N}}(x))+(1-\theta)|\varphi_{\alpha_{2}^{N}}^{\prime}(x)|^{r}H( \varphi_{\alpha_{2}^{N}}(x))\] \[\leq |\varphi_{\alpha_{1}^{N}}^{\prime}(x)|^{r}\chi_{J}(\varphi_{ \alpha_{1}^{N}}(x))H(\varphi_{\alpha_{1}^{N}}(x))+|\varphi_{\alpha_{2}^{N}}^{ \prime}(x)|^{r}\chi_{J}(\varphi_{\alpha_{2}^{N}}(x))H(\varphi_{\alpha_{2}^{N}} (x)).\] This implies that \[|\mathcal{L}_{s,w^{*}}^{N}(f)(x)|\leq\mathcal{N}_{s}^{J}(H)(x).\] This complete our proof of the first part of property 3 from Lemma 3.4. Now we will focus on the second part of property 3 from Lemma 3.4. Proof of the second part of property 3 of Lemma 3.4.: Let \(f\in C^{1}(I)\) and \(H\in C_{A|b|}\) be such that \(|f|\leq H\) and \(|f^{\prime}|\leq A|b|H.\) Assume now \(|b|>1\) and \(|r|<1\). 
Then we have: \[|\mathcal{L}^{N}_{s,w^{*}}(f)^{\prime}(x)|\] \[\leq\sum_{\mathbf{a}\in(w^{*})^{N}}\frac{1}{2^{N}}|\varphi^{\prime }_{\mathbf{a}}(x)|^{r}\left(|(f\circ\varphi_{\mathbf{a}})^{\prime}(x)|+\frac{| \varphi^{\prime\prime}_{\mathbf{a}}(x)|}{|\varphi^{\prime}_{\mathbf{a}}(x)|}| s||f(\varphi_{\mathbf{a}}(x))|\right)\] \[\leq\sum\frac{1}{2^{N}}|\varphi^{\prime}_{\mathbf{a}}(x)|^{r} \left(|f^{\prime}(\varphi_{\mathbf{a}}(x))|\gamma^{-N}+C_{0}|s||H(\varphi_{ \mathbf{a}}(x))|\right)\] \[\leq\sum\frac{1}{2^{N}}|\varphi^{\prime}_{\mathbf{a}}(x)|^{r} \left(A|b|H(\varphi_{\mathbf{a}}(x))\gamma^{-N}+2C_{0}|b|H(\varphi_{\mathbf{a }}(x))|\right)\] \[=A|b|\gamma^{-N}\mathcal{L}^{N}_{r,w^{*}}(H)(x)+2C_{0}|b| \mathcal{L}^{N}_{r,w^{*}}(H)(x)\] \[\leq\frac{A|b|\gamma^{-N}}{1-\theta}\mathcal{N}^{J}_{s}(H)(x)+ \frac{2C_{0}|b|}{1-\theta}\mathcal{N}^{J}_{s}(H)(x)\] In the last line we used that \(H\leq\frac{1}{1-\theta}\chi_{J}H\) giving \[\mathcal{L}^{N}_{r,w^{*}}(H)(x)\leq\frac{1}{1-\theta}\mathcal{N}^{J}_{s}(H)(x).\] It follows now that for \(A>8C_{0}\), \(\theta<1/2\), and \(N\) sufficiently large so that \(\gamma^{-N}<1/4\), we have \[|\mathcal{L}^{N}_{s,w^{*}}(f)^{\prime}(x)|\leq A|b|\mathcal{N}^{J}_{s}(H)(x).\] This establishes the second part of property 3 from Lemma 3.4. This completes the proof of Lemma 3.4 and thus the proof of the spectral gap Theorem 1.1 is complete. ## 4. Proof of the Fourier decay theorem Assuming the spectral gap Theorem 1.1 holds, let us now show how to prove Theorem 1.4. Our main task is to reduce the quantity \(|\widehat{\mu}(\xi)|^{2}\) using Cauchy-Schwartz, the mean value theorem and certain large deviation bounds into an exponential sum. We can then apply the following general exponential sum bound for non-concentrated products of real numbers. This bound is a corollary of the sum-product theorem [8]. This specific form is taken from [55, Lemma 4.3]: **Theorem 4.1** (Bound for exponential sums of non-concentrated products).: _Fix \(\varepsilon_{0}>0\). Then there exist \(k\in\mathbb{N}\) and \(\varepsilon_{1}>0,\varepsilon_{2}>0\) depending only on \(\varepsilon_{0}\) such that the following holds._ _Fix \(\eta\in\mathbb{R}\) such that \(|\eta|>1\). Let \(R,N>1\) and \(\mathcal{Z}_{1},\dots,\mathcal{Z}_{k}\) be finite sets such that \(\sharp\mathcal{Z}_{j}\leq RN\). Suppose \(\zeta_{j}\), \(j=1,\dots,k\), are real valued functions on the sets \(\mathcal{Z}_{j}\) that satisfy for all \(j=1,\dots,k\) that_ \((1)\) _the range_ \[\zeta_{j}(\mathcal{Z}_{j})\subset[R^{-1},R];\] \((2)\) _for all \(\sigma\in[R^{-2}|\eta|^{-1},|\eta|^{-\varepsilon_{1}}]\)_ \[\sharp\{(\mathbf{b},\mathbf{c})\in\mathcal{Z}_{j}^{2}:|\zeta_{j}(\mathbf{b}) -\zeta_{j}(\mathbf{c})|\leq\sigma\}\leq N^{2}\sigma^{\varepsilon_{0}}.\] _Then there exists a constant \(c>0\) depending only on \(k\) such that_ \[\left|N^{-k}\sum_{\mathbf{b}_{1}\in\mathcal{Z}_{1},\ldots,\mathbf{b}_{k}\in \mathcal{Z}_{k}}\exp(2\pi i\eta\zeta_{1}(\mathbf{b}_{1})\ldots\zeta_{k}(\mathbf{ b}_{k}))\right|\leq cR^{k}|\eta|^{-\varepsilon_{2}}.\] In other words, if the numbers \(\zeta_{j}(\mathbf{b}_{j})\) do not concentrate too much in scales roughly between \(|\eta|^{-1}\) and \(|\eta|^{-\varepsilon_{1}}\), then the corresponding exponential sums for the products \[\zeta_{1}(\mathbf{b}_{1})\ldots\zeta_{k}(\mathbf{b}_{k})\] at frequency \(\eta\) have to decay with for some power of \(|\eta|\). 
For us, the mappings \(\zeta_{j}:\mathcal{Z}_{j}\to[R^{-1},R]\) appear from a multiscale decomposition of \(\mu\) when we iterate the self-conformality. In order to define them, we first need some notations and parameters. **Notations 4.1**.: Let \(\varepsilon>0\) and \(\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})\) for \(\varepsilon>0\), \(n\in\mathbb{N}\), \(\varepsilon_{0}>0\) and \(k\in\mathbb{N}\) be defined by the set of blocks (concatenations of words in \(\mathbf{A}^{n}\)) \(\mathbf{a}_{1}\ldots\mathbf{a}_{k}\in(\mathbf{A}^{n})^{k}\) where \[\mathbf{a}_{j}\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0}):=\bigcap_{l= \lfloor\varepsilon_{0}n\rfloor}^{n}\{\mathbf{a}\in\mathbf{A}^{n}:[a_{1}\ldots a _{l}]\subset A_{l}(\varepsilon)\},\quad j=1,\ldots,k,\] where, using \[\tau(\mathbf{a}):=-\log|\varphi^{\prime}_{a_{1}}(\pi((a_{i+1})))|\] and \[\psi(\mathbf{a}):=-\log p_{a_{1}}\] we define: \[A_{n}(\varepsilon):=\Big{\{}\mathbf{a}\in\mathbf{A}^{\mathbb{N}}:\Big{|} \frac{1}{n}S_{n}\tau(\mathbf{a})-\lambda\Big{|}<\varepsilon\quad\text{and} \quad\Big{|}\frac{1}{n}S_{n}\psi(\mathbf{a})-h\Big{|}<\varepsilon\Big{\}}.\] Here the Lyapunov exponent and entropy of \(\mu\) are given by \[\lambda:=\int\tau(\mathbf{a})\,dm_{\mathbf{p}}\qquad\text{ and }\qquad h:=-\sum_{a\in\mathbf{A}}p_{a}\log p_{a}.\] Applying the large deviation principle (see e.g. [39, Theorem 4.1]) and the arguments given in the proof of [55, Lemma 2.2], we see that the elements of \(\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})\) and \(\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})\) satisfy the following useful properties. **Lemma 4.2** (Regularity and large deviations).: _For any \(\varepsilon,\varepsilon_{0}>0\) and \(k,n\in\mathbb{N}\), if \(\mathbf{a}\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})\), then for any \(\lfloor\varepsilon_{0}n\lfloor\leq j\leq n\) we have_ \[e^{-\varepsilon j}e^{-\lambda j}\lesssim|\varphi^{\prime}_{\mathbf{a}|_{j}}(x )|\lesssim e^{\varepsilon j}e^{-\lambda j}\quad\text{for all }x\in I\quad\text{and} \quad e^{-\varepsilon j}e^{-hj}\lesssim p_{\mathbf{a}|_{j}}\lesssim e^{ \varepsilon j}e^{-hj}. \tag{4.1}\] _Furthermore,_ \[e^{\varepsilon n}e^{hn}\lesssim\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_ {0})\lesssim e^{\varepsilon n}e^{hn}\quad\text{and}\quad e^{\varepsilon kn}e^{ hkn}\lesssim\sharp\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})\lesssim e^{ \varepsilon kn}e^{hkn} \tag{4.2}\] _and_ \[m_{\mathbf{p}}(\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0}))\lesssim ke^{- \delta n} \tag{4.3}\] _for some \(\delta>0\)._ Note that the proof in [39] is formulated in terms of the measure \(\mu_{\mathbf{p}}\) rather than \(m_{\mathbf{p}}\). However, because the underlying IFS they are considering satisfies the strong separation condition, the proof translates over into our symbolic setting with exactly the same proof. Given now the notation on regular words, we will use the following parameters, which we can use to define the maps \(\zeta_{j}\) needed for Theorem 4.1. This also helps to keep track of all the various constants, coefficients and their dependencies that might have been hard to track in [55]: **Parameters 4.3** (All the parameters and their dependencies).: 1. Given the IFS \(\Phi=\{\varphi_{a}:a\in\mathbf{A}\}\) and self-conformal measure \(\mu\), let \(\lambda>0\) be the Lyapunov exponent of \(\mathbf{p}\), \(h>0\) be the entropy of \(\mathbf{p}\) and \(s=h/\lambda>0\). 
Recall that \(\gamma>1\) is the maximal uniform contraction rate of the maps \(\varphi_{a}\) and \(B>0\) is the bounded distortion constant. 2. This IFS now fixes the family of perturbed transfer operators \(\mathcal{L}_{s}\), \(s\in\mathbb{C}\). For this family, let \(\varrho_{0}>0\) be the uniform spectral gap from Theorem 1.1 that exists for \(|r|\) sufficiently small and \(|b|\) sufficiently large (note that crucially it does not depend on \(b\)) and set \[\varepsilon_{0}:=\frac{1}{2}\min\{\log(1/\varrho_{0}),\lambda/2\}>0,\] which fixes once and for all by Theorem 4.1 the parameters \(k\in\mathbb{N}\), \(\varepsilon_{1}>0\) and \(\varepsilon_{2}>0\). 3. Now, using the data \(\varepsilon_{0},k,\lambda\), let us introduce a way for us to fix the length of the words \(n\) we will use when considering a given frequency \(\xi\in\mathbb{R}\), \(\xi\neq 0\). We define \[n:=\lfloor((2k+1)\lambda+\varepsilon_{0})\log|\xi|\rfloor.\] Here we assume (depending on \(\lambda\)) that \(\xi\) is large enough such that \(n>1\). Note that now \(|\xi|\sim e^{((2k+1)\lambda+\varepsilon_{0})n}\). 4. Given all of the above data, we will end up getting multiplicative error terms of the form \(\exp(\beta_{j}\varepsilon n)\) where coefficients \(\beta_{j}>0\) depend only on all the above data and artefacts of the estimates such as Cauchy-Schwartz inequality. In the end of the proof, we gather all these multiplicative errors into a single one, \(\exp(\beta_{0}\varepsilon n)\), and we end up with an estimate \[|\widehat{\mu}(\xi)|\lesssim e^{\beta_{0}\varepsilon n}e^{-\alpha_{0}n}\] for some \(\alpha>0\). Thus to get polynomial Fourier decay, one simply has to pick any \(0<\varepsilon<\frac{\alpha_{0}}{\beta_{0}}\). 5. We can now define what parameters we use in Theorem 4.1. Assuming we have fixed \(\varepsilon>0\), then define for \(n\in\mathbb{N}\) a collection \[J_{n}(\varepsilon,\varepsilon_{0}):=\{\eta\in\mathbb{R}:e^{\frac{\varepsilon_{ 0}}{2}n}\leq|\eta|\leq e^{(\varepsilon_{0}+\varepsilon)n}\}\] which will constitute the range of \(\eta\) to which we will apply Theorem 4.1 with the choice of inputs: 1. \(R:=R_{\varepsilon,n}=e^{\varepsilon n}\). 2. \(\mathcal{Z}_{j}:=\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})\) for all \(j=1,\ldots,k\). 3. The maps \(\zeta_{j}:=\zeta_{j,\mathbf{a}}:\mathcal{R}_{n}(\varepsilon,\varepsilon_{0}) \rightarrow[R^{-1},R]\) will be defined by \[\zeta_{j,\mathbf{a}}(\mathbf{b}):=e^{2\lambda n}|\varphi_{\mathbf{a}_{j-1} \mathbf{b}}^{\prime}(x_{\mathbf{a}_{j}})|,\quad\mathbf{b}\in\mathcal{R}_{n} (\varepsilon,\varepsilon_{0}),\] where \(\mathbf{a}=\mathbf{a}_{0}\mathbf{a}_{1}\ldots\mathbf{a}_{k}\in\mathbf{A}^{n (k+1)}\) and \(x_{\mathbf{a}_{j}}\) is the centre point of the interval \(\varphi_{\mathbf{a}_{j}}(I)\). They do indeed map \(\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})\) to \([R^{-1},R]\) by (4.1). The main consequence of the spectral gap Theorem 1.1 is the following multiscale non-concentration estimate that we then feed into Theorem 4.1 to eventually establish Theorem 1.4. 
**Proposition 4.4** (Multiscale non-concentration).: _Let \(\mathcal{W}\) be the set of \((k+1)\)-tuples \(\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})\) such that for all \(j=1,\ldots,k\), \(\eta\in J_{n}(\varepsilon,\varepsilon_{0})\) and \(\sigma\in[R^{-2}|\eta|^{-1},|\eta|^{-\varepsilon_{1}}]\), we have that_ \[\sharp\{(\mathbf{b},\mathbf{c})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0} )^{2}:|\zeta_{j,\mathbf{a}}(\mathbf{b})-\zeta_{j,\mathbf{a}}(\mathbf{c})|\leq \sigma\}\leq\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}\sigma^{ \varepsilon_{0}/4}.\] _Then there exists \(\kappa_{0}>0\) such that_ \[\frac{\sharp(\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})\setminus \mathcal{W})}{\sharp\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})} \lesssim ke^{\varepsilon\kappa_{0}n-\varepsilon_{0}^{2}\varepsilon_{1}n/24}.\] Proof.: We will split our proof into four steps. ### Step 1. Deriving our proposition from (4.4) The main estimate we will need for the proof of this proposition is the following: _There exists \(\kappa_{0}>0\) such that for all \(\varepsilon>0\), \(n\in\mathbb{N}\), \(\eta\in J_{n}(\varepsilon,\varepsilon_{0})\), \(\sigma\in[R^{-2}|\eta|^{-1},|\eta|^{-\varepsilon_{1}}]\), \(x\in I\) we have_ \[\sharp\{(\mathbf{a},\mathbf{b},\mathbf{c})\in\mathcal{R}_{n}(\varepsilon, \varepsilon_{0})^{3}:|e^{2\lambda n}|\varphi^{\prime}_{\mathbf{a}\mathbf{b}}( x)|-e^{2\lambda n}|\varphi^{\prime}_{\mathbf{a}\mathbf{c}}(x)||\leq\sigma\} \lesssim e^{\kappa_{0}\varepsilon n}\sigma^{\varepsilon_{0}/3}\sharp\mathcal{R }_{n}(\varepsilon,\varepsilon_{0})^{3}. \tag{4.4}\] Indeed assuming (4.4) now holds, we can now conclude our proposition as follows. If \(\eta\in J_{n}(\varepsilon,\varepsilon_{0})\) and \(\sigma\in[R^{-2}|\eta|^{-1},|\eta|^{-\varepsilon_{1}}]\), there is a unique \(l\geq\lfloor\frac{\varepsilon_{0}\varepsilon_{1}n}{2\log 2}\rfloor-1\) such that \[2^{-l-1}\leq\sigma\leq 2^{-l}.\] For such an \(l\), let \(\mathcal{R}_{l}^{*}\) be the collection of pairs \((\mathbf{a},\mathbf{d})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}\) such that \[\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{-2}\cdot\sharp\{( \mathbf{b},\mathbf{c})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}: \left\|\varphi^{\prime}_{\mathbf{a}\mathbf{b}}(x_{\mathbf{d}})\right\|-| \varphi^{\prime}_{\mathbf{a}\mathbf{c}}(x_{\mathbf{d}})||\leq e^{-2\lambda n }2^{-l}\}\leq 2^{-(l+1)\varepsilon_{0}/4}.\] Using this terminology, if we have a block \(\mathbf{a}\) such that \((\mathbf{a}_{j-1},\mathbf{a}_{j})\in\mathcal{R}_{l}^{*}\) for every \(j=1,...,k\) and every \(l\geq\lfloor\frac{\varepsilon_{0}\varepsilon_{1}n}{2\log 2}\rfloor-1\), then by the definition of \(\mathcal{R}_{l}^{*}\) and by definition of \(\zeta_{j,\mathbf{a}}(\mathbf{b})\) we have that \[\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{-2}\cdot \sharp\{(\mathbf{b},\mathbf{c})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0} )^{2}:\left|\zeta_{j,\mathbf{A}}(\mathbf{b})-\zeta_{j,\mathbf{A}}(\mathbf{c}) \right|\leq\sigma\}\] \[\leq\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{-2}\cdot \sharp\{(\mathbf{b},\mathbf{c})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0} )^{2}:\left\|\varphi^{\prime}_{\mathbf{a}_{j-1}\mathbf{b}}(x_{\mathbf{a}_{j}}) \right\|-|\varphi^{\prime}_{\mathbf{a}_{j-1}\mathbf{c}}(x_{\mathbf{a}_{j}}) \right\|\leq e^{-2\lambda n}2^{-l}\}\] \[\leq 2^{-(l+1)\varepsilon_{0}/4}\] \[\leq\sigma^{\varepsilon_{0}/4}\] for all \(\sigma\in[R^{-2}|\eta|^{-1},|\eta|^{-\varepsilon_{1}}]\) and \(\eta\in 
J_{n}(\varepsilon,\varepsilon_{0})\). Thus we have the inclusion \[\left\{\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0}):( \mathbf{a}_{j-1},\mathbf{a}_{j})\in\mathcal{R}_{l}^{*}\text{ for all }l\geq\left\lfloor\frac{\varepsilon_{0}\varepsilon_{1}n}{2\log 2} \right\rfloor-1,j=1,\ldots k\right\}\subset\mathcal{W}.\] So a block \(\mathbf{a}\notin\mathcal{W}\) if there exists at least one \(j\) and \(l\) such that \((\mathbf{a}_{j-1},\mathbf{a}_{j})\notin\mathcal{R}_{l}^{*}\). On the other hand, using Markov's inequality for \[f(\mathbf{a},\mathbf{d})=\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^ {-2}\cdot\sharp\{(\mathbf{b},\mathbf{c})\in\mathcal{R}_{n}(\varepsilon, \varepsilon_{0})^{2}:\left\|\varphi^{\prime}_{\mathbf{a}\mathbf{b}}(x_{\mathbf{ d}})\right\|-|\varphi^{\prime}_{\mathbf{a}\mathbf{c}}(x_{\mathbf{d}})|\leq e^{-2 \lambda n}2^{-l}\}\] gives us \[\sharp\{{\mathcal{R}}_{n}(\varepsilon,\varepsilon_{0})^{2}\setminus{ \mathcal{R}}_{l}^{*}\} =\sharp\{({\bf a},{\bf d})\in{\mathcal{R}}_{n}(\varepsilon, \varepsilon_{0})^{2}:f({\bf a},{\bf d})\geq 2^{-(l+1)\varepsilon_{0}/4}\}\] \[\leq 2^{(l+1)\varepsilon_{0}/4}\sum_{({\bf a},{\bf d})\in{ \mathcal{R}}_{n}(\varepsilon,\varepsilon_{0})^{2}}f({\bf a},{\bf d})\] \[\lesssim\frac{2^{(l+1)\varepsilon_{0}/4}\sharp\{({\bf a},{\bf b},{\bf c},{\bf d})\in{\mathcal{R}}_{n}(\varepsilon,\varepsilon_{0})^{4}:|| \varphi^{\prime}_{\bf a}(x_{\bf d})|-|\varphi^{\prime}_{\bf a}(x_{\bf d})|| \leq e^{-2\lambda n}2^{-l}\}}{\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{ 0})^{2}}\] Applying now (4.4) with \(\sigma=2^{-l}\) gives us \[\frac{2^{(l+1)\varepsilon_{0}/4}\sharp\{({\bf a},{\bf b},{\bf c},{ \bf d})\in{\mathcal{R}}_{n}(\varepsilon,\varepsilon_{0})^{4}:||\varphi^{ \prime}_{\bf a}(x_{\bf d})|-|\varphi^{\prime}_{\bf a}(x_{\bf d})||\leq e^{-2 \lambda n}2^{-l}\}}{\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}}\] \[\lesssim e^{\varepsilon\kappa_{0}n}2^{-l\varepsilon_{0}/12}\sharp \mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}.\] This in turn implies \[\sharp({\mathcal{R}}_{n}(\varepsilon,\varepsilon_{0})^{2}\setminus{\mathcal{R }}_{l}^{*})\lesssim e^{\varepsilon\kappa_{0}n}2^{-l\varepsilon_{0}/12}\cdot \sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}. \tag{4.5}\] Applying (4.5) we now have \[\sharp({\mathcal{R}}_{n}^{k+1}(\varepsilon,\varepsilon_{0})\setminus{ \mathcal{W}}) \leq\sum_{j=1}^{k}\sum_{l=\lfloor\frac{\varepsilon_{0}\varepsilon _{1}n}{2\log 2}\rfloor-1}^{\infty}\sharp\{{\bf a}\in{\mathcal{R}}_{n}^{k+1}( \varepsilon,\varepsilon_{0}):({\bf a}_{j-1},{\bf a}_{j})\notin{\mathcal{R}}_{ l}^{*}\}\] \[=\sum_{j=1}^{k}\sum_{l=\lfloor\frac{\varepsilon_{0}\varepsilon_{ 1}^{n}}{2\log 2}\rfloor-1}^{\infty}\sharp\mathcal{R}_{n}^{k-1}(\varepsilon, \varepsilon_{0})\cdot\sharp(\mathcal{R}_{n}(\varepsilon)^{2}\setminus{ \mathcal{R}}_{l}^{*})\] \[\leq\sharp\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})\sum_ {j=1}^{k}\sum_{l=\lfloor\frac{\varepsilon_{0}\varepsilon_{1}^{n}}{2\log 2} \rfloor-1}^{\infty}e^{\varepsilon\kappa_{0}n}2^{-l\varepsilon_{0}/12}\] \[\lesssim\sharp\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0}) \cdot ke^{\varepsilon\kappa_{0}n-\varepsilon_{0}^{2}\varepsilon_{1}n/24}.\] Thus our desired bound holds. ### Step 2. 
Reducing the proof of (4.4) to establishing (4.6) Instead of proving (4.4) directly we will instead prove the following statement: _There exists \(\kappa_{0}>0\) such that for all \(\varepsilon>0\), \(n\in{\mathbb{N}}\), \(\eta\in J_{n}(\varepsilon,\varepsilon_{0})\), \(\sigma\in[(R|\eta|^{-1})^{1-\frac{|\varepsilon_{0}n|}{n}},|\eta|^{-\varepsilon _{1}}]\), \(x\in I\) we have_ \[\sharp\{({\bf a},{\bf b},{\bf c})\in{\mathcal{R}}_{n}(\varepsilon,\varepsilon_ {0})^{3}:|e^{2\lambda n}|\varphi^{\prime}_{\bf a}(x)|-e^{2\lambda n}|\varphi^{ \prime}_{\bf a}(x)||\leq\sigma\}\lesssim e^{\kappa_{0}\varepsilon n}\sigma^{ \varepsilon_{0}/2}\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{3}. \tag{4.6}\] The difference between (4.6) and (4.4) is that we consider \(\sigma\in[(R|\eta|^{-1})^{1-\frac{|\varepsilon_{0}n|}{n}},|\eta|^{-\varepsilon _{1}}]\) instead of \(\sigma\in[R^{-2}|\eta|^{-1},|\eta|^{-\varepsilon_{1}}].\) It can be shown that (4.4) follows from (4.6), albeit for a potentially different value of \(\kappa_{0}\). We leave the details to the interested reader. We emphasise that in (4.6) we observe a \(\sigma^{\varepsilon_{0}/2}\) term in our upper bound, whereas in (4.4) we observe a \(\sigma^{\varepsilon_{0}/3}\) term in our upper bound. This difference in the exponents is crucial when it comes to deriving (4.4) from (4.6). To complete the proof of our proposition it now suffices to prove (4.6). **Step 3. Reducing the proof of (4.6) to establishing (4.7).** Let us fix \(\varepsilon,n,\eta\) and \(\sigma\) as in its statement of (4.6). We fix \(m\in\mathbb{N}\) such that \[e^{-\varepsilon_{0}(m-1)}<\sigma\leq e^{-\varepsilon_{0}m}.\] Note that since \(\sigma\in[(R|\eta|^{-1})^{1-\frac{|\varepsilon_{0}n|}{n}},|\eta|^{-\varepsilon _{1}}]\) and \(\eta\in J_{n}(\varepsilon,\varepsilon_{0})\), we have \[e^{-\varepsilon_{0}(n-\lfloor\varepsilon_{0}n\rfloor)}\leq\sigma\leq e^{- \frac{\varepsilon_{0}\varepsilon_{1}n}{2}}\] therefore \[\frac{\varepsilon_{1}}{2}n\leq m\leq n-\lfloor\varepsilon_{0}n\rfloor.\] Appealing to a bounded distortions argument and the fact that \(p_{\mathbf{a}}p_{\mathbf{b}}=\mathbf{p_{ab}}\) for any \(\mathbf{a},\mathbf{b}\in\mathbf{A}^{*}\), we can deduce that there exists a constant \(\beta>0\) such that for every pair \((\mathbf{a},\mathbf{b})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}\), the concatenation \(\mathbf{ab}\) splits into a word \(\mathbf{ab}=\mathbf{ed}\) with \(\mathbf{e}:=\mathbf{ab}|_{2n-m}\in\mathcal{R}_{2n-m}(\varepsilon,\varepsilon_{ 0})\) and \[\mathbf{d}:=\mathbf{ab}|_{2n-m+1}^{2n}\in\tilde{\mathcal{R}}:=\Big{\{}\mathbf{ d}\in\mathbf{A}^{m}:[\mathbf{d}]\subset A_{m}(\beta\varepsilon)\Big{\}}.\] Thus if we now write \[\mathcal{P}=\{(\mathbf{e},y):y\pm e^{-\varepsilon_{0}m}\in[R^{-1},R]\quad \text{and}\quad\mathbf{e}\in\tilde{\mathcal{R}}_{2n-m}(\varepsilon)\}\] we have: \[\sharp\{(\mathbf{a},\mathbf{b},\mathbf{c})\in\mathcal{R}_{n}( \varepsilon,\varepsilon_{0})^{3}:|e^{2\lambda n}|\varphi^{\prime}_{\mathbf{ab }}(x)|-e^{2\lambda n}|\varphi^{\prime}_{\mathbf{ac}}(x)||\leq\sigma\}\] \[\leq\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})\sup_{ \mathbf{c}\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})}\sharp\{(\mathbf{ a},\mathbf{b})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}:|e^{2 \lambda n}|\varphi^{\prime}_{\mathbf{ab}}(x)|-e^{2\lambda n}|\varphi^{\prime}_ {\mathbf{ac}}(x)||\leq\sigma\}\] \[\leq\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})\sup_{ \mathbf{c}\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})}\sharp\{(\mathbf{ 
a},\mathbf{b})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}:|e^{2 \lambda n}|\varphi^{\prime}_{\mathbf{ab}}(x)|-e^{2\lambda n}|\varphi^{\prime}_ {\mathbf{ac}}(x)||\leq e^{-\varepsilon_{0}m}\}\] \[\leq\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})\sharp \mathcal{R}_{2n-m}(\varepsilon,\varepsilon_{0})\sup_{(\mathbf{e},y)\in \mathcal{P}}\sharp\{\mathbf{d}\in\tilde{\mathcal{R}}:e^{2\lambda n}|\varphi^{ \prime}_{\mathbf{ed}}(x)|\in B(y,e^{-\varepsilon_{0}m})\}.\] Since \(e^{-\varepsilon_{0}m}\sim\sigma\) and \[\sharp\tilde{\mathcal{R}}\sim e^{\beta^{\prime}\varepsilon m}\sharp\mathcal{ R}_{m}(\varepsilon,\varepsilon_{0})\] for some \(\beta^{\prime}>0\) depending on \(\beta\) by (4.2) and the definition of \(A_{m}(\beta\varepsilon)\), (4.6) will follow if we can establish the following: _There exists \(\kappa_{0}>0\) such that for any \(y\in\mathbb{R}\) with \(y\pm e^{-\varepsilon_{0}m}\in[R^{-1},R]\), \(\mathbf{e}\in\mathcal{R}_{2n-m}(\varepsilon,\varepsilon_{0})\) and \(x\in I\) we have_ \[\sharp\{\mathbf{d}\in\tilde{\mathcal{R}}:e^{2\lambda n}|\varphi^{\prime}_{ \mathbf{ed}}(x)|\in B(y,e^{-\varepsilon_{0}m})\}\lesssim e^{\varepsilon\kappa_ {0}m}\sigma^{\varepsilon_{0}/2}\sharp\tilde{\mathcal{R}}. \tag{4.7}\] **Step 4. Verifying (4.7).** Let us now proceed to prove (4.7). As \(y-e^{-\varepsilon_{0}m}\geq R^{-1}>0\), we have \[e^{2\lambda n}|\varphi^{\prime}_{\mathbf{ed}}(x)|\in B(y,e^{-\varepsilon_{0}m})\] if and only if \[-\log|\varphi^{\prime}_{\mathbf{ed}}(x)|\in J:=[2\lambda n-\log(y+e^{- \varepsilon_{0}m}),2\lambda n-\log(y-e^{-\varepsilon_{0}m})].\] so \[\sharp\{\mathbf{d}\in\tilde{\mathcal{R}}:e^{2\lambda n}|\varphi^{\prime}_{ \mathbf{ed}}(x)|\in B(y,e^{-\varepsilon_{0}m})\}=\sharp\{\mathbf{d}\in\tilde{ \mathcal{R}}:-\log|\varphi^{\prime}_{\mathbf{ed}}(x)|\in J\}\] We will bound this using a mollifier \(h\in C^{2}(\mathbb{R})\) satisfying \[\chi_{J}\leq h,\quad\|h\|_{1}\lesssim|J|,\quad\|h^{\prime\prime}\|_{L^{1}} \lesssim\frac{1}{|J|}.\] As \(\chi_{J}\leq h\), we have \[\sharp\{\mathbf{d}\in\tilde{\mathcal{R}}:-\log|\varphi^{\prime}_{\mathbf{ed}}(x)| \in J\}\leq\sum_{\mathbf{d}\in\tilde{\mathcal{R}}}h(-\log|\varphi^{\prime}_{ \mathbf{ed}}(x)|)^{1/2}\] The Cauchy-Schwartz inequality implies then implies the bound \[\sharp\{\mathbf{d}\in\tilde{\mathcal{R}}:-\log|\varphi^{\prime}_{\mathbf{ed}}(x )|\in J\}^{2}\leq\Big{(}\sum_{\mathbf{d}\in\tilde{\mathcal{R}}}p_{\mathbf{d}}h (-\log|\varphi^{\prime}_{\mathbf{ed}}(x)|)\Big{)}^{2}\Big{(}\sum_{\mathbf{d} \in\tilde{\mathcal{R}}}\frac{1}{p_{\mathbf{d}}}\Big{)}^{2}.\] By Fourier inversion we know that \[h(-\log|\varphi^{\prime}_{\mathbf{ed}}(x)|)=\int\exp(-2\pi i\xi\log|\varphi^{ \prime}_{\mathbf{ed}}(x)|)\widehat{h}(\xi)\,d\xi=\int|\varphi^{\prime}_{ \mathbf{ed}}(x)|^{-2\pi i\xi}\widehat{h}(\xi)\,d\xi.\] Moreover, because the words in \(\tilde{\mathcal{R}}\) are all of length \(m\), we have the bound: \[\sum_{\mathbf{d}\in\tilde{\mathcal{R}}}p_{\mathbf{d}}h(-\log| \varphi^{\prime}_{\mathbf{ed}}(x)|) \leq\sum_{\mathbf{d}\in\mathbf{A}^{m}}p_{\mathbf{d}}h(-\log|f^{ \prime}_{\mathbf{ed}}(x)|)\] \[=\sum_{\mathbf{d}\in\mathbf{A}^{m}}p_{\mathbf{d}}\int\widehat{h} (\xi)|\varphi^{\prime}_{\mathbf{ed}}(x)|^{-2\pi i\xi}\,d\xi\] \[=\int\widehat{h}(\xi)\sum_{\mathbf{d}\in\mathbf{A}^{m}}p_{ \mathbf{d}}|\varphi^{\prime}_{\mathbf{ed}}(x)|^{-2\pi i\xi}\,d\xi.\] Let \(\Theta>1\) be such that Theorem 1.1 is satisfied for \(r=0\) and \(|b|>\Theta\). We now bound the integral above over the domains \(|\xi|\leq\Theta/2\pi\) and \(|\xi|>\Theta/2\pi\). 
Firstly, we have \[\int_{|\xi|\leq\Theta/2\pi}\widehat{h}(\xi)\sum_{\mathbf{d}\in \mathbf{A}^{m}}p_{\mathbf{d}}|\varphi^{\prime}_{\mathbf{ed}}(x)|^{-2\pi i\xi} \,d\xi\] \[\lesssim\sup_{|\xi|\leq\Theta/2\pi}\Big{|}\widehat{h}(\xi)\sum_{ \mathbf{d}\in\mathbf{A}^{m}}p_{\mathbf{d}}|\varphi^{\prime}_{\mathbf{ed}}(x)|^ {-2\pi i\xi}\Big{|}\] \[\leq\sup_{|\xi|\leq 1/2\pi}|\widehat{h}(\xi)|\sum_{\mathbf{d} \in\mathbf{A}^{m}}p_{\mathbf{d}}\] \[\leq|J|.\] Next, for the integral over \(|\xi|>\Theta/2\pi\), for given such \(\xi\), define the function \[|g(x)|:=|\varphi^{\prime}_{\mathbf{e}}(x)|^{-2\pi i\xi}\] and take \(b:=-2\pi\xi.\) Notice that the definition of the transfer operator gives that for any \(x\in I\) we have the identity: \[\sum_{\mathbf{d}\in\mathbf{A}^{m}}p_{\mathbf{d}}|\varphi^{\prime}_{\mathbf{ed} }(x)|^{-2\pi i\xi}=\mathcal{L}^{m}_{0+ib}(g)(x).\] On the other hand, by Theorem 1.1, there exists \(\varrho_{0}>0\) such that for all \(m\in\mathbb{N}\) we have: \[\|\mathcal{L}^{m}_{b}(g)\|_{\infty}\lesssim\varrho_{0}^{m}|b|^{1/2}\|g\|_{b}.\] By bounded distortions \[|g(x)|=1\quad\text{and}\quad|g^{\prime}(x)|=2\pi|\xi|\frac{|\varphi^{\prime \prime}_{\mathbf{e}}(x)|}{|\varphi^{\prime}_{\mathbf{e}}(x)|}|g(x)|\lesssim| \xi|,\] so the \(b\)-norm is bounded: \[\|g\|_{b}=\|g\|_{\infty}+\frac{\|g^{\prime}\|_{\infty}}{|b|}\lesssim 1.\] Hence we can bound the integral over \(|\xi|>\Theta/2\pi\) as follows: \[\int\limits_{|\xi|>\Theta/2\pi}\widehat{h}(\xi)\sum_{\mathbf{d} \in\mathbf{A}^{m}}p_{\mathbf{d}}|\varphi^{\prime}_{\mathbf{ed}}(x)|^{-2\pi i \xi}\,d\xi\] \[\lesssim\int\limits_{|\xi|>\Theta/2\pi}|\widehat{h}(\xi)|\cdot \varrho_{0}^{m}|b|^{1/2}\|g\|_{b}\,d\xi\] \[\lesssim\varrho_{0}^{m}\int\limits_{|\xi|>\Theta/2\pi}|\widehat{ h}(\xi)|\cdot|\xi|^{1/2}\,d\xi.\] On the other hand, by integration by parts, we can bound \(\widehat{h}(\xi)\) for any \(\xi\in\mathbb{R}\) as follows: \[|\widehat{h}(\xi)|\leq\frac{1}{1+|2\pi\xi|^{2}}(\|h\|_{L^{1}}+\|h^{\prime \prime}\|_{L^{1}}).\] Thus \[\int_{|\xi|>\Theta/2\pi}|\widehat{h}(\xi)|\cdot|\xi|^{1/2}\,d\xi\leq\int\frac {|\xi|^{1/2}}{1+|2\pi\xi|^{2}}(\|h\|_{L^{1}}+\|h^{\prime\prime}\|_{L^{1}})\,d \xi\lesssim\|h\|_{L^{1}}+\|h^{\prime\prime}\|_{L^{1}}.\] Combining our bounds for the two integrals, we arrive at: \[\sharp\{\mathbf{d}\in\tilde{\mathcal{R}}:e^{2\lambda n}|\varphi^{\prime}_{ \mathbf{ed}}(x)|\in B(y,e^{-\varepsilon_{0}m})\}^{2}\lesssim E(x)[\varrho_{0} ^{m}(\|h\|_{L^{1}}+\|h^{\prime\prime}\|_{L^{1}})+|J|],\] where \[E(x):=\sum_{\mathbf{d}\in\tilde{\mathcal{R}}}\frac{1}{P_{\mathbf{d}}}.\] By definition of \(\tilde{\mathcal{R}}\), we have \[E(x)\lesssim e^{\varepsilon\beta m}e^{hm}\sharp\tilde{\mathcal{R}}.\] So for some \(\kappa>0\) \[E(x)\lesssim e^{\varepsilon\kappa n}\sharp\tilde{\mathcal{R}}^{2}\] Moreover, by the mean value theorem \[|J|\leq\frac{2e^{-\varepsilon_{0}m}}{y-e^{-\varepsilon_{0}m}}\leq 2Re^{- \varepsilon_{0}m}\quad\text{and}\quad\frac{1}{|J|^{3}}\leq\frac{1}{8}R^{3}e^ {3\varepsilon_{0}m}.\] Recalling \(R=e^{\varepsilon n}\) gives \[|J|\lesssim e^{\varepsilon n}e^{-\varepsilon_{0}m}\] and \[\frac{1}{|J|}\lesssim e^{3\varepsilon n}e^{\varepsilon_{0}m}.\] Then by the choice of \(h\), we have \[\|h\|_{L^{1}}+\|h^{\prime\prime}\|_{L^{1}}\leq|J|+\frac{1}{|J|}\lesssim e^{ \varepsilon n}e^{-\varepsilon_{0}m}+e^{3\varepsilon n}e^{\varepsilon_{0}m}.\] Thus we obtain \[\sharp\{\mathbf{d}\in\tilde{\mathcal{R}}:e^{2\lambda n}|\varphi^{ \prime}_{\mathbf{ed}}(x)|\in B(y,e^{-\varepsilon_{0}m})\}^{2} \lesssim e^{\varepsilon\kappa 
n}\sharp\tilde{\mathcal{R}}^{2}[ \varrho_{0}^{m}(e^{\varepsilon n}e^{-\varepsilon_{0}m}+e^{3\varepsilon n}e^{ \varepsilon_{0}m})+e^{\varepsilon n}e^{-\varepsilon_{0}m}]\] \[\lesssim e^{\varepsilon(3+\kappa)n}\sharp\tilde{\mathcal{R}}^{2} \varrho_{0}^{m}e^{\varepsilon_{0}m}\] Since \(\varrho_{0}\leq e^{-2\varepsilon_{0}}\) by the choice of \(\varepsilon_{0}\) and \(e^{-\varepsilon_{0}(m-1)}\leq\sigma\), we see that \[\varrho_{0}^{m}e^{\varepsilon_{0}m}\leq e^{-2\varepsilon_{0}m}e^{\varepsilon_ {0}m}=e^{-\varepsilon_{0}m}\lesssim\sigma^{\varepsilon_{0}}.\] Selecting now \(\kappa_{0}=\frac{(3+\kappa)}{\varepsilon_{1}}\) and using that \(n\leq\frac{2m}{\varepsilon_{1}}\) gives us \[\sharp\{\mathbf{d}\in\tilde{\mathcal{R}}:e^{2\lambda n}|\varphi^{\prime}_{ \mathbf{ed}}(x)|\in B(y,e^{-\varepsilon_{0}m})\}\lesssim e^{\varepsilon\kappa _{0}m}\sigma^{\varepsilon_{0}/2}\sharp\tilde{\mathcal{R}}.\] Thus the proof of (4.7) is complete. This completes the proof of Proposition 4.4. Combining Theorem 4.1 and Proposition 4.4, we can now prove Theorem 1.4: Proof of Theorem 1.4.: Recall the data and notations we fixed in 4.3. Iterating the self-conformality \[\mu=\sum_{a\in\mathbf{A}}p_{a}\varphi_{a}\mu\] yields using the notation \(e(y):=\exp(-2\pi iy)\), \(y\in\mathbb{R}\) that \[\widehat{\mu}(\xi)=\sum_{\mathbf{a}*\mathbf{b}\in\mathbf{A}^{(2k+1)n}}p_{ \mathbf{a}*\mathbf{b}}\int e(\xi\varphi_{\mathbf{a}*\mathbf{b}}(x))d\mu(x).\] Here we have used the notation \(\mathbf{a}*\mathbf{b}\) to mean \((a_{0},b_{1},a_{1},b_{2},\ldots,b_{k},a_{k})\) for \(\mathbf{a}=(a_{0},\ldots,a_{k})\in\mathbf{A}^{(k+1)n}\) and \(\mathbf{b}=(b_{1},\ldots,b_{k})\in\mathbf{A}^{kn}\). Now splitting this sum based upon whether a word is in \(\mathcal{R}_{n}^{2k+1}(\varepsilon)\) or not and using Lemma 4.2, we have \[|\widehat{\mu}(\xi)|\lesssim\left|\sum_{\mathbf{a}\in\mathcal{R}_{n}^{k+1}( \varepsilon,\varepsilon_{0})}\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}( \varepsilon,\varepsilon_{0})}p_{\mathbf{a}*\mathbf{b}}\int e(\xi\varphi_{ \mathbf{a}*\mathbf{b}}(x))d\mu(x)\right|+ke^{-\delta n/2}.\] Now taking squares and using the inequality \(|a+b|^{2}\leq 2|a|^{2}+2|b|^{2}\) for \(a,b\in\mathbb{C}\) yields \[|\widehat{\mu}(\xi)|^{2}\lesssim\left|\sum_{\mathbf{a}\in\mathcal{R}_{n}^{k+1} (\varepsilon,\varepsilon_{0})}\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}( \varepsilon,\varepsilon_{0})}p_{\mathbf{a}*\mathbf{b}}\int e(\xi\varphi_{ \mathbf{a}*\mathbf{b}}(x))d\mu(x)\right|^{2}+k^{2}e^{-\delta n}. 
\tag{4.8}\] Focusing now on the first term, by the Cauchy-Schwartz inequality we have \[\left|\sum_{\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon, \varepsilon_{0})}\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon, \varepsilon_{0})}p_{\mathbf{a}*\mathbf{b}}\int e(\xi\varphi_{\mathbf{a}* \mathbf{b}}(x))d\mu(x)\right|^{2}\] \[\leq \left|\sum_{\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon, \varepsilon_{0})}\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon, \varepsilon_{0})}p_{\mathbf{a}*\mathbf{b}}^{2}\right|\times\sum_{\mathbf{a} \in\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})}\sum_{\mathbf{b}\in \mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})}\left|\int e(\xi\varphi_{ \mathbf{a}*\mathbf{b}}(x))d\mu(x)\right|^{2}\] \[\lesssim e^{(2k+1)\varepsilon n}e^{-(2k+1)hn}\times\sum_{\mathbf{a}\in \mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})}\sum_{\mathbf{b}\in \mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})}\left|\int e(\xi\varphi_{ \mathbf{a}*\mathbf{b}}(x))d\mu(x)\right|^{2}\] In the final line we've used that if \(\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})\) and \(\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})\) then \(p_{\mathbf{a}\ast\mathbf{b}}\leq e^{(2k+1)\varepsilon n}\cdot e^{-(2k+1)hn}\), and \[\sum_{\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})}\sum_{ \mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})}p_{\mathbf{a} \ast\mathbf{b}}=\sum_{\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon, \varepsilon_{0})}p_{\mathbf{a}}\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}( \varepsilon,\varepsilon_{0})}p_{\mathbf{b}}\leq 1.\] Substituting the above into (4.8) proves that for all \(\varepsilon>0\) we have \[|\widehat{\mu}(\xi)|^{2}\lesssim e^{(2k+1)n\varepsilon}e^{-(2k+1)hn}\cdot\sum _{\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})}\sum_{ \mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})}\left|\int e( \xi\varphi_{\mathbf{a}\ast\mathbf{b}}(x))d\mu(x)\right|^{2}+k^{2}e^{-\delta n}.\] Next, we will reduce the upper bound to exponential sums. We will use the fundamental theorem of calculus to replace the term \(\varphi_{\mathbf{a}\ast\mathbf{b}}(x)-\varphi_{\mathbf{a}\ast\mathbf{b}}(y)\) with a product of derivatives. We begin by observing that for a fixed \(\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})\) we have \[\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{ 0})}\left|\int e(\xi\varphi_{\mathbf{a}\ast\mathbf{b}}(x))d\mu(x)\right|^{2} =\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{ 0})}\int\int e(\xi(\varphi_{\mathbf{a}\ast\mathbf{b}}(x)-\varphi_{\mathbf{a} \ast\mathbf{b}}(y)))\,d\mu(x)d\mu(y)\] \[\leq\int\int\left|\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}( \varepsilon,\varepsilon_{0})}e(\xi(\varphi_{\mathbf{a}\ast\mathbf{b}}(x)- \varphi_{\mathbf{a}\ast\mathbf{b}}(y)))\right|\,d\mu(x)d\mu(y)\] We now focusing on the final term in the above. 
For any \(x,y\) we let \[\eta(x,y):=\xi e^{-2k\lambda n}(\varphi_{\mathbf{a}_{k}}(x)-\varphi_{\mathbf{ a}_{k}}(y)).\] Then by the regularity of \(\mathbf{a}_{k}\), we have \[e^{-\varepsilon n}e^{\varepsilon_{0}n}|x-y|\leq|\eta(x,y)|\leq e^{\varepsilon n }e^{\varepsilon_{0}n}|x-y|.\] Appealing to the fundamental theorem of calculus, and duplicating the arguments from [55] we have \[|\xi(\varphi_{\mathbf{a}\ast\mathbf{b}}(x)-\varphi_{\mathbf{a}\ast\mathbf{b}} (y))-\eta(x,y)\zeta_{1,\mathbf{a}}(b_{1})\cdots\zeta_{k,\mathbf{a}}(b_{k}))| \lesssim e^{2k}e^{(k+2)\varepsilon n}e^{-\lambda n}e^{\varepsilon_{0}n}.\] This in turn implies that \[\left|\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon _{0})}e(\xi(\varphi_{\mathbf{a}\ast\mathbf{b}}(x)-\varphi_{\mathbf{a}\ast \mathbf{b}}(y)))\right|\] \[\lesssim \left|\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon, \varepsilon_{0})}e(\eta(x,y)\zeta_{1,\mathbf{a}}(b_{1})\cdots\zeta_{k,\mathbf{a }}(b_{k}))\right|+\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon, \varepsilon_{0})}e^{2k}e^{(k+2)\varepsilon n}e^{-\lambda n}e^{\varepsilon_{0}n}\] \[\lesssim \left|\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon, \varepsilon_{0})}e(\eta(x,y)\zeta_{1,\mathbf{a}}(b_{1})\cdots\zeta_{k,\mathbf{a }}(b_{k}))\right|+e^{2k}e^{(2k+2)\varepsilon n}e^{hkn}e^{-\lambda n}e^{ \varepsilon_{0}n}.\] In the final line we have used that \(\sharp\mathcal{R}^{k}_{n}(\varepsilon,\varepsilon_{0})\leq e^{\varepsilon nk}e^{ khn}\). Thus \[|\widehat{\mu}(\xi)|^{2}\lesssim e^{(2k+1)\varepsilon n}e^{-(2k+1)hn}\cdot\sum_{\mathbf{a}\in\mathcal{R}^{k+1}_{n}( \varepsilon,\varepsilon_{0})}\iint\left|\sum_{\mathbf{b}\in\mathcal{R}^{k}_{n }(\varepsilon,\varepsilon_{0})}e(\eta(x,y)\zeta_{1,\mathbf{a}}(b_{1})\cdots \zeta_{k,\mathbf{a}}(b_{k}))\right|d\mu(x)d\mu(y)\] \[+e^{(2k+1)\varepsilon n}e^{-(2k+1)hn}\sum_{\mathbf{a}\in\mathcal{ R}^{k+1}_{n}(\varepsilon,\varepsilon_{0})}e^{2k}e^{(2k+2)\varepsilon n}e^{ hkn}e^{-\lambda n}e^{\varepsilon_{0}n}\] \[+k^{2}e^{-\delta n/2}.\] Now using the inequality \(\sharp\mathcal{R}^{k+1}_{n}(\varepsilon,\varepsilon_{0})\leq e^{(k+1) \varepsilon n}e^{(k+1)hn}\), we see that the above implies \[|\widehat{\mu}(\xi)|^{2}\lesssim e^{(2k+1)\varepsilon n}e^{-(2k+1)hn}\cdot\sum_{\mathbf{a}\in \mathcal{R}^{k+1}_{n}(\varepsilon,\varepsilon_{0})}\iint\left|\sum_{b\in \mathcal{R}^{k}_{n}(\varepsilon,\varepsilon_{0})}e(\eta(x,y)\zeta_{1,\mathbf{ a}}(b_{1})\cdots\zeta_{k,\mathbf{a}}(b_{k}))\right|\,d\mu(x)d\mu(y)\] \[+e^{2k}e^{(5k+3)\varepsilon n}e^{-\lambda n}e^{\varepsilon_{0}n}+ k^{2}e^{-\delta n/2}.\] As \(\varepsilon_{0}\leq\lambda/4\), and \(\varepsilon>0\) is chosen small enough depending on \(k\), the latter two terms go to zero exponentially in \(n\), which in turn means polynomially in \(|\xi|\). It remains to estimate the term with the integrals above. By the self-conformality of \(\mu\), and using the assumption that our IFS is non-trivial, we know that there exists \(\kappa>0\) such that \[\mu(B(x,r))\lesssim r^{\kappa}. \tag{4.9}\] for any \(x\in\mathbb{R}\) and \(r>0\). The proof of this follows by adapting the proof of Feng and Lau [25] on overlapping self-similar measures. 
Using (4.9) it follows that \[\mu\times\mu(\{(x,y):|x-y|\leq e^{\varepsilon n}e^{-\varepsilon_{0}n/2}\}) \lesssim e^{\varepsilon n\kappa}e^{-\varepsilon_{o}\kappa n/2}.\] Using this bound we have \[e^{\varepsilon n(2k+1)}e^{-(2k+1)hn}\sum_{\mathbf{a}\in\mathcal{ R}^{k+1}_{n}(\varepsilon,\varepsilon_{0})}\iint\left|\sum_{\mathbf{b}\in \mathcal{R}^{k}_{n}(\varepsilon,\varepsilon_{0})}e(\eta(x,y)\zeta_{1, \mathbf{a}}(b_{1})\cdots\zeta_{k,\mathbf{a}}(b_{k}))\right|d\mu(x)d\mu(y)\] \[\leq e^{\varepsilon n(2k+1)}e^{-(2k+1)hn}\sum_{\mathbf{a}\in\mathcal{ R}^{k+1}_{n}(\varepsilon,\varepsilon_{0})}\int\sum_{|x-y|\geq e^{\varepsilon n}e^{- \varepsilon_{0}n/2}}\left|\sum_{\mathbf{b}\in\mathcal{R}^{k}_{n}(\varepsilon, \varepsilon_{0})}e(\eta(x,y)\zeta_{1,\mathbf{a}}(b_{1})\cdots\zeta_{k,\mathbf{ a}}(b_{k}))\right|d\mu(x)d\mu(y)\] \[+e^{\varepsilon n(2k+1)}e^{-(2k+1)hn}\sum_{\mathbf{a}\in \mathcal{R}^{k+1}_{n}(\varepsilon,\varepsilon_{0})}\sum_{\mathbf{b}\in \mathcal{R}^{k}_{n}(\varepsilon,\varepsilon_{0})}e^{\varepsilon n\kappa}e^{- \varepsilon_{o}\kappa n/2},\] which is bounded by \[e^{\varepsilon n(2k+1)}e^{-(2k+1)hn}\sum_{\mathbf{a}\in\mathcal{ R}^{k+1}_{n}(\varepsilon,\varepsilon_{0})}\int_{|x-y|\geq e^{\varepsilon n}e^{- \varepsilon_{0}n/2}}\left|\sum_{\mathbf{b}\in\mathcal{R}^{k}_{n}(\varepsilon, \varepsilon_{0})}e(\eta(x,y)\zeta_{1,\mathbf{a}}(b_{1})\cdots\zeta_{k, \mathbf{a}}(b_{k}))\right|d\mu(x)d\mu(y)\] \[+e^{\varepsilon n(4k+2)}e^{\varepsilon n\kappa}\varepsilon^{- \varepsilon_{0}\kappa n/2}.\] Here as \(\varepsilon>0\) can be chosen small enough in terms of \(k\) and \(\kappa\), the second term here goes to zero exponentially in \(n\), so polynomially in \(|\xi|\). Thus we can just focus on bounding the integral term above. If a pair of points \(x\) and \(y\) satisfies \(|x-y|\geq e^{\varepsilon n}e^{-\varepsilon_{0}n/2}\), we have \(\eta(x,y)\geq e^{\varepsilon_{0}n/2}\) so \(\eta(x,y)\in J_{n}(\varepsilon,\varepsilon_{0})\). 
Hence \[e^{\varepsilon n(2k+1)}e^{-(2k+1)hn}\sum_{\mathbf{a}\in\mathcal{R}_{n}^{k+1}( \varepsilon,\varepsilon_{0})}\int\limits_{|x-y|\geq e^{\varepsilon n}e^{- \varepsilon_{0}n/2}}\left|\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})}e(\eta(x,y)\zeta_{1,\mathbf{a}}(b_{1})\cdots\zeta_{k, \mathbf{a}}(b_{k}))\right|d\mu(x)d\mu(y)\] \[\leq e^{\varepsilon n(2k+1)}e^{-(2k+1)hn}\sum_{\mathbf{a}\in\mathcal{R}_{n}^{ k+1}(\varepsilon,\varepsilon_{0})}\sup_{\eta\in J_{n}(\varepsilon,\varepsilon_{0})} \left|\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})}e( \eta\zeta_{1,\mathbf{a}}(b_{1})\cdots\zeta_{k,\mathbf{a}}(b_{k}))\right|.\] Recall that in Proposition 4.4 we defined \(\mathcal{W}\) to be the set of \((k+1)\)-tuples \(\mathbf{a}\in\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})\) such that for all \(j=1,\ldots,k\), \(\eta\in J_{n}(\varepsilon,\varepsilon_{0})\) and \(\sigma\in[R^{-2}|\eta|^{-1},|\eta|^{-\varepsilon_{1}}]\), we have that \[\sharp\{(\mathbf{b},\mathbf{c})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0} )^{2}:|\zeta_{j,\mathbf{a}}(\mathbf{b})-\zeta_{j,\mathbf{a}}(\mathbf{c})|\leq \sigma\}\leq\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}\sigma^{ \varepsilon_{0}/4}.\] By Proposition 4.4 there exists \(\kappa_{0}>0\) such that \[\frac{\sharp(\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})\setminus \mathcal{W})}{\sharp\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})}\lesssim ke ^{\varepsilon\kappa_{0}n-\varepsilon_{0}^{2}\varepsilon_{1}n/24}\] Using this bound for \(\mathcal{W}\) together with \(\sharp\mathcal{R}_{n}^{k+1}(\varepsilon,\varepsilon_{0})\lesssim e^{ \varepsilon(k+1)n}e^{(k+1)hn}\) and \(\sharp\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})\lesssim e^{\varepsilon kn }e^{khn}\), we can deduce the following bound \[e^{\varepsilon n(2k+1)}e^{-(2k+1)hn}\sum_{\mathbf{a}\in\mathcal{R}_{n}^{k+1}( \varepsilon,\varepsilon_{0})}\sup_{\eta\in J_{n}(\varepsilon,\varepsilon_{0} )}\left|\sum_{\mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})}e (\eta\zeta_{1,\mathbf{a}}(b_{1})\cdots\zeta_{k,\mathbf{a}}(b_{k}))\right|\\ \lesssim e^{(3k+2)\varepsilon n}e^{-khn}\max_{\mathbf{a}\in \mathcal{W}}\sup_{\eta\in J_{n}(\varepsilon,\varepsilon_{0})}\left|\sum_{ \mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})}e(\eta\zeta_{ 1,\mathbf{a}}(b_{1})\cdots\zeta_{k,\mathbf{a}}(b_{k}))\right|+ke^{(4k+2+ \kappa_{0})\varepsilon n}e^{-\varepsilon_{0}^{2}\varepsilon_{1}n/24}.\] The second term in the above decays to zero exponentially in \(n\), and therefore polynomially in \(|\xi|\), for \(\varepsilon\) sufficiently small. It remains to bound the first term. 
Recall now that \[R:=R_{\varepsilon,n}=e^{\varepsilon n}\] and \[\mathcal{Z}_{j}:=\mathcal{R}_{n}(\varepsilon,\varepsilon_{0}),\quad\text{for all }j=1,\ldots,k\] and the maps \(\zeta_{j}:=\zeta_{j,\mathbf{a}}:\mathcal{R}_{n}(\varepsilon,\varepsilon_{0}) \rightarrow[R^{-1},R]\) will be defined by \[\zeta_{j,\mathbf{a}}(\mathbf{b}):=e^{2\lambda n}|\varphi^{\prime}_{\mathbf{a} _{j-1}\mathbf{b}}(x_{\mathbf{a}_{j}})|,\quad\mathbf{b}\in\mathcal{R}_{n}( \varepsilon,\varepsilon_{0}),\] If we now fix \(\eta\in J_{n}(\varepsilon,\varepsilon_{0})\), \(\mathbf{a}\in\mathcal{W}\) and \(\sigma\in[R^{-2}|\eta|^{-1},|\eta|^{-\varepsilon_{1}}]\), then as \[\sharp\{(\mathbf{b},\mathbf{c})\in\mathcal{R}_{n}(\varepsilon,\varepsilon_{0 })^{2}:|\zeta_{j,\mathbf{a}}(\mathbf{b})-\zeta_{j,\mathbf{a}}(\mathbf{c})|\leq \sigma\}\leq\sharp\mathcal{R}_{n}(\varepsilon,\varepsilon_{0})^{2}\sigma^{ \varepsilon_{0}/4}\] we have by Theorem 4.1 that \[e^{(3k+2)\varepsilon n}e^{-khn}\max_{\mathbf{a}\in\mathcal{W}}\Big{|}\sum_{ \mathbf{b}\in\mathcal{R}_{n}^{k}(\varepsilon,\varepsilon_{0})}e(\eta\zeta_{1, \mathbf{a}}(\mathbf{b}_{1})...\zeta_{k,\mathbf{a}}(\mathbf{b}_{k}))\Big{|} \lesssim e^{(4k+2)\varepsilon n}|\eta|^{-\varepsilon_{2}}\lesssim e^{(4k+2) \varepsilon n}e^{-\varepsilon_{0}\varepsilon_{2}n/2}\] since \(|\eta|\geq e^{\varepsilon_{0}n/2}\) by the definition of \(J_{n}(\varepsilon,\varepsilon_{0})\) as \(\eta\in J_{n}(\varepsilon,\varepsilon_{0})\). By making sure that \(\varepsilon>0\) is chosen small enough, we have proven that \(\widehat{\mu}(\xi)\) decays to zero polynomially in \(|\xi|\) ## 5. Fractal Uncertainty Principles from Fourier decay Finally, we give the details of the proof of how the Fractal Uncertainty Principle follows from Fourier decay. The proof method here is adapted from the proof of the Fractal Uncertainty Principle for the Patterson-Sullivan measure [9, 22] adapted to the measures we have with weaker regularity. Fix \(\mu_{j}\) as \((C_{j}^{-},\delta_{j}^{-},C_{j}^{+},\delta_{j}^{+},h)\)-Frostman measures, \(j=1,2\), and denote their supports by \(K_{1}\) and \(K_{2}\). Given \(h>0\), define the semiclassical Fourier transform of \(f:\mathbb{R}^{d}\to\mathbb{C}\) by: \[\mathcal{F}_{h}f(\xi):=\frac{1}{(2\pi h)^{d/2}}\int_{\mathbb{R}^{d}}e^{-ix\cdot \xi/h}f(x)\,dx,\quad\xi\in\mathbb{R}^{d}.\] Then FUP follows for \(X=K_{1}+B(0,h)\) and \(Y=K_{2}+B(0,h)\) if we can prove: \[\|\mathbf{1}_{X}\mathcal{F}_{h}^{*}\mathbf{1}_{Y}\|_{L^{2}(\mathbb{R}^{d})\to L ^{2}(\mathbb{R}^{d})}\lesssim h^{\beta}.\] Define \[\Upsilon_{X}^{\pm}(x)=\frac{1}{4h^{\delta_{1}^{\pm}}}\mu_{1}(B(x,2h))\quad \text{and}\quad\Upsilon_{Y}^{\pm}(y)=\frac{1}{4h^{\delta_{2}^{\pm}}}\mu_{2}(B (y,2h))\] Then for all \(x\in X\) we have \[C_{1}^{-}\leq\Upsilon_{X}^{-}(x),\Upsilon_{X}^{+}(x)\leq C_{1}^{+},\] and similarly for all \(y\in Y\) we have \[C_{2}^{-}\leq\Upsilon_{Y}^{-}(x),\Upsilon_{Y}^{+}(x)\leq C_{2}^{+}\] Indeed, for example if \(x\in X\), then for some \(x_{0}\in K_{1}\) we have \(|x-x_{0}|\leq h\). Thus \(\mu_{1}(B(x,2h))\geq\mu_{1}(B(x_{0},h))\geq C_{1}^{-}h^{\delta_{1}^{-}}\). **Lemma 5.1**.: _Suppose for any bounded \(u:\mathbb{R}^{d}\to\mathbb{C}\) we have_ \[\|\sqrt{\Upsilon_{X}^{-}}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u\|_{L^{2}(\mathbb{ R}^{d})}\lesssim h^{\beta}\|\sqrt{\Upsilon_{Y}^{-}}u\|_{L^{2}(\mathbb{R}^{d})}.\] _Then_ \[\|\mathbf{1}_{X}\mathcal{F}_{h}^{*}\mathbf{1}_{Y}\|_{L^{2}(\mathbb{R}^{d})\to L ^{2}(\mathbb{R}^{d})}\lesssim h^{\beta}.\] Proof.: Let \(f\in L^{2}(\mathbb{R}^{d})\) be bounded. 
Write \(u=\frac{f\mathbf{1}_{Y}}{\Upsilon_{Y}^{-}}\). Then \(u\) is bounded, as \(\Upsilon_{Y}^{-}\) is bounded from below on \(Y\). Then by definition
\[\|\mathbf{1}_{X}\mathcal{F}_{h}^{*}\mathbf{1}_{Y}f\|_{L^{2}(\mathbb{R}^{d})}=\|\mathbf{1}_{X}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u\|_{L^{2}(\mathbb{R}^{d})}.\]
Moreover, as \(C_{1}^{-}\lesssim\Upsilon_{X}^{-}\) on \(X\) we obtain
\[\|\mathbf{1}_{X}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u\|_{L^{2}(\mathbb{R}^{d})}\lesssim\|\sqrt{\Upsilon_{X}^{-}}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u\|_{L^{2}(\mathbb{R}^{d})}\lesssim h^{\beta}\|\sqrt{\Upsilon_{Y}^{-}}u\|_{L^{2}(\mathbb{R}^{d})}\]
by the assumption. Finally,
\[\|\sqrt{\Upsilon_{Y}^{-}}u\|_{L^{2}(\mathbb{R}^{d})}\lesssim\|f\mathbf{1}_{Y}\|_{L^{2}(\mathbb{R}^{d})}\leq\|f\|_{L^{2}(\mathbb{R}^{d})}.\]
Bounded functions are dense in \(L^{2}(\mathbb{R}^{d})\), so this implies the claim for all \(L^{2}(\mathbb{R}^{d})\) functions. 

We will now prove

**Proposition 5.2**.: _For any bounded \(u:\mathbb{R}^{d}\to\mathbb{C}\) we have_
\[\|\sqrt{\Upsilon_{X}^{-}}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u\|_{L^{2}(\mathbb{R}^{d})}\lesssim h^{\beta}\|\sqrt{\Upsilon_{Y}^{-}}u\|_{L^{2}(\mathbb{R}^{d})}.\]

Proof.: Fix a bounded \(u:\mathbb{R}^{d}\to\mathbb{C}\). For \(x\in\mathbb{R}^{d}\) and \(t\in\mathbb{R}^{d}\) define the translation of any \(v:\mathbb{R}^{d}\to\mathbb{R}\) by:
\[\omega_{t}v(x)=v(x-t).\]
Then by Fubini's theorem
\[\|\sqrt{\Upsilon_{X}^{-}}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u\|_{L^{2}(\mathbb{R}^{d})}^{2}=\frac{1}{4h^{\delta_{1}^{-}}}\int_{B(0,2h)}\|\omega_{t}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u\|_{L^{2}(\mu_{1})}^{2}\,dt\]
and
\[\|\sqrt{\Upsilon_{Y}^{-}}u\|_{L^{2}(\mathbb{R}^{d})}^{2}=\frac{1}{4h^{\delta_{2}^{-}}}\int_{B(0,2h)}\|\omega_{s}u\|_{L^{2}(\mu_{2})}^{2}\,ds.\]
Also
\[\omega_{t}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u(x)=\frac{1}{4h^{d/2+\delta_{2}^{-}}}\int_{B(0,2h)}\int e^{2\pi i(x-t)\cdot(y-s)/h}\omega_{s}u(y)\,d\mu_{2}(y)\,ds.\]
Define an operator:
\[B_{t}u(x)=\int e^{2\pi i(x-t)\cdot y/h}u(y)\,d\mu_{2}(y)\]
so
\[\int e^{2\pi i(x-t)\cdot(y-s)/h}\omega_{s}u(y)\,d\mu_{2}(y)=e^{-2\pi i(x-t)\cdot s/h}B_{t}(\omega_{s}u)(x).\]
Thus
\[\|\sqrt{\Upsilon_{X}^{-}}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u\|_{L^{2}(\mathbb{R}^{d})}^{2}=\frac{1}{4h^{\delta_{1}^{-}}}\int_{B(0,2h)}\|\omega_{t}\mathcal{F}_{h}^{*}\Upsilon_{Y}^{-}u\|_{L^{2}(\mu_{1})}^{2}\,dt\lesssim h^{d-\delta_{1}^{-}-2\delta_{2}^{-}}\sup_{|t|\leq 2h}\int_{B(0,2h)}\|B_{t}(\omega_{s}u)\|_{L^{2}(\mu_{1})}^{2}\,ds.\]
Now we are done if we can show
\[\|B_{t}v\|_{L^{2}(\mu_{1})}^{2}\lesssim h^{\alpha/2}\|v\|_{L^{2}(\mu_{1})}^{2}\]
for any bounded \(v\) and any \(|t|\leq 2h\). Indeed, then
\[h^{d-\delta_{1}^{-}-2\delta_{2}^{-}}\sup_{|t|\leq 2h}\int_{B(0,2h)}\|B_{t}(\omega_{s}u)\|_{L^{2}(\mu_{1})}^{2}\,ds\lesssim h^{d-\delta_{1}^{-}-2\delta_{2}^{-}+\alpha/2}\sup_{|t|\leq 2h}\int_{B(0,2h)}\|\omega_{s}u\|_{L^{2}(\mu_{1})}^{2}\,ds\lesssim h^{d-\delta_{1}^{-}-\delta_{2}^{-}+\alpha/2}\|\sqrt{\Upsilon_{Y}^{-}}u\|_{L^{2}(\mathbb{R}^{d})}^{2}.\]
We now carry out this final step.
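For the reader's convenience, here are the two standard facts invoked in the next step, stated in the form used below (these are well-known results recalled here for orientation, not statements taken verbatim from [28]). First, the \(TT^{*}\) lemma: for a bounded operator \(T\) between Hilbert spaces,
\[\|T\|^{2}=\|TT^{*}\|=\|T^{*}T\|.\]
Second, the Schur test: if \(S\) is an integral operator with kernel \(K\) satisfying
\[\sup_{z}\int|K(z,z^{\prime})|\,d\nu(z^{\prime})\leq A\qquad\text{and}\qquad\sup_{z^{\prime}}\int|K(z,z^{\prime})|\,d\nu(z)\leq A,\]
then \(\|S\|_{L^{2}(\nu)\to L^{2}(\nu)}\leq A\). In the situation below one has \(|K(z,z^{\prime})|=|K(z^{\prime},z)|\), since \(K(z,z^{\prime})=\widehat{\mu_{2}}((z-z^{\prime})/h)\) and \(\mu_{2}\) is a positive measure, so checking a single supremum suffices.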
By the \(TT^{*}\) theorem [28], we have
\[\|B_{t}\|_{L^{2}(\mu_{1})\to L^{2}(\mu_{1})}^{2}=\|B_{t}B_{t}^{*}\|_{L^{2}(\mu_{1})\to L^{2}(\mu_{1})}\]
and we can write
\[B_{t}B_{t}^{*}v(z)=\int K(z,z^{\prime})v(z^{\prime})\,d\mu_{2}(z^{\prime})\]
with kernel
\[K(z,z^{\prime}):=\int e^{-2\pi i(z-z^{\prime})y/h}\,d\mu_{2}(y).\]
By Schur's inequality for \(K(z,z^{\prime})\), we obtain:
\[\|B_{t}\|_{L^{2}(\mu_{1})\to L^{2}(\mu_{1})}^{2}\leq\sup_{z\in K_{1}}\int|K(z,z^{\prime})|\,d\mu_{2}(z^{\prime}).\]
By the Fourier decay assumption on \(\mu_{2}\), we have:
\[|K(z,z^{\prime})|=\Big{|}\widehat{\mu_{2}}\Big{(}\frac{z-z^{\prime}}{h}\Big{)}\Big{|}\lesssim\Big{|}\frac{z-z^{\prime}}{h}\Big{|}^{-\alpha}.\]
Splitting
\[\int|K(z,z^{\prime})|\,d\mu_{2}(z^{\prime})=\int_{|z-z^{\prime}|\geq h^{1/2}}|K(z,z^{\prime})|\,d\mu_{2}(z^{\prime})+\int_{|z-z^{\prime}|\leq h^{1/2}}|K(z,z^{\prime})|\,d\mu_{2}(z^{\prime}),\]
we get that the first integral is \(\lesssim h^{\alpha/2}\), and the second integral is \(\lesssim h^{\delta_{2}^{+}/2}\) by the (upper) Frostman assumption on \(\mu_{2}\). Finally, since \(2\alpha\leq\delta_{2}^{+}\), both terms are \(\lesssim h^{\alpha/2}\), and the claim follows.

## Acknowledgements

We thank Amir Algom, Semyon Dyatlov, Jonathan Fraser, Antti Kaenmaki, Connor Stevens, Sascha Troscheit and Meng Wu for useful discussions during the preparation of this manuscript, in particular Amir Algom for coordinating the simultaneous submission of this and the independent work [4].
2302.04706
Non-Hermitian fermions with effective mass
In this work, we readdress the Dirac equation in the position-dependent mass (PDM) scenario. Here, one investigates the quantum dynamics of non-Hermitian fermionic particles with effective mass assuming a $(1+1)$-dimensional flat spacetime. In seeking a Schr\"{o}dinger-like theory with PT symmetry, it is appropriate to assume a complex potential. This imaginary interaction produces an effective potential purely dependent on the mass distribution. Furthermore, we study the non-relativistic limit by adopting the Foldy-Wouthuysen transformation. As a result, that limit leads to an ordering equivalent to the ordering that describes abrupt heterojunctions. Subsequently, particular cases of mass distribution were also analyzed. Interesting results arise for the case of a linear PDM, which produces an effective harmonic oscillator and induces the emergence of bound states. For a hyperbolic PDM, an effective potential barrier emerges. However, in this case, the fermions behave as free particles with positive-definite energy.
F. C. E. Lima, L. N. Monteiro, C. A. S. Almeida
2023-02-09T15:45:01Z
http://arxiv.org/abs/2302.04706v1
# Non-Hermitian fermions with effective mass

###### Abstract

In this work, we readdress the Dirac equation in the position-dependent mass (PDM) scenario. Here, one investigates the quantum dynamics of non-Hermitian fermionic particles with effective mass assuming a \((1+1)\)-dimensional flat spacetime. In seeking a Schrödinger-like theory with \(\mathcal{PT}\) symmetry, it is appropriate to assume a complex potential. This imaginary interaction produces an effective potential purely dependent on the mass distribution. Furthermore, we study the non-relativistic limit by adopting the Foldy-Wouthuysen transformation. As a result, that limit leads to an ordering equivalent to the ordering that describes abrupt heterojunctions. Subsequently, particular cases of mass distribution were also analyzed. Interesting results arise for the case of a linear PDM, which produces an effective harmonic oscillator and induces the emergence of bound states. For a hyperbolic PDM, an effective potential barrier emerges. However, in this case, the fermions behave as free particles with positive-definite energy.

## I Introduction

Problems of position-dependent effective mass help us to understand impurities in crystals [1; 2; 3] and heterojunctions in semiconductors [4; 5]. Furthermore, the idea of the position-dependent effective mass (commonly called position-dependent mass or PDM) became an attractive topic for several researchers, see Refs. [6; 7; 8; 9; 10]. The PDM concept in a non-relativistic regime is ambiguous. Indeed, this is because the mass no longer commutes with the momentum operator, so the kinetic term is not uniquely defined. Therefore, one needs to symmetrize the kinetic energy operator (KEO) to study PDM. A few proposals for a symmetrized KEO are the orderings of BenDaniel and Duke [11], Li and Kuhn [12], Zhu and Kroemer [13], and Gora and Williams [14]. The orderings mentioned above are compressed into the single expression
\[\hat{\mathcal{K}}=\frac{1}{4}[m^{\alpha}(\vec{r})\hat{p}\,m^{\beta}(\vec{r})\hat{p}\,m^{\gamma}(\vec{r})+m^{\gamma}(\vec{r})\hat{p}\,m^{\beta}(\vec{r})\hat{p}\,m^{\alpha}(\vec{r})]. \tag{1}\]
This general ordering was proposed initially by von Roos. In his proposal, the parameters \(\alpha\), \(\beta\), and \(\gamma\) must respect the condition \(\alpha+\beta+\gamma=-1\) [15]. Furthermore, it is essential to highlight that there is no unanimity on the appropriate form of the KEO (or the values of the Hermiticity parameters). Table 1 shows the main orderings proposed in the literature.

The KEO ambiguity only arises in the non-relativistic theory. In relativistic quantum mechanics, this problem does not appear. Indeed, this is a consequence of the mass term not being coupled to the four-momentum operator, as first shown by one of the authors of the present work [16]. It is essential to highlight that, in principle, we can apply the PDM concept in both relativistic [16] and non-relativistic [17; 18] quantum mechanics. As a matter of fact, in 2000, Dutra and Almeida [17] applied the PDM concept to discuss the exact solvability of the Schrödinger equation. It was found that, even in the presence of the KEO ambiguity, some classes of potentials are exactly solvable. On the other hand, the PDM arises in investigations of exact solutions for exponential mass distributions [19], the semi-confined harmonic oscillator [20], quantum gravitational effects [21], and quantum information entropies [22].

Generally, we expect a quantum theory to be a Hermitian theory [23].
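Before moving on, the ordering ambiguity in Eq. (1) can be made concrete with a short symbolic computation. The sketch below (our own illustration, assuming the sympy library; \(\hbar=1\) and \(\hat{p}=-i\,d/dx\)) expands two of the orderings of Table 1 on a test function and prints their difference:

```python
import sympy as sp

x = sp.symbols('x', real=True)
m = sp.Function('m', positive=True)(x)
psi = sp.Function('psi')(x)

def von_roos(a, b, c, f):
    """von Roos KEO, Eq. (1), applied to f; the two factors of p = -i d/dx
    contribute an overall factor of -1."""
    t1 = m**a * sp.diff(m**b * sp.diff(m**c * f, x), x)
    t2 = m**c * sp.diff(m**b * sp.diff(m**a * f, x), x)
    return -(t1 + t2) / 4

half = sp.Rational(1, 2)
bdd = von_roos(0, -1, 0, psi)          # BenDaniel-Duke: alpha = gamma = 0, beta = -1
zk  = von_roos(-half, 0, -half, psi)   # Zhu-Kroemer: alpha = gamma = -1/2, beta = 0

# The difference contains no derivatives of psi: it is a pure potential-like term.
print(sp.simplify(sp.expand(bdd - zk)))
```

The printed difference is a multiplication operator built from \(m^{\prime}(x)\) and \(m^{\prime\prime}(x)\): the two orderings share the same kinetic part but differ by an effective potential, which is exactly why the choice of Hermiticity parameters matters physically.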
However, non-Hermitian Hamiltonians are essential in studies of open quantum systems in nuclear physics or quantum optics (see Refs. [24; 25]). These non-Hermitian Hamiltonians are considered an effective subsystem within a projective subspace of the total system, which obeys conventional quantum mechanics with a Hermitian Hamiltonian [24; 25; 26].

\begin{table} \begin{tabular}{c c} \hline \hline Authors (year) & KEO parameters \\ \hline \hline BenDaniel and Duke (1966) & \(\alpha=\gamma=0\) and \(\beta=-1\) \\ Gora and Williams (1969) & \(\beta=\gamma=0\) and \(\alpha=-1\) \\ Zhu and Kroemer (1983) & \(\alpha=\gamma=-\frac{1}{2}\) and \(\beta=0\) \\ Li and Kuhn (1993) & \(\beta=\gamma=-\frac{1}{2}\) and \(\alpha=0\) \\ \hline \end{tabular} \end{table} Table 1: Values of the Hermiticity parameters.

In 1998, Bender et al. [27; 28] showed that a theory can be unitary, with real energy eigenvalues, when a weaker condition on the Hamiltonian is assumed. This condition is parity-time (\(\mathcal{PT}\)) symmetry of the Hamiltonian, along with a deformed inner product [27; 28]. Briefly, Bender et al. [27; 29] showed that for a non-Hermitian quantum theory to be accepted physically, the Hamiltonian must be invariant under \(\mathcal{PT}\) symmetry [30; 31]. Currently, several works discussing non-Hermitian models have emerged in the literature. Among these works, we can mention investigations of topology [32], the skin effect [33; 34], quantum information and thermodynamic properties [23], and phase transitions in quasi-crystals [35]. The list of applications related to non-Hermitian models is extensive, which motivates us to study a non-Hermitian theory in a PDM context.

Therefore, we seek to formulate a non-Hermitian theory of effective mass. To achieve this purpose, we build a non-Hermitian approach for fermions with arbitrary PDM. To exemplify the method, we choose mass profiles for which the Hamiltonian is kept invariant under the \(\mathcal{PT}\) transformation, and study the quantum eigenstates of a linear PDM. In this case, the linear mass produces an effective harmonic oscillator. Moreover, for a PDM with a hyperbolic profile, an effective potential barrier arises.

We organized our paper as follows: In Sec. II, we introduce a non-Hermitian theory of effective mass within Dirac's equation. Furthermore, the non-relativistic limit of the kinetic term is analyzed. In this limit, the kinetic contribution of the Hamiltonian is similar to the ordering proposed by Li and Kuhn [12], as predicted by Cavalcante et al. [16]. In Sec. III, particular cases of mass dependence are studied, namely, a linear mass distribution and a hyperbolic one. Finally, we present our findings in Sec. IV.

## II The non-Hermitian theory for a PDM

An active issue of condensed matter physics is the symmetrization problem of the kinetic energy operator in effective mass theory [22]. That is because, in this scenario, the use of the concept of effective mass can describe defects or impurities in crystals [1; 2; 3]. In this work, we propose an investigation of a PDM in a non-Hermitian Dirac theory. To reach our purpose and to bypass the ambiguity of the KEO, let us assume a relativistic system in \((1+1)\)D. In this case, Dirac's equation is written as
\[\left[\gamma^{\mu}(p_{\mu}-eA_{\mu})-m(x)\right]\psi(x,t)=0\quad\text{with}\quad\mu=0,1. \tag{2}\]
Here, \(p_{\mu}\) is the momentum, \(m(x)\) is the PDM, and \(\gamma^{\mu}\) are the Dirac matrices.
Furthermore, the metric signature used is \(g_{\mu\nu}=(+,-)\). Besides, \(A_{\mu}=(V(x),\vec{A}(x))\), where \(V(x)\) is the electrical interaction and \(\vec{A}(x)\) is the vector potential (see Ref. [36]). Let us assume a system with only the electrical interaction. In this case, we have \(A_{\mu}=(V(x),0)\). Using these definitions, Eq. (2) is rewritten as
\[i\mathcal{I}_{2}\frac{\partial\psi}{\partial t}=\left(-i\gamma^{0}\gamma^{1}\partial_{x}+\gamma^{0}m(x)+\mathcal{I}_{2}V(x)\right)\psi(x,t). \tag{3}\]
To obtain Eq. (3), we used the identity \((\gamma^{0})^{2}=-(\gamma^{1})^{2}\). Here, we define \(\mathcal{I}_{2}\) as
\[\mathcal{I}_{2}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}, \tag{4}\]
i.e., \(\mathcal{I}_{2}\) is the \(2\times 2\) identity matrix. Eq. (3) informs us that the Hamiltonian operator is
\[\mathcal{H}=\alpha p_{x}+\beta m(x)+\mathcal{I}_{2}V(x), \tag{5}\]
where the matrices are \(\alpha=\gamma^{0}\gamma^{1}\) and \(\beta=\gamma^{0}\). From now on, let us assume a stationary theory. In this case, the Hamiltonian is not explicitly time-dependent. Therefore, it is assumed that the wave function has the form \(\psi(x,t)=e^{-\frac{iE}{\hbar}t}\varphi(x)\). Furthermore, choosing the Dirac matrix representation in \((1+1)\)D, namely,
\[\gamma^{0}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\qquad\text{ and }\qquad\gamma^{1}=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}, \tag{6}\]
we arrive at
\[\mathcal{H}\varphi=E\varphi, \tag{7}\]
with
\[\varphi=\begin{pmatrix}\varphi_{1}\\ \varphi_{2}\end{pmatrix}. \tag{8}\]
In matrix form, Dirac's equation for a PDM with interaction is
\[\begin{pmatrix}-i\partial_{x}+V(x)&m(x)\\ m(x)&i\partial_{x}+V(x)\end{pmatrix}\begin{pmatrix}\varphi_{1}\\ \varphi_{2}\end{pmatrix}=E\begin{pmatrix}\varphi_{1}\\ \varphi_{2}\end{pmatrix}. \tag{9}\]
Notice that Eq. (9) produces two coupled differential equations, i.e.,
\[-i\frac{d\varphi_{1}}{dx}+m(x)\varphi_{2}=(E-V(x))\varphi_{1}, \tag{10}\]
and
\[i\frac{d\varphi_{2}}{dx}+m(x)\varphi_{1}=(E-V(x))\varphi_{2}. \tag{11}\]
By algebraic manipulations, we can decouple the \(\varphi_{1}\) (or \(\varphi_{2}\)) component. In this case, the decoupled equation for the \(\varphi_{1}\) component is
\[-\varphi_{1}^{\prime\prime}(x)+\frac{m^{\prime}(x)}{m(x)}\varphi_{1}^{\prime}(x)+\bigg{[}2EV(x)-V(x)^{2}-iV^{\prime}(x)-i\frac{m^{\prime}(x)}{m(x)}(E-V(x))\bigg{]}\varphi_{1}(x)=(E^{2}-m(x)^{2})\varphi_{1}(x). \tag{12}\]
The prime notation represents derivatives with respect to the variable \(x\). Eq. (12) was previously discussed by Jia and Dutra [37] to address a PDM problem with violation of \(\mathcal{PT}\) symmetry. To treat Eq. (12), let us consider the following dependent variable transformation, i.e.,
\[\varphi_{1}(x)=m(x)^{a}\phi_{1}(x). \tag{13}\]
The transformation (13) leads us to
\[-\phi_{1}^{\prime\prime}(x)+\frac{m^{\prime}(x)}{m(x)}(1-2a)\phi_{1}^{\prime}(x)+\bigg{[}a\bigg{(}\frac{m^{\prime}(x)}{m(x)}\bigg{)}^{2}-a\frac{m^{\prime\prime}(x)}{m(x)}-a(a-1)\frac{m^{\prime}(x)^{2}}{m(x)^{2}}+2EV(x)-V(x)^{2}-iV^{\prime}(x)-i\frac{m^{\prime}(x)}{m(x)}(E-V(x))\bigg{]}\phi_{1}(x)=(E^{2}-m(x)^{2})\phi_{1}(x). \tag{14}\]
Considering \(a=1/2\), we arrive at
\[-\phi_{1}^{\prime\prime}(x)+\bigg{[}m(x)^{2}+\frac{3}{4}\frac{m^{\prime}(x)^{2}}{m(x)^{2}}-\frac{1}{2}\frac{m^{\prime\prime}(x)}{m(x)}+\bigg{(}2V(x)-i\frac{m^{\prime}(x)}{m(x)}\bigg{)}E-V(x)^{2}-iV^{\prime}(x)+i\frac{m^{\prime}(x)}{m(x)}V(x)\bigg{]}\phi_{1}(x)=E^{2}\phi_{1}(x).
\tag{15}\]
Here, let us define the effective potential as
\[V_{\rm eff}(E,m(x);x)=m(x)^{2}+\frac{3}{4}\frac{m^{\prime}(x)^{2}}{m(x)^{2}}-\frac{1}{2}\frac{m^{\prime\prime}(x)}{m(x)}+\bigg{(}2V(x)-i\frac{m^{\prime}(x)}{m(x)}\bigg{)}E-V(x)^{2}-iV^{\prime}(x)+i\frac{m^{\prime}(x)}{m(x)}V(x). \tag{16}\]
Physically, this effective potential arises because the particle (or defect) behaves like a system with PDM, which generates an effective interaction. Seeking to investigate a model where the effective interaction depends purely on the mass profile, let us propose the potential
\[V(x)=i\frac{m^{\prime}(x)}{2m(x)}. \tag{17}\]
This choice of potential leads us to a Hermitian effective theory, since the mass profile must always be positive. Furthermore, the complex potential suggests an apparent paradox, i.e., the theory is non-Hermitian. This apparent result leads us to the hypothesis of complex energy eigenvalues. Nonetheless, as discussed by Ramos et al. [38], although the system may be non-Hermitian, the eigenenergies of the system may be real. For more details on non-Hermitian theories, see Refs. [23; 27; 29; 38]. Indeed, this is because the effective theory given by Eq. (15) is a Schrödinger-type theory that is invariant under \(\mathcal{PT}\) symmetry. Besides, one notes that our Hamiltonian (5) has the form \(\mathcal{H}\equiv\mathcal{K}+if(x)\) (\(\mathcal{K}\) is the kinetic energy operator and \(f(x)\) a position function). Indeed, Bender studied this class of Hamiltonians in Ref. [39]. As discussed in Ref. [39], we conclude that our system admits common eigenstates of \(\mathcal{H}\) and \(\mathcal{PT}\), i.e., \(\mathcal{H}\varphi=E\varphi\) and \(\mathcal{PT}\varphi=\lambda\varphi\), which gives us \(E=E^{*}\) since \([\mathcal{PT},\mathcal{H}]=0\). More details about this proof are in Ref. [39].

Finally, note that the interaction (17) leads us to an effective potential of the form
\[V_{\rm eff}(m(x);\,x)=m(x)^{2}. \tag{18}\]
This effective potential depends purely on the mass distribution of the particle. Here it is necessary to highlight that the mass must be a real and positive function. Furthermore, it is essential to mention that the effective potential profile defines whether we will have bound or free quantum states. In principle, the choice of mass profile is arbitrary. However, if \(m(x)\) is such that the effective potential assumes a confining configuration (i.e., a potential well), the fermions will have bound states. Otherwise, only free states will exist. In the next section, we will present an example for each case. Adopting the effective potential (18), one obtains that the \(\phi_{1}(x)\) component of the fermion with effective mass is described by
\[-\phi_{1}^{\prime\prime}(x)+m(x)^{2}\phi_{1}(x)=E^{2}\phi_{1}(x). \tag{19}\]

### The non-relativistic limit

In Sec. II we developed the relativistic theory. Nonetheless, the relativistic and non-relativistic theories must be compatible [36]. In Ref. [16], in the non-relativistic limit, the fermionic particle with PDM is described by the ordering of Li and Kuhn [12]. In our study, the question arises: to which effective-mass theory is the Hamiltonian (5) equivalent in the non-relativistic regime? Naturally, this question motivates us to seek a correspondence for our model in the non-relativistic limit.
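Before taking that limit, a quick symbolic check of the previous subsection is possible. The sketch below (our own verification, assuming the sympy library) substitutes the complex potential (17) into the effective potential (16) and confirms that everything except \(m(x)^{2}\) cancels, recovering Eq. (18) for any mass profile and any energy \(E\):

```python
import sympy as sp

x, E = sp.symbols('x E', real=True)
m = sp.Function('m', positive=True)(x)

V = sp.I * sp.diff(m, x) / (2 * m)   # the complex potential, Eq. (17)

# Effective potential, Eq. (16)
V_eff = (m**2 + sp.Rational(3, 4) * sp.diff(m, x)**2 / m**2
         - sp.Rational(1, 2) * sp.diff(m, x, 2) / m
         + (2 * V - sp.I * sp.diff(m, x) / m) * E
         - V**2 - sp.I * sp.diff(V, x)
         + sp.I * (sp.diff(m, x) / m) * V)

print(sp.simplify(V_eff - m**2))  # -> 0, i.e. V_eff = m(x)^2 as in Eq. (18)
```

Note in particular that the coefficient of \(E\) vanishes identically for the choice (17), which is why the effective potential is energy-independent.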
To analyze the low-energy limit, i.e., when the kinetic energy is small compared to the rest energy, note that the term \(\beta m(x)\) is dominant. This consideration is a consequence of speeds being very small compared to the speed of light. To obtain the non-relativistic limit, let us start by considering
\[\mathcal{H}\rightarrow\beta m(x), \tag{20}\]
when \(c\rightarrow\infty\). Here, \(c\) is the speed of light. Using the approach proposed by Foldy and Wouthuysen [40], it is possible to investigate a corresponding theory in the non-relativistic limit. Thereby, in search of a correspondence between the relativistic and non-relativistic theories, let us apply the Foldy-Wouthuysen transformation to the spinor \(\varphi\), namely,
\[\varphi(x)\rightarrow\mathrm{e}^{iS}\varphi(x), \tag{21}\]
so that the transformed Hamiltonian is
\[\mathcal{H}^{\prime}=\mathrm{e}^{iS}\,\mathcal{H}\,\mathrm{e}^{-iS}. \tag{22}\]
Following the proposal of Foldy and Wouthuysen [40] and considering that the mass and momentum do not commute, the operator \(S\) is
\[S=-\frac{i}{2}\frac{1}{\sqrt{m(x)}}\beta\alpha p_{x}\frac{1}{\sqrt{m(x)}}. \tag{23}\]
As discussed by Foldy and Wouthuysen [40] for a constant mass, and later by Cavalcante _et al._ [16] for a PDM, there are no restrictions on the unitarity of the operator \(S\). For more details, see Refs. [16; 40]. Using the Baker-Hausdorff lemma [41] to expand Eq. (22), we write the transformed Hamiltonian as
\[\mathcal{H}^{\prime}=\mathcal{H}+i[S,\,\mathcal{H}]-\frac{1}{2}[S,\,[S,\,\mathcal{H}]]+\ldots \tag{24}\]
Assuming the representation (6), we conclude that
\[S=-\frac{1}{2}\begin{pmatrix}0&-\frac{1}{m(x)}\frac{d}{dx}+\frac{m^{\prime}}{2m^{2}}\\ \frac{1}{m(x)}\frac{d}{dx}-\frac{m^{\prime}}{2m^{2}}&0\end{pmatrix}, \tag{25}\]
and
\[[S,\,\mathcal{H}]=\begin{pmatrix}\frac{d}{dx}&0\\ 0&-\frac{d}{dx}\end{pmatrix}. \tag{26}\]
Furthermore, one obtains
\[-\frac{1}{2}[S,\,[S,\,\mathcal{H}]]=\Big{(}\frac{1}{2m}\frac{d^{2}}{dx^{2}}-\frac{m^{\prime}}{2m^{2}}\frac{d}{dx}-\frac{m^{\prime\prime}}{8m^{2}}\Big{)}\beta=-\frac{1}{4}\Big{(}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}p_{x}+p_{x}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}\Big{)}\beta. \tag{27}\]
In calculating the above commutators, terms of order \(\mathcal{O}(m^{-3})\) or higher are neglected. Adopting the operators \(S\) (23) and \(\mathcal{H}\) (20) and their commutation relations (Eqs. (25), (26), and (27)), one arrives at
\[\mathcal{H}^{\prime}=\mathcal{H}+i[S,\,\mathcal{H}]-\frac{1}{2}[S,\,[S,\,\mathcal{H}]]=\Big{(}\alpha p_{x}+\beta m(x)\Big{)}-\alpha p_{x}-\frac{1}{4}\Big{(}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}p_{x}+p_{x}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}\Big{)}\beta=\Big{[}m(x)-\frac{1}{4}\Big{(}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}p_{x}+p_{x}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}\Big{)}\Big{]}\beta. \tag{28}\]
Note that this result is the explicit contribution of the rest energy added to the kinetic energy. So, we write the non-relativistic KEO as
\[\hat{\mathcal{K}}=\frac{1}{4}\Big{(}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}p_{x}+p_{x}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}\Big{)}. \tag{29}\]
Therefore, the non-relativistic Hamiltonian is
\[\mathcal{H}=\frac{1}{4}\Big{(}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}p_{x}+p_{x}\frac{1}{\sqrt{m(x)}}p_{x}\frac{1}{\sqrt{m(x)}}\Big{)}+V(x). \tag{30}\]
That is the Hamiltonian ordered by Li and Kuhn [12] for a PDM.
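The expansion leading to Eq. (29) can likewise be checked symbolically. The sketch below (our own verification, assuming sympy; \(\hbar=1\) and \(p_{x}=-i\,d/dx\)) expands the Li-Kuhn KEO of Eq. (29) on a test function: it reproduces the quoted differential terms together with a \(m^{\prime 2}/(4m^{3})\) correction, which is precisely of the order \(\mathcal{O}(m^{-3})\) neglected in the text.

```python
import sympy as sp

x = sp.symbols('x', real=True)
m = sp.Function('m', positive=True)(x)
f = sp.Function('f')(x)

s = 1 / sp.sqrt(m)
# With p = -i d/dx:  s p s p f = -s (s f')'   and   p s p s f = -(s (s f)')'.
T1 = -s * sp.diff(s * sp.diff(f, x), x)
T2 = -sp.diff(s * sp.diff(s * f, x), x)
K = sp.Rational(1, 4) * (T1 + T2)          # Li-Kuhn KEO, Eq. (29), applied to f

# Terms quoted in Eqs. (27)-(29), plus the O(m^{-3}) piece m'^2/(4m^3) dropped there
expected = (-sp.diff(f, x, 2) / (2 * m)
            + sp.diff(m, x) * sp.diff(f, x) / (2 * m**2)
            + (sp.diff(m, x, 2) / (8 * m**2)
               - sp.diff(m, x)**2 / (4 * m**3)) * f)

print(sp.simplify(K - expected))  # -> 0
```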
Looking at Table 1, the Hermiticity parameters are \(\beta=\gamma=-\frac{1}{2}\) and \(\alpha=0\). Indeed, in Ref. [16], the authors discuss the ordering of Li and Kuhn. They consider a higher-dimensional quantum system in another fermionic representation and obtain the same result. Furthermore, the profile of the KEO (and the Hamiltonian) in the non-relativistic limit is extremely relevant. Basically, this is because the electron transmission phenomenon in semiconductor heterostructures is sensitive to the Hermiticity parameters [16]. Therefore, our results also correspond to the theory proposed by Li and Kuhn [12] for semiconductor heterostructures.

## III Some particular cases of PDM

Throughout this section, we will consider two particular mass profiles, i.e., a linear mass and a hyperbolic one. Indeed, these mass profiles are convenient because they describe systems of interest in condensed matter physics and solid-state physics. That is because the linear mass can describe, for example, the electrons in a graphene sheet. This description is possible because the effective mass of these systems is linear, and its potential is harmonic-like. For more details on linear PDM, see Ref. [42]. The second type of effective mass we chose is the hyperbolic mass. In particular, the hyperbolic mass profile considered is the mass of a soliton. Thus, our system will describe, for example, a defect that propagates in a crystal lattice (a solid-state physics system), keeping its shape and energy unchanged [42; 43]. Motivated by these applications, we now investigate the quantum dynamics of non-Hermitian fermions with these PDM profiles.

### Linear mass generating a harmonic effective potential

Let us now particularize our study to the case of an effective mass that depends linearly on position, i.e.,
\[m(x)=\mu x\quad\text{ with }\quad x\geq 0. \tag{31}\]
Here, the \(\mu\) parameter adjusts the mass unit, i.e., \(\mu\) has units of mass (atomic units) per length. Fig. 1 shows the linear profile of the mass. Furthermore, in Fig. 2 the effective potential (16) and the potential (17) are displayed.

Figure 1: Linear PDM with several values of \(\mu\).

Figure 2: (a) Plot of the \(V(x)^{2}\) produced by the effective mass distribution. (b) Effective potential generated by the mass distribution.

Therefore, to describe the fermions with effective mass, Eq. (19) is written as
\[-\phi_{1}^{\prime\prime}(x)+\mu^{2}x^{2}\phi_{1}(x)=E^{2}\phi_{1}(x). \tag{32}\]
Investigating the solution of Eq. (32), let us assume the change of variable
\[\xi=\sqrt{\mu}x\hskip 28.452756pt\mbox{with}\hskip 28.452756pt\xi>0, \tag{33}\]
which leads us to
\[\phi_{1}^{\prime\prime}(\xi)+(K-\xi^{2})\phi_{1}(\xi)=0, \tag{34}\]
where \(K=E^{2}/\mu\). To obtain normalizable wave functions, we must assume
\[\phi_{1}(\xi)=\mbox{e}^{-\xi^{2}/2}h(\xi). \tag{35}\]
This transformation factors out the asymptotic Gaussian decay of the solutions. Furthermore, let us point out that some references explicitly suggest the transformation (35) when analytically solving the equations describing quantum oscillators. For more details, see Ref. [41]. Adopting the substitution (35), one obtains
\[h^{\prime\prime}(\xi)-2\xi h^{\prime}(\xi)+(K-1)h(\xi)=0. \tag{36}\]
To investigate Eq. (36), we use the Frobenius method. Thus, considering this method, let us propose that the solutions of Eq. (36) expressed in terms of a power series are
\[h(\xi)=\sum_{j=0}^{\infty}a_{j}\xi^{j}, \tag{37}\]
which brings us to the recursion relation
\[a_{j+2}=\frac{2j+1-K}{(j+1)(j+2)}a_{j}.
\tag{38}\]
For the solutions of Eq. (36) to be physically acceptable (i.e., for the wave functions to be normalizable), the power series (37) must terminate. So there must be some integer \(n\) such that the recursion relation (38) gives \(a_{n+2}=0\). Furthermore, when \(x\to 0\), the wave function must vanish due to the mass profile. Thus, assuming these conditions, it is concluded that
\[K=2n+1\hskip 28.452756pt\text{with}\hskip 28.452756ptn=1,3,5\ldots \tag{39}\]
i.e.,
\[E_{n}=\sqrt{(2n+1)\mu}. \tag{40}\]
For allowed values of \(K\), the recursion relation takes the form
\[a_{j+2}=\frac{-2(n-j)}{(j+1)(j+2)}a_{j}. \tag{41}\]
Analyzing the recursion relation (41), one concludes that
\[h_{n}(\xi)=A_{n}H_{n}(\xi), \tag{42}\]
where \(A_{n}\) is the normalization constant and \(H_{n}(\xi)\) is the Hermite polynomial, i.e.,
\[H_{n}(\xi)=(-1)^{n}\mathrm{e}^{\xi^{2}}\bigg{(}\frac{d}{d\xi}\bigg{)}^{n}\mathrm{e}^{-\xi^{2}}, \tag{43}\]
and so, we finally arrive at
\[\phi_{1}^{(n)}(x)=A_{n}\mathrm{e}^{-\mu x^{2}/2}H_{n}(\sqrt{\mu}x)\hskip 28.452756pt\text{with}\hskip 28.452756ptn=1,3,5,\ldots \tag{44}\]
We show the analytical solutions for the first eigenstates in Fig. 3.

Figure 3: Analytical solutions of the first eigenstates of the linear PDM.

### Hyperbolic PDM generating a smooth effective potential barrier

Now, let us particularize the theory for a hyperbolic mass. In this case, the mass profile is
\[m(x)=m_{0}\,\sqrt{\text{sech}(ax)}. \tag{45}\]
Here, \(m_{0}\) sets the amplitude of the mass distribution. Furthermore, \(a\) adjusts the width of the mass distribution. This hyperbolic profile is known as a soliton: a topological structure that maintains its shape unchanged when interacting with other solitons and whose mass is proportional to \(\text{sech}(x)\) [43]. Besides, its energy is always finite [22; 43]. In Fig. 4 the mass profile is shown as the parameters \(a\) and \(m_{0}\) vary. Moreover, we show in Fig. 5 the effective potential produced by the mass distribution. In Fig. 6, the profile of the potential squared (17) is displayed.

Figure 4: (a) Hyperbolic PDM when \(m_{0}\) varies, and (b) when \(a\) varies.

For the mass profile (45), Eq. (19) is rewritten as
\[-\phi_{1}^{\prime\prime}(x)+m_{0}^{2}\mathrm{sech}(ax)\phi_{1}(x)=E^{2}\phi_{1}(x). \tag{46}\]
To solve this equation, let us assume the coordinate change
\[\xi\rightarrow\cosh(ax). \tag{47}\]
So, the equation describing \(\phi_{1}\) is rewritten as
\[\phi_{1}^{\prime\prime}(\xi)+\bigg{(}\frac{1/2}{\xi+1}+\frac{1/2}{\xi-1}\bigg{)}\phi_{1}^{\prime}(\xi)+\bigg{[}\frac{\mathcal{E}^{2}\xi-\mu_{0}^{2}}{\xi(\xi+1)(\xi-1)}\bigg{]}\phi_{1}(\xi)=0, \tag{48}\]
with \(\mathcal{E}=E/a\) and \(\mu_{0}=m_{0}/a\). If we think of our system as a solid-state physics model, we can picture it as a fermion with an effective mass. In this case, the fermion composes a crystalline structure with lattice parameter \(a\). Furthermore, for this model, the electron acquires an effective mass that depends on the position [44; 45]. For example, in the case of Al\({}_{x}\)Ga\({}_{1-x}\)As the parameter \(m_{0}\) is 0.0665 a.u. [46]. Therefore, \(m_{0}\ll 1\) is natural.

Figure 5: (a) Effective potential when \(m_{0}\) varies, and (b) when \(a\) varies.

Figure 6: Plot of \(V(x)^{2}\) produced by the mass distribution.

Let us recall that Eq. (48) belongs to the second-order Fuchsian class [47].
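Before exploiting this structure, one can check numerically, directly on Eq. (46), the free-particle behaviour anticipated above. The sketch below (our own illustration, assuming numpy and scipy; the parameter values are illustrative, with \(m_{0}\) taken as for Al\({}_{x}\)Ga\({}_{1-x}\)As) integrates Eq. (46) as a scattering problem and computes the transmission coefficient, which comes out essentially equal to one:

```python
import numpy as np
from scipy.integrate import solve_ivp

m0, a, E = 0.0665, 1.0, 0.5   # illustrative values (a.u.)
k = E                          # asymptotic wavenumber, since the barrier -> 0 as |x| -> inf
L = 20.0

def rhs(x, y):
    phi, dphi = y
    V = m0**2 / np.cosh(a * x)           # effective barrier of Eq. (46)
    return [dphi, (V - E**2) * phi]

# Start from a purely transmitted wave e^{ikx} at x = +L and integrate backwards.
y0 = [np.exp(1j * k * L), 1j * k * np.exp(1j * k * L)]
sol = solve_ivp(rhs, [L, -L], y0, rtol=1e-10, atol=1e-12)
phi, dphi = sol.y[0, -1], sol.y[1, -1]

# Decompose phi(-L) = A e^{ikx} + B e^{-ikx}; A is the incident amplitude.
A = 0.5 * (phi + dphi / (1j * k)) * np.exp(1j * k * L)
T = 1.0 / abs(A) ** 2
print(f"transmission T = {T:.6f}")  # very close to 1: the fermion is essentially free
```

Since the barrier height \(m_{0}^{2}\approx 4\times 10^{-3}\) is tiny compared with \(E^{2}\), the particle barely feels the potential, in agreement with the analytic discussion that follows.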
Indeed, Eq. (48) is a particular case of the Heun equation, namely,
\[H^{\prime\prime}(y)+\bigg{(}\frac{\gamma}{y}+\frac{\delta}{y-1}+\frac{\varepsilon}{y-d}\bigg{)}H^{\prime}(y)+\frac{\alpha\beta y-q}{y(y-1)(y-d)}H(y)=0. \tag{49}\]
The Heun equation parameters must obey the Fuchsian relation, i.e.,
\[\alpha+\beta+1=\gamma+\delta+\varepsilon. \tag{50}\]
Moreover, in the neighborhood of each singularity of Heun's equation (49), two linearly independent local solutions are found. The recurrence relations of the Heun equation are derived from the Frobenius series [48]. This equation admits a set of 192 different solution expressions, related by a group of automorphisms [48; 49]. We are interested in the solutions of the system (Eq. (19)) in the vicinity of the location of the mass, i.e., \(x=0\) (or \(\xi=1\)). The linearly independent solutions around the singularity \(\xi=1\) are
\[H^{(1)}(y)=\text{HeunC}(1-d,-q+\alpha\beta,\alpha,\beta,\delta,\gamma;1-y), \tag{51}\]
and
\[H^{(2)}(y)=(1-y)^{1-\delta}\times\text{HeunC}[1-d,-q+(\delta-1)\gamma d+(\alpha-\delta+1)(\beta-\delta+1),\beta-\delta+1,\alpha-\delta+1,2-\delta,\gamma;1-y], \tag{52}\]
with the characteristic exponents being 0 and \(1-\delta\). Comparing Heun's equation (49) with Eq. (48) and using the Fuchsian relation (50), one obtains
\[\varepsilon=\frac{1}{2},\ \ d=-1,\ \ \delta=\frac{1}{2},\ \ \gamma=0,\ \ q=\mu_{0}^{2},\ \ \alpha=\pm i\mathcal{E}\ \ \text{and}\ \ \beta=\mp i\mathcal{E}. \tag{53}\]
Therefore, the linearly independent solutions of Eq. (48) are
\[\phi_{1}^{(1)}(\xi)=\text{HeunC}\bigg{(}2,\,\mathcal{E}^{2}-\mu_{0}^{2},\,i\mathcal{E},\,-i\mathcal{E},\,\frac{1}{2},\,0;\,1-\xi\bigg{)}, \tag{54}\]
\[\phi_{1}^{(2)}(\xi)=\sqrt{(1-\xi)}\text{HeunC}\bigg{[}2,\,\mathcal{E}^{2}-\mu_{0}^{2}+\frac{1}{4},\,\frac{1}{2}\pm i\mathcal{E},\,\frac{1}{2}\mp i\mathcal{E},\,\frac{3}{2},\,0;\,1-\xi\bigg{]}. \tag{55}\]
In more compact notation, one can write the solutions of the system (19) as
\[\phi_{1}(x)=\phi_{1}^{(1,2)}(\cosh(ax)). \tag{56}\]
Due to the complex profile of the interaction (17), the wave function describes a free fermionic particle with energy \(E>0\): the effective barrier is too weak to be felt by the particle, which therefore propagates as if in the absence of interaction. We show the eigenfunction of the model in Fig. 7.

Figure 7: Wave eigenfunction \(\phi_{1}(x)\) when \(E>0\).

## IV Final remarks

In this paper, we performed studies on the one-dimensional relativistic theory for an arbitrary position-dependent mass. First, we considered a fermionic particle with PDM subject to a position-dependent electrostatic interaction. Applying the FW transformation, it was possible to analyze the non-relativistic limit. Finally, two mass profiles were investigated, i.e., a linear PDM and a hyperbolic PDM.

Stimulating results arise from investigating an arbitrary mass in the one-dimensional Dirac theory. These results emerge when building a Schrödinger-like formalism for the fermion. For example, in seeking a Schrödinger-like formalism, it is necessary to assume a complex potential dependent on the spatial distribution of mass. However, in doing so, the system becomes non-Hermitian. Although the obtained system is non-Hermitian, as the Hamiltonian preserves the \(\mathcal{PT}\) symmetry, the energy eigenvalues are real. Knowing that the concept of PDM emerges in solid-state physics systems, it is interesting to study a correspondence of our theory with a non-relativistic approach.
As a result, one obtains that our model corresponds to the model proposed by Li and Kuhn [12]. When analyzing the linear mass profile, one notes the generation of an effective harmonic-like potential that confines the fermion. Furthermore, although the mass varies over all space, the wave function only exists for positive values of the position. This result can be seen as a consequence of the fact that the mass profile can never physically become negative (except for the anti-particle, and consequently the \(\phi_{2}\) component). Meanwhile, a smooth potential barrier emerges if we consider the hyperbolic mass. However, as the effective potential barrier produced by the particle is weak compared to the particle energy, the solution of the system is a plane wave. This result is interesting because, in a non-relativistic theory, this mass profile would produce effective potentials with the ability to confine the particle, as discussed in Ref. [22].

###### Acknowledgements.

The authors thank the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), grant n\({}^{\rm o}\) 309553/2021-0 (CASA), and the Coordenacao de Aperfeicoamento do Pessoal de Nivel Superior (CAPES), for financial support.
2304.07181
Rigidification of arithmetic $\mathscr{D}$-modules and an overconvergent Riemann-Hilbert correspondence
In this article, I define triangulated categories of constructible isocrystals on varieties over a perfect field of positive characteristic, in which Le Stum's abelian category of constructible isocrystals sits as the heart of a natural t-structure. I then prove a Riemann-Hilbert correspondence, showing that, for objects admitting some (unspecified) Frobenius, this triangulated category is equivalent to the triangulated category of overholonomic $\mathscr{D}^\dagger$-modules in the sense of Caro. I also show that the cohomological functors $f^!$, $f_+$ and $\otimes$ defined for $\mathscr{D}^\dagger$-modules have natural interpretations on the constructible side of this correspondence. Finally, I use this to prove that, for any variety $X$ admitting an immersion into a smooth and proper formal scheme, rigid cohomology (with lisse coefficients) agrees with cohomology defined using arithmetic $\mathscr{D}$-modules.
Christopher Lazda
2023-04-14T14:54:55Z
http://arxiv.org/abs/2304.07181v1
# Rigidification of arithmetic \(\mathscr{D}\)-modules and an overconvergent Riemann-Hilbert correspondence

###### Abstract.

In this article, I define triangulated categories of constructible isocrystals on varieties over a perfect field of positive characteristic, in which Le Stum's abelian category of constructible isocrystals sits as the heart of a natural t-structure. I then prove a Riemann-Hilbert correspondence, showing that, for objects admitting some (unspecified) Frobenius, this triangulated category is equivalent to the triangulated category of overholonomic \(\mathscr{D}^{\dagger}\)-modules in the sense of Caro. I also show that the cohomological functors \(f^{!},f_{+}\) and \(\otimes\) defined for \(\mathscr{D}^{\dagger}\)-modules have natural interpretations on the constructible side of this correspondence. Finally, I use this to prove that, for any variety \(X\) admitting an immersion into a smooth and proper formal scheme, rigid cohomology (with lisse coefficients) agrees with cohomology defined using arithmetic \(\mathscr{D}\)-modules.

###### Contents

* 1 Preliminaries
* 2 Constructible isocrystals
* 3 The derived category of constructible isocrystals
* 4 Overholonomic \(\mathscr{D}^{\dagger}\)-modules
* 5 Quasi-coherent complexes and rigidification
* 6 Rigidification of \(\mathscr{D}^{\dagger}\)-modules and constructibility
* 7 Logarithmic variations
* 8 The overconvergent Riemann-Hilbert correspondence
* 9 Cohomological operations for constructible isocrystals
* 10 Rigid cohomology of varieties

## Introduction

Let \(X/\mathbb{C}\) be a smooth algebraic variety, and \(\mathscr{D}_{X}\) the ring of differential operators on \(X\). One of the (many!) forms of the Riemann-Hilbert correspondence states that there is an equivalence of categories
\[\mathbf{D}^{b}_{c}(X^{\mathrm{an}},\mathbb{C})\leftrightarrow\mathbf{D}^{b}_{\mathrm{rh}}(\mathscr{D}_{X})\]
between the bounded derived category of (algebraically) constructible sheaves of \(\mathbb{C}\)-modules on \(X^{\mathrm{an}}\), and the bounded derived category of regular holonomic \(\mathscr{D}_{X}\)-modules. Moreover, this equivalence matches up various natural cohomological operations on each side, namely the 'six functors' of usual and extraordinary pushforward and pullback, duality, and tensor product. From a differential geometer's perspective, this gives a way of understanding differential equations on \(X\) by studying their associated constructible sheaves. But from an algebro-geometric point of view, it shows that the cohomology theory of constructible sheaves on \(X^{\mathrm{an}}\) can be reconstructed completely algebraically, via the theory of regular holonomic \(\mathscr{D}\)-modules on \(X\). It therefore provides a template for understanding the cohomology of algebraic varieties in situations where taking the complex analytification is not possible, for example when working in characteristic \(p\).

This idea has been one of the main driving forces behind two different approaches to studying the \(p\)-adic cohomology of varieties in characteristic \(p\), both developed by Berthelot. The first of these to be introduced was the theory of rigid cohomology [10], generalising earlier work of Monsky-Washnitzer [11]. Rigid cohomology works as a version of de Rham cohomology on \(p\)-adic analytic varieties, and its coefficient objects, called overconvergent \(F\)-isocrystals, are therefore direct analogues of vector bundles with integrable connection.
Such 'lisse' coefficient objects cannot support a good formalism of cohomological operations, and so Berthelot introduced in [10] a theory of arithmetic \(\mathscr{D}\)-modules on mixed characteristic formal schemes. This is analogous to the theory of regular holonomic \(\mathscr{D}\)-modules on complex varieties, and was intended to play the same role in rigid cohomology as that played by the theory of constructible sheaves in \(\ell\)-adic etale cohomology. At least for objects admitting a Frobenius structure, it was proved by Caro-Tsuzuki in [12] that the category of overholonomic \(\mathscr{D}^{\dagger}\)-modules does indeed support a formalism of Grothendieck's six operations.

Despite the fact that both approaches were inspired by the classical Riemann-Hilbert correspondence, they have a rather different flavour. The theory of rigid cohomology is based upon the de Rham cohomology of \(p\)-adic analytic varieties in characteristic \(0\), whereas arithmetic \(\mathscr{D}\)-modules live on \(p\)-adic formal schemes of mixed characteristic \((0,p)\). This may seem like a fairly minor distinction, but it does significantly complicate comparisons between the two. For example, even though the theory of arithmetic \(\mathscr{D}\)-modules was explicitly introduced in order to provide rigid cohomology with a 'six operations' formalism, it has still remained open in general whether or not the rigid cohomology groups of a variety, with coefficients in an overconvergent \(F\)-isocrystal, coincide with the analogous cohomology groups computed using the theory of arithmetic \(\mathscr{D}\)-modules.

The two theories also have somewhat complementary strengths. It is rigid cohomology (and its earlier version, Monsky-Washnitzer cohomology) that is generally more computable, and actually appears in algorithms [12, 1] used to calculate zeta functions of algebraic varieties. On the other hand, it is the theory of arithmetic \(\mathscr{D}\)-modules that has a good cohomological formalism. It is therefore an important problem to relate these two approaches, and combine the strengths of both into one unified overconvergent cohomology theory.

A major step in this direction was taken by Le Stum, who in [13] defined a category of 'constructible isocrystals' on a variety in characteristic \(p\). These are _direct_ generalisations of overconvergent \(F\)-isocrystals, and like them, live on \(p\)-adic analytic varieties in characteristic \(0\). The definition is also very closely analogous to that of the category of constructible \(\ell\)-adic etale sheaves. He then conjectured that there should be a Deligne-Kashiwara correspondence between his category of constructible isocrystals and a certain category of 'perverse arithmetic \(\mathscr{D}\)-modules', and proved this conjecture in the case of smooth and proper curves [13], at least for objects which are of 'Frobenius type'. One of the main results in this article proves Le Stum's conjecture for arbitrary smooth formal schemes, albeit with a slightly stronger 'Frobenius type' hypothesis. In fact, my approach will be to first establish a Riemann-Hilbert type correspondence on the level of derived categories, and then compare t-structures on both sides to deduce an equivalence between the respective hearts. The first goal of this article is therefore to define a derived analogue of Le Stum's category of constructible isocrystals.
To explain the construction, let \(\mathcal{V}\) be a complete DVR with fraction field \(K\) of characteristic \(0\) and perfect residue field \(k\) of characteristic \(p>0\). The definition is the most naive one: if \(\mathfrak{P}\) is a smooth formal scheme over \(\mathcal{V}\), with generic fibre \(\mathfrak{P}_{K}\), then a constructible complex on \(\mathfrak{P}\) is a bounded complex of modules over the ring \(\mathscr{D}_{\mathfrak{P}_{K}}\) of (algebraic) differential operators on \(\mathfrak{P}_{K}\), whose cohomology sheaves are constructible isocrystals in the sense of Le Stum. The category \(\mathbf{D}^{b}_{\mathrm{cons}}(\mathfrak{P})\) of these objects is then viewed simply as a full subcategory of \(\mathbf{D}(\mathscr{D}_{\mathfrak{P}_{K}})\). The hard work then consists of showing that this gives rise to a reasonable theory in characteristic \(p\). Concretely, this amounts to showing that if \(X\hookrightarrow\mathfrak{P}\) is a locally closed immersion, and \(\mathfrak{P}\) is proper, then the full subcategory of \(\mathbf{D}^{b}_{\mathrm{cons}}(\mathfrak{P})\) consisting of objects supported on \(]X[_{\mathfrak{P}}\) is independent of \(\mathfrak{P}\).1 More generally, I show that this is the case provided that \(\mathfrak{P}\) is smooth in a neighbourhood of \(X\), and the closure of \(X\) in \(P\) is proper over \(k\). This gives rise to a derived category \(\mathbf{D}^{b}_{\mathrm{cons}}(X)\) of constructible isocrystals on \(X\), which is therefore a reasonable candidate for admitting a six operations formalism (at least for objects which are of Frobenius type).

Footnote 1: Since I will be working with adic spaces throughout, the tube \(]X[_{\mathfrak{P}}\) considered here behaves very much like the tubes considered in Berkovich geometry, and in particular the formalism of overconvergent sections is in some sense absorbed into the topology of \(]X[_{\mathfrak{P}}\).

The main goal of this article is then to prove an 'overconvergent Riemann-Hilbert correspondence', showing that \(\mathbf{D}^{b}_{\mathrm{cons}}(X)\) is equivalent to the derived category \(\mathbf{D}^{b}_{\mathrm{hol}}(X)\) of overholonomic \(\mathscr{D}^{\dagger}\)-modules on \(X\) in the sense of Caro and Abe-Caro [1] (again, this will only hold for objects of Frobenius type). In fact, since I will be working with constructible objects it suffices to do this on the level of the smooth formal scheme \(\mathfrak{P}\) in which \(X\) has been embedded. In the classical Riemann-Hilbert correspondence, the functor from \(\mathscr{D}\)-modules to constructible sheaves simply takes the (shifted) de Rham complex of an algebraic \(\mathscr{D}\)-module, but in the \(p\)-adic world even constructing the appropriate functor takes a little bit of work. In our previous article [1] we constructed a functor \(\mathrm{sp}_{\dagger}\), _from_ constructible isocrystals _to_ overholonomic \(\mathscr{D}^{\dagger}\)-modules. This was a rather complicated version of pushforward along \(\mathrm{sp}\colon\mathfrak{P}_{K}\to\mathfrak{P}\), and only worked on the level of abelian categories. Here, I construct a functor \(\mathrm{sp}^{!}\) going in the other direction, which is a kind of completed pullback along \(\mathrm{sp}\), shifted by the dimension of \(\mathfrak{P}\) (the reason for this shift will be discussed below). Conceptually, the functor \(\mathrm{sp}^{!}\) is much simpler than \(\mathrm{sp}_{\dagger}\), and works perfectly well on the level of derived categories.
The main result of this article is then the following.

**Theorem** (8.0.1).: _Let \(\mathfrak{P}\) be a smooth formal scheme over \(\mathcal{V}\), \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\subset\mathbf{D}^{b}_{\mathrm{cons}}(\mathfrak{P})\) the full subcategory of objects of Frobenius type,2 and \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\subset\mathbf{D}^{b}_{\mathrm{coh}}(\mathscr{D}^{\dagger}_{\mathfrak{P}})\) the full subcategory of overholonomic complexes of Frobenius type. Then \(\mathrm{sp}^{!}\) induces an equivalence of categories_

Footnote 2: For the purposes of this article, I will take ‘of Frobenius type’ to mean objects which are iterated extensions of those admitting some unspecified \(p^{n}\)-power Frobenius structure (where \(n\) is allowed to vary). This coincides with the terminology from [1], though not with that from [13]. It is what has previously been called ‘\(F\)-able’ in the literature.

\[\mathrm{sp}^{!}\colon\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\stackrel{{\cong}}{{\longrightarrow}}\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P}).\]

The very similar notation \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\) and \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\) hides the fact that objects in the two categories are of a very different nature. On the one hand, we have overholonomic complexes of \(\mathscr{D}^{\dagger}\)-modules on \(\mathfrak{P}\), on the other we have complexes of modules with integrable connection on \(\mathfrak{P}_{K}\) whose cohomology sheaves are constructible. As in the case of varieties over \(\mathbb{C}\), it is important not just to have such an equivalence, but to know how it behaves with respect to the natural cohomological operations on each side. Out of the six functors \(f_{+},f^{+},f_{!},f^{!},\otimes\) and \(\mathbf{D}\) defined for overholonomic \(\mathscr{D}^{\dagger}\)-modules, I give natural interpretations of \(f^{!},f_{+}\) and \(\otimes\) as functors on the derived category of constructible complexes \(\mathbf{D}^{b}_{\mathrm{cons}}(X)\). The extraordinary pullback \(f^{!}\) and tensor product \(\otimes\) are easy: after taking an embedding \(X\hookrightarrow\mathfrak{P}\) they just correspond to ordinary pullback of \(\mathscr{D}_{]X[_{\mathfrak{P}}}\)-modules, and ordinary tensor product over \(\mathcal{O}_{]X[_{\mathfrak{P}}}\). For a smooth and proper morphism \(u\colon\mathfrak{P}\to\mathfrak{Q}\) of formal schemes, the functor \(u_{+}\) for overholonomic \(\mathscr{D}^{\dagger}_{\mathfrak{P},\mathbb{Q}}\)-modules corresponds (up to a shift) with the higher de Rham pushforward of constructible isocrystals. This has the following important consequence.

**Corollary** (8.0.3).: _Let \(u\colon\mathfrak{P}\to\mathfrak{Q}\) be a smooth and proper morphism of smooth formal schemes. Then the functor \(\mathbf{R}u_{\mathrm{dR}*}\) maps \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\) into \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{Q})\)._

For varieties \(X/k\) embedded inside formal schemes, the correct interpretation of \(f_{+}\) on the 'constructible' side turns out to use the compactly supported de Rham pushforwards introduced in [1] (see §9.2 for details). Given the usual duality relations amongst the six functors, the only thing missing from a complete six functor formalism on the constructible side of the correspondence is a suitable interpretation of the duality functor, and I don't currently have a good candidate for this.
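Schematically, and summarising only the statements already made above, the dictionary realised by \(\mathrm{sp}^{!}\) on the operations discussed so far reads
\[f^{!}\ \longleftrightarrow\ \text{ordinary pullback of }\mathscr{D}_{]X[_{\mathfrak{P}}}\text{-modules},\qquad\otimes\ \longleftrightarrow\ \otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}},\qquad u_{+}\ \longleftrightarrow\ \mathbf{R}u_{\mathrm{dR}*}\ \text{(up to shift)},\]
with \(f_{+}\) for embedded varieties corresponding to the compactly supported de Rham pushforwards of [1], and with the duality functor \(\mathbf{D}\) having, as just noted, no known counterpart yet on the constructible side.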
One can also ask about the effect of \(\operatorname{sp}^{!}\) on t-structures. The category \(\mathbf{D}^{b}_{\operatorname{cons},F}(\mathfrak{P})\) admits a natural t-structure coming from the inclusion \(\mathbf{D}^{b}_{\operatorname{cons},F}(\mathfrak{P})\to\mathbf{D}(\mathscr{D}_{\mathfrak{P}_{K}})\). On the other hand, there are three distinct t-structures on \(\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\) that have been previously studied in the literature. The first is just the obvious one coming from the inclusion \(\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\subset\mathbf{D}^{b}_{\operatorname{coh}}(\mathscr{D}^{\dagger}_{\mathfrak{P}})\). The second is what is called the 'constructible' t-structure. Namely, it is the analogue of the t-structure on \(\mathbf{D}^{b}_{\operatorname{rh}}(\mathscr{D}_{X})\) given by transporting the ordinary t-structure on \(\mathbf{D}^{b}_{c}(X^{\operatorname{an}},\mathbb{C})\) along the classical Riemann-Hilbert correspondence. It is characterised by the fact that \(f^{+}\) is t-exact. The third is what I call the 'dual constructible' t-structure, and is defined as the Verdier dual of the constructible t-structure. It is therefore characterised by the fact that \(f^{!}\) is t-exact. It turns out that \(\operatorname{sp}^{!}\) matches up the ordinary t-structure on \(\mathbf{D}^{b}_{\operatorname{cons},F}(\mathfrak{P})\) with the dual constructible t-structure on \(\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\).3 Denoting the heart of the dual constructible t-structure by \(\mathbf{DCon}_{F}(\mathfrak{P})\subset\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\), and the abelian category of constructible isocrystals on \(\mathfrak{P}\) (of Frobenius type) by \(\mathbf{Isoc}_{\operatorname{cons},F}(\mathfrak{P})\), I therefore deduce the following version of Le Stum's conjecture.

Footnote 3: This is the reason for the shift in the definition of \(\operatorname{sp}^{!}\).

**Corollary** (9.2.4).: _Let \(\mathfrak{P}\) be a smooth formal scheme over \(\mathcal{V}\). Then \(\operatorname{sp}^{!}\) induces an equivalence of categories_
\[\mathbf{DCon}_{F}(\mathfrak{P})\xrightarrow{\cong}\mathbf{Isoc}_{\operatorname{cons},F}(\mathfrak{P}).\]

This immediately implies a similar result for varieties over \(k\). Finally, I use the equivalence \(\operatorname{sp}^{!}\) to prove that rigid cohomology (with coefficients) coincides with its \(\mathscr{D}^{\dagger}\)-module counterpart. Recall that in [1, §3], Abe defines a functor \(\rho_{X}\colon\mathbf{Isoc}^{\dagger}_{F}(X)\to\mathbf{D}^{b}_{\operatorname{hol},F}(X)\) from the category of overconvergent isocrystals of Frobenius type on a variety \(X\), to the category of overholonomic complexes on \(X\), building on previous work of Caro.

**Corollary** (10.0.2).: _Let \(X\) be a strongly realisable variety, \(f\colon X\to\operatorname{Spec}\left(k\right)\) the structure morphism, and \(\mathscr{F}\) an overconvergent isocrystal on \(X\) of Frobenius type. Then \(\operatorname{sp}^{!}\) induces an isomorphism_
\[f_{+}\rho_{X}(\mathscr{F})\xrightarrow{\cong}\mathbf{R}\Gamma_{\operatorname{rig}}(X,\mathscr{F})\]
_in \(\mathbf{D}^{b}(K)\)._

Let me now give an outline of the contents of the article. In §1 I recall various preliminary results that I need in the theory of analytic geometry and rigid cohomology. In particular, I state the main results that I will need on compactly supported cohomology of analytic varieties from [1].
In §2 I carefully define the (abelian) category of constructible isocrystals on varieties and pairs, following Le Stum. Then in §3 I define the analogous triangulated category, and prove that it satisfies all the expected invariance and functoriality properties. In §4 I recall how the theory of overholonomic \(\mathscr{D}^{\dagger}\)-modules on varieties and pairs works, and define the three natural t-structures that exist on the derived categories of these objects. In §5 I construct the 'rigidification' functor \(\mathbf{L}\!\operatorname{sp}^{*}\) on the level of \(\mathcal{O}\)-modules, which is a kind of completed pullback along the specialisation morphism \(\operatorname{sp}\colon\mathfrak{P}_{K}\to\mathfrak{P}\) associated to a flat formal scheme \(\mathfrak{P}\) over \(\mathcal{V}\). Then in §6 I upgrade this functor to include \(\mathscr{D}\)-module structures, giving rise to the functor \(\operatorname{sp}^{!}\) from overholonomic complexes of \(\mathscr{D}^{\dagger}\)-modules to (complexes of) constructible isocrystals. I prove that this functor is t-exact for the dual constructible t-structure on the source and the natural t-structure on the target. In §7 I pave the way for proving the Riemann-Hilbert correspondence by establishing a key vanishing result in log rigid cohomology, and then in §8 I prove my main result, that \(\operatorname{sp}^{!}\) is an equivalence on objects of Frobenius type. Then in §9 I describe the compatibility of \(\operatorname{sp}^{!}\) with the various cohomological operations defined for constructible isocrystals and overholonomic \(\mathscr{D}^{\dagger}\)-modules, and finally in §10 I prove the comparison theorem between rigid and \(\mathscr{D}^{\dagger}\)-module cohomology.

**Acknowledgements.** This article has benefited enormously from conversations with Tomoyuki Abe, and could not have been written without his help. In particular, I learned from him the rigidification construction described in §5 below. I would also like to thank Bernard Le Stum for many helpful comments on this and earlier articles, as well as Atsushi Shiho for answering some of my questions about his work. Further acknowledgements to be added after the referee process.

**Notation and conventions.**

* I will denote by \(K\) a complete, discretely valued field of characteristic \(0\), whose residue field \(k\) is perfect of characteristic \(p\). While the general formalism of rigid cohomology does not require \(K\) to be discretely valued, or \(k\) to be perfect, the theory of arithmetic \(\mathscr{D}^{\dagger}\)-modules is generally developed under these assumptions.4 Since my main point of interest is comparisons between the two, I will impose this hypothesis from the beginning. The absolute value on \(K\) (or any valued extension thereof) will be normalised so that \(|p|=p^{-1}\). Footnote 4: Although the requirement that \(k\) is perfect has recently been lifted by [1].
* I will write \(\mathcal{V}\) for the ring of integers of \(K\), \(\mathfrak{m}\) for its maximal ideal, and \(\varpi\) for a choice of uniformiser. I will fix a power \(q=p^{a}\) of \(p\), and assume that \(K\) admits a lift \(\sigma\) of the \(q\)-power Frobenius on \(k\). Frobenius will always mean the \(q\)-power Frobenius.
* A _variety_ will mean a separated and finite type \(k\)-scheme, a _formal scheme_ will mean a separated and (topologically) finite type formal scheme over \(\operatorname{Spf}\left(\mathcal{V}\right)\), and an _analytic variety_ will mean an adic space, separated and locally of finite type over \(\operatorname{Spa}\left(K,\mathcal{V}\right)\). Given a formal scheme \(\mathfrak{P}\), its generic fibre \(\mathfrak{P}_{K}\) is therefore an analytic variety, and its special fibre \(\mathfrak{P}_{k}\) is a variety. I will use fraktur letters to denote formal schemes, and the corresponding roman letters for their special fibres, for example \(P=\mathfrak{P}_{k}\).
* Adjectives such as flat or smooth, when applied to varieties or formal schemes, should be understood to apply to the structure morphism to \(\operatorname{Spec}\left(k\right)\) or \(\operatorname{Spf}\left(\mathcal{V}\right)\). Thus a smooth variety will be a variety that is smooth over \(k\), and a flat formal scheme will be a formal scheme that is flat over \(\mathcal{V}\). Similar adjectives applied to analytic varieties should be understood to apply to the structure morphism to \(\operatorname{Spa}\left(K,\mathcal{V}\right)\).
* If \(\rho\in\sqrt{|K|}\), I will denote by \(\mathbb{D}^{d}_{K}(0;\rho)\) the closed polydisc of radius \(\rho\) over \(K\),5 and by \(\mathbb{D}^{d}_{K}(0;\rho^{-})\) the open polydisc of radius \(\rho\). If \(\mathscr{X}\) is an analytic variety, and \(\#\in\{\emptyset,-\}\), I will denote by \(\mathbb{D}^{d}_{\mathscr{X}}(0;\rho^{\#})\) the fibre product \(\mathbb{D}^{d}_{K}(0;\rho^{\#})\times_{\operatorname{Spa}\left(K,\mathcal{V}\right)}\mathscr{X}\). Footnote 5: via the normalisation \(|p|=p^{-1}\)
* If \(X\) is a topological space, I will denote by \(\operatorname{\mathbf{Sh}}(X)\) the category of (abelian) sheaves on \(X\). If \(\mathcal{O}_{X}\) is a sheaf of (not necessarily commutative) rings on \(X\), I will denote the category of coherent \(\mathcal{O}_{X}\)-modules by \(\operatorname{\mathbf{Coh}}(\mathcal{O}_{X})\).6 Footnote 6: Recall that an \(\mathcal{O}_{X}\)-module \(\mathscr{F}\) is called _coherent_ if it is locally finitely generated, and the kernel of any map \(\mathcal{O}_{U}^{\oplus n}\to\mathscr{F}|_{U}\) on any open subset \(U\subset X\) is also locally finitely generated.
* If \(\mathcal{A}\) is an abelian category I will denote by \(\operatorname{\mathbf{D}^{\#}}(\mathcal{A})\) the derived category with boundedness condition \(\#\in\{\emptyset,+,-,b\}\). If \(\mathcal{A}=\operatorname{\mathbf{Sh}}(X)\) for some topological space \(X\), I will usually write \(\operatorname{\mathbf{Ch}^{\#}}(X)\) and \(\operatorname{\mathbf{D}^{\#}}(X)\) instead, and if \(\mathcal{A}\) is the category of \(\mathcal{O}_{X}\)-modules on a ringed space \((X,\mathcal{O}_{X})\), I will write \(\operatorname{\mathbf{Ch}^{\#}}(\mathcal{O}_{X})\) and \(\operatorname{\mathbf{D}^{\#}}(\mathcal{O}_{X})\).
* If \(A\) is an abelian group (or more generally, a sheaf of abelian groups on a topological space), I will write \(A_{\mathbb{Q}}\) for \(A\otimes_{\mathbb{Z}}\mathbb{Q}\). If \(\mathcal{C}\) is an additive category, I will denote by \(\mathcal{C}_{\mathbb{Q}}\) the corresponding isogeny category.
Thus the objects of \(\mathcal{C}_{\mathbb{Q}}\) are the same as \(\mathcal{C}\), but the hom sets have been tensored with \(\mathbb{Q}\): \(\operatorname{Hom}_{\mathcal{C}_{\mathbb{Q}}}(X,Y)=\operatorname{Hom}_{\mathcal{C}}(X,Y)\otimes_{\mathbb{Z}}\mathbb{Q}\).
* For a morphism \(f\colon(X,\mathcal{O}_{X})\to(Y,\mathcal{O}_{Y})\) of ringed spaces (or more generally, ringed sites) I will use the formalism of \(K\)-injective and \(K\)-flat resolutions from [10] to define the functors \(\operatorname{\mathbf{R}}f_{*}\) and \(\operatorname{\mathbf{L}}f^{*}\) on unbounded derived categories of \(\mathcal{O}_{X}\)- and \(\mathcal{O}_{Y}\)-modules. In order for this construction to be well-behaved, the sites \(X\) and \(Y\) under consideration will need to have bases for their topologies which are of finite cohomological dimension; this will be the case for all sites considered in this article. In particular, if \(Y\) is a point, then this gives the derived global sections functor \(\mathbf{R}\Gamma(X,-)\) for unbounded complexes. The internal hom functor \[\mathbf{R}\underline{\mathrm{Hom}}_{\mathcal{O}_{X}}(-,-)\colon\mathbf{D}(\mathcal{O}_{X})^{\mathrm{op}}\times\mathbf{D}(\mathcal{O}_{X})\to\mathbf{D}(\mathcal{O}_{X})\] can be defined similarly. By then applying \(\mathbf{R}\Gamma(X,-)\), so can the functor \(\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{X}}(-,-)\) taking values in \(\mathbf{D}(\Gamma(X,\mathcal{O}_{X}))\).
* I will use the notions of partition and stratification as defined in [10, 11], although with slightly different terminology. Thus a partition of a Noetherian topological space \(X\) is a finite decomposition \(X=\bigsqcup_{\alpha\in A}X_{\alpha}\) into locally closed subsets \(X_{\alpha}\). It is called a stratification if \(X_{\alpha}\cap\overline{X}_{\beta}\neq\emptyset\implies X_{\alpha}\subset\overline{X}_{\beta}\). This is in fact called a _good_ stratification in [10, 11]. Every partition can be refined to a stratification.

## 1. Preliminaries

In this section I recall some general results and constructions I will need, mostly in rigid analytic geometry and rigid cohomology.

### **Adic spaces.**

In this article, analytic varieties will always be considered as adic spaces. I will therefore write \(\mathbf{An}_{K}\) for the category of adic spaces separated and locally of finite type over \(\mathrm{Spa}\,(K,\mathcal{V})\), and refer to such objects as _analytic varieties_ (over \(K\)). By [13, §1.1.11] there is an equivalence of categories \[(-)_{0}\colon\mathbf{An}_{K}\to\mathbf{Rig}_{K},\qquad\mathscr{X}\mapsto\mathscr{X}_{0}\] between \(\mathbf{An}_{K}\) and the category of separated rigid analytic spaces over \(K\) in the sense of Tate [14]. Denote a quasi-inverse to this functor by \(r(-)\). If \(\mathscr{X}_{\mathrm{an}}\) denotes the analytic site of \(\mathscr{X}\) (that is, the category of open subsets of \(\mathscr{X}\) equipped with its canonical topology), and \(\mathscr{X}_{0,G}\) the \(G\)-site of \(\mathscr{X}_{0}\) (that is, the category of admissible opens equipped with the topology of admissible open coverings), then the functor \[\mathscr{X}_{0,G}\to\mathscr{X}_{\mathrm{an}},\qquad U\mapsto r(U)\] induces an equivalence of toposes \[\mathbf{Sh}(\mathscr{X})\xrightarrow{\cong}\mathbf{Sh}_{G}(\mathscr{X}_{0}),\qquad\mathscr{F}\mapsto\mathscr{F}_{0},\] which is natural in \(\mathscr{X}\) [13, §1.1.11].
In particular, it induces isomorphisms in cohomology \[\mathrm{H}^{q}(\mathscr{X},\mathscr{F})\xrightarrow{\cong}\mathrm{H}^{q}(\mathscr{X}_{0},\mathscr{F}_{0})\] for any abelian sheaf \(\mathscr{F}\). Since \(\left(\mathcal{O}_{\mathscr{X}}\right)_{0}\xrightarrow{\cong}\mathcal{O}_{\mathscr{X}_{0}}\), it also induces an equivalence of categories \[\mathbf{Coh}(\mathcal{O}_{\mathscr{X}})\xrightarrow{\cong}\mathbf{Coh}(\mathcal{O}_{\mathscr{X}_{0}})\] between coherent sheaves on \(\mathscr{X}\) and \(\mathscr{X}_{0}\).

### Frames and tubes.

The basic objects of rigid cohomology are frames and tubes. Since the theory is generally phrased in the language of rigid analytic spaces, I will briefly discuss here the changes that need to be made when using adic spaces instead.

**1.2.1 Definition**.: 1. A pair \((X,Y)\) consists of an open immersion \(X\hookrightarrow Y\) of varieties.7 2. A frame \((X,Y,\mathfrak{P})\) consists of a pair \((X,Y)\) together with a closed immersion \(Y\hookrightarrow\mathfrak{P}\) of formal schemes, such that \(\mathfrak{P}\) is flat.8 Footnote 8: Again, this flatness is over \(\mathcal{V}\), and a more precise term for a frame would be a 'frame over \(\mathcal{V}\)'.

There is an obvious notion of a morphism of pairs or of frames. The two formalisms of rigid cohomology and arithmetic \(\mathscr{D}\)-modules for pairs and varieties work under slightly different hypotheses on the frames involved.

**1.2.2 Definition**.: Let \((X,Y,\mathfrak{P})\) be a frame. 1. \(\mathfrak{P}\) is smooth around \(X\) if there exists an open subscheme \(\mathfrak{U}\subset\mathfrak{P}\), containing \(X\), which is smooth over \(\mathcal{V}\). 2. \((X,Y,\mathfrak{P})\) is an _l.p. frame_ if \(\mathfrak{P}\) is smooth, and admits a locally closed immersion into a smooth and proper formal \(\mathcal{V}\)-scheme.

This leads to two different notions of 'realisability' for pairs or varieties.

**1.2.3 Definition**.: 1. A pair \((X,Y)\) is said to be weakly realisable if there exists a frame \((X,Y,\mathfrak{P})\) with \(\mathfrak{P}\) smooth around \(X\). 2. A pair \((X,Y)\) is said to be strongly realisable if there exists an l.p. frame \((X,Y,\mathfrak{P})\). 3. A variety \(X\) is said to be weakly realisable if there exists a frame \((X,Y,\mathfrak{P})\) with \(Y\) proper and \(\mathfrak{P}\) smooth around \(X\). 4. A variety \(X\) is said to be strongly realisable if there exists an l.p. frame \((X,Y,\mathfrak{P})\) with \(Y\) proper.9 Footnote 9: Equivalently, \(X\) admits a locally closed immersion into a smooth and proper formal scheme.

If \(\mathfrak{P}\) is a formal scheme, there is a continuous specialisation map \[\operatorname{sp}=\operatorname{sp}_{\mathfrak{P}}\colon\mathfrak{P}_{K}\to\mathfrak{P}\cong P.\] Write \([\mathfrak{P}_{K}]\) for the separated quotient of \(\mathfrak{P}_{K}\) in the sense of [11, Chapter 0, §2.3]; this is the set of maximal points of \(\mathfrak{P}_{K}\) equipped with the quotient topology via the map \[\operatorname{sep}=\operatorname{sep}_{\mathfrak{P}_{K}}\colon\mathfrak{P}_{K}\to[\mathfrak{P}_{K}]\] taking a point \(x\) to its maximal generalisation \([x]\). Define a (non-continuous!)
map \[[\operatorname{sp}]=[\operatorname{sp}_{\mathfrak{P}}]\colon\mathfrak{P}_{K}\to P\] as the composite \[\mathfrak{P}_{K}\stackrel{\operatorname{sep}}{\longrightarrow}[\mathfrak{P}_{K}]\stackrel{\operatorname{sp}}{\longrightarrow}P\] where the second map is the restriction of \(\operatorname{sp}\) to the subset \([\mathfrak{P}_{K}]\subset\mathfrak{P}_{K}\). The map \([\operatorname{sp}]\) is in fact anti-continuous: the preimage of a closed subset is open and vice versa. Recall that a subset \(S\subset T\) of a topological space \(T\) is called _constructible_ if it lies in the Boolean algebra generated by the open subsets \(U\) of \(T\) for which the inclusion \(U\to T\) is a quasi-compact morphism. In particular, if \(T=P\) is a variety over \(k\), then the quasi-compactness condition is automatically satisfied.

**1.2.4 Definition**.: For any constructible subset \(S\subset P\), define the tube \(]S[_{\mathfrak{P}}:=[\operatorname{sp}]^{-1}(S)\subset\mathfrak{P}_{K}\).

If \(S\) is closed, then \(]S[_{\mathfrak{P}}\) is an open subspace of \(\mathfrak{P}_{K}\), and if \(S\) is open, then \(]S[_{\mathfrak{P}}\) is a closed subset of \(\mathfrak{P}_{K}\). Since an implicit assumption on \(\mathfrak{P}\) is that it is quasi-compact, it follows that \(]S[_{\mathfrak{P}}\) will be a finite union of locally closed subsets of \(\mathfrak{P}_{K}\). However, it need not be constructible. For example, if \(\mathfrak{P}=\widehat{\mathbb{A}}^{1}_{\mathcal{V}}\) and \(S=\{0\}\subset\mathbb{A}^{1}_{k}\), then \(]0[_{\widehat{\mathbb{A}}^{1}_{\mathcal{V}}}\subset\mathbb{D}^{1}_{K}(0;1)\) is a non-quasi-compact open, and is therefore not constructible. In general, \(]S[_{\mathfrak{P}}\) won't admit any natural structure as an adic space, unless \(S\) is a closed subset of \(P\). I can, however, always consider it as a locally ringed space by equipping it with the restriction \[\mathcal{O}_{]S[_{\mathfrak{P}}}:=\mathcal{O}_{\mathfrak{P}_{K}}|_{]S[_{\mathfrak{P}}}\] of the structure sheaf on \(\mathfrak{P}_{K}\). If \(i\colon S\hookrightarrow S^{\prime}\) is an inclusion of constructible subsets of \(P\), I will generally abuse notation and also write \(i\colon\,]S[_{\mathfrak{P}}\to\,]S^{\prime}[_{\mathfrak{P}}\) for the induced morphism of tubes. This is then naturally a morphism of locally ringed spaces, and if \(i\colon S\hookrightarrow S^{\prime}\) is a locally closed immersion, then \(i\colon\,]S[_{\mathfrak{P}}\to\,]S^{\prime}[_{\mathfrak{P}}\) is topologically the inclusion of a locally closed subset. In this case the functor \[i_{!}\colon\mathbf{Sh}(]S[_{\mathfrak{P}})\to\mathbf{Sh}(]S^{\prime}[_{\mathfrak{P}})\] of extension by zero along \(i\) is defined purely topologically. A sheaf on \(]S^{\prime}[_{\mathfrak{P}}\) is said to be _supported on_ \(]S[_{\mathfrak{P}}\) iff it is in the essential image of this functor (note that we are not necessarily insisting that \(]S[_{\mathfrak{P}}\) be closed in \(]S^{\prime}[_{\mathfrak{P}}\) when we say that a sheaf is supported on \(]S[_{\mathfrak{P}}\)).

Now, suppose that \((f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) is a morphism of frames. Then there is an induced morphism \(]f[_{u}\colon\,]X^{\prime}[_{\mathfrak{P}^{\prime}}\to\,]X[_{\mathfrak{P}}\) of locally ringed spaces, with associated module pullback functor \(]f[_{u}^{*}\).

**1.2.5 Remark**.: When \(u=\mathrm{id}\) and \(f\) is a locally closed immersion, this will often be denoted \(f\colon\,]X^{\prime}[_{\mathfrak{P}}\to\,]X[_{\mathfrak{P}}\) instead of \(]f[_{\mathrm{id}}\).
In general I will often simply write it as \(u\colon\,]X^{\prime}[_{\mathfrak{P}^{\prime}}\to\,]X[_{\mathfrak{P}}\), in a (possibly doomed) attempt to keep notational complexity within reasonable bounds. I will try my best to do this only when there is no possible scope for confusion. By definition of the structure sheaves on \(]X^{\prime}[_{\mathfrak{P}^{\prime}}\) and \(]X[_{\mathfrak{P}}\), if \(u=\mathrm{id}\) and \(f\) is a locally closed immersion, the module pullback and sheaf pullback coincide: \(f^{*}=f^{-1}\). The extension by zero functor \(f_{!}\colon\mathbf{Sh}(]X^{\prime}[_{\mathfrak{P}})\to\mathbf{Sh}(]X[_{\mathfrak{P}})\) can also be upgraded to a functor of \(\mathcal{O}\)-modules, which coincides with the functor on underlying sheaves. The following simple 'base change' result will be used constantly.

**1.2.6 Lemma**.: _Let \((f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) be a morphism of frames, and \(i\colon S\to X\) a locally closed immersion. Set \(S^{\prime}:=f^{-1}(S)\), and write \(i^{\prime}\colon S^{\prime}\to X^{\prime}\) for the induced locally closed immersion. Consider the (topologically Cartesian) diagram_ \[\begin{array}{ccc}]S^{\prime}[_{\mathfrak{P}^{\prime}}&\longrightarrow&]S[_{\mathfrak{P}}\\ {\scriptstyle i^{\prime}}\downarrow&&\downarrow{\scriptstyle i}\\ ]X^{\prime}[_{\mathfrak{P}^{\prime}}&\stackrel{]f[_{u}}{\longrightarrow}&]X[_{\mathfrak{P}}\end{array}\] _of tubes. Then, for any \(\mathcal{O}_{]S[_{\mathfrak{P}}}\)-module \(\mathscr{F}\), the base change map_ \[]f[_{u}^{*}\,i_{*}\mathscr{F}\to i^{\prime}_{*}\,]f[_{u}^{*}\,\mathscr{F}\] _induces an isomorphism_ \[]f[_{u}^{*}\,i_{!}\mathscr{F}\xrightarrow{\cong}i^{\prime}_{!}\,]f[_{u}^{*}\mathscr{F}.\]

Proof.: Set \(T=X\setminus S\) and \(T^{\prime}=X^{\prime}\setminus S^{\prime}=f^{-1}(T)\), and let \(j\colon T\to X\), \(j^{\prime}\colon T^{\prime}\to X^{\prime}\) be the natural inclusions (of constructible subsets). Then the commutativity of the diagram shows that \(j^{\prime-1}\,]f[_{u}^{*}\,i_{!}\mathscr{F}=\,]f[_{u}^{*}\,j^{-1}i_{!}\mathscr{F}=0\). First of all, this implies that the canonical map \(]f[_{u}^{*}\,i_{!}\mathscr{F}\to\,]f[_{u}^{*}\,i_{*}\mathscr{F}\to i^{\prime}_{*}\,]f[_{u}^{*}\mathscr{F}\) factors through the subsheaf \(i^{\prime}_{!}\,]f[_{u}^{*}\,\mathscr{F}\subset i^{\prime}_{*}\,]f[_{u}^{*}\,\mathscr{F}\). Secondly, since \(j^{\prime-1}i^{\prime}_{!}\,]f[_{u}^{*}\,\mathscr{F}=0\) trivially, we can see that to prove \[]f[_{u}^{*}\,i_{!}\mathscr{F}\to i^{\prime}_{!}\,]f[_{u}^{*}\,\mathscr{F}\] is an isomorphism, it suffices to do so after restricting to \(]S^{\prime}[_{\mathfrak{P}^{\prime}}\). But now we have \[i^{\prime-1}\,]f[_{u}^{*}\,i_{!}\mathscr{F}=\,]f[_{u}^{*}\,i^{-1}i_{!}\mathscr{F}=\,]f[_{u}^{*}\,\mathscr{F},\quad i^{\prime-1}i^{\prime}_{!}\,]f[_{u}^{*}\,\mathscr{F}=\,]f[_{u}^{*}\,\mathscr{F},\] and the morphism \[]f[_{u}^{*}\,i_{!}\mathscr{F}\to i^{\prime}_{!}\,]f[_{u}^{*}\mathscr{F}\] restricts to the identity on \(]S^{\prime}[_{\mathfrak{P}^{\prime}}\).

The tube of a subset can be used to formulate the following rather weak notion of smoothness for frames, which will occasionally be useful.

**1.2.7 Definition**.: If \((X,Y,\mathfrak{P})\) is a frame, I will say that \(\mathfrak{P}\) is rig-smooth around \(X\) if \(]X[_{\mathfrak{P}}\) admits an open neighbourhood \(]X[_{\mathfrak{P}}\subset V\subset\mathfrak{P}_{K}\) which is smooth over \(K\).

The non-smooth locus of a morphism of analytic varieties is the subspace defined by a coherent ideal sheaf. Thus, if \(\mathfrak{P}\) is smooth around \(X\), then it is rig-smooth around \(X\). It is easy to provide counter-examples to the converse statement.
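To illustrate the gap between the two notions, here is a standard example from semistable reduction theory (it is not taken from the text above, so the details should be treated as a sketch). Take \[\mathfrak{P}=\operatorname{Spf}\big(\mathcal{V}\langle x,y\rangle/(xy-\varpi)\big),\qquad X=Y=P=\operatorname{Spec}\big(k[x,y]/(xy)\big).\] The special fibre \(P\) is singular at the origin, so \(\mathfrak{P}\) is not smooth around \(X\). On the other hand, \(]X[_{\mathfrak{P}}=\mathfrak{P}_{K}\) is the closed annulus \[\mathfrak{P}_{K}=\big\{(x,y)\in\mathbb{D}^{2}_{K}(0;1)\;\big|\;xy=\varpi\big\}\cong\big\{x\;\big|\;|\varpi|\leq|x|\leq 1\big\},\] which is smooth over \(K\) since \(\varpi\neq 0\) in \(K\); thus \(\mathfrak{P}\) is rig-smooth around \(X\).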
Finally, as well as the open tube \(]Y[_{\mathfrak{P}}\) of a closed subscheme \(Y\hookrightarrow P\) defined above, I will also need the variants \([Y]_{\mathfrak{P},\eta}\) and \(]Y[_{\mathfrak{P},\eta}\), which are defined for \(\eta<1\) sufficiently close to \(1\). When \(\mathfrak{P}\) is affine, and \(f_{1},\dots,f_{r}\in\Gamma(\mathfrak{P},\mathcal{O}_{\mathfrak{P}})\) are such that \(Y=V(\varpi,f_{1},\dots,f_{r})\), then \[]Y[_{\mathfrak{P}}=\big\{\,x\in\mathfrak{P}_{K}\;\big|\;v_{[x]}(f_{i})<1\;\forall i\,\big\}\] by [11, Proposition II.4.2.11]. Berthelot therefore defines the closed and open tubes \[[Y]_{\mathfrak{P},\eta}:=\big\{\,x\in\mathfrak{P}_{K}\;\big|\;v_{x}(f_{i})\leq\eta\;\forall i\,\big\},\qquad]Y[_{\mathfrak{P},\eta}:=\big\{\,x\in\mathfrak{P}_{K}\;\big|\;v_{[x]}(f_{i})<\eta\;\forall i\,\big\}\] of radius \(\eta\). When \(|\varpi|<\eta<1\) these do not depend on the choice of the \(f_{i}\), and hence glue together over an affine covering of \(\mathfrak{P}\), see [1, §1.1.8].

### Overconvergence

Berthelot's functors \(j^{\dagger}\) and \(\underline{\Gamma}^{\dagger}\) of overconvergent sections and sections with support have a very natural interpretation in the world of adic spaces. Let \((X,Y,\mathfrak{P})\) be a frame, and write \(j\colon X\to Y\) for the given open immersion. Let \(i\colon Z\to Y\) be a complementary closed immersion.

**1.3.1 Definition**.: Define endofunctors \[j_{X}^{\dagger}\colon\mathbf{Sh}(]Y[_{\mathfrak{P}})\to\mathbf{Sh}(]Y[_{\mathfrak{P}}),\qquad\underline{\Gamma}_{Z}^{\dagger}\colon\mathbf{Sh}(]Y[_{\mathfrak{P}})\to\mathbf{Sh}(]Y[_{\mathfrak{P}})\] by \(j_{X}^{\dagger}:=j_{*}j^{-1}\) and \(\underline{\Gamma}_{Z}^{\dagger}:=i_{!}i^{-1}\).

These are both exact, and there is an exact sequence \[0\to\underline{\Gamma}_{Z}^{\dagger}\to\mathrm{id}\to j_{X}^{\dagger}\to 0\] of endofunctors of \(\mathbf{Sh}(]Y[_{\mathfrak{P}})\). More generally, if \(j^{\prime}\colon U\to X\gets T\,\colon i^{\prime}\) are complementary open and closed immersions, then we have functors \[j_{U}^{\dagger}:=j^{\prime}_{*}j^{\prime-1}\colon\mathbf{Sh}(]X[_{\mathfrak{P}})\to\mathbf{Sh}(]X[_{\mathfrak{P}}),\qquad\underline{\Gamma}_{T}^{\dagger}:=i^{\prime}_{!}i^{\prime-1}\colon\mathbf{Sh}(]X[_{\mathfrak{P}})\to\mathbf{Sh}(]X[_{\mathfrak{P}})\] sitting in a short exact sequence \[0\to\underline{\Gamma}_{T}^{\dagger}\to\operatorname{id}\to j_{U}^{\dagger}\to 0.\]

To compare these with the original definitions of Berthelot, let \(\mathfrak{P}_{K0}\) denote the rigid analytic generic fibre of \(\mathfrak{P}\) in the sense of [1, §0.2]. In the notation of §1.1 above this is the rigid analytic space \((\mathfrak{P}_{K})_{0}\). Let \(]Y[_{\mathfrak{P}0}\subset\mathfrak{P}_{K0}\) denote the rigid analytic tube in the sense of Berthelot [1, §1.1], and \(j_{X}^{\dagger}\), \(\underline{\Gamma}_{Z}^{\dagger}\) the corresponding endofunctors of \(\mathbf{Sh}_{G}(]Y[_{\mathfrak{P}0})\) as defined in [1, §2.1].

**1.3.2 Proposition**.: _There is a unique isomorphism \(\left(]Y[_{\mathfrak{P}}\right)_{0}\xrightarrow{\cong}]Y[_{\mathfrak{P}0}\) of rigid analytic spaces over \(K\), compatible with the natural open immersions of both sides into \(\mathfrak{P}_{K0}\)._
_Moreover, under this identification the endofunctors \(j_{X}^{\dagger}\) and \(\underline{\Gamma}_{Z}^{\dagger}\) on \(\mathbf{Sh}(]Y[_{\mathfrak{P}})\) and \(\mathbf{Sh}_{G}(]Y[_{\mathfrak{P}0})\) correspond, up to natural isomorphism._

Proof.: Note that \(\mathfrak{P}_{K0}\) can be identified with the set of _rigid_ points of \(\mathfrak{P}_{K}\), and the functor \(U\mapsto U\cap\mathfrak{P}_{K0}\) gives a one-to-one correspondence between admissible open subsets of \(\mathfrak{P}_{K}\) (in the sense of [1, Definition II.B.1.1]) and admissible open subsets of \(\mathfrak{P}_{K0}\) (in the sense of the \(G\)-topology). Since tube open subsets of \(\mathfrak{P}_{K}\) are admissible by [1, Proposition II.B.1.7], the first claim is reduced to showing that \(]Y[_{\mathfrak{P}}\cap\mathfrak{P}_{K0}=]Y[_{\mathfrak{P}0}\) as subsets of \(\mathfrak{P}_{K}\). The question is now local on \(\mathfrak{P}\), which we may assume to be affine. Let \(f_{1},\dots,f_{r}\in\Gamma(\mathfrak{P},\mathcal{O}_{\mathfrak{P}})\) be such that \(Y=V(\varpi,f_{1},\dots,f_{r})\). Then by [1, Proposition II.4.2.11] we can identify \[]Y[_{\mathfrak{P}}=\left\{\,x\in\mathfrak{P}_{K}\;\middle|\;v_{[x]}(f_{i})<1\ \forall i\,\right\},\] where \([x]\) denotes the maximal generalisation of \(x\). Since rigid points \(x\) satisfy \(x=[x]\), the claim now reduces to [1, Proposition 1.1.1]. For the second claim, there are exact sequences \[0\to\underline{\Gamma}_{Z}^{\dagger}\to\operatorname{id}\to j_{X}^{\dagger}\to 0\] of functors on both \(\mathbf{Sh}(]Y[_{\mathfrak{P}})\) and \(\mathbf{Sh}_{G}(]Y[_{\mathfrak{P}0})\); it therefore suffices to consider \(j_{X}^{\dagger}\). In this case, since the topos of an analytic variety is equivalent to that of the associated rigid space, the claim in fact follows from the alternative definition of \(j_{X}^{\dagger}\) given in [1, Proposition 5.3] (for the equivalence with Berthelot's definition, see [1, Proposition 5.1.12]).

On the level of \(\mathcal{O}\)-modules, Proposition 1.3.2 shows that: 1. there is a canonical isomorphism \(\left(j_{X}^{\dagger}\mathcal{O}_{]Y[_{\mathfrak{P}}}\right)_{0}\cong j_{X}^{\dagger}\mathcal{O}_{]Y[_{\mathfrak{P}0}}\); 2. the functor \(\mathscr{F}\mapsto\mathscr{F}_{0}\) induces an equivalence of categories \[\mathbf{Mod}(j_{X}^{\dagger}\mathcal{O}_{]Y[_{\mathfrak{P}}})\xrightarrow{\cong}\mathbf{Mod}(j_{X}^{\dagger}\mathcal{O}_{]Y[_{\mathfrak{P}0}}),\] preserving the full subcategories of coherent modules.

We can then use this to transport all results proved for overconvergent sheaves in the language of rigid analytic spaces into the adic context, for example the following.

**1.3.3 Proposition** (Proposition 2.1.10, [1]).: _The inverse image functor induces an equivalence of categories_ \[\varinjlim_{V}\mathbf{Coh}(\mathcal{O}_{V})\xrightarrow{\cong}\mathbf{Coh}(\mathcal{O}_{]X[_{\mathfrak{P}}})\] _where \(V\) ranges over all open neighbourhoods of \(]X[_{\mathfrak{P}}\) in \(]Y[_{\mathfrak{P}}\)._

In particular, this implies that \(\mathcal{O}_{]X[_{\mathfrak{P}}}\) and \(j^{\dagger}_{X}\mathcal{O}_{]Y[_{\mathfrak{P}}}\) are coherent sheaves of rings on \(]X[_{\mathfrak{P}}\) and \(]Y[_{\mathfrak{P}}\) respectively. Since \(j\colon\,]X[_{\mathfrak{P}}\to\,]Y[_{\mathfrak{P}}\) is a closed immersion on the underlying topological spaces, the following lemma is then elementary.

**1.3.4 Lemma**.: _The functors_ \[j_{*}\colon\mathbf{Coh}(\mathcal{O}_{]X[_{\mathfrak{P}}})\rightleftarrows\mathbf{Coh}(j^{\dagger}_{X}\mathcal{O}_{]Y[_{\mathfrak{P}}})\colon j^{-1}\] _are inverse equivalences of categories._
_If \(\mathscr{E}\) is a coherent \(j^{\dagger}_{X}\mathcal{O}_{]Y[_{\mathfrak{P}}}\)-module, then_ \[\mathrm{H}^{*}(]Y[_{\mathfrak{P}}\,,\mathscr{E})\stackrel{\cong}{\longrightarrow}\mathrm{H}^{*}(]X[_{\mathfrak{P}}\,,j^{-1}\mathscr{E}).\]

### Germs and compactly supported de Rham cohomology

If \((X,Y,\mathfrak{P})\) is a frame, then the tube \(]X[_{\mathfrak{P}}\) is not an adic space, but a closed subset of the adic space \(]Y[_{\mathfrak{P}}\). Moreover, it has the property that it is stable under generalisation inside \(]Y[_{\mathfrak{P}}\); in other words it is an _overconvergent_ closed subset. In our previous work on rigid cohomology and arithmetic \(\mathscr{D}\)-modules [1, 2], we found it useful to formalise this in the notion of an _overconvergent germ_.

**1.4.1 Definition**.: 1. A pre-germ is a pair \((S,\mathscr{X})\) where \(\mathscr{X}\) is an analytic variety, and \(S\subset\mathscr{X}\) is a closed subset. 2. A morphism \(f\colon(S,\mathscr{X})\to(T,\mathscr{Y})\) of pre-germs is a morphism \(f\colon\mathscr{X}\to\mathscr{Y}\) of analytic varieties such that \(f(S)\subset T\). 3. A morphism \(f\colon(S,\mathscr{X})\to(T,\mathscr{Y})\) of pre-germs is called a strict neighbourhood if \(f\colon\mathscr{X}\hookrightarrow\mathscr{Y}\) is an open immersion inducing a homeomorphism \(f\colon S\stackrel{\cong}{\longrightarrow}T\). 4. The category of germs of analytic varieties is the localisation of the category of pre-germs at the class of strict neighbourhoods. 5. A germ \((S,\mathscr{X})\) is called _overconvergent_ if \(S\) is stable under generalisation in \(\mathscr{X}\).

I will generally suppress the ambient adic space \(\mathscr{X}\) from the notation, and write a germ \((S,\mathscr{X})\) simply as \(S\). Thus the motivating example \((]X[_{\mathfrak{P}}\,,]Y[_{\mathfrak{P}})\) of an overconvergent germ will be written simply as \(]X[_{\mathfrak{P}}\). The category of germs admits fibre products, and contains the category of analytic varieties as a full subcategory. Thus it is possible to form spaces such as the relative (open or closed) polydisc \(\mathbb{D}^{s}_{S}(0;\rho^{(-)})\) over a germ \(S\). For example, in the language of germs, Berthelot's strong fibration theorem [1] has the following form.

**1.4.2 Theorem** (Berthelot).: _Let \((f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) be a morphism of frames, such that \(f\) is an isomorphism, \(g\) is proper and \(u\) is smooth in a neighbourhood of \(X^{\prime}\), of relative dimension \(d\)._

1. _If_ \(g\) _is also an isomorphism, then, locally on_ \(X\) _and on_ \(\mathfrak{P}\)_, there exists an isomorphism_ \[]X^{\prime}[_{\mathfrak{P}^{\prime}}\stackrel{\cong}{\longrightarrow}]X[_{\mathfrak{P}}\times_{K}\mathbb{D}^{d}_{K}(0;1^{-})\] _of germs, identifying_ \(]f[_{u}\) _with the first projection._
2. _If_ \(d=0\)_, then_ \[]f[_{u}\colon\,]X^{\prime}[_{\mathfrak{P}^{\prime}}\stackrel{\cong}{\longrightarrow}]X[_{\mathfrak{P}}\] _is an isomorphism of germs._

As with the case of tubes, any germ \(S\) can be viewed as a ringed space by equipping it with the restriction \[\mathcal{O}_{S}:=\mathcal{O}_{\mathscr{X}}|_{S}\] of the structure sheaf from its ambient adic space. Similarly, if \(f\colon S\to T\) is a morphism in \(\mathbf{Germ}_{K}\), the relative de Rham complex \(\Omega^{\bullet}_{S/T}\) is defined via restriction from the ambient adic space.
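As a concrete illustration (a classical computation, not spelled out in the text, so take the details as a sketch): for the frame \((\mathbb{A}^{1}_{k},\mathbb{P}^{1}_{k},\widehat{\mathbb{P}}^{1}_{\mathcal{V}})\), the germ \(]\mathbb{A}^{1}_{k}[_{\widehat{\mathbb{P}}^{1}}\) is the 'overconvergent closed unit disc': the set of points \(x\in\mathbb{P}^{1,\mathrm{an}}_{K}\) whose maximal generalisation satisfies \(|z([x])|\leq 1\). A cofinal family of strict neighbourhoods is given by the closed discs \(\mathbb{D}^{1}_{K}(0;\rho)\) for \(\rho>1\), \(\rho\in\sqrt{|K|}\), and accordingly (compare Proposition 1.3.3) \[\Gamma\big(]\mathbb{A}^{1}_{k}[\,,\mathcal{O}\big)=\operatorname*{colim}_{\rho>1}\Gamma\big(\mathbb{D}^{1}_{K}(0;\rho),\mathcal{O}\big)=\Big\{\sum_{n\geq 0}a_{n}z^{n}\;\Big|\;|a_{n}|\rho^{n}\to 0\text{ for some }\rho>1\Big\},\] the ring of overconvergent power series.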
I will occasionally talk about morphisms of germs being smooth or partially proper, or other similar adjectives familiar from the theory of schemes and/or adic spaces. This should always be understood in the sense of [10, §1.10]. For example, being smooth means that there exists a representative \(f\colon(S,\mathscr{X})\to(T,\mathscr{Y})\) at the level of pre-germs such that \(f\colon\mathscr{X}\to\mathscr{Y}\) is smooth and \(S=f^{-1}(T)\). Partial properness is slightly more involved, see [10, Definition 1.10.15]. In fact, more important for us than partial properness is what might be called partial properness in the sense of Kiehl. This is a morphism \(f\colon S\to T\) which, locally on the source and target, factors through a closed immersion \(S\to\mathbb{D}_{T}^{d}(0;1^{-})\) for some \(d\). For a more detailed discussion of this property, see [1, §4]. Note that being smooth in this sense is a very strong condition on a morphism of germs. For example, if \(S\) is a germ which is _not_ an open subset of its ambient adic space, then the structure morphism \(f\colon S\to\operatorname{Spa}\left(K,\mathcal{V}\right)\) cannot be smooth. In particular, if \((X,Y,\mathfrak{P})\) is a frame, then \(]X[_{\mathfrak{P}}\) cannot be smooth over \(K\) if \(X\) is not closed in \(Y\). It will therefore be helpful to make the following weakening of the definition.

**1.4.3 Definition**.: A morphism \(f\colon(S,\mathscr{X})\to(T,\mathscr{Y})\) of germs is _quasi-smooth_ if there exists an open neighbourhood \(S\subset V\subset\mathscr{X}\) which is smooth over \(\mathscr{Y}\).

Note that this only depends on the induced morphism of germs, and implies that \(\Omega_{S/T}\) is a locally finite free \(\mathcal{O}_{S}\)-module.

**1.4.4 Example**.: If \((X,Y,\mathfrak{P})\) is a frame, then \(]X[_{\mathfrak{P}}\) is quasi-smooth over \(K\) if and only if \(\mathfrak{P}\) is rig-smooth around \(X\). More generally, let \((f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) be a morphism of frames. If \(g\) is proper, then \(]f[_{u}\colon\,]X^{\prime}[_{\mathfrak{P}^{\prime}}\to\,]X[_{\mathfrak{P}}\) is partially proper in the sense of Kiehl. Indeed, the morphism \(]Y^{\prime}[_{\mathfrak{P}^{\prime}}\to\,]Y[_{\mathfrak{P}}\) is a partially proper morphism of analytic varieties, and is hence partially proper in the sense of Kiehl by [10, Remark 1.3.19], and \(]X^{\prime}[_{\mathfrak{P}^{\prime}}\to\,]Y^{\prime}[_{\mathfrak{P}^{\prime}}\) is a closed immersion. If \(u\) is smooth in a neighbourhood of \(X^{\prime}\), then \(]f[_{u}\) is quasi-smooth, because the non-smooth locus of \(]g[_{u}\) is a closed analytic subspace of \(]Y^{\prime}[_{\mathfrak{P}^{\prime}}\). If, moreover, the given square of varieties is Cartesian, then \(]X^{\prime}[_{\mathfrak{P}^{\prime}}=]g[_{u}^{-1}(]X[_{\mathfrak{P}})\), and so \(]f[_{u}\) is smooth. Up to possibly replacing \(Y^{\prime}\) by a closed subscheme containing \(X^{\prime}\), the above square will be Cartesian whenever both \(g\) and \(f\) are proper.

Quasi-smooth germs also satisfy analytic continuation.

**1.4.5 Lemma**.: _Let \(S\) be a connected, quasi-smooth germ, and \(\mathscr{F}\) a locally finite free \(\mathcal{O}_{S}\)-module. Then, for any non-empty open subset \(U\subset S\), the natural map_ \[\Gamma(S,\mathscr{F})\to\Gamma(U,\mathscr{F})\] _is injective._

Proof.: Fix a smooth ambient analytic space \(\mathscr{X}\) for \(S\), and let \(s\in\Gamma(S,\mathscr{F})\) be a section.
If \(U,V\subset S\) are open subsets such that \(s|_{U}=0\) and \(s|_{V}=0\), then \(s|_{U\cup V}=0\). We may therefore take \(U\) maximal with the property that \(s|_{U}=0\). Suppose for contradiction that \(U\neq S\). Consider the collection \(\mathcal{C}\) of all connected open subsets \(V\subset S\) such that:

* \(V=\boldsymbol{V}\cap S\) for a connected open \(\boldsymbol{V}\subset\mathscr{X}\);
* \(\mathscr{F}|_{V}\cong\mathcal{O}_{V}^{\oplus n}\) where \(n=\operatorname{rank}\mathscr{F}\);
* the section \(s|_{V}\) of \(\mathscr{F}|_{V}\cong\mathcal{O}_{V}^{\oplus n}\) extends to a section of \(\Gamma(\boldsymbol{V},\mathcal{O}_{\boldsymbol{V}}^{\oplus n})\).

Note that every point \(x\in S\) admits a cofinal set of neighbourhoods satisfying the first and second of these, any sufficiently small element of which also satisfies the third. Hence \(x\) has at least one neighbourhood satisfying all three. Therefore, such open sets cover \(S\), and since \(S\) is connected, we deduce that there exists some \(V\in\mathcal{C}\) such that \(V\not\subset U\) and \(U\cap V\neq\emptyset\). To contradict the maximality of \(U\), it therefore suffices to show that \(s|_{V}=0\). But we know that \(s|_{U\cap V}=0\), and since \(s|_{V}\) extends to \(\boldsymbol{V}\), it suffices to show that \[\Gamma(\boldsymbol{V},\mathcal{O}_{\boldsymbol{V}})\to\Gamma(U\cap V,\mathcal{O}_{\boldsymbol{V}})\] is injective. If we pick any point \(v\in U\cap V\), it therefore suffices to show that \[\Gamma(\boldsymbol{V},\mathcal{O}_{\boldsymbol{V}})\to\mathcal{O}_{\boldsymbol{V},v}\] is injective. Since \(\boldsymbol{V}\) is smooth and connected, this follows from [1, Proposition 0.1.13].

For any partially proper morphism of germs \(f\colon S\to T\), we defined in [1] a functor \[\mathbf{R}f_{!}\colon\mathbf{D}^{+}(S)\to\mathbf{D}^{+}(T),\] having the following properties:

1. \(f_{!}:=\mathcal{H}^{0}(\mathbf{R}f_{!})\) is the functor of sections with proper support;
2. \(\mathbf{R}f_{!}\) is the total derived functor of \(f_{!}\);
3. there is a natural isomorphism \(\mathbf{R}g_{!}\circ\mathbf{R}f_{!}\xrightarrow{\cong}\mathbf{R}(g\circ f)_{!}\) whenever \(f,g\) are composable partially proper morphisms;
4. \(\mathbf{R}f_{!}=\mathbf{R}f_{*}\) whenever \(f\) is proper;
5. \(\mathbf{R}f_{!}=f_{!}\) is the usual extension by zero functor whenever \(f\) is a (partially proper) locally closed immersion.

If \(f\colon S\to\operatorname{Spa}\left(K,\mathcal{V}\right)\) is the structure map of a germ, I will write \(\mathbf{R}\Gamma_{c}(S,-)\) for \(\mathbf{R}f_{!}\), and \(\operatorname{H}^{i}_{c}(S,-)\) for its cohomology groups. The functor \(\mathbf{R}f_{!}\) preserves module structures, and there is the following version of the projection formula.

**1.4.6 Lemma** ([1], Corollary 3.8.2).: _Let \(f\colon S\to T\) be a partially proper morphism of germs. For any locally free \(\mathcal{O}_{T}\)-module \(\mathscr{F}\) of finite rank, and any complex \(\mathscr{G}\) of \(\mathcal{O}_{S}\)-modules, there is a natural isomorphism_ \[\mathscr{F}\otimes_{\mathcal{O}_{T}}\mathbf{R}f_{!}\mathscr{G}\xrightarrow{\cong}\mathbf{R}f_{!}(f^{*}\mathscr{F}\otimes_{\mathcal{O}_{S}}\mathscr{G})\] _in \(\mathbf{D}^{+}(\mathcal{O}_{T})\)._

In general, the proper base change theorem for \(\mathbf{R}f_{!}\) fails, but there is at least the following partial result.

**1.4.7 Proposition** ([1], Lemma 3.5.2).: _Let_ \[\begin{array}{ccc}S^{\prime}&\stackrel{g^{\prime}}{\longrightarrow}&S\\ {\scriptstyle f^{\prime}}\downarrow&&\downarrow{\scriptstyle f}\\ T^{\prime}&\stackrel{g}{\longrightarrow}&T\end{array}\] _be a Cartesian diagram of germs, such that \(f\) is partially proper, and \(g\) is one of the following:_
1. _a locally closed immersion onto a subspace which is closed under generalisation;_
2. _the inclusion of a maximal point of_ \(T\)_._

_Then, for any \(\mathscr{F}\in\mathbf{D}^{+}(S)\), the base change map_ \[g^{-1}\mathbf{R}f_{!}\mathscr{F}\to\mathbf{R}f_{!}^{\prime}g^{\prime-1}\mathscr{F}\] _is an isomorphism._

If \(f\colon S\to T\) is a quasi-smooth morphism of germs, de Rham pushforwards with or without proper supports can be defined in the usual way. Namely, if \(\mathscr{F}\) is an \(\mathcal{O}_{S}\)-module with integrable connection, then we form the de Rham complex \(\Omega^{\bullet}_{S/T}\otimes_{\mathcal{O}_{S}}\mathscr{F}\) and set \[\mathbf{R}f_{\mathrm{dR}*}\mathscr{F}:=\mathbf{R}f_{*}(\Omega^{\bullet}_{S/T}\otimes_{\mathcal{O}_{S}}\mathscr{F}),\qquad\mathbf{R}f_{\mathrm{dR}!}\mathscr{F}:=\mathbf{R}f_{!}(\Omega^{\bullet}_{S/T}\otimes_{\mathcal{O}_{S}}\mathscr{F})\] as objects of \(\mathbf{D}^{+}(\mathcal{O}_{T})\). If, moreover, \(T\) is quasi-smooth over \(K\), then these can be upgraded to (complexes of) modules over the ring of differential operators on \(T\). Indeed, if \(\mathscr{X}\) and \(\mathscr{Y}\) are ambient analytic varieties for \(S\) and \(T\) respectively, then we set \[\mathscr{D}_{T}:=\mathscr{D}_{\mathscr{Y}}|_{T},\quad\mathscr{D}_{S}:=\mathscr{D}_{\mathscr{X}}|_{S}\;.\] The transfer bimodule \(\mathscr{D}_{T\gets S}:=\mathscr{D}_{\mathscr{Y}\leftarrow\mathscr{X}}|_{S}\) is also defined via restriction, and there is the usual identification \[\Omega^{\bullet}_{S/T}\otimes_{\mathcal{O}_{S}}\mathscr{F}\stackrel{\cong}{\longrightarrow}\mathscr{D}_{T\gets S}\otimes^{\mathbf{L}}_{\mathscr{D}_{S}}\mathscr{F}[-\dim f].\] The left \(\mathscr{D}_{T}\)-module structure on \(\mathscr{D}_{T\gets S}\) therefore upgrades \(\mathbf{R}f_{\mathrm{dR}*}\mathscr{F}\) and \(\mathbf{R}f_{\mathrm{dR}!}\mathscr{F}\) to complexes of \(\mathscr{D}_{T}\)-modules.

### The trace morphism

Now suppose that \(f\colon S\to T\) is a smooth morphism of germs,10 of relative dimension \(d\), and partially proper in the sense of Kiehl. If \(T\) is overconvergent, we constructed in [1, §5] a trace map Footnote 10: recall that this is a very strong condition on \(f\)! \[\mathrm{Tr}\colon\mathbf{R}^{2d}f_{\mathrm{dR}!}\mathcal{O}_{S}\to\mathcal{O}_{T}\] having the following properties:

1. if \(T=\mathrm{Spa}\,(R,R^{+})\) is affinoid, and \(S=\mathbb{D}^{d}_{T}(0;1^{-})\) is the relative open unit disc over \(T\), with co-ordinates \(z_{1},\ldots,z_{d}\), then \(\mathrm{Tr}\) is induced by the residue map \[\mathrm{H}^{d}_{c}(S/T,\omega_{S/T})=R(z_{1}^{-1},\ldots,z_{d}^{-1})^{\dagger}\;d\log z_{1}\wedge\ldots\wedge d\log z_{d}\to R\] \[\sum_{i_{1},\ldots,i_{d}\geq 0}r_{i_{1},\ldots,i_{d}}z_{1}^{-i_{1}}\ldots z_{d}^{-i_{d}}\;d\log z_{1}\wedge\ldots\wedge d\log z_{d}\mapsto r_{0,\ldots,0};\]
2. whenever \(S\) is locally either a \(\mathbb{D}^{d}(0;1^{-})\)-bundle or an \(\mathbb{A}^{d,\mathrm{an}}\)-bundle over \(T\), \(\mathrm{Tr}\) is an isomorphism.

Moreover, we showed that \(\mathbf{R}^{q}f_{\mathrm{dR}!}\mathcal{O}_{S}=0\) if \(q>2d\), and hence the trace map can be viewed as a morphism \[\mathrm{Tr}\colon\mathbf{R}f_{\mathrm{dR}!}\mathcal{O}_{S}[2d]\to\mathcal{O}_{T}.\] The projection formula then gives rise to a trace map \[\mathrm{Tr}_{\mathscr{F}}\colon\mathbf{R}f_{\mathrm{dR}!}f^{*}\mathscr{F}[2d]=\mathbf{R}f_{\mathrm{dR}!}\mathcal{O}_{S}\otimes_{\mathcal{O}_{T}}\mathscr{F}[2d]\to\mathscr{F}\] for any finite locally free \(\mathcal{O}_{T}\)-module \(\mathscr{F}\).
Whenever \(\mathscr{F}\) has the structure of a \(\mathscr{D}_{T}\)-module, this map is \(\mathscr{D}_{T}\)-linear, and whenever \(S\) is locally either a \(\mathbb{D}^{d}(0;1^{-})\)-bundle or an \(\mathbb{A}^{d,\mathrm{an}}\)-bundle over \(T\), this map is an isomorphism.

**1.5.1 Corollary**.: _Let \((f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) be a morphism of frames, such that \(f\) is an isomorphism, \(g\) is proper and \(u\) is smooth in a neighbourhood of \(X^{\prime}\), of relative dimension \(d\). Then the trace map_ \[\mathrm{Tr}\colon\mathbf{R}]f[_{\mathrm{dR}!}\mathcal{O}_{]X^{\prime}[_{\mathfrak{P}^{\prime}}}[2d]\to\mathcal{O}_{]X[_{\mathfrak{P}}}\] _is an isomorphism in \(\mathbf{D}^{b}(\mathscr{D}_{]X[_{\mathfrak{P}}})\)._

Proof.: This follows from Theorem 1.4.2 in exactly the same way as the proof that \[\mathcal{O}_{]X[_{\mathfrak{P}}}\to\mathbf{R}]f[_{\operatorname{dR}*}\,\mathcal{O}_{]X^{\prime}[_{\mathfrak{P}^{\prime}}}\] is an isomorphism, see for example [10, §6.5]. The first step is to show the result is true when \(g\) is an isomorphism, which is a fairly direct consequence of Theorem 1.4.2 and the above properties of the trace map. This then implies that the claim only depends on the given morphism of pairs \((X,Y^{\prime})\to(X,Y)\) and not on the ambient morphism \(u\) of formal schemes. The question is then local on \(Y\), which can therefore be assumed quasi-projective, and one can use Chow's lemma to find a projective morphism \(Y^{\prime\prime}\to Y^{\prime}\) such that \(Y^{\prime\prime}\to Y\) is also projective. Via a two-out-of-three argument, it therefore suffices to prove the result for both \((X,Y^{\prime\prime})\to(X,Y^{\prime})\) and \((X,Y^{\prime\prime})\to(X,Y)\). This therefore reduces to the case when \(g\colon Y^{\prime}\to Y\) is projective. Proposition 1.4.7 together with Berthelot's resolution [11, Proposition 2.1.8] then implies that the question is local on \(X\), and thanks to [10, Lemma 6.5.1] one can then assume that \(u\) is actually étale in a neighbourhood of \(X\). Finally, this case follows directly from Theorem 1.4.2.

### Sheaves on diagrams of spaces

To make precise certain constructions involving derived functors applied to diagrams of complexes, I will need to use the formalism of diagram toposes. So let \(I\) be a category, and \(X\colon I\to\mathbf{Top}\) a diagram in the category of topological spaces. Then there is a natural site \(X_{\operatorname{Zar}}\) associated to \(X\) whose objects are pairs \((i,U)\) where \(i\in I\) and \(U\subset X_{i}\) is an open subset, and morphisms \((i,U)\to(j,V)\) consist of morphisms \(\alpha\colon i\to j\) such that the morphism \(\alpha\colon X_{i}\to X_{j}\) satisfies \(U\subset\alpha^{-1}(V)\). A covering family is just a covering family on a single \(X_{i}\). Sheaves on this site have the usual description as sheaves \(\mathscr{F}_{i}\) on each \(X_{i}\), together with transition maps \(\alpha^{-1}\mathscr{F}_{j}\to\mathscr{F}_{i}\) for any \(\alpha\colon i\to j\) in \(I\), satisfying the usual compatibility condition for commutative triangles in \(I\). The sheaf \(\mathscr{F}_{i}\) is called the restriction of \(\mathscr{F}\) to \(X_{i}\). I will usually abuse terminology and talk about sheaves on the diagram \(X\) itself.

**1.6.1 Example**.:
1. If \(X\) is a diagram of ringed spaces, then \(X_{\operatorname{Zar}}\) is naturally ringed by the sheaf of rings \(\mathcal{O}_{X}\) whose restriction to each \(X_{i}\) is \(\mathcal{O}_{X,i}:=\mathcal{O}_{X_{i}}\). 2. If \(X\) is a constant diagram, then the category of sheaves on \(X\) is equivalent to the category of \(I^{\operatorname{op}}\)-shaped diagrams of sheaves on the single topological space \(X\).

Now suppose that \(Y\colon J\to\mathbf{Top}\) is another diagram of topological spaces. Suppose furthermore that there is a functor \(f^{-1}\colon J\to I\), together with a morphism \(f\colon X\circ f^{-1}\to Y\) of \(J\)-shaped diagrams in \(\mathbf{Top}\). There is then a natural functor \[f^{-1}\colon Y_{\operatorname{Zar}}\to X_{\operatorname{Zar}},\qquad(j,V)\mapsto(f^{-1}(j),f^{-1}(V)).\] Explicitly, if \((j,V)\in Y_{\operatorname{Zar}}\), with \(V\subset Y_{j}\), then \(f\colon X_{f^{-1}(j)}\to Y_{j}\), and thus \(f^{-1}(V)\subset X_{f^{-1}(j)}\).

**1.6.2 Lemma**.: _Assume that for every \(i\in I\), the category \(i/J\) is cofiltered. Then \(f^{-1}\) induces a morphism of sites \(f\colon X_{\operatorname{Zar}}\to Y_{\operatorname{Zar}}\)._

Proof.: The functor \(f^{-1}\) induces, by the usual formulae, a functor \(f^{-1}\) from sheaves on \(Y_{\operatorname{Zar}}\) to those on \(X_{\operatorname{Zar}}\), and the content of the lemma is that this functor is exact. Explicitly, if \(\mathscr{F}\) is a sheaf on \(Y_{\operatorname{Zar}}\), and \(i\in I\), then for any \(j\in i/J\) there is the given morphism \(f_{j}\colon X_{i}\to Y_{j}\), and the restriction of \(f^{-1}\mathscr{F}\) to \(X_{i}\) is given explicitly by \(\operatorname*{colim}_{(i/J)^{\operatorname{op}}}f_{j}^{-1}\mathscr{F}_{j}\). (This can be seen by observing that the functor thus defined is adjoint to \(f_{*}\).) If each \((i/J)^{\operatorname{op}}\) is filtered, then colimits of diagrams indexed by \((i/J)^{\operatorname{op}}\) are exact, and thus \(f^{-1}\) is exact as required.

Of course, if \(X\) and \(Y\) are diagrams of ringed spaces, and \(f\colon X\circ f^{-1}\to Y\) is a morphism of diagrams of ringed spaces, then \(f\) can be naturally promoted to a morphism of ringed sites (at least under the assumptions of Lemma 1.6.2).

**1.6.3 Example**.: Suppose that \(I=\{*\}\) is the category with a single object and morphism. Then a morphism \(f\colon X\to Y\) amounts to a compatible system of morphisms \(f_{j}\colon X\to Y_{j}\) for each \(j\). The condition of Lemma 1.6.2 amounts to saying that \(J\) is cofiltered. In this case, if \(\mathscr{F}=\{\mathscr{F}_{j}\}_{j\in J}\) is a sheaf on \(Y\), then \(f^{-1}\mathscr{F}=\operatorname*{colim}_{J^{\operatorname*{op}}}f_{j}^{-1}\mathscr{F}_{j}\).

It is also possible to form morphisms of toposes by taking limits, although of course this doesn't fit into the general formalism of Lemma 1.6.2. To describe this construction, let \(X\) be a topological space, \(I\) a filtered category, and \(X_{I}\colon I\to\mathbf{Top}\) the constant diagram with value \(X\). Thus a sheaf on \(X_{I}\) is just an \(I^{\operatorname*{op}}\)-shaped diagram of sheaves on \(X\). The functor \[\mathbf{Sh}(X_{I})\to\mathbf{Sh}(X),\qquad\{\mathscr{F}_{i}\}_{i\in I}\mapsto\lim_{I}\mathscr{F}_{i}\] is then the pushforward for a morphism of toposes \(X_{I}\to X\). The adjoint pullback functor \[\mathbf{Sh}(X)\to\mathbf{Sh}(X_{I})\] is, of course, the 'constant diagram' functor.
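To see how this formalism can be used, here is one natural instance of Example 1.6.3, phrased in the notation of §1.3 (an illustration not made explicit in the text): given a frame \((X,Y,\mathfrak{P})\), take \(J\) to be the category of open neighbourhoods \(V\) of \(]X[_{\mathfrak{P}}\) in \(]Y[_{\mathfrak{P}}\), ordered by inclusion; this is cofiltered, since the intersection of two such neighbourhoods is again one. Take the target diagram to be \(V\mapsto V\) indexed by \(J\), and the source to be the constant diagram on the single space \(]X[_{\mathfrak{P}}\), with the inclusions \(f_{V}\colon\,]X[_{\mathfrak{P}}\hookrightarrow V\). A sheaf on the target is then a compatible family \(\{\mathscr{F}_{V}\}_{V}\), and the formula of Example 1.6.3 gives \[f^{-1}\mathscr{F}=\operatorname*{colim}_{V}f_{V}^{-1}\mathscr{F}_{V},\] which is exactly the kind of colimit appearing in Proposition 1.3.3.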
### Limits and colimits

For want of a better place to put them, I prove here a couple of simple lemmas on limits and colimits that will be useful later on.

**1.7.1 Lemma**.: _Let \(A,B\colon I\to\mathbf{Ab}\) be filtered diagrams of abelian groups, and write \(B^{\mathbb{N}}\) for the diagram given by \(i\mapsto B(i)^{\mathbb{N}}:=\prod_{n\in\mathbb{N}}B(i)\). Suppose that we are given a morphism_ \[F\colon A\to B^{\mathbb{N}}\] _of diagrams. Assume that:_

* \(F\) _is level-wise injective;_
* _for each_ \(i\in I\)_, the kernels of the transition maps_ \(B(i)\to B(j)\) _for_ \(j\geq i\) _eventually stabilise._11 Footnote 11: More formally, for each \(i\in I\), there exists some \(j_{0}\geq i\), such that for all \(j\geq j_{0}\), the inclusion \(\ker\left(B(i)\to B(j_{0})\right)\to\ker\left(B(i)\to B(j)\right)\) is an equality.

_Then the natural map_ \[\operatorname*{colim}_{I}A\to\left(\operatorname*{colim}_{I}B\right)^{\mathbb{N}}\] _is also injective._

Proof.: Since \(F\) is levelwise injective, we see that the map \[\operatorname*{colim}_{I}A\to\operatorname*{colim}_{I}(B^{\mathbb{N}})\] is injective; it therefore suffices to prove that the natural map \[\operatorname*{colim}_{I}(B^{\mathbb{N}})\to\left(\operatorname*{colim}_{I}B\right)^{\mathbb{N}}\] is injective. An element of \(\operatorname*{colim}_{I}(B^{\mathbb{N}})\) is represented by an element \(i\in I\) and a collection \((b_{n})\in B(i)^{\mathbb{N}}\). Such an element mapping to zero in \((\operatorname*{colim}_{I}B)^{\mathbb{N}}\) says that for each \(n\), there exists \(i_{n}\geq i\) such that \(b_{n}\in\ker\left(B(i)\to B(i_{n})\right)\). The hypothesis that the kernels eventually stabilise means that I can pick a single \(j\in I\) such that \(b_{n}\in\ker\left(B(i)\to B(j)\right)\) for all \(n\), thus \((b_{n})\in\ker\left(B(i)^{\mathbb{N}}\to B(j)^{\mathbb{N}}\right)\). This implies that \((b_{n})\) represents the zero element in \(\operatorname*{colim}_{I}(B^{\mathbb{N}})\), and so \[\operatorname*{colim}_{I}(B^{\mathbb{N}})\to\left(\operatorname*{colim}_{I}B\right)^{\mathbb{N}}\] is injective as claimed.

**1.7.2 Lemma**.: _Let \(X\) be a topological space, and \(\mathscr{K}_{n}\) a uniformly bounded \(\mathbb{N}\)-indexed projective system of complexes on \(X\). Suppose that for all \(q\in\mathbb{Z},n\in\mathbb{N}\), the transition maps in cohomology \(\mathcal{H}^{q}(\mathscr{K}_{n+1})\to\mathcal{H}^{q}(\mathscr{K}_{n})\) are zero. Then \(\mathbf{R}\varprojlim_{n}\mathscr{K}_{n}=0\)._

Proof.: By writing \(\mathscr{K}_{\bullet}\) as an iterated extension of its cohomology sheaves, I can assume that in fact \(\mathscr{K}_{\bullet}\) is concentrated in a single degree. The assumption that the transition maps \(\mathscr{K}_{n+1}\to\mathscr{K}_{n}\) are zero implies that the natural morphism of projective systems \[\mathscr{K}_{\bullet+1}\to\mathscr{K}_{\bullet}\] factors through the zero projective system. It follows that the identity map on \(\mathbf{R}\varprojlim_{n}\mathscr{K}_{n}\) factors through the zero map, and so \(\mathbf{R}\varprojlim_{n}\mathscr{K}_{n}=0\).

## 2. Constructible isocrystals

Constructible isocrystals on varieties were introduced in [16], and their study continued in [16, 17], where most results are phrased in terms of Le Stum's overconvergent site. It will be helpful to have a version of this formalism in the language of _realisations_ of constructible isocrystals on tubes. In this section, the aim is therefore to restate many of the foundational results of [16], but in the context of realisations.
For a frame \((X,Y,\mathfrak{P})\), I want to be able to work locally on \(\mathfrak{P}\), which makes it important to work without properness hypotheses on \(Y\). This makes it necessary to develop a theory of constructible isocrystals not just for varieties, but for pairs \((X,Y)\) consisting of a variety \(X\) and a partial compactification \(Y\). In fact, this case is implicitly dealt with in [16]; however, it will be more convenient here to phrase everything from the beginning in terms of realisations.

### Constructible modules

I start by introducing the \(\mathcal{O}\)-modules underlying constructible isocrystals.

**2.1.1 Definition**.: Let \((X,Y,\mathfrak{P})\) be a frame, and let \(\mathscr{F}\) be an \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module. Then \(\mathscr{F}\) is called _constructible_ if there exists a stratification \(\{i_{\alpha}\colon X_{\alpha}\to X\}_{\alpha\in A}\) such that, for each \(\alpha\in A\), \(i_{\alpha}^{-1}\mathscr{F}\) is a coherent \(\mathcal{O}_{]X_{\alpha}[_{\mathfrak{P}}}\)-module. It is called _constructible locally free_ if a stratification can be chosen such that each \(i_{\alpha}^{-1}\mathscr{F}\) is a locally finite free \(\mathcal{O}_{]X_{\alpha}[_{\mathfrak{P}}}\)-module.

The following facts about constructible and constructible locally free modules can be verified immediately.

1. If \(\mathscr{F}\) is an \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module, and \(\{i_{\alpha}\colon X_{\alpha}\to X\}_{\alpha\in A}\) is a stratification of \(X\), then \(\mathscr{F}\) is constructible (resp. constructible locally free) if and only if each \(i_{\alpha}^{-1}\mathscr{F}\) is.
2. If \(i\colon Z\to X\) is a locally closed immersion, and \(\mathscr{F}\) is an \(\mathcal{O}_{]Z[_{\mathfrak{P}}}\)-module, then \(\mathscr{F}\) is constructible (resp. constructible locally free) if and only if \(i_{!}\mathscr{F}\) is constructible (resp. constructible locally free) as an \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module. The exact functors \[i_{!}\colon\mathbf{Mod}(\mathcal{O}_{]Z[_{\mathfrak{P}}})\leftrightarrows\mathbf{Mod}(\mathcal{O}_{]X[_{\mathfrak{P}}})\colon i^{-1}\] induce an equivalence of categories between constructible (resp. constructible locally free) \(\mathcal{O}_{]Z[_{\mathfrak{P}}}\)-modules, and constructible (resp. constructible locally free) \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-modules which are supported on \(]Z[_{\mathfrak{P}}\).
3. If \(i\colon Z\to X\) is a closed immersion, with open complement \(j\colon U\to X\), then every constructible (resp. constructible locally free) \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module \(\mathscr{F}\) sits in a short exact sequence \[0\to i_{!}i^{-1}\mathscr{F}\to\mathscr{F}\to j_{*}j^{-1}\mathscr{F}\to 0\] where \(i^{-1}\mathscr{F}\) and \(j^{-1}\mathscr{F}\) are constructible (resp. constructible locally free) modules on \(]Z[_{\mathfrak{P}}\) and \(]U[_{\mathfrak{P}}\) respectively.
4. Every constructible (resp. constructible locally free) \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module \(\mathscr{F}\) admits a finite composition series \[0=\mathscr{F}_{0}\subset\mathscr{F}_{1}\subset\ldots\subset\mathscr{F}_{n}=\mathscr{F},\] such that, for each \(1\leq\alpha\leq n\), there exists a locally closed subscheme \(i_{\alpha}\colon Z_{\alpha}\to X\), a coherent (resp. locally finite free) \(\mathcal{O}_{]Z_{\alpha}[_{\mathfrak{P}}}\)-module \(\mathscr{G}_{\alpha}\), and an isomorphism \[\mathscr{F}_{\alpha}/\mathscr{F}_{\alpha-1}\stackrel{\cong}{\longrightarrow}i_{\alpha!}\mathscr{G}_{\alpha}.\]
5. The category of constructible (resp. constructible locally free) \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-modules is an abelian subcategory of \(\mathbf{Mod}(\mathcal{O}_{]X[_{\mathfrak{P}}})\), closed under extensions. In particular, every \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module admitting a filtration as in (4) is constructible (resp. constructible locally free if each \(\mathscr{G}_{\alpha}\) is locally free).
6. If \(\mathscr{F}\) and \(\mathscr{G}\) are constructible (resp. constructible locally free) then so is \(\mathscr{F}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}\mathscr{G}\).

_2.1.2 Remark_.: Note that the functors \(i_{!}i^{-1}\) and \(j_{*}j^{-1}\) appearing in (3) are the functors \(\underline{\Gamma}^{\dagger}_{Z}\) and \(j^{\dagger}_{U}\) introduced in §1.3 above.

**2.1.3 Lemma**.: _Any constructible locally free \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module is flat._

Proof.: Let \(\{i_{\alpha}\colon X_{\alpha}\to X\}_{\alpha\in A}\) be a stratification such that each \(i_{\alpha}^{-1}\mathscr{F}\) is locally finite free. Note that the tubes \(]X_{\alpha}[_{\mathfrak{P}}\) cover \(]X[_{\mathfrak{P}}\), and, by definition, we have \(\mathcal{O}_{]X_{\alpha}[_{\mathfrak{P}}}=i_{\alpha}^{-1}\mathcal{O}_{]X[_{\mathfrak{P}}}\). Since flatness can be checked on stalks, we can therefore replace \(X\) by \(X_{\alpha}\), and thus assume that \(\mathscr{F}\) is a locally finite free \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module. In this case the claim is clear.

_2.1.4 Remark_.: In general, constructible \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-modules which are not constructible locally free need not be flat.

**2.1.5 Lemma**.: _Suppose that \(\mathscr{F}\in\mathbf{Mod}(\mathcal{O}_{]X[_{\mathfrak{P}}})\)._

1. _Let_ \(\{\mathfrak{P}_{\alpha}\}_{\alpha\in A}\) _be a finite open cover of_ \(\mathfrak{P}\)_, and_ \(X_{\alpha}:=X\cap\mathfrak{P}_{\alpha}\)_. Then_ \(\mathscr{F}\) _is constructible (resp. constructible locally free) if and only if each_ \(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}_{\alpha}}}\) _is._
2. _Let_ \(\{X_{\alpha}\}_{\alpha\in A}\) _be a finite open cover of_ \(X\)_. Then_ \(\mathscr{F}\) _is constructible (resp. constructible locally free) if and only if each_ \(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}}}\) _is._

Proof.: In both cases the 'only if' direction is clear; I will prove the 'if' direction.

1. Given stratifications of each \(X_{\alpha}\), there exists a stratification of \(X\) whose restriction to each \(X_{\alpha}\) is a refinement of the given one. Working on each of the given strata, the question therefore amounts to showing that if \(\mathscr{F}\) is an \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module, and each \(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}_{\alpha}}}\) is coherent (resp. locally finite free), then \(\mathscr{F}\) is coherent (resp. locally finite free). In this case, the tubes \(]X_{\alpha}[_{\mathfrak{P}_{\alpha}}=]X[_{\mathfrak{P}}\cap\mathfrak{P}_{\alpha K}\) form an open cover of \(]X[_{\mathfrak{P}}\), and the claim then follows.
2. Arguing similarly, I need to show that if \(\mathscr{F}\) is an \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module, and each \(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}}}\) is coherent (resp. locally finite free), then \(\mathscr{F}\) is coherent (resp. locally finite free).
Thanks to [1, Proposition 2.1.8], \(\mathscr{F}\) sits in an exact sequence \[0\to\mathscr{F}\to\bigoplus_{\alpha\in A}j^{\dagger}_{X_{\alpha}}\mathscr{F}\to\bigoplus_{\alpha,\beta}j^{\dagger}_{X_{\alpha}\cap X_{\beta}}\mathscr{F},\] and applying [1, Proposition 5.4.12] shows that \(\mathscr{F}\) has to be coherent. If each \(j^{\dagger}_{X_{\alpha}}\mathscr{F}\) is moreover locally free, then following through the _proof_ of [1, Proposition 5.4.12], which is essentially based upon Proposition 1.3.3 quoted above, proves that in fact \(\mathscr{F}\) is locally free.

If \(u\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) is a morphism of frames, it is a straightforward check to show that the functor \[u^{*}\colon\mathbf{Mod}(\mathcal{O}_{]X[_{\mathfrak{P}}})\to\mathbf{Mod}(\mathcal{O}_{]X^{\prime}[_{\mathfrak{P}^{\prime}}})\] preserves both constructible and constructible locally free modules. I don't know in what generality the projection formula holds for constructible \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-modules, but the following special cases will suffice.

**2.1.6 Proposition**.: _Let \(\mathfrak{P}\) be a flat formal scheme, \(X\hookrightarrow\mathfrak{P}\) a locally closed immersion, \(\mathscr{F}\) a constructible locally free \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module, and \(f\colon V\to\,]X[_{\mathfrak{P}}\) a morphism of germs._

1. _If_ \(f\) _is quasi-compact, then the map_ \[\mathbf{R}f_{*}\mathcal{O}_{V}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}^{\mathbf{L}}\mathscr{F}\to\mathbf{R}f_{*}\mathbf{L}f^{*}\mathscr{F}\] _is an isomorphism in_ \(\mathbf{D}(\mathcal{O}_{]X[_{\mathfrak{P}}})\)_._
2. _If_ \(f\) _is partially proper, then there is a natural (in_ \(\mathscr{F}\)_) isomorphism_ \[\mathbf{R}f_{!}\mathcal{O}_{V}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}^{\mathbf{L}}\mathscr{F}\xrightarrow{\cong}\mathbf{R}f_{!}\mathbf{L}f^{*}\mathscr{F}\] _in_ \(\mathbf{D}^{+}(\mathcal{O}_{]X[_{\mathfrak{P}}})\)_._

Proof.: The proofs in the two cases both ultimately rely on the same result in general topology: if \(g\colon S\to T\) is a coherent12 morphism of locally coherent13 and sober14 topological spaces, \(\mathscr{G}\) is a sheaf on \(S\), \(t\in T\), and \(G(t)\subset T\) is the set of generalisations of \(t\), then Footnote 12: quasi-compact and quasi-separated Footnote 13: coherent = quasi-compact, quasi-separated, and with a basis of quasi-compact opens Footnote 14: every irreducible subset has a unique generic point \[(\mathbf{R}^{q}g_{*}\mathscr{G})_{t}\xrightarrow{\cong}\mathrm{H}^{q}(g^{-1}(G(t)),\mathscr{G})\] for all \(q\geq 0\). This can be seen, for example, by taking a cofinal system \(\{U_{i}\}_{i\in I}\) of neighbourhoods of \(t\) in \(T\), and applying [18, Chapter 0, Proposition 3.1.19] to the inverse system \(g^{-1}(U_{i})\). It then follows, by passing to stalks, that if \(i\colon T^{\prime}\to T\) is the inclusion of a locally closed subset which is stable under generalisation, then the base change map \[i^{-1}\mathbf{R}g_{*}\to\mathbf{R}g^{\prime}_{*}i^{\prime-1}\] associated to the Cartesian square \[\begin{array}{ccc}g^{-1}(T^{\prime})&\stackrel{i^{\prime}}{\longrightarrow}&S\\ {\scriptstyle g^{\prime}}\downarrow&&\downarrow{\scriptstyle g}\\ T^{\prime}&\stackrel{i}{\longrightarrow}&T\end{array}\] is an isomorphism. In case (1), the dependence on this base change result is explicit in the proof.
In this case, since constructible locally free modules are flat, the tensor products and pullbacks in the statement above are really underived, thus I need to show that the map \[\mathbf{R}f_{*}\mathcal{O}_{V}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}\mathscr{F}\to\mathbf{R}f_{*}f^{*}\mathscr{F}\] is an isomorphism. Choose a stratification \(\{i_{\alpha}\colon X_{\alpha}\to X\}_{\alpha\in A}\) such that each \(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}}}\) is locally finite free. Letting \[f_{\alpha}\colon V_{\alpha}:=f^{-1}\left(]X_{\alpha}[_{\mathfrak{P}}\right)\to\,]X_{\alpha}[_{\mathfrak{P}}\] denote the pullback, the classical projection formula shows that \[\mathbf{R}f_{\alpha*}\mathcal{O}_{V_{\alpha}}\otimes_{\mathcal{O}_{]X_{\alpha}[_{\mathfrak{P}}}}(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}}})\to\mathbf{R}f_{\alpha*}f_{\alpha}^{*}(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}}})\] is an isomorphism. Hence, arguing via stalks, it suffices to show that for each \(\alpha\in A\), the base change map \[(\mathbf{R}f_{*}f^{*}\mathscr{F})|_{]X_{\alpha}[_{\mathfrak{P}}}\to\mathbf{R}f_{\alpha*}f_{\alpha}^{*}(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}}})\] is an isomorphism, together with the analogous result with \(\mathscr{F}\) replaced by \(\mathcal{O}_{]X[_{\mathfrak{P}}}\). Since each \(]X_{\alpha}[_{\mathfrak{P}}\subset]X[_{\mathfrak{P}}\) is stable under generalisation, this follows from the general topological result above.

In case (2), the dependence on the above-mentioned 'base change' result is hidden in the proof of Proposition 1.4.7. In this case, I first construct a morphism \[\mathscr{F}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}\mathbf{R}f_{!}\mathcal{O}_{V}\to\mathbf{R}f_{!}f^{*}\mathscr{F}. \tag{2.1.7}\] To do this, choose an injective resolution \(\mathcal{O}_{V}\to\mathscr{I}\). Then flatness of \(\mathscr{F}\) over \(\mathcal{O}_{]X[_{\mathfrak{P}}}\) implies that \(f^{*}\mathscr{F}\to f^{*}\mathscr{F}\otimes_{\mathcal{O}_{V}}\mathscr{I}\) is a quasi-isomorphism. Next, choose an injective resolution \(f^{*}\mathscr{F}\otimes_{\mathcal{O}_{V}}\mathscr{I}\to\mathscr{J}\), so that \(\mathscr{J}\) is also an injective resolution of \(f^{*}\mathscr{F}\). It is therefore enough to construct a map \[\mathscr{F}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}f_{!}\mathscr{I}\to f_{!}\mathscr{J}.\] But now, such a map can be found factoring through the map \[f_{!}(f^{*}\mathscr{F}\otimes_{\mathcal{O}_{V}}\mathscr{I})\to f_{!}\mathscr{J}\] simply by noting that the map \[\mathscr{F}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}f_{*}\mathscr{I}\to f_{*}(f^{*}\mathscr{F}\otimes_{\mathcal{O}_{V}}\mathscr{I})\] coming from adjunction sends \[\mathscr{F}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}f_{!}\mathscr{I}\subset\mathscr{F}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}f_{*}\mathscr{I}\] into \[f_{!}(f^{*}\mathscr{F}\otimes_{\mathcal{O}_{V}}\mathscr{I})\subset f_{*}(f^{*}\mathscr{F}\otimes_{\mathcal{O}_{V}}\mathscr{I}).\] The proof that (2.1.7) is an isomorphism now proceeds exactly as in case (1), using dévissage and Proposition 1.4.7 to reduce to the projection formula for locally free sheaves, that is, to Lemma 1.4.6.

### Convergent stratifications and connections

I now introduce convergent stratifications on \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-modules. If \((X,Y,\mathfrak{P})\) is a frame, then embedding \(Y\) in \(\mathfrak{P}^{2}\) via the diagonal gives rise to a frame \((X,Y,\mathfrak{P}^{2})\).
Consider the two morphisms of frames \[p_{0},p_{1}\colon(X,Y,\mathfrak{P}^{2})\to(X,Y,\mathfrak{P})\] coming from the projection maps \(p_{i}\colon\mathfrak{P}^{2}\to\mathfrak{P}\), as well as the morphism \[\Delta\colon(X,Y,\mathfrak{P})\to(X,Y,\mathfrak{P}^{2})\] induced by the diagonal map \(\Delta\colon\mathfrak{P}\to\mathfrak{P}^{2}\), which is a common section to the \(p_{i}\). As in Remark 1.2.5, I will write \[p_{i}\colon]X[_{\mathfrak{P}^{2}}\to]X[_{\mathfrak{P}},\qquad\Delta\colon]X[_{\mathfrak{P}}\to]X[_{\mathfrak{P}^{2}}\] for the induced morphisms of tubes.

**2.2.1 Definition**.: Let \((X,Y,\mathfrak{P})\) be a frame with \(\mathfrak{P}\) smooth around \(X\), and \(\mathscr{F}\) an \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module. Then a convergent stratification on \(\mathscr{F}\) is an isomorphism \[\epsilon\colon p_{1}^{*}\mathscr{F}\to p_{0}^{*}\mathscr{F}\] of \(\mathcal{O}_{]X[_{\mathfrak{P}^{2}}}\)-modules, restricting to the identity along \(\Delta\), and satisfying the cocycle condition on \(]X[_{\mathfrak{P}^{3}}\).

**2.2.2 Remark**.:

1. Of course, the definition makes sense without any smoothness hypothesis on \(\mathfrak{P}\), but will only be a reasonable one when \(\mathfrak{P}\) is smooth around \(X\).
2. Note that what I have called 'convergent' here might more properly be termed 'overconvergent along \(Y\setminus X\)'. Given the wide range of different kinds of isocrystals that will appear in this article, I have chosen the slightly simpler terminology to avoid multiplying adjectives beyond necessity. I will try to avoid any potential confusion this might cause.

Now let \(\mathfrak{P}^{(n)}\) denote the \(n\)th infinitesimal neighbourhood of \(\mathfrak{P}\) inside \(\mathfrak{P}^{2}\). The restriction of \(\epsilon\) to each \(]X[_{\mathfrak{P}^{(n)}}\) gives rise to a stratification on \(\mathscr{F}\) in the usual sense. Since \(\mathfrak{P}\) is smooth around \(X\), there exists an open neighbourhood \(]X[_{\mathfrak{P}}\subset V\subset]Y[_{\mathfrak{P}}\) which is smooth, and hence any such stratification can be viewed as an integrable connection. This gives rise to a faithful (but not in general full) functor from modules with convergent stratification on \(]X[_{\mathfrak{P}}\) to modules with integrable connection on \(]X[_{\mathfrak{P}}\).

**2.2.3 Definition**.: A constructible isocrystal on \((X,Y,\mathfrak{P})\) is a constructible \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module \(\mathscr{F}\) together with a convergent stratification \(\epsilon\) on \(\mathscr{F}\). The category of constructible isocrystals on \((X,Y,\mathfrak{P})\) is denoted \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\).

In the special case that \(X=Y=P\) (and therefore \(\mathfrak{P}\) is everywhere smooth) I will generally write \(\mathbf{Isoc}_{\mathrm{cons}}(\mathfrak{P})\) instead of \(\mathbf{Isoc}_{\mathrm{cons}}(P,P,\mathfrak{P})\), and call these objects constructible isocrystals on \(\mathfrak{P}\).

**2.2.4 Lemma**.: _The \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module underlying any \(\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) is constructible locally free, and in particular, flat._

Proof.: Since \(\mathfrak{P}\) is smooth around \(X\), it is also smooth around any locally closed subscheme of \(X\). After possibly passing to a suitable stratification of \(X\), I may assume that \(\mathscr{F}\) itself is coherent.
Applying Proposition 1.3.3 I can then choose an open neighbourhood \(V\) of \(]X[_{\mathfrak{P}}\) such that \(\mathscr{F}\) extends to a coherent \(\mathcal{O}_{V}\)-module. Since \(]X[_{\mathfrak{P}^{(n)}}\) has the same underlying topological space as \(]X[_{\mathfrak{P}}\), applying Proposition 1.3.3 on \(]X[_{\mathfrak{P}^{(n)}}\) shows that, after possibly shrinking \(V\) further, the stratification (and hence the integrable connection) on \(\mathscr{F}\) also extends to \(V\). It therefore suffices to show that coherent \(\mathcal{O}_{V}\)-modules admitting integrable connections are locally finite free. Since \(V\) is smooth over \(K\), this is well-known, see for example [2, Lemma 3.3.4].

Inside \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) there is a natural full subcategory consisting of objects whose underlying \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-module is coherent, or equivalently locally free. I will denote this subcategory by \[\mathbf{Isoc}(X,Y,\mathfrak{P})\subset\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\] and refer to its objects as locally free isocrystals on \((X,Y,\mathfrak{P})\). Again, these objects have traditionally been called 'partially overconvergent isocrystals', and I will try to avoid any confusion that may arise from the more simplistic terminology I've chosen to use here.

**2.2.5 Proposition**.: _The forgetful functor from \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) to \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-modules with integrable connection is fully faithful._

Proof.: This is [16, Proposition 4.8].

Thus \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) can be viewed as a full subcategory of \(\mathbf{Mod}(\mathscr{D}_{]X[_{\mathfrak{P}}})\). It is in fact an abelian subcategory by [16, Proposition 4.3]. It also follows relatively quickly from the definitions that if \(\mathscr{F}\) and \(\mathscr{G}\) are constructible isocrystals, then \(\mathscr{F}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}\mathscr{G}\) can be endowed with the structure of a constructible isocrystal; by Proposition 2.2.5 this is uniquely determined by the fact that it is compatible (via the natural forgetful functor) with the tensor product (over \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)) of \(\mathscr{D}_{]X[_{\mathfrak{P}}}\)-modules. Similarly, if \[(f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\] is a morphism of frames, with \(\mathfrak{P}^{\prime}\) smooth around \(X^{\prime}\) and \(\mathfrak{P}\) smooth around \(X\), then the pullback functor \[]f[^{*}_{u}\colon\mathbf{Mod}(\mathscr{D}_{]X[_{\mathfrak{P}}})\to\mathbf{Mod}(\mathscr{D}_{]X^{\prime}[_{\mathfrak{P}^{\prime}}})\] sends \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) into \(\mathbf{Isoc}_{\mathrm{cons}}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\), essentially because \(]f[^{*}_{u}\) commutes with the maps \(p_{i}^{*}\) used to construct convergent stratifications. Of course, pullback commutes with tensor product in the obvious sense.

The analogues for constructible isocrystals of each of the 'elementary facts' about constructible modules are true, but some of them are rather harder to prove. We start with the following.

**2.2.6 Lemma**.: _Let \(i\colon Z\to X\) be a locally closed immersion, and \(\overline{Z}\) the closure of \(Z\) in \(P\)._
_Then the exact functors_ \[i_{!}\colon\mathbf{Mod}(\mathscr{D}_{]Z[_{\mathfrak{P}}})\leftrightarrow\mathbf{Mod}(\mathscr{D}_{]X[_{\mathfrak{P}}})\colon i^{-1}\] _preserve constructible isocrystals, and induce an equivalence of categories between \(\mathbf{Isoc}_{\mathrm{cons}}(Z,\overline{Z},\mathfrak{P})\) and the full subcategory of \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) consisting of objects supported on \(]Z[_{\mathfrak{P}}\)._

_2.2.7 Remark_.: In this situation, I will generally refer to constructible isocrystals being supported on \(Z\), rather than being supported on \(]Z[_{\mathfrak{P}}\).

Proof.: The fact that each of \(i^{-1}\) and \(i_{!}\) preserve constructible isocrystals follows from the fact that each commutes with \(p_{i}^{*}\). For \(i^{-1}\) this is pretty much immediate from the definitions; for \(i_{!}\) this is the content of Lemma 1.2.6. The statement on the equivalence of categories then follows.

**2.2.8 Corollary**.: _Let \(i\colon Z\to X\) be a closed immersion, with open complement \(j\colon U\to X\), and let \(\overline{Z}\) be the closure of \(Z\) inside \(P\). Then every constructible isocrystal \(\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) sits in a short exact sequence_ \[0\to i_{!}i^{-1}\mathscr{F}\to\mathscr{F}\to j_{*}j^{-1}\mathscr{F}\to 0\] _where \(i^{-1}\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons}}(Z,\overline{Z},\mathfrak{P})\) and \(j^{-1}\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons}}(U,Y,\mathfrak{P})\)._

**2.2.9 Corollary**.: _Let \(\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\). Then \(\mathscr{F}\) admits a finite composition series_ \[0=\mathscr{F}_{0}\subset\mathscr{F}_{1}\subset\ldots\subset\mathscr{F}_{n}=\mathscr{F},\] _such that for each \(1\leq\alpha\leq n\), there exists a locally closed subscheme \(i_{\alpha}\colon X_{\alpha}\to X\), with closure \(\overline{X}_{\alpha}\) in \(P\), a locally free isocrystal \(\mathscr{G}_{\alpha}\in\mathbf{Isoc}(X_{\alpha},\overline{X}_{\alpha},\mathfrak{P})\), and an isomorphism_ \[\mathscr{F}_{\alpha}/\mathscr{F}_{\alpha-1}\xrightarrow{\cong}i_{\alpha!}\mathscr{G}_{\alpha}.\]

_2.2.10 Remark_.: I will call objects of the form \(i_{\alpha!}\mathscr{G}_{\alpha}\) 'locally free isocrystals supported on \(X_{\alpha}\)'. Thus a more succinct expression of the corollary is that every constructible isocrystal on \(X\) is an iterated extension of locally free isocrystals supported on locally closed subschemes of \(X\).

**2.2.11 Lemma**.: _Suppose that \(\mathscr{F}\in\mathbf{Mod}(\mathscr{D}_{]X[_{\mathfrak{P}}})\)._

1. _Let \(\{\mathfrak{P}_{\alpha}\}_{\alpha\in A}\) be a finite open cover of \(\mathfrak{P}\), and set \(X_{\alpha}=X\cap\mathfrak{P}_{\alpha}\). Then \(\mathscr{F}\) is a constructible isocrystal if and only if each \(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}_{\alpha}}}\) is._
2. _Let \(\{X_{\alpha}\}_{\alpha\in A}\) be a finite open cover of \(X\). Then \(\mathscr{F}\) is a constructible isocrystal if and only if each \(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}}}\) is._

Proof.: As in the proof of Lemma 2.1.5, in both cases the 'only if' direction is clear, and I will concentrate on the 'if' direction.
In both cases, the constructibility was proved in Lemma 2.1.5, so what is left to prove is the convergence of the stratification. The case of an open cover of \(\mathfrak{P}\) is straightforward, since this induces an open cover of \(]X[_{\mathfrak{P}^{2}}\) over which the convergent stratification \(\epsilon\) can be constructed (that the resulting stratifications coincide on the intersections follows from the full faithfulness result, Proposition 2.2.5). For the case of an open cover of \(X\), I argue as in Lemma 2.1.5, and write \(\mathscr{F}\) as the kernel of the map \[\bigoplus_{\alpha}j_{X_{\alpha}}^{\dagger}\mathscr{F}\to\bigoplus_{\alpha,\beta}j_{X_{\alpha}\cap X_{\beta}}^{\dagger}\mathscr{F}. \tag{2.2.12}\] The fact that each \(\mathscr{F}|_{]X_{\alpha}[_{\mathfrak{P}}}\) is a constructible isocrystal, together with Lemma 2.2.6, implies that both terms in (2.2.12) are constructible isocrystals. Hence I can conclude using the fact that \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) is an abelian subcategory of \(\mathbf{Mod}(\mathscr{D}_{]X[_{\mathfrak{P}}})\).

The final two properties to establish are the facts that constructible isocrystals can be detected over a stratification of \(X\), and are stable under extensions. The proof of the latter of these depends on the former, which is considerably more involved than the properties established so far, and is in fact the first result in this section that genuinely goes beyond what was proved in [15]. The key calculation needed in the proof (Lemma 2.2.14 below) applies in the following situation.

#### 2.2.13. **Setup**

Suppose that \((X,Y,\mathfrak{P})\) is a frame with \(\mathfrak{P}\) rig-smooth around \(X\), and consider the morphism of frames \[\pi\colon(X,Y,\widehat{\mathbb{A}}^{d}_{\mathfrak{P}})\to(X,Y,\mathfrak{P}),\] where \(Y\) is embedded into \(\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}\) via the zero section, and \(\pi\) is the projection. Let \(\mathfrak{O}^{(n)}_{\mathfrak{P}}\subset\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}\) denote the \(n\)th infinitesimal neighbourhood of the zero section, and write \(\pi_{n}\colon\mathfrak{O}^{(n)}_{\mathfrak{P}}\to\mathfrak{P}\) for the projection. Let \[i\colon Z\to X\gets U\colon j\] be complementary closed and open immersions. If \(T=X,U\) or \(Z\) we write \[\pi\colon]T[_{\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}}\to]T[_{\mathfrak{P}},\qquad\pi_{n}\colon]T[_{\mathfrak{O}^{(n)}_{\mathfrak{P}}}\to]T[_{\mathfrak{P}}\] for the projections. Similarly, if \(\mathfrak{T}=\mathfrak{P}\), \(\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}\) or \(\mathfrak{O}^{(n)}_{\mathfrak{P}}\) we write \[i\colon]Z[_{\mathfrak{T}}\to]X[_{\mathfrak{T}}\gets]U[_{\mathfrak{T}}\colon j\] for the immersions.

**2.2.14 Lemma**.: _Consider Setup 2.2.13, and assume that, locally on \(Y\), \(U\subset Y\) is the complement of a hypersurface. Let \(\mathscr{F}\) be a coherent \(\mathcal{O}_{]U[_{\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}}}\)-module, and \(\mathscr{G}\) a constructible locally free \(\mathcal{O}_{]Z[_{\mathfrak{P}}}\)-module. Then the natural map_ \[\mathrm{Hom}_{\mathcal{O}_{]U[_{\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}}}}(\mathscr{F},j^{-1}i_{*}\pi^{*}\mathscr{G})\to\lim_{n}\mathrm{Hom}_{\mathcal{O}_{]U[_{\mathfrak{O}^{(n)}_{\mathfrak{P}}}}}(\mathscr{F},j^{-1}i_{*}\pi_{n}^{*}\mathscr{G})\] _is injective._

Proof.: The question is local on \(\mathfrak{P}\) (and hence \(Y\)), so we may assume that \(\mathfrak{P}\) is affine, and that \(U\subset Y\) is the complement of a hypersurface.
The functors \(\mathrm{Hom}(\mathscr{F},-)\), \(j^{-1}\), \(i_{*}\), \(\pi^{*}\), \(\lim_{n}\), \(\pi^{*}_{n}\) are all left exact, and so the claim is additive over short exact sequences in \(\mathscr{G}\). I can therefore assume that there exists a locally closed immersion \(a\colon T\to Z\) and a locally free \(\mathcal{O}_{]T[_{\mathfrak{P}}}\)-module \(\mathscr{H}\) such that \(\mathscr{G}=a_{!}\mathscr{H}\). I will continue to use \(\pi\) and \(\pi_{n}\) to denote the natural maps \[\pi\colon]T[_{\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}}\to]T[_{\mathfrak{P}},\qquad\pi_{n}\colon]T[_{\mathfrak{O}^{(n)}_{\mathfrak{P}}}\to]T[_{\mathfrak{P}}.\] Since \(a_{!}\) commutes with \(\pi^{*}\) by Lemma 1.2.6, the natural map \(a_{!}\to a_{*}\) is injective, and \(i_{*}\), \(j^{-1}\) are left exact, there is a natural injective map \[j^{-1}i_{*}\pi^{*}\mathscr{G}=j^{-1}i_{*}\pi^{*}a_{!}\mathscr{H}=j^{-1}i_{*}a_{!}\pi^{*}\mathscr{H}\to j^{-1}i_{*}a_{*}\pi^{*}\mathscr{H}.\] Similarly, there is an injective map \[j^{-1}i_{*}\pi_{n}^{*}\mathscr{G}\to j^{-1}i_{*}a_{*}\pi_{n}^{*}\mathscr{H}.\] Again appealing to left exactness of \(\operatorname{Hom}\) and \(\lim\), it suffices to show that the map \[\operatorname{Hom}_{\mathcal{O}_{]U[_{\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}}}}(\mathscr{F},j^{-1}i_{*}a_{*}\pi^{*}\mathscr{H})\to\lim_{n}\operatorname{Hom}_{\mathcal{O}_{]U[_{\mathfrak{O}^{(n)}_{\mathfrak{P}}}}}(\mathscr{F},j^{-1}i_{*}a_{*}\pi_{n}^{*}\mathscr{H}) \tag{2.2.15}\] is injective. Next, let \(i^{\prime}\colon Z\to Y\) and \(j^{\prime}\colon U\to Y\) denote the inclusions into \(Y\), thus \[j^{\prime-1}i^{\prime}_{*}=j^{-1}i_{*}.\] This shows that I can replace \(j^{-1}i_{*}a_{*}\pi^{*}\mathscr{H}\) with \(j^{\prime-1}i^{\prime}_{*}a_{*}\pi^{*}\mathscr{H}\), and similarly \(j^{-1}i_{*}a_{*}\pi_{n}^{*}\mathscr{H}\) with \(j^{\prime-1}i^{\prime}_{*}a_{*}\pi_{n}^{*}\mathscr{H}\). Now, the given claim is local on the tube \(]Y[_{\mathfrak{P}}\) of \(Y\), so replacing \(\mathfrak{P}\) by a formal model for the closed tubes \([Y]_{\mathfrak{P}\eta}\) for varying \(\eta\), I can assume that \(Y=P\). The LHS of (2.2.15) is now identified with \[\operatorname{Hom}_{\mathcal{O}_{\mathbb{D}^{d}_{K}(0;1^{-})\times_{K}]U[_{\mathfrak{P}}}}(\mathscr{F},j^{\prime-1}i^{\prime}_{*}a_{*}\pi^{*}\mathscr{H})=\lim_{\rho<1}\operatorname{Hom}_{\mathcal{O}_{\mathbb{D}^{d}_{K}(0;\rho)\times_{K}]U[_{\mathfrak{P}}}}(\mathscr{F},j^{\prime-1}i^{\prime}_{*}a_{*}\pi^{*}\mathscr{H}),\] so it is enough to show that the natural map \[\operatorname{Hom}_{\mathcal{O}_{\mathbb{D}^{d}_{K}(0;\rho)\times_{K}]U[_{\mathfrak{P}}}}(\mathscr{F},j^{\prime-1}i^{\prime}_{*}a_{*}\pi^{*}\mathscr{H})\to\lim_{n}\operatorname{Hom}_{\mathcal{O}_{]U[_{\mathfrak{O}^{(n)}_{\mathfrak{P}}}}}(\mathscr{F},j^{-1}i_{*}a_{*}\pi_{n}^{*}\mathscr{H})\] is injective, for each \(\rho<1\). Let \(\{V_{\lambda}\}_{\lambda\geq\lambda_{0}}\) be a cofinal sequence of neighbourhoods of \(]U[_{\mathfrak{P}}\) in \(]Y[_{\mathfrak{P}}=\mathfrak{P}_{K}\). Since \(\mathfrak{P}\) is affine, and \(U\) is the complement of a hypersurface in \(Y=P\), we can take each \(V_{\lambda}\) to be affinoid. By quasi-compactness, \(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}V_{\lambda}\) (resp. \(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}V_{\lambda}\)) is a cofinal system of neighbourhoods of \(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}]U[_{\mathfrak{P}}\) (resp.
\(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}]U[_{\mathfrak{P}}\)) inside \(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}\mathfrak{P}_{K}\) (resp. \(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\)). Thus, for all sufficiently large \(\lambda\), \(\mathscr{F}\) extends to a coherent sheaf on \(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}V_{\lambda}\) by Proposition 1.3.3. Since \(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}V_{\lambda}\) is affinoid, the extension of \(\mathscr{F}\) to \(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}V_{\lambda}\) is generated by finitely many global sections. Hence \(\mathscr{F}\) is generated by finitely many global sections as a coherent \(\mathcal{O}_{\mathbb{D}^{d}_{K}(0;\rho)\times_{K}]U[_{\mathfrak{P}}}\)-module. Thus choosing a surjection \[\mathcal{O}^{\oplus m}_{\mathbb{D}^{d}_{K}(0;\rho)\times_{K}]U[_{\mathfrak{P}}}\twoheadrightarrow\mathscr{F},\] and once again appealing to left exactness, I can reduce to the case \(\mathscr{F}=\mathcal{O}_{\mathbb{D}^{d}_{K}(0;\rho)\times_{K}]U[_{\mathfrak{P}}}\), in other words to showing that \[\Gamma(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}]U[_{\mathfrak{P}},j^{\prime-1}i^{\prime}_{*}a_{*}\pi^{*}\mathscr{H})\to\lim_{n}\Gamma(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}]U[_{\mathfrak{P}},j^{-1}i_{*}a_{*}\pi_{n}^{*}\mathscr{H})\] is injective. Writing things out explicitly, this is the map \[\operatorname{colim}_{\lambda}\Gamma(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}(V_{\lambda}\cap]T[_{\mathfrak{P}}),\pi^{*}\mathscr{H})\to\lim_{n}\operatorname{colim}_{\lambda}\Gamma(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}(V_{\lambda}\cap]T[_{\mathfrak{P}}),\pi_{n}^{*}\mathscr{H}).\] By an explicit calculation, I can identify \[\Gamma(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}(V_{\lambda}\cap]T[_{\mathfrak{P}}),\pi_{n}^{*}\mathscr{H})=\Gamma(V_{\lambda}\cap]T[_{\mathfrak{P}},\mathscr{H})\otimes_{K}\frac{K[z_{1},\dots,z_{d}]}{(z_{1},\dots,z_{d})^{n+1}}.\] Thus, by applying Lemma 1.7.1, it suffices to show that:

1. for each sufficiently large \(\lambda\), the map \[\Gamma(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}(V_{\lambda}\cap]T[_{\mathfrak{P}}),\pi^{*}\mathscr{H})\to\lim_{n}\Gamma(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}(V_{\lambda}\cap]T[_{\mathfrak{P}}),\pi_{n}^{*}\mathscr{H})\] is injective;
2. for each \(\lambda\), the kernels of the transition maps \[\Gamma(V_{\lambda}\cap]T[_{\mathfrak{P}},\mathscr{H})\to\Gamma(V_{\lambda^{\prime}}\cap]T[_{\mathfrak{P}},\mathscr{H})\] eventually stabilise.

Since each \(V_{\lambda}\cap]T[_{\mathfrak{P}}\) has only finitely many connected components, the second of these follows from Lemma 1.4.5. For the first, note that the claim can be checked over an open cover of \(]T[_{\mathfrak{P}}\). I am therefore again free to replace the (non-quasi-compact) open tube \(]T[_{\mathfrak{P}}\) by the (quasi-compact) closed tube \(]T[_{\mathfrak{P}}\cap[\overline{T}]_{\mathfrak{P}\eta}\), where \(\overline{T}\) is the closure of \(T\) in \(Y\). Now choose a cofinal sequence \(\{W_{\delta}\}_{\delta\geq\delta_{0}}\) of open neighbourhoods of \(]T[_{\mathfrak{P}}\cap[\overline{T}]_{\mathfrak{P}\eta}\) in \([\overline{T}]_{\mathfrak{P}\eta}\).
Then the spaces \(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}(V_{\lambda}\cap W_{\delta})\) and \(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}(V_{\lambda}\cap W_{\delta})\) form cofinal sequences of neighbourhoods of \[\mathbb{D}^{d}_{K}(0;\rho)\times_{K}(V_{\lambda}\cap]T[_{\mathfrak{P}}\cap[\overline{T}]_{\mathfrak{P}\eta})\text{ in }\mathbb{D}^{d}_{K}(0;\rho)\times_{K}(V_{\lambda}\cap[\overline{T}]_{\mathfrak{P}\eta})\] and \[\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}(V_{\lambda}\cap]T[_{\mathfrak{P}}\cap[\overline{T}]_{\mathfrak{P}\eta})\text{ in }\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}(V_{\lambda}\cap[\overline{T}]_{\mathfrak{P}\eta})\] respectively. I may also assume, by increasing \(\delta_{0}\) if necessary, that \(\mathscr{H}\) extends to a locally free sheaf on \(W_{\delta_{0}}\), and therefore on all \(W_{\delta}\). I thus need to show that the map \[\operatorname{colim}_{\delta}\Gamma(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}(V_{\lambda}\cap W_{\delta}),\pi^{*}\mathscr{H})\to\lim_{n}\operatorname{colim}_{\delta}\Gamma(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}(V_{\lambda}\cap W_{\delta}),\pi_{n}^{*}\mathscr{H})\] is injective, at least for large enough \(\lambda\). Again appealing to Lemma 1.7.1, it is enough to prove that:

1. the map \[\Gamma(\mathbb{D}^{d}_{K}(0;\rho)\times_{K}(V_{\lambda}\cap W_{\delta}),\pi^{*}\mathscr{H})\to\lim_{n}\Gamma(\mathfrak{O}^{(n)}_{\mathfrak{P},K}\times_{\mathfrak{P}_{K}}(V_{\lambda}\cap W_{\delta}),\pi_{n}^{*}\mathscr{H}) \tag{2.2.16}\] is injective;
2. for each \(\eta\), the kernels of the transition maps \[\Gamma(V_{\lambda}\cap W_{\delta},\mathscr{H})\to\Gamma(V_{\lambda}\cap W_{\delta^{\prime}},\mathscr{H})\] eventually stabilise.

Again, the second of these follows from Lemma 1.4.5, since each \(V_{\lambda}\cap W_{\delta}\) has only finitely many connected components. For the first, take an open affinoid \(\operatorname{Spa}(R,R^{+})\subset V_{\lambda}\cap W_{\delta}\), and set \(M=\Gamma(\operatorname{Spa}(R,R^{+}),\mathscr{H})\). Thus \(M\) is a finite projective \(R\)-module. The given map can be identified with the map \[R\langle\rho^{-1}\boldsymbol{z}\rangle\otimes_{R}M\to R[\![\boldsymbol{z}]\!]\otimes_{R}M\] from the restricted power series ring to the full power series ring, tensored with \(M\). Since this map is injective for each such \(\operatorname{Spa}(R,R^{+})\), the map (2.2.16) is injective as required.

I can now show that constructible isocrystals can be detected over a stratification.

**2.2.17 Proposition**.: _Let \(\{i_{\alpha}\colon X_{\alpha}\to X\}_{\alpha\in A}\) be a stratification of \(X\), and let \(\mathscr{F}\in\mathbf{Mod}(\mathscr{D}_{]X[_{\mathfrak{P}}})\). Then \(\mathscr{F}\) is a constructible isocrystal if and only if, for each \(\alpha\in A\), \(i_{\alpha}^{-1}\mathscr{F}\in\mathbf{Mod}(\mathscr{D}_{]X_{\alpha}[_{\mathfrak{P}}})\) is a constructible isocrystal._

Proof.: The 'only if' direction has already been proved. For the 'if' direction, by induction on \(|A|\), I may assume that there are precisely two strata \(\{U,Z\}\), consisting of an open subscheme \(j\colon U\to X\) and a closed complement \(i\colon Z\to X\). Thanks to Lemma 2.2.11 the property of being a constructible isocrystal is local on both \(\mathfrak{P}\) and \(X\), so I may assume that both are affine. Moreover, I can appeal to Noetherian induction to shrink \(U\) if required.
I can therefore assume that \(U\) is the complement of a hypersurface in \(Y\), and that \(j^{-1}\mathscr{F}\) is a locally free isocrystal on \((U,Y,\mathfrak{P})\). In this case, I will follow closely the proof of [14, Proposition 5.11]. If \(T=X,U\) or \(Z\), I will write \[p_{i}\colon]T[_{\mathfrak{P}^{2}}\to]T[_{\mathfrak{P}},\qquad p^{(n)}_{i}\colon]T[_{\mathfrak{P}^{(n)}}\to]T[_{\mathfrak{P}}\] for the projections, and if \(\mathfrak{T}=\mathfrak{P},\mathfrak{P}^{2}\) or \(\mathfrak{P}^{(n)}\), I will write \[i\colon]Z[_{\mathfrak{T}}\to]X[_{\mathfrak{T}}\gets]U[_{\mathfrak{T}}\colon j\] for the immersions. There are therefore isomorphisms \[\epsilon_{U}\colon j^{-1}p_{1}^{*}\mathscr{F}\xrightarrow{\cong}j^{-1}p_{0}^{*}\mathscr{F},\qquad\epsilon_{Z}\colon i^{-1}p_{1}^{*}\mathscr{F}\xrightarrow{\cong}i^{-1}p_{0}^{*}\mathscr{F}\] extending the Taylor isomorphisms on each \(]X[_{\mathfrak{P}^{(n)}}\), and I need to show that these glue to an isomorphism \(p_{1}^{*}\mathscr{F}\to p_{0}^{*}\mathscr{F}\) on \(]X[_{\mathfrak{P}^{2}}\). To do this, I will use the fact that sheaves on \(]X[_{\mathfrak{P}^{2}}\) are determined by their restrictions to each of \(]U[_{\mathfrak{P}^{2}}\) and \(]Z[_{\mathfrak{P}^{2}}\), together with the natural adjunction morphism between them. Moreover, this construction gives rise to an equivalence of categories. Thus \(\epsilon_{U}\) and \(\epsilon_{Z}\) glue to give the required stratification if and only if the diagram \[\begin{array}{ccc}j^{-1}p_{1}^{*}\mathscr{F}&\xrightarrow{\;\epsilon_{U}\;}&j^{-1}p_{0}^{*}\mathscr{F}\\ \downarrow&&\downarrow\\ j^{-1}i_{*}i^{-1}p_{1}^{*}\mathscr{F}&\xrightarrow{\;j^{-1}i_{*}\epsilon_{Z}\;}&j^{-1}i_{*}i^{-1}p_{0}^{*}\mathscr{F}\end{array}\] (with vertical maps the adjunction morphisms) commutes. Equivalently, this happens if and only if the difference between the two maps \[j^{-1}p_{1}^{*}\mathscr{F}\to j^{-1}i_{*}i^{-1}p_{0}^{*}\mathscr{F}\] is zero. Note that the analogous diagram commutes for all \(n\), so it suffices to show that the natural map \[\operatorname{Hom}_{\mathcal{O}_{]U[_{\mathfrak{P}^{2}}}}(j^{-1}p_{1}^{*}\mathscr{F},j^{-1}i_{*}i^{-1}p_{0}^{*}\mathscr{F})\to\lim_{n}\operatorname{Hom}_{\mathcal{O}_{]U[_{\mathfrak{P}^{(n)}}}}(j^{-1}p_{1}^{(n)*}\mathscr{F},j^{-1}i_{*}i^{-1}p_{0}^{(n)*}\mathscr{F}) \tag{2.2.18}\] is injective. Since \(\mathfrak{P}\) is smooth over \(\mathcal{V}\) in a neighbourhood of \(X\), I can now appeal to Berthelot's strong fibration theorem (see in particular the form stated in Theorem 1.4.2 above). This says that, after localising on \(X\) and \(\mathfrak{P}\), I can replace the morphism of frames \[p_{0}\colon(X,Y,\mathfrak{P}^{2})\to(X,Y,\mathfrak{P})\] by the natural projection \[\pi\colon(X,Y,\widehat{\mathbb{A}}^{d}_{\mathfrak{P}})\to(X,Y,\mathfrak{P}),\] where \(d\) is the relative dimension of \(\mathfrak{P}\) over \(\mathcal{V}\) around \(X\), and \(Y\) is embedded into \(\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}\) via the zero section. Since \(\mathfrak{P}^{(n)}\) now gets identified with the \(n\)th infinitesimal neighbourhood of the zero section \(\mathfrak{P}\to\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}\), the claim is now precisely the one proved in Lemma 2.2.14 above.

**2.2.19 Corollary**.: _The subcategory \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\subset\mathbf{Mod}(\mathscr{D}_{]X[_{\mathfrak{P}}})\) is closed under extensions._

Proof.: Let \[0\to\mathscr{F}\to\mathscr{G}\to\mathscr{H}\to 0\] be a short exact sequence of \(\mathscr{D}_{]X[_{\mathfrak{P}}}\)-modules, with both \(\mathscr{F},\mathscr{H}\in\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\).
Thanks to Proposition 2.2.17, it suffices to find a stratification of \(X\) over which \(\mathscr{G}\) becomes a constructible isocrystal. To do so, choose stratifications of \(X\) over which \(\mathscr{F}\) and \(\mathscr{H}\) become locally free isocrystals, and take a common refinement. This gives a stratification over which both \(\mathscr{F}\) and \(\mathscr{H}\) become locally free isocrystals, in which case it is proved in [10, Proposition 1.2.2] that \(\mathscr{G}\) is a locally free isocrystal.

I leave aside for now the question of the extent to which the category \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) is independent of \(Y\) and \(\mathfrak{P}\), since I shall shortly deduce this from a more general invariance result for the derived analogues of these categories.

## 3. The derived category of constructible isocrystals

In §2 above I introduced abelian categories of constructible isocrystals. The next task is to establish the basic properties of the analogous triangulated categories.

### Triangulated categories of constructible isocrystals

Let \((X,Y,\mathfrak{P})\) be a frame, with \(\mathfrak{P}\) smooth around \(X\). The following is the derived analogue of Definition 2.2.3.

**3.1.1 Definition**.: A constructible complex on \((X,Y,\mathfrak{P})\) is a complex \(\mathscr{K}\in\mathbf{D}^{b}(\mathscr{D}_{]X[_{\mathfrak{P}}})\) whose cohomology sheaves are constructible isocrystals. We define \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) to be the corresponding full subcategory of \(\mathbf{D}^{b}(\mathscr{D}_{]X[_{\mathfrak{P}}})\).

_3.1.2 Remark_.:

1. Note that if \(\mathfrak{P}\) is everywhere smooth, and \(i\colon X\to P\) is the given immersion, then the extension by zero functor \[i_{!}\colon\mathbf{D}^{b}(\mathscr{D}_{]X[_{\mathfrak{P}}})\to\mathbf{D}^{b}(\mathscr{D}_{\mathfrak{P}_{K}})\] is fully faithful, with essential image those constructible complexes supported on \(X\) (see Remark 2.2.7).
2. Again, if \(X=Y=P\), I will generally write \(\mathbf{D}^{b}_{\mathrm{cons}}(\mathfrak{P})\) instead of \(\mathbf{D}^{b}_{\mathrm{cons}}(P,P,\mathfrak{P})\).

**3.1.3 Lemma**.: _The subcategory \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\subset\mathbf{D}^{b}(\mathscr{D}_{]X[_{\mathfrak{P}}})\) is triangulated. Moreover, it is the smallest triangulated subcategory containing all locally free isocrystals supported on locally closed subschemes of \(X\)._

Proof.: This follows from Corollaries 2.2.9 and 2.2.19.

Since constructible isocrystals are \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-flat by Lemma 2.2.4, and stable under tensor product over \(\mathcal{O}_{]X[_{\mathfrak{P}}}\), this tensor product descends to a t-exact functor \[-\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}-\colon\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\times\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P}).\] Similarly, if \((f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) is a morphism of frames, the pullback functor \(]f[^{*}_{u}\) from \(\mathscr{D}_{]X[_{\mathfrak{P}}}\)-modules to \(\mathscr{D}_{]X^{\prime}[_{\mathfrak{P}^{\prime}}}\)-modules preserves constructible isocrystals and derives trivially to give a t-exact functor \[]f[^{*}_{u}\colon\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\to\mathbf{D}^{b}_{\mathrm{cons}}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime}).\] Pullback and tensor product commute in the obvious sense.
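To fix ideas, let me spell out what 'commute in the obvious sense' means here; this is nothing more than a restatement of the definitions. For \(\mathscr{K},\mathscr{L}\in\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) there is a natural isomorphism \[]f[^{*}_{u}\left(\mathscr{K}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}\mathscr{L}\right)\xrightarrow{\cong}]f[^{*}_{u}\mathscr{K}\otimes_{\mathcal{O}_{]X^{\prime}[_{\mathfrak{P}^{\prime}}}}]f[^{*}_{u}\mathscr{L},\] compatible with the associativity and unit constraints; since constructible isocrystals are \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-flat (Lemma 2.2.4), both sides are already underived and no correction terms intervene.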
### Independence of the frame

The key point will be to show that \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) enjoys the same functorial properties as the classical categories of (locally free) isocrystals \(\mathbf{Isoc}(X,Y,\mathfrak{P})\). As in that case, the crucial invariance result is the following.

**3.2.1 Theorem**.: _Let \((\mathrm{id},g,u)\colon(X,Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) be a morphism of frames, such that \(g\) is proper, and both \(\mathfrak{P}\) and \(u\) are smooth around \(X\). Let \(d\) be the relative dimension of \(u\), and write \(u=]\mathrm{id}[_{u}\) for the induced morphism \(]X[_{\mathfrak{P}^{\prime}}\to]X[_{\mathfrak{P}}\). Then the functor_ \[u^{*}\colon\mathbf{D}(\mathscr{D}_{]X[_{\mathfrak{P}}})\to\mathbf{D}(\mathscr{D}_{]X[_{\mathfrak{P}^{\prime}}})\] _induces a t-exact equivalence of categories_ \[\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y^{\prime},\mathfrak{P}^{\prime}).\] _The functors_ \[\mathbf{R}u_{\mathrm{dR}!}[2d]\colon\mathbf{D}(\mathscr{D}_{]X[_{\mathfrak{P}^{\prime}}})\to\mathbf{D}(\mathscr{D}_{]X[_{\mathfrak{P}}})\] \[\mathbf{R}u_{\mathrm{dR}*}\colon\mathbf{D}(\mathscr{D}_{]X[_{\mathfrak{P}^{\prime}}})\to\mathbf{D}(\mathscr{D}_{]X[_{\mathfrak{P}}})\] _both induce quasi-inverses to \(u^{*}\)._

To begin the proof of Theorem 3.2.1, I make the following simple observation.

**3.2.2 Lemma**.: _The functor \(u^{*}\) sends \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) into \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y^{\prime},\mathfrak{P}^{\prime})\), and the essential image contains all locally free isocrystals supported on locally closed subschemes of \(X\)._

Proof.: The first claim has already been observed in §3.1 above. The second claim is then a consequence of the following classical result, proved for example in [13, Theorem 7.1.8]: for any locally closed subscheme \(i\colon Z\hookrightarrow X\), with closures \(\overline{Z}\) in \(Y\) and \(\overline{Z}^{\prime}\) in \(Y^{\prime}\), the pullback functor \[u^{*}\colon\mathbf{Isoc}(Z,\overline{Z},\mathfrak{P})\to\mathbf{Isoc}(Z,\overline{Z}^{\prime},\mathfrak{P}^{\prime})\] induced by the (abusively denoted) morphism \[u\colon]Z[_{\mathfrak{P}^{\prime}}\to]Z[_{\mathfrak{P}}\] on locally free isocrystals is an equivalence of categories. Indeed, if \(\mathscr{F}\) is a locally free isocrystal on \((Z,\overline{Z}^{\prime},\mathfrak{P}^{\prime})\), then \(\mathscr{F}=u^{*}\mathscr{G}\) for some locally free isocrystal \(\mathscr{G}\) on \((Z,\overline{Z},\mathfrak{P})\). It then follows from Lemma 1.2.6 that \(i_{!}\mathscr{F}=i_{!}u^{*}\mathscr{G}=u^{*}i_{!}\mathscr{G}\) is in the essential image of \(u^{*}\).

Proving Theorem 3.2.1 eventually boils down to a local calculation in the same situation as that considered in Lemma 2.2.14.

**3.2.3 Proposition**.: _Consider Setup 2.2.13, and suppose that \(\mathscr{F},\mathscr{G}\in\mathbf{Mod}(\mathscr{D}_{]X[_{\mathfrak{P}}})\) are constructible as \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-modules._

1. _The natural map \(\mathscr{F}\to\mathbf{R}\pi_{\mathrm{dR}*}\pi^{*}\mathscr{F}\) is an isomorphism in \(\mathbf{D}(\mathscr{D}_{]X[_{\mathfrak{P}}})\)._
2. _The natural map_ \[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X[_{\mathfrak{P}}}}(\mathscr{F},\mathscr{G})\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X[_{\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}}}}(\pi^{*}\mathscr{F},\pi^{*}\mathscr{G})\] _is an isomorphism in \(\mathbf{D}(K)\)._

_3.2.4 Remark_.: Note that the hypotheses of Theorem 3.2.1 have been weakened slightly here.
First of all, I only require that \(\mathfrak{P}\) is rig-smooth, rather than smooth, in a neighbourhood of \(X\). Secondly, I only require that \(\mathscr{F}\) and \(\mathscr{G}\) are constructible, and not necessarily convergent. This implies that the full faithfulness part of Theorem 3.2.1 holds under similarly weaker hypotheses. The essential surjectivity, of course, does not.

Proof.: 1. First of all, the claim is local on \(]Y[_{\mathfrak{P}}\), and so, replacing \(\mathfrak{P}\) by a formal model for the closed tube \([Y]_{\mathfrak{P}\eta}\), I can assume that \(Y=P\). Letting \(j\colon X\to P\) denote the given open immersion, the claim for \(\mathscr{F}\) on \(]X[_{\mathfrak{P}}\) is equivalent to the claim for \(j_{*}\mathscr{F}\) on \(\mathfrak{P}_{K}\) by Lemma 1.2.6, so I can also assume that \(X=P\). Now, for any \(\rho<1\), let \(\pi_{\rho}\colon\mathbb{D}^{d}_{\mathfrak{P}_{K}}(0;\rho)\to\mathfrak{P}_{K}\) denote the projection, thus \[\mathbf{R}\pi_{\mathrm{dR}*}\pi^{*}\mathscr{F}\xrightarrow{\cong}\mathbf{R}\lim_{\rho}\mathbf{R}\pi_{\rho\mathrm{dR}*}\pi^{*}_{\rho}\mathscr{F}.\] Since \(\pi_{\rho}\) is quasi-compact, it follows from Proposition 2.1.6 that \[\mathbf{R}\pi_{\rho\mathrm{dR}*}\mathcal{O}_{\mathbb{D}^{d}_{\mathfrak{P}_{K}}(0;\rho)}\otimes_{\mathcal{O}_{\mathfrak{P}_{K}}}\mathscr{F}\xrightarrow{\cong}\mathbf{R}\pi_{\rho\mathrm{dR}*}\pi^{*}_{\rho}\mathscr{F},\] thus \[\mathbf{R}\pi_{\mathrm{dR}*}\pi^{*}\mathscr{F}\xrightarrow{\cong}\mathbf{R}\lim_{\rho}\left(\mathbf{R}\pi_{\rho\mathrm{dR}*}\mathcal{O}_{\mathbb{D}^{d}_{\mathfrak{P}_{K}}(0;\rho)}\otimes_{\mathcal{O}_{\mathfrak{P}_{K}}}\mathscr{F}\right).\] Next, let \(\pi_{\rho^{-}}\colon\mathbb{D}^{d}_{\mathfrak{P}_{K}}(0;\rho^{-})\to\mathfrak{P}_{K}\) denote the projection from the _open_ disc, thus \[\mathbf{R}\lim_{\rho}\left(\mathbf{R}\pi_{\rho\mathrm{dR}*}\mathcal{O}_{\mathbb{D}^{d}_{\mathfrak{P}_{K}}(0;\rho)}\otimes_{\mathcal{O}_{\mathfrak{P}_{K}}}\mathscr{F}\right)\xrightarrow{\cong}\mathbf{R}\lim_{\rho}\left(\mathbf{R}\pi_{\rho^{-}\mathrm{dR}*}\mathcal{O}_{\mathbb{D}^{d}_{\mathfrak{P}_{K}}(0;\rho^{-})}\otimes_{\mathcal{O}_{\mathfrak{P}_{K}}}\mathscr{F}\right).\] It therefore suffices to show that \[\mathcal{O}_{\mathfrak{P}_{K}}\xrightarrow{\cong}\mathbf{R}\pi_{\rho^{-}\mathrm{dR}*}\mathcal{O}_{\mathbb{D}^{d}_{\mathfrak{P}_{K}}(0;\rho^{-})},\] which is a well-known calculation in rigid analytic geometry.

2. This is a simple consequence of adjunction and part (1): \[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X[_{\widehat{\mathbb{A}}^{d}_{\mathfrak{P}}}}}(\pi^{*}\mathscr{F},\pi^{*}\mathscr{G})=\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X[_{\mathfrak{P}}}}(\mathscr{F},\mathbf{R}\pi_{\mathrm{dR}*}\pi^{*}\mathscr{G})=\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X[_{\mathfrak{P}}}}(\mathscr{F},\mathscr{G}).\qed\]

Proof of Theorem 3.2.1.: First consider the claim that \(u^{*}\) induces an equivalence of categories \[\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y^{\prime},\mathfrak{P}^{\prime}).\] To prove this, it suffices to show that the map \[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X[_{\mathfrak{P}}}}(\mathscr{F},\mathscr{G})\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X[_{\mathfrak{P}^{\prime}}}}(u^{*}\mathscr{F},u^{*}\mathscr{G}) \tag{3.2.5}\] is an isomorphism.
Indeed, this immediately implies that \(u^{*}\) is fully faithful, but in conjunction with Lemmas 3.1.3 and 3.2.2, it also implies that \(u^{*}\) is essentially surjective, since the essential image is triangulated and contains all locally free isocrystals supported on locally closed subschemes of \(X\). To prove this 'derived full faithfulness', I can clearly localise on \(\mathfrak{P}\), and I claim that I can also localise on \(X\). To see this, cover \(X\) by open subspaces \(\{X_{i}\}_{1\leq i\leq n}\), and for any \(I\subset\{1,\ldots,n\}\) set \(X_{I}=\cap_{i\in I}X_{i}\). Then, as in [1, Proposition 2.1.8], any sheaf \(\mathscr{E}\) on \(]X[_{\mathfrak{P}}\) has a canonical resolution \[0\to\mathscr{E}\to\bigoplus_{i=1}^{n}j^{\dagger}_{X_{i}}\mathscr{E}\to\ldots\to j^{\dagger}_{X_{\{1,\ldots,n\}}}\mathscr{E}\to 0.\] Applying this to the complexes \(\mathscr{F}\) and \(\mathscr{G}\), it follows that we can deduce the isomorphy of (3.2.5) from the isomorphy of \[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X[_{\mathfrak{P}}}}(j^{\dagger}_{X_{I}}\mathscr{F},j^{\dagger}_{X_{J}}\mathscr{G})\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X[_{\mathfrak{P}^{\prime}}}}(u^{*}j^{\dagger}_{X_{I}}\mathscr{F},u^{*}j^{\dagger}_{X_{J}}\mathscr{G}) \tag{3.2.6}\] for any pair of non-empty subsets \(I,J\subset\{1,\ldots,n\}\). Set \(\mathscr{F}^{\prime}=(j^{\dagger}_{X_{I}}\mathscr{F})|_{]X_{J}[_{\mathfrak{P}}}\) and \(\mathscr{G}^{\prime}=\mathscr{G}|_{]X_{J}[_{\mathfrak{P}}}\); these are constructible complexes on \((X_{J},Y,\mathfrak{P})\). Applying Lemma 1.2.6 identifies (3.2.6) with the map \[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X_{J}[_{\mathfrak{P}}}}(\mathscr{F}^{\prime},\mathscr{G}^{\prime})\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]X_{J}[_{\mathfrak{P}^{\prime}}}}(u^{*}\mathscr{F}^{\prime},u^{*}\mathscr{G}^{\prime}).\] Hence the claim for each \(X_{J}\) implies the claim for \(X\).

If \(g\) is an isomorphism, then by Theorem 1.4.2, we can see that, after localising on \(X\) and \(\mathfrak{P}\), there exists an isomorphism \[]X[_{\mathfrak{P}^{\prime}}\xrightarrow{\cong}\mathbb{D}^{d}_{K}(0;1^{-})\times_{K}]X[_{\mathfrak{P}}\] of germs identifying \(u\) with the second projection. Hence applying Proposition 3.2.3 shows that (3.2.5) is an isomorphism. In general I now argue as in [13, Theorem 7.1.8]. Indeed, it follows from what I have already proved that if \((X,Y)\) is a weakly realisable pair (in the sense of Definition 1.2.3), and \((X,Y,\mathfrak{P})\) is a frame with \(\mathfrak{P}\) smooth around \(X\), then \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) only depends on \((X,Y)\) and not on \(\mathfrak{P}\), and is moreover functorial in the pair \((X,Y)\). I can therefore denote \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) by \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y)\), and the claim amounts to showing that if \((\mathrm{id},g)\colon(X,Y^{\prime})\to(X,Y)\) is a morphism of weakly realisable pairs, with \(g\) proper, then \(g^{*}\colon\mathbf{D}^{b}_{\mathrm{cons}}(X,Y)\to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y^{\prime})\) is an equivalence, or indeed 'derived fully faithful' in the sense that the analogue of (3.2.5) is an isomorphism. The question is local on \(Y\), which I may therefore assume to be quasi-projective, and thus by Chow's lemma there exists a projective morphism \(g^{\prime}\colon Y^{\prime\prime}\to Y^{\prime}\) such that \(g\circ g^{\prime}\) is also projective. Thus I can reduce to the case that \(g\) is projective.
Hence by [10, Lemma 6.5.1], \((\mathrm{id},g)\) extends to a morphism of frames \[(\mathrm{id},g,u)\colon(X,Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\] such that \(u\) is etale around \(X\). Hence \(u\colon]X[_{\mathfrak{P}^{\prime}}\to]X[_{\mathfrak{P}}\) is an isomorphism by Theorem 1.4.2, and so \(u^{*}\) is trivially 'derived fully faithful'.

Next, consider the claims that both \(\mathbf{R}u_{\mathrm{dR}*}\) and \(\mathbf{R}u_{\mathrm{dR}!}[2d]\) are quasi-inverses to \(u^{*}\). To begin with, \[\mathbf{R}u_{\mathrm{dR}*}\colon\mathbf{D}^{b}(\mathscr{D}_{]X[_{\mathfrak{P}^{\prime}}})\to\mathbf{D}^{b}(\mathscr{D}_{]X[_{\mathfrak{P}}})\] is right adjoint to \(u^{*}\). Thus I can argue exactly as above (that is, following the proof of [10, Theorem 7.1.8]) to show that the map \[\mathrm{id}\xrightarrow{\cong}\mathbf{R}u_{\mathrm{dR}*}u^{*}\] is an isomorphism on objects of \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\), by reducing to Proposition 3.2.3(1). It thus follows that \(\mathbf{R}u_{\mathrm{dR}*}\) does indeed descend to a quasi-inverse \[\mathbf{D}^{b}_{\mathrm{cons}}(X,Y^{\prime},\mathfrak{P}^{\prime})\to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\] to \(u^{*}\). To prove the same is true for \(\mathbf{R}u_{\mathrm{dR}!}[2d]\), Proposition 2.1.6 shows that there is, for any \(\mathscr{F}\in\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\), an isomorphism \[\mathscr{F}\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}\mathbf{R}u_{\mathrm{dR}!}\mathcal{O}_{]X[_{\mathfrak{P}^{\prime}}}\xrightarrow{\cong}\mathbf{R}u_{\mathrm{dR}!}u^{*}\mathscr{F} \tag{3.2.7}\] in \(\mathbf{D}^{b}(\mathscr{D}_{]X[_{\mathfrak{P}}})\). It follows from Corollary 1.5.1 that the trace map gives an isomorphism \[\operatorname{Tr}\colon\mathbf{R}u_{\mathrm{dR}!}\mathcal{O}_{]X[_{\mathfrak{P}^{\prime}}}[2d]\to\mathcal{O}_{]X[_{\mathfrak{P}}},\] and therefore an isomorphism \[\mathbf{R}u_{\mathrm{dR}!}u^{*}\mathscr{F}[2d]\xrightarrow{\cong}\mathscr{F}\] for any \(\mathscr{F}\in\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\). This completes the proof.

_3.2.8 Remark_.: In the situation of Theorem 3.2.1, suppose in addition that the morphism of frames \(u\colon(X,Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) admits a section \(v\colon(X,Y,\mathfrak{P})\to(X,Y^{\prime},\mathfrak{P}^{\prime})\). Then \(v^{*}\) is a left inverse to \(u^{*}\), and hence an inverse to \(u^{*}\). In particular, it follows that \[v^{*}\cong\mathbf{R}u_{\mathrm{dR}*}\cong\mathbf{R}u_{\mathrm{dR}!}[2d]\] as functors \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y^{\prime},\mathfrak{P}^{\prime})\to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\).

**3.2.9 Corollary**.: _In the situation of Theorem 3.2.1, the functor_ \[u^{*}\colon\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\to\mathbf{Isoc}_{\mathrm{cons}}(X,Y^{\prime},\mathfrak{P}^{\prime})\] _is an equivalence of categories, with quasi-inverse given by either \(\mathbf{R}u_{\mathrm{dR}*}\) or \(\mathbf{R}u_{\mathrm{dR}!}[2d]\)._

**3.2.10 Corollary**.: _The categories \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) and \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) are independent of \(\mathfrak{P}\) up to canonical equivalence, and functorial in \((X,Y)\)._

I may therefore denote them by \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y)\) and \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y)\) respectively.
Similarly there is the full subcategory \(\mathbf{Isoc}(X,Y)\subset\mathbf{Isoc}_{\mathrm{cons}}(X,Y)\) of locally free isocrystals.

**3.2.11 Corollary**.: _If \((\mathrm{id},g)\colon(X,Y^{\prime})\to(X,Y)\) is a morphism of pairs, with \(g\) proper, then_ \[g^{*}\colon\mathbf{D}^{b}_{\mathrm{cons}}(X,Y)\to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y^{\prime})\] _is a t-exact equivalence of categories. It follows that if \(Y\) is proper, then \(\mathbf{Isoc}_{\mathrm{cons}}(X,Y)\) and \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y)\) are independent of \(Y\) up to canonical equivalence, and functorial in \(X\)._

I will therefore denote these categories by \(\mathbf{Isoc}_{\mathrm{cons}}(X)\) and \(\mathbf{D}^{b}_{\mathrm{cons}}(X)\) respectively; again there is the full subcategory \(\mathbf{Isoc}(X)\subset\mathbf{Isoc}_{\mathrm{cons}}(X)\) of locally free isocrystals. Note that these are all categories of _overconvergent_ objects on \(X\). In particular, what I write as \(\mathbf{Isoc}(X)\) is the category of _overconvergent_ isocrystals on \(X\), and is more commonly written \(\mathbf{Isoc}^{\dagger}(X)\). In my notation, the category of _convergent_ isocrystals on \(X\) is written \(\mathbf{Isoc}(X,X)\).\({}^{15}\) Unfortunately, this does slightly clash with the notation \(\mathbf{Isoc}_{\mathrm{cons}}(\mathfrak{P})\) used above for smooth formal schemes \(\mathfrak{P}\), thus \[\mathbf{Isoc}_{\mathrm{cons}}(\mathfrak{P})=\mathbf{Isoc}_{\mathrm{cons}}(P,P)\neq\mathbf{Isoc}_{\mathrm{cons}}(P).\] The reader should therefore bear in mind that when \(\mathfrak{P}\) is a smooth formal scheme, \(\mathbf{Isoc}_{\mathrm{cons}}(\mathfrak{P})\) means the category of _convergent_ (constructible) isocrystals on \(\mathfrak{P}\) (or equivalently \(P\)), whereas when \(X\) is a variety, \(\mathbf{Isoc}_{\mathrm{cons}}(X)\) will mean the category of _overconvergent_ (constructible) isocrystals on \(X\).

Footnote 15: Another natural choice for the category of convergent isocrystals on \(X\) might be \(\mathbf{Isoc}^{\circ}(X)\), using the fact that passing from \(\mathbf{Isoc}(X)\) to \(\mathbf{Isoc}^{\circ}(X)\) involves restricting to the _interior_ of \(]X[_{\mathfrak{P}}\) (as a subset of \(]Y[_{\mathfrak{P}}\)). I will use this notation later on for log convergent isocrystals.

Anyway, if \((f,g)\colon(X^{\prime},Y^{\prime})\to(X,Y)\) is a morphism of pairs, I will usually write \[f^{*}\colon\mathbf{Isoc}_{\mathrm{cons}}(X,Y)\to\mathbf{Isoc}_{\mathrm{cons}}(X^{\prime},Y^{\prime})\] \[f^{*}\colon\mathbf{D}^{b}_{\mathrm{cons}}(X,Y)\to\mathbf{D}^{b}_{\mathrm{cons}}(X^{\prime},Y^{\prime})\] for the pullback functors, and similarly for morphisms of varieties over \(k\). If \(g\) is a closed immersion, I will also write \[f_{!}\colon\mathbf{Isoc}_{\mathrm{cons}}(X^{\prime},Y^{\prime})\to\mathbf{Isoc}_{\mathrm{cons}}(X,Y)\] \[f_{!}\colon\mathbf{D}^{b}_{\mathrm{cons}}(X^{\prime},Y^{\prime})\to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y)\] for the (t-)exact functors which, on realisations, are given by extension by zero along the inclusion \(]X^{\prime}[_{\mathfrak{P}}\to]X[_{\mathfrak{P}}\) of the locally closed subspace \(]X^{\prime}[_{\mathfrak{P}}\) of \(]X[_{\mathfrak{P}}\). Note that when \(f\) is an open immersion, these are right adjoint to \(f^{*}\), and when \(f\) is a closed immersion, they are left adjoint to \(f^{*}\).
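For ease of reference, let me record these adjunctions explicitly; they are simply a restatement of the previous sentence. For \(\mathscr{F}\) on \((X,Y)\) and \(\mathscr{G}\) on \((X^{\prime},Y^{\prime})\), in either \(\mathbf{Isoc}_{\mathrm{cons}}\) or \(\mathbf{D}^{b}_{\mathrm{cons}}\), there are natural isomorphisms \[\mathrm{Hom}(f^{*}\mathscr{F},\mathscr{G})\cong\mathrm{Hom}(\mathscr{F},f_{!}\mathscr{G})\qquad(f\text{ an open immersion}),\] \[\mathrm{Hom}(f_{!}\mathscr{G},\mathscr{F})\cong\mathrm{Hom}(\mathscr{G},f^{*}\mathscr{F})\qquad(f\text{ a closed immersion}).\] The apparent reversal of the familiar scheme-theoretic open/closed roles reflects the behaviour of tubes already seen in Lemma 2.2.6 and Corollary 2.2.8.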
It is a straightforward check that the tensor product \(\otimes_{\mathcal{O}_{]X[_{\mathfrak{P}}}}\) descends to a well-defined functor on either of \(\mathbf{D}^{b}_{\mathrm{cons}}(X,Y)\) or \(\mathbf{D}^{b}_{\mathrm{cons}}(X)\), which I will denote by \(\otimes_{\mathcal{O}_{X}}\).

### Frobenius structures

Functoriality in \((X,Y)\) now means that it makes sense to talk about Frobenius structures on constructible isocrystals and complexes. Indeed, the absolute Frobenius morphism \(F\colon(X,Y)\to(X,Y)\) induces a \(\sigma\)-linear pullback functor \[F^{*}\colon\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\to\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P}).\]

**3.3.1 Definition**.: A Frobenius structure on an object \(\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) is an isomorphism \[\varphi\colon F^{n*}\mathscr{F}\xrightarrow{\cong}\mathscr{F}\] for some \(n\). An object \(\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) is said to be of Frobenius type if it is an iterated extension of objects admitting a Frobenius structure.

The reader should be warned that, again, this terminology is not completely standard, and what I have termed 'of Frobenius type' has been previously referred to as '\(F\)-able' in the literature. The terminology here matches that used in [1], but differs from that used elsewhere in the literature. Anyway, I will write \(\mathbf{Isoc}_{\mathrm{cons},F}(X,Y,\mathfrak{P})\subset\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) for the full subcategory consisting of objects which are of Frobenius type. There are similarly defined categories \(\mathbf{Isoc}_{\mathrm{cons},F}(X,Y)\), \(\mathbf{Isoc}_{\mathrm{cons},F}(X)\) and \(\mathbf{Isoc}_{\mathrm{cons},F}(\mathfrak{P})\), as well as the obvious analogues for categories of locally free isocrystals.

#### 3.3.2. Definition

A complex \(\mathscr{K}\in\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\) is said to be of Frobenius type if all of its cohomology sheaves are. I will denote by \[\mathbf{D}^{b}_{\mathrm{cons},F}(X,Y,\mathfrak{P})\subset\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\] the full subcategory consisting of objects of Frobenius type. I will similarly write \(\mathbf{D}^{b}_{\mathrm{cons},F}(X,Y)\), \(\mathbf{D}^{b}_{\mathrm{cons},F}(X)\) and \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\) for the analogous categories associated to pairs, varieties, and formal schemes.

### Finite etale pushforwards and pullbacks

There is one last general result I will need on constructible isocrystals, and that is a version of Theorem 3.2.1 in a slightly more general setting. Instead of having two different frames enclosing a single variety \(X\), I will need to consider a morphism of frames inducing a finite etale cover of \(X\). That is, a morphism \[(f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\] such that \(\mathfrak{P}\) is smooth around \(X\), \(u\) is smooth around \(X^{\prime}\) of relative dimension \(d\), \(g\) is proper, and \(f\) is finite etale. For locally free isocrystals, the result I require can be proved rather abstractly.

#### 3.4.1. Proposition

_The functor_ \[f^{*}\colon\mathbf{Isoc}(X,Y)\to\mathbf{Isoc}(X^{\prime},Y^{\prime})\] _admits a simultaneous left and right adjoint \(f_{*}\)._
_Moreover, for any \(\mathscr{F}\in\mathbf{Isoc}(X,Y)\), \(\mathscr{G}\in\mathbf{Isoc}(X^{\prime},Y^{\prime})\) the compositions_ \[\mathscr{F}\to f_{*}f^{*}\mathscr{F}\to\mathscr{F}\] \[\mathscr{G}\to f^{*}f_{*}\mathscr{G}\to\mathscr{G}\] _are the identity maps of \(\mathscr{F}\) and \(\mathscr{G}\) respectively._

Proof.: Let \((X^{\prime\prime},Y^{\prime\prime})\to(X^{\prime},Y^{\prime})\) be a morphism of pairs with \(Y^{\prime\prime}\to Y^{\prime}\) proper and \(X^{\prime\prime}\to X^{\prime}\) a Galois closure of \(f\). Let \(G\) be the Galois group of \(X^{\prime\prime}/X\), and \(H\leq G\) that of \(X^{\prime\prime}/X^{\prime}\). The morphism \((f,g)\colon(X^{\prime\prime},Y^{\prime\prime})\to(X,Y)\) of pairs is then one of effective descent for locally free isocrystals: indeed, thanks to [1, Theorem 4.1] this can be proved word for word the same as the corresponding result [1, Theorem 5.1] when \(Y\) is proper. It therefore follows in the usual manner that \(\mathbf{Isoc}(X,Y)\) is equivalent to the category of \(G\)-equivariant objects in \(\mathbf{Isoc}(X^{\prime\prime},Y^{\prime\prime})\). Similarly, \(\mathbf{Isoc}(X^{\prime},Y^{\prime})\) is equivalent to the category of \(H\)-equivariant objects in \(\mathbf{Isoc}(X^{\prime\prime},Y^{\prime\prime})\). I now define \(f_{*}\) to be the composite \[\mathbf{Isoc}(X^{\prime},Y^{\prime})\xrightarrow{\cong}H\mathbf{-Isoc}(X^{\prime\prime},Y^{\prime\prime})\xrightarrow{\mathrm{Ind}^{G}_{H}}G\mathbf{-Isoc}(X^{\prime\prime},Y^{\prime\prime})\xleftarrow{\cong}\mathbf{Isoc}(X,Y).\] The verification that it satisfies the claimed properties is a straightforward exercise in representation theory.

As I said, this result was proved rather abstractly. With additional hypotheses on \((X,Y,\mathfrak{P})\), however, it is possible to describe \(f_{*}\) in the expected explicit manner, and generalise from locally free to constructible isocrystals.

**3.4.2 Theorem**.: _Assume that \(\mathfrak{P}\) is affine, and that \(X\) is the complement of a hypersurface in \(Y\). Then:_

1. _The functors_ \[\mathbf{R}]f[_{u\mathrm{dR}*},\mathbf{R}]f[_{u\mathrm{dR}!}[2d]\colon\mathbf{D}^{b}(\mathscr{D}_{]X^{\prime}[_{\mathfrak{P}^{\prime}}})\to\mathbf{D}^{b}(\mathscr{D}_{]X[_{\mathfrak{P}}})\] _induce isomorphic t-exact functors_ \[\mathbf{D}^{b}_{\mathrm{cons}}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P}),\] _which send \(\mathbf{Isoc}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\) into \(\mathbf{Isoc}(X,Y,\mathfrak{P})\)._
2. _These functors are both left and right adjoints to_ \[]f[^{*}_{u}\colon\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P})\to\mathbf{D}^{b}_{\mathrm{cons}}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime}).\]
3. _For any objects_ \[\mathscr{F}\in\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P}),\quad\mathscr{G}\in\mathbf{D}^{b}_{\mathrm{cons}}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime}),\] _the compositions_ \[\mathscr{F}\to\mathbf{R}]f[_{u\mathrm{dR}*}]f[^{*}_{u}\mathscr{F}\cong\mathbf{R}]f[_{u\mathrm{dR}!}]f[^{*}_{u}\mathscr{F}[2d]\to\mathscr{F}\] \[\mathscr{G}\to]f[^{*}_{u}\mathbf{R}]f[_{u\mathrm{dR}!}\mathscr{G}[2d]\cong]f[^{*}_{u}\mathbf{R}]f[_{u\mathrm{dR}*}\mathscr{G}\to\mathscr{G}\] _are the identity maps._

#### 3.4.3. Remark

The additional hypotheses here on \(\mathfrak{P}\) and \(X\) are surely unnecessary.
However, it seems rather difficult to construct a global comparison morphism between the functors \(\mathbf{R}]f[_{u\mathrm{dR}*}\) and \(\mathbf{R}]f[_{u\mathrm{dR}!}[2d]\) in full generality, and these hypotheses provide a way of doing this. The proof of Theorem 3.4.2 will be in several stages.

**3.4.4 Lemma**.: _It suffices to prove Theorem 3.4.2 under the additional assumptions that \(u\) is proper, and etale around \(X^{\prime}\) (i.e. \(d=0\))._

Proof.: Note that since \(\mathfrak{P}\) is affine, so is \(Y\), and since \(X\) is the complement of a hypersurface in \(Y\), \(X\) is affine as well. Since \(f\) is finite etale, it is proved in [Stacks, Lemma 00U9] that \(f\) is a global complete intersection; for the reader's convenience I will recall the argument here. Choose a presentation of \(X^{\prime}\), that is, a closed immersion \(X^{\prime}\to\mathbb{A}^{n-1}_{X}\) for some \(n\). Let \(\mathcal{I}\) denote the ideal of \(X^{\prime}\) inside \(\mathbb{A}^{n-1}_{X}\). Since \(f\) is etale, the conormal exact sequence then implies that \[\mathcal{I}/\mathcal{I}^{2}\xrightarrow{\cong}\Omega^{1}_{\mathbb{A}^{n-1}_{X}/X}\otimes_{\mathcal{O}_{\mathbb{A}^{n-1}_{X}}}\mathcal{O}_{X^{\prime}},\] thus the conormal sheaf of \(X^{\prime}\) inside \(\mathbb{A}^{n-1}_{X}\) is free. Now choose functions \(s_{1},\dots,s_{n-1}\in\mathcal{I}\) lifting a basis of \(\mathcal{I}/\mathcal{I}^{2}\). Then Nakayama's lemma implies that there exists some \(t\in 1+\mathcal{I}\) such that \(\mathcal{I}[1/t]=(s_{1},\dots,s_{n-1})[1/t]\). Letting \(z_{n}\) be a new indeterminate, and setting \(s_{n}=tz_{n}-1\), it now follows that \(X^{\prime}=V(s_{1},\dots,s_{n})\subset\mathbb{A}^{n}_{X}\).

The \(s_{i}\) can now be successively lifted to provide a proper morphism \(\mathfrak{P}^{\prime\prime}\to\mathfrak{P}\) lifting \(f\), which is etale around \(X^{\prime}\). First, homogenising the \(s_{i}\) makes \(X^{\prime}\) into a complete intersection inside \(\mathbb{P}^{n}_{X}\), given by the zero locus of sections \[s_{i}\in\Gamma(\mathbb{P}^{n}_{X},\mathcal{O}_{\mathbb{P}^{n}_{X}}(n_{i}))\] for some positive integers \(n_{i}\). Choose \(s\in\Gamma(Y,\mathcal{O}_{Y})\) so that \(X=D(s)\). Now clear denominators to ensure that the \(s_{i}\) extend to sections \[s^{\prime}_{i}\in\Gamma(\mathbb{P}^{n}_{Y},\mathcal{O}_{\mathbb{P}^{n}_{Y}}(n_{i})),\] and then lift the \(s^{\prime}_{i}\) to sections \[\tilde{s}^{\prime}_{i}\in\Gamma(\widehat{\mathbb{P}}^{n}_{\mathfrak{P}},\mathcal{O}_{\widehat{\mathbb{P}}^{n}_{\mathfrak{P}}}(n_{i})).\] Note that the Jacobian criterion for smoothness implies that the zero locus \[V(\tilde{s}^{\prime}_{1},\dots,\tilde{s}^{\prime}_{n})\subset\widehat{\mathbb{P}}^{n}_{\mathfrak{P}}\] is etale over \(\mathfrak{P}\) in a neighbourhood of \(X^{\prime}\). I now let \(\mathfrak{P}^{\prime\prime}\) denote the maximal closed formal subscheme of \(V(\tilde{s}^{\prime}_{1},\dots,\tilde{s}^{\prime}_{n})\) which is flat over \(\mathfrak{P}\). Thus the locally closed immersion \(X^{\prime}\to V(\tilde{s}^{\prime}_{1},\dots,\tilde{s}^{\prime}_{n})\) factors through \(\mathfrak{P}^{\prime\prime}\). Let \(Y^{\prime\prime}\) denote the closure of \(X^{\prime}\) inside \(P^{\prime\prime}\). The upshot of all of this is that there exists a morphism of frames \[(f,g^{\prime},u^{\prime})\colon(X^{\prime},Y^{\prime\prime},\mathfrak{P}^{\prime\prime})\to(X,Y,\mathfrak{P})\] such that \(u^{\prime}\) is proper, and etale around \(X^{\prime}\). But now repeated applications of Theorem 3.2.1 imply, in the usual way, that proving the theorem for \((f,g^{\prime},u^{\prime})\) is equivalent to proving it for \((f,g,u)\).
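It may be worth seeing the complete intersection presentation of the above proof in the simplest possible example; this is purely illustrative and plays no role in the argument, and I assume here that \(p\neq 2\). Take \[X=\operatorname{Spec}k[x,x^{-1}],\qquad X^{\prime}=\operatorname{Spec}k[x,x^{-1}][y]/(y^{2}-x),\] with \(f\colon X^{\prime}\to X\) the evident finite etale double cover. Embedding \(X^{\prime}\) into \(\mathbb{A}^{1}_{X}\) via \(y\), the ideal is \(\mathcal{I}=(s_{1})\) with \(s_{1}=y^{2}-x\); since \(\mathrm{d}s_{1}=2y\,\mathrm{d}y\) and \(2y\) is a unit on \(X^{\prime}\) (as \(y^{2}=x\) is one), the class of \(s_{1}\) is a basis of \(\mathcal{I}/\mathcal{I}^{2}\). One may therefore take \(t=1\) in the argument above, and \(X^{\prime}=V(s_{1})\subset\mathbb{A}^{1}_{X}\) is visibly a global complete intersection; any lift \(\tilde{s}_{1}=y^{2}-\tilde{x}\) of \(s_{1}\), with \(\tilde{x}\) a lift of \(x\), then cuts out a formal scheme which is etale over \(\mathfrak{P}\) in a neighbourhood of \(X^{\prime}\), exactly as in the proof.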
In the case where \(u\) is etale, there is an obvious comparison map between \(\mathbf{R}|f|_{\mathrm{udR}*}\) and \(\mathbf{R}|f|_{\mathrm{udR}!}\,[2d]\), namely the 'forget support' map. **3.4.5 Lemma**.: _In the situation of Theorem 3.4.2, assume that \(u\) is proper and that \(d=0\). Then the natural 'forget supports' map_ \[\mathbf{R}|f|_{\mathrm{udR}!}\to\mathbf{R}|f|_{\mathrm{udR}*}\] _is an isomorphism of functors \(\mathbf{D}^{b}(\mathscr{D}_{|X^{\prime}[_{\mathfrak{P}^{\prime}}]})\to \mathbf{D}^{b}(\mathscr{D}_{|X[_{\mathfrak{P}}})\)._ Proof.: The natural map \(X^{\prime}\to u^{-1}(X)\) is a closed immersion (since it is a proper, locally closed immersion), and since \(u\) is etale around \(X^{\prime}\), there exists an open subscheme \(U\subset u^{-1}(X)\) containing \(X^{\prime}\) which is etale over \(X\). It follows that \(X^{\prime}\to U\) is both etale and a closed immersion, thus \(X^{\prime}\) is in fact open in \(U\). Hence \(X^{\prime}\) is both open and closed in \(u^{-1}(X)\). Hence \(|X^{\prime}[_{\mathfrak{P}^{\prime}}\) is both open and closed in \(\big{]}u^{-1}(X)\big{[}_{\mathfrak{P}^{\prime}}=u^{-1}\,]X[_{\mathfrak{P}}\). Since \(u\) is proper, the 'forget supports' map \[\mathbf{R}u_{\mathrm{dR}!}\to\mathbf{R}u_{\mathrm{dR}*}\] is an isomorphism of functors from \(\mathbf{D}^{b}(\mathscr{D}_{u^{-1}]X[_{\mathfrak{P}}})\) to \(\mathbf{D}^{b}(\mathscr{D}_{|X[_{\mathfrak{P}}})\). Since \(|X^{\prime}[_{\mathfrak{P}^{\prime}}\) is open and closed in \(u^{-1}\,]X[_{\mathfrak{P}}\), we therefore deduce the 'forget supports' map \[\mathbf{R}|f|_{\mathrm{udR}!}\to\mathbf{R}|f|_{\mathrm{udR}*}\] is an isomorphism as required. Proof of Theorem 3.4.2(1).: We may assume that \(u\) is proper, and that \(d=0\), in which case \[\mathbf{R}|f|_{\mathrm{udR}!}\to\mathbf{R}|f|_{\mathrm{udR}*}\] is an isomorphism. It remains to show that these functors induce t-exact functors \[\mathbf{D}^{b}_{\mathrm{cons}}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime}) \to\mathbf{D}^{b}_{\mathrm{cons}}(X,Y,\mathfrak{P}),\] or in other words that the functor \(\mathbf{R}|f|_{\mathrm{udR}*}\) sends any \[\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons}}(X^{\prime},Y^{\prime},\mathfrak{P }^{\prime})\subset\mathbf{D}^{b}_{\mathrm{cons}}(X^{\prime},Y^{\prime}, \mathfrak{P}^{\prime})\] to an object of \[\mathbf{Isoc}_{\mathrm{cons}}(X,Y,\mathfrak{P})\subset\mathbf{D}^{b}_{\mathrm{ cons}}(X,Y,\mathfrak{P}).\] This claim is additive over exact sequences in \(\mathscr{F}\), moreover, every stratification of \(X^{\prime}\) admits a refinement which is the pullback via \(f\) of a stratification of \(X\). We may therefore assume that there exists a locally closed subscheme \(i\colon Z\to X\), and a locally free isocrystal \(\mathscr{G}\) on \(Z^{\prime}:=f^{-1}(X)\), such that, writing \(i^{\prime}\) for the induced immersion \(i^{\prime}\colon Z^{\prime}\to X^{\prime}\), \(\mathscr{F}=i^{\prime}_{i}\mathscr{G}\). By Noetherian induction, I can always replace \(Z\) by an open subscheme, hence I can ensure \(Z\) shares the hypothesis with \(X\) that it is the complement of a hypersurface inside its closure in \(P\). 
Writing \(f^{\prime}\colon Z^{\prime}\to Z\) for the induced morphism, transitivity of proper pushforwards implies that \[\mathbf{R}]f[_{\mathrm{udR}^{\dagger}}\mathscr{F} =\mathbf{R}]f[_{\mathrm{udR}}i^{\prime}_{!}\mathscr{G}\] \[=i_{\mathbf{R}}]f^{\prime}[_{\mathrm{udR}^{\dagger}}\mathscr{G}.\] Thus, replacing \(X\) with \(Z\), I can assume that \(\mathscr{F}\) itself is locally free on \(]X^{\prime}[_{\mathfrak{U}^{\prime}}\). Now, thanks to Proposition 3.4.1 above, there exists a locally free isocrystal \(\mathscr{G}\) on \((X,Y,\mathfrak{P})\) such that \(\mathscr{F}\) is a direct summand of \(]f[^{*}_{u}\mathscr{G}\). Thus it suffices to prove the claim for \(\mathscr{F}=]f[^{\circ}_{u}\mathscr{G}\), which, via the projection formula (Lemma 1.4.6), reduces to the case \(\mathscr{F}=\mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}}\). To prove the result in this case, note that since \(u\) is etale in a neighbourhood of \(X^{\prime}\), the non-etale locus of \(u\colon\mathfrak{P}^{\prime}_{K}\to\mathfrak{P}_{K}\) is a closed analytic subspace of \(\mathfrak{P}^{\prime}_{K}\) disjoint from \(]X^{\prime}[_{\mathfrak{U}^{\prime}}\). It follows that \(\Omega^{\bullet}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}[_{\mathfrak{U}^{ \prime}}]X[_{\mathfrak{U}^{\prime}}]}=\mathcal{O}_{]X^{\prime}[_{\mathfrak{U} ^{\prime}}}\), thus \(\mathbf{R}]f[_{\mathrm{udR}^{\dagger}}\mathcal{O}_{]X^{\prime}[_{\mathfrak{U} ^{\prime}}}=\mathbf{R}]f[_{\mathrm{udR}*}\mathcal{O}_{]X^{\prime}[_{\mathfrak{ U}^{\prime}}}=\mathbf{R}]f[_{\mathrm{udR}*}\mathcal{O}_{]X^{\prime}[_{ \mathfrak{U}^{\prime}}}\), and I will show that \(\mathbf{R}]f[_{\mathrm{udR}*}\mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}}\), is a coherent \(\mathcal{O}_{]X[_{\mathfrak{U}}}\)-module sitting in degree \(0\). I first show that \(\mathbf{R}]f[_{\mathrm{udR}*}\mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}}\) has coherent cohomology sheaves. To see this, note that since \(u\) is proper, \(u:u^{-1}]Y[_{\mathfrak{U}^{\prime}}\to]Y[_{\mathfrak{U}^{\prime}}\) is also proper, and hence by applying Proposition 1.4.7 to the Cartesian diagram shows that \(\mathbf{R}u_{*}\mathcal{O}_{u^{-1}]X[_{\mathfrak{U}^{\prime}}}\) has coherent cohomology sheaves on \(]X[_{\mathfrak{U}^{\prime}}\). Since \(]X^{\prime}[_{\mathfrak{U}^{\prime}}\) is open and closed in \(u^{-1}]X[_{\mathfrak{U}^{\prime}}\) (as was shown during the proof of Lemma 3.4.5), \(\mathbf{R}]f[_{\mathrm{ud}*}\mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}}\) is a direct summand of \(\mathbf{R}u_{*}\mathcal{O}_{u^{-1}]X[_{\mathfrak{U}^{\prime}}}\), and thus also has coherent cohomology sheaves. I next show that the higher cohomology sheaves of \(\mathbf{R}]f[_{\mathrm{ud}*}\mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}}\) vanish. To see this, consider \(\mathbf{R}^{q}u_{*}\mathcal{O}_{u^{-1}]Y[_{\mathfrak{U}^{\prime}}}\) for some \(q>0\), which is a coherent sheaf on \(]X[_{\mathfrak{U}^{\prime}}\). After restricting to \(]X[_{\mathfrak{U}^{\prime}}\), this contains \(\mathbf{R}^{q}]f[_{\mathrm{ud}*}\mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}}\), as a direct summand. 
Now Proposition 1.3.3 implies that there exists an open neighbourhood \(V\) of \(]X[_{\mathfrak{U}^{\prime}}\) in \(]Y[_{\mathfrak{U}^{\prime}}\), and a decomposition \[\left(\mathbf{R}^{q}u_{*}\mathcal{O}_{u^{-1}]Y[_{\mathfrak{U}^{\prime}}} \right)\Big{|}_{V}\cong\mathscr{F}_{1}\oplus\mathscr{F}_{2}\] such that \(\mathscr{F}_{1}]_{]X[_{\mathfrak{U}^{\prime}}}=\mathbf{R}^{q}]f[_{\mathrm{ud}*} \mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}}\). Let \(]X[_{\mathfrak{U}^{\prime}}^{\circ}\) and \(]X^{\prime}[_{\mathfrak{U}^{\prime}}^{\circ}\) denote the interiors of \(]X[_{\mathfrak{U}^{\prime}}\) and \(]X^{\prime}[_{\mathfrak{U}^{\prime}}\) respectively. Since \(u\) is proper, and etale in a neighbourhood of \(X^{\prime}\), we know that the map \[u\colon\,]\,X^{\prime}[_{\mathfrak{U}^{\prime}}^{\circ}\to]X[_{\mathfrak{U}^{ \prime}}^{\circ}\] of adic spaces is etale, and in fact every point \(x\in]X[_{\mathfrak{U}^{\prime}}^{\circ}\) has only finitely many preimages under this map. Hence, by Proposition 1.4.7, the stalk of \(\mathbf{R}^{q}]f[_{\mathrm{ud}*}\mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{ \prime}}}=\mathbf{R}^{q}]f[_{\mathrm{ud}}\mathcal{O}_{]X^{\prime}[_{ \mathfrak{U}^{\prime}}}\) at any maximal point of \(]X[_{\mathfrak{U}^{\prime}}^{\circ}\) is zero. The support of \(\mathscr{F}_{1}\) is therefore a closed analytic subspace of \(V\), not containing any maximal point of \(]X[_{\mathfrak{U}^{\prime}}^{\circ}\). It is therefore disjoint from \(]X[_{\mathfrak{U}^{\prime}}\), and so \(\mathbf{R}^{q}]f[_{\mathrm{ud}*}\mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{ \prime}}}=0\). The last thing left to prove is that the connection on \(]f[_{\mathrm{ud}*}\mathcal{O}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}}\) is overconvergent, which follows from [13, Theorem 4.3.9], since any set of local co-ordinates on \((X,Y,\mathfrak{P})\) is also a set of local co-ordinates on \((X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\). Thus neither the derivations, nor the spaces of local sections \(\Gamma(V_{\delta}^{*},\mathcal{E})\), occurring in the statement of the Theorem, change on passing from \(j^{\dagger}_{X^{\prime}}\mathcal{O}_{]Y^{\prime}[_{\mathfrak{U}^{\prime}}}\) to \(]g[_{\mathrm{ud}*}\,j^{\dagger}_{X^{\prime}}\mathcal{O}_{]Y^{\prime}[_{ \mathfrak{U}^{\prime}}}\) (at least up to cofinality in \(\lambda\)). Proof of Theorem 3.4.2 (2) and (3).: Again, I can assume that \(u\) is proper, and that \(d=0\). Then \(\mathbf{R}]f[_{\mathrm{udR}*}\) is clearly a right adjoint to \(]f[^{*}_{u}\), since this is already true as a functor \[\mathbf{D}(\mathscr{D}_{]X^{\prime}[_{\mathfrak{U}^{\prime}}})\to\mathbf{D}( \mathscr{D}_{]X[_{\mathfrak{U}^{\prime}}}).\] I therefore need to prove that \(\mathbf{R}]f[_{\text{udR}*}\) is also a left adjoint. To construct the counit \[\varepsilon_{\mathscr{F}}\colon\mathbf{R}]f[_{\text{udR}*}\,]f[^{*}_{u}\mathscr{ F}\to\mathscr{F},\] I appeal to Proposition 2.1.6 (note that \(\mathbf{R}]f[_{\text{udR}*}=\mathbf{R}]f[_{\text{udR}!}\) here) to show that \[\mathscr{F}\otimes^{\mathbf{L}}_{\mathcal{O}_{|X|_{\mathfrak{Y}}}}\mathbf{R}]f [_{\text{udR}*}\,\mathcal{O}_{]X^{\prime}[_{\mathfrak{Y}^{\prime}}}\xrightarrow{ \cong}\mathbf{R}]f[_{\text{udR}*}\,]f[^{*}_{u}\mathscr{F}.\] Now, by uniqueness of (right) adjoints, I know that \(\mathbf{R}]f[_{\text{udR}*}\) coincides on locally free objects with the abstract adjoint coming from Proposition 3.4.1. 
The counit \[\varepsilon_{\mathcal{O}_{|X|_{\mathfrak{Y}}}}:\mathbf{R}]f[_{\text{udR}*}\, \mathcal{O}_{]X^{\prime}[_{\mathfrak{Y}^{\prime}}}\to\mathcal{O}_{]X[_{ \mathfrak{Y}}}\] may therefore be tensored with \(\mathscr{F}\) to provide the required morphism \[\varepsilon_{\mathscr{F}}\colon\mathbf{R}]f[_{\text{udR}*}\,]f[^{*}_{u} \mathscr{F}\to\mathscr{F}.\] I next construct the unit \[\eta_{\mathscr{G}}\colon\mathscr{G}\to]f[^{*}_{u}\,\mathbf{R}]f[_{\text{udR}*} \,\mathscr{G}\] in an entirely similar way. Indeed, I first construct a natural morphism \[]f[^{*}_{u}\,\mathbf{R}]f[_{\text{udR}*}\,\mathscr{G}\to\mathscr{G}\otimes^{ \mathbf{L}}_{\mathcal{C}_{]X^{\prime}[_{\mathfrak{Y}^{\prime}}}}]f[^{*}_{u}\, \mathbf{R}]f[_{\text{udR}*}\,\mathcal{O}_{]X^{\prime}[_{\mathfrak{Y}^{\prime}}} \tag{3.4.6}\] of endofunctors of \(\mathbf{D}^{b}_{\text{cons}}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\). To do this, note that by adjunction, it suffices to construct \[\mathbf{R}]f[_{\text{udR}*}\,\mathscr{G}\to\mathbf{R}]f[_{\text{udR}*}\,( \mathscr{G}\otimes^{\mathbf{L}}_{]X^{\prime}[_{\mathfrak{Y}^{\prime}}}]f[^{*}_ {u}\,\mathbf{R}]f[_{\text{udR}*}\,\mathcal{O}_{]X^{\prime}[_{\mathfrak{Y}^{ \prime}}}).\] By the projection formula (note that \(\mathbf{R}\,]f[_{\text{udR}*}\,\mathcal{O}_{]X^{\prime}[_{\mathfrak{Y}^{ \prime}}}\) is a locally free isocrystal) this amounts to producing a natural morphism \[\mathbf{R}]f[_{\text{udR}*}\,\mathscr{G}\to\mathbf{R}]f[_{\text{udR}*}\, \mathscr{G}\otimes^{\mathbf{L}}_{\mathcal{O}_{]X[_{\mathfrak{Y}^{\prime}}}} \mathbf{R}]f[_{\text{udR}*}\,\mathcal{O}_{]X^{\prime}[_{\mathfrak{Y}^{\prime}}}.\] For this, we just take the natural map \[\mathcal{O}_{]X[_{\mathfrak{Y}}}\to\mathbf{R}]f[_{\text{udR}*}\,\mathcal{O}_{ ]X^{\prime}[_{\mathfrak{Y}^{\prime}}}\] and tensor with \(\mathbf{R}]f[_{\text{udR}*}\,\mathscr{G}\). Next, I claim that (3.4.6) is an isomorphism. Since \(u^{*}\) commutes with extension by zero, as does \(\mathbf{R}]f[_{\text{udR}!}=\mathbf{R}]f[_{\text{udR}*}\), the usual devissage argument reduces to the case when \(\mathscr{F}\) is locally free. In this case, as remarked above, \(\mathbf{R}]f[_{\text{udR}*}\) coincides with the functor \(f_{*}\) from Proposition 3.4.1, and the assertion then just reduces to elementary computations in the representation theory of finite groups. I can therefore define \[\eta_{\mathscr{G}}\colon\mathscr{G}\to]f[^{*}_{u}\,\mathbf{R}]f[_{\text{udR}* }\,\mathscr{G}\] by tensoring the unit \[\eta_{\mathcal{O}_{]X^{\prime}[_{\mathfrak{Y}^{\prime}}}}:\mathcal{O}_{]X^{ \prime}[_{\mathfrak{Y}^{\prime}}}\to]f[^{*}_{u}\,\mathbf{R}]f[_{\text{udR}*}\, \mathcal{O}_{]X^{\prime}[_{\mathfrak{Y}^{\prime}}}\] from Proposition 3.4.1 with \(\mathscr{G}\). Now, checking that \(\varepsilon\) and \(\eta\) give rise to an adjunction amount to checking that the natural transformations \[\mathbf{R}]f[_{\text{udR}*}\,\mathscr{G}\xrightarrow{\eta}\mathbf{R}]f[_{ \text{udR}*}\,]f[_{\text{udR}*}\,\mathscr{G}\xrightarrow{\varepsilon}\, \mathbf{R}]f[_{\text{udR}*}\,\mathscr{G}\] \[]f[^{*}_{u}\mathscr{F}\xrightarrow{\eta}]f[^{*}_{u}\,\mathbf{R}]f[_{ \text{udR}*}\,]f[^{*}_{u}\mathscr{F}\xrightarrow{\varepsilon}\,]f[^{*}_{u}\, \mathscr{F}\] are the identity. Unwinding the definitions, this follows from Proposition 3.4.1. The final claim, concerning the compositions of the units and counits giving the identity maps, reduces, by the construction, to the locally free case considered in Proposition 3.4.1. ## 4. 
Overholonomic \(\mathscr{D}^{\dagger}\)-modules In this section, I will briefly recall the basic theory of overholonomic \(\mathscr{D}^{\dagger}\)-modules on formal schemes and varieties, mostly following the exposition in [1, SS1]. I will also introduce the dual constructible t-structure on the category of overholonomic complexes of \(\mathscr{D}^{\dagger}\)-modules, which will be the one matching up with the natural t-structure on \(\mathbf{D}^{b}_{\mathrm{cons}}\) via the overconvergent Riemann-Hilbert correspondence. ### Cohomological formalism of overholonomic \(\mathscr{D}^{\dagger}\)-modules: formal schemes Let \(\mathfrak{P}\) be a smooth formal scheme, and \(\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}}\) Berthelot's ring of overconvergent differential operators of \(\mathfrak{P}\). Then Caro has defined in [10] the notion of an overholonomic complex of \(\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}}\)-modules, giving rise to a full, triangulated subcategory \[\mathbf{D}^{b}_{\mathrm{hol}}(\mathfrak{P})\subset\mathbf{D}^{b}_{\mathrm{ coh}}(\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}}).\] The category \(\mathbf{D}^{b}_{\mathrm{coh}}(\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}})\) admits a \(\sigma\)-linear Frobenius pullback functor \[F^{*}\colon\mathbf{D}^{b}_{\mathrm{coh}}(\mathscr{D}^{\dagger}_{\mathfrak{P} \mathbb{Q}})\to\mathbf{D}^{b}_{\mathrm{coh}}(\mathscr{D}^{\dagger}_{\mathfrak{ P}\mathbb{Q}}),\] which is t-exact for the natural t-structure [1, Theoreme 4.2.4]. **4.1.1 Definition**.: An object \(\mathcal{M}\in\mathbf{D}^{b}_{\mathrm{coh}}(\mathscr{D}^{\dagger}_{\mathfrak{ P}\mathbb{Q}})\) is said to be 'of Frobenius type' if it's cohomology sheaves are iterated extensions of objects admitting some Frobenius structure. Again, the terminology here is slightly non-standard. I will denote by \[\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\subset\mathbf{D}^{b}_{\mathrm{ hol}}(\mathfrak{P})\] the full subcategory on objects which are of Frobenius type. Thanks to work of Caro and Caro-Tsuzuki, this category admits a good formalism of cohomological operations, which I now describe. First of all, there are the duality and tensor product functors \[\mathbf{D}_{\mathfrak{P}}\colon\mathbf{D}^{b}_{\mathrm{hol},F}( \mathfrak{P})^{\mathrm{op}}\to\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\] \[-\otimes_{\mathcal{O}_{\mathfrak{P}}}-\colon\mathbf{D}^{b}_{ \mathrm{hol},F}(\mathfrak{P})\times\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P })\to\mathbf{D}^{b}_{\mathrm{hol}}(\mathfrak{P})\] defined in [1, SS4.3.10] and [1, SS2.1]. These preserve overholonomicity by [1, Corollary 3.4] and [1, Theoreme 4.2.4] respectively. Caro then defines \[\widetilde{\otimes}_{\mathcal{O}_{\mathfrak{P}}}:=\otimes_{\mathcal{O}_{ \mathfrak{P}}}[-\dim\mathfrak{P}]\] which will turn out to match up better with the tensor product of constructible isocrystals. For any locally closed subscheme \(X\hookrightarrow P\), Caro has defined the idempotent endofunctor \[\mathbf{R}\underline{\Gamma}^{\dagger}_{X}\colon\mathbf{D}^{b}_{\mathrm{hol},F }(\mathfrak{P})\to\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\] of sections with support on \(X\), see [1, SS1.1.5] and the references given there. This works as follows. If \(X\) is closed in \(P\), then we can write it as an intersection of divisors \(X=\cap_{i=1}^{n}D_{i}\). 
For each \(D_{i}\) Berthelot defined functor \(\mathbf{R}\underline{\Gamma}^{\dagger}_{D_{i}}\) in [1, SS4.4.4-4.4.5], and Caro defines \[\mathbf{R}\underline{\Gamma}^{\dagger}_{X}:=\mathbf{R}\underline{\Gamma}^{ \dagger}_{D_{1}}\circ\ldots\circ\mathbf{R}\underline{\Gamma}^{\dagger}_{D_{n}}.\] He shows that this doesn't depend on the choice of the \(D_{i}\), and that the natural morphism \[\mathbf{R}\underline{\Gamma}^{\dagger}_{X}\to\mathrm{id}\] is an isomorphism when evaluated on overholonomic complexes supported (set theoretically) on \(X\). He further shows that the morphism \[\mathbf{R}\underline{\Gamma}^{\dagger}_{X}\to\mathrm{id}\] has a functorial cone, denoted \((^{\dagger}X)\). If \(X\) is locally closed, with closure \(Y\) in \(P\), Caro then defines \[\mathbf{R}\underline{\Gamma}^{\dagger}_{X}:=(^{\dagger}Y\setminus X)\circ \mathbf{R}\underline{\Gamma}^{\dagger}_{Y}.\] The order of composition here can be reversed, and the result is this same if \(Y\) and \(Y\setminus X\) are replaced by arbitrary closed subschemes \(Z\) and \(T\) of \(P\) such that \(X=Z\setminus T\). The functor \(\mathbf{R}\underline{\Gamma}_{X}^{\dagger}\) only depends on \(X_{\operatorname{red}}\). If \(u\colon\mathfrak{P}\to\mathfrak{Q}\) is a morphism of smooth formal schemes, then Berthelot defined in [1, SS4.3] the functors \[u^{!}\colon\mathbf{D}^{b}_{\operatorname{coh}}(\mathscr{D}^{ \dagger}_{\mathfrak{Q}\mathbb{Q}}) \to\mathbf{D}^{b}(\mathscr{D}^{\dagger}_{\mathfrak{Q}\mathbb{Q}})\] \[u_{+}\colon\mathbf{D}^{b}_{\operatorname{coh}}(\mathscr{D}^{ \dagger}_{\mathfrak{Q}\mathbb{Q}}) \to\mathbf{D}^{b}(\mathscr{D}^{\dagger}_{\mathfrak{Q}\mathbb{Q}}).\] Caro showed in [1, Theoreme 3.8] that \(u^{!}\) preserves overlononomicity, and moreover in [1, Theoreme 3.9] that so does \(u_{+}\) whenever \(u\) is proper. In this case, there is the natural duality isomorphism \[u_{+}\circ\mathbf{D}_{\mathfrak{P}}\xrightarrow{\cong}\mathbf{D}_{\mathfrak{ Q}}\circ u_{+}\] by [13, Corollaire 7.3], and \((u_{+},u^{!})\) form an adjoint pair by [13, Theoreme 7.4, and following N.B. i)]. For arbitrary \(u\), \(u^{!}\) is compatible with tensor product in the sense that \[u^{!}(-\widehat{\otimes}_{\mathcal{O}_{\mathfrak{Q}}}-)\xrightarrow{\cong}u^ {!}(-)\widehat{\otimes}_{\mathcal{O}_{\mathfrak{P}}}u^{!}(-),\] see [1, Proposition 2.1.9]. If is a Cartesian diagram of smooth formal schemes, with \(u\) proper, then there is a natural base change isomorphism \[f^{!}u_{+}\xrightarrow{\cong}u^{\prime}_{+}f^{\prime\dagger}.\] Indeed, the general case can be reduced to the two special cases where \(f\) is either smooth or a closed immersion. When \(f\) is smooth this is [1, Proposition 3.1.8], and when \(f\) is a closed immersion it amounts to proving that \[\mathbf{R}\underline{\Gamma}_{Q^{\prime}}^{\dagger}\circ u_{+}\xrightarrow{ \cong}u_{+}\circ\mathbf{R}\underline{\Gamma}_{u^{-1}(Q^{\prime})}^{\dagger}\] which was shown in [1, Theoreme 2.2.17]. ### Cohomological formalism of overholonomic \(\mathscr{D}^{\dagger}\)-modules: pairs and varieties To obtain a theory for pairs and varieties, Caro makes the following definition. #### 4.2.1. Definition Let \((X,Y,\mathfrak{P})\) be a frame, with \(\mathfrak{P}\) smooth. 
We say that \(\mathcal{M}\in\mathbf{D}^{b}_{\operatorname{hol}}(\mathfrak{P})\) is supported on \(X\) if there exists an isomorphism \[\mathcal{M}\cong\mathbf{R}\underline{\Gamma}_{X}^{\dagger}\mathcal{M}.\] Let \[\mathbf{D}^{b}_{\operatorname{hol},F}(X,Y,\mathfrak{P})\subset\mathbf{D}^{b}_ {\operatorname{hol},F}(\mathfrak{P})\] denote the full subcategory of objects supported on \(X\). _4.2.2 Remark_.: 1. Note that if \(X\) is closed, then the natural direction of this isomorphism is \(\mathbf{R}\underline{\Gamma}_{X}^{\dagger}\mathcal{M}\to\mathcal{M}\), and if \(X\) is open it is \(\mathcal{M}\to\mathbf{R}\underline{\Gamma}_{X}^{\dagger}\mathcal{M}\). 2. If \(\mathcal{M}\in\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\) is any object, then \(\mathbf{R}\underline{\Gamma}_{X}^{\dagger}\mathcal{M}\) is supported on \(X\), since \(\mathbf{R}\underline{\Gamma}_{X}^{\dagger}\) is idempotent. These categories admit similar cohomological operations to the versions for formal schemes, although for a theory that is provably independent of \(\mathfrak{P}\), it is necessary to work with l.p. frames. **4.2.3 Proposition**.: _Let_ _be a morphism of l.p. frames such that \(g\) is proper and \(u\) is smooth. Then the functors \(u_{+}\) and \(\mathbf{R}\underline{\Gamma}_{X}^{\dagger}\circ u^{i}\) induce inverse equivalences of categories_ \[\mathbf{R}\underline{\Gamma}_{X}^{\dagger}\circ u^{i}\colon\mathbf{D}_{\mathrm{ hol}}^{b}(X,Y,\mathfrak{P})\rightleftarrows\mathbf{D}_{\mathrm{hol}}^{b}(X,Y^{ \prime},\mathfrak{P}^{\prime})\colon u_{+}.\] Proof.: This is [12, Lemma 2.5]. It follows that \(\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y,\mathfrak{P})\) depends only on the pair \((X,Y)\), and not on the choice of l.p. frame \((X,Y,\mathfrak{P})\) enclosing it, and may therefore be denoted \(\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\). Similarly, if \(Y\) is proper, then \(\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y,\mathfrak{P})\) only depends on \(X\), and is this denoted \(\mathbf{D}_{\mathrm{hol},F}^{b}(X)\). Again, let me repeat an earlier warning that \(\mathbf{D}_{\mathrm{hol},F}^{b}(X)\) consists of 'overconvergent' objects on \(X\), the analogous category of 'convergent' objects on \(X\) would be denoted \(\mathbf{D}_{\mathrm{hol},F}^{b}(X,X)\). I now explain how the cohomological operations on \(\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\) are defined, following [1, SS1]. First of all, there are the dual and tensor product functors \[\mathbf{D}_{X} \colon\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)^{\mathrm{op}}\to \mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\] \[-\widetilde{\otimes}_{\mathcal{O}_{X}}- \colon\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\times\mathbf{D}_{ \mathrm{hol},F}^{b}(X,Y)\to\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\] which are defined by taking an l.p. frame \((X,Y,\mathfrak{P})\) and setting \[\mathbf{D}_{X} \colon=\mathbf{R}\underline{\Gamma}_{X}^{\dagger}\circ\mathbf{D}_{ \mathfrak{P}}\] \[\mathcal{M}\widetilde{\otimes}_{\mathcal{O}_{X}}\mathcal{N} \colon=\mathcal{M}\widetilde{\otimes}_{\mathcal{O}_{\mathfrak{P}}} \mathcal{N}.\] The resulting functors only depend on \((X,Y)\) up to canonical isomorphism [1, SS1.1.6]. Note that the notation here only refers to \(X\) and not the full pair \((X,Y)\). 
If \((f,g)\colon(X^{\prime},Y^{\prime})\to(X,Y)\) is a morphism of (strongly realisable) pairs then there are functors \[f^{\dagger},f^{+}:\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\to\mathbf{D}_{\mathrm{ hol},F}^{b}(X^{\prime},Y^{\prime}),\] and if \(g\) is proper there are functors \[f_{\dagger},f_{+}:\mathbf{D}_{\mathrm{hol},F}^{b}(X^{\prime},Y^{\prime})\to \mathbf{D}_{\mathrm{hol},F}^{b}(X,Y).\] These are defined as follows: choose a morphism \[(f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P })\] of l.p. frames extending \(u\), and set \[f^{\dagger} \colon=\mathbf{R}\underline{\Gamma}_{X^{\prime}}^{\dagger}\circ u ^{\dagger}, f^{+}=\mathbf{D}_{X^{\prime}}\circ f^{\dagger}\circ\mathbf{D}_{X}\] \[f_{+} =u_{+} f_{\dagger}=\mathbf{D}_{X}\circ f_{+}\circ\mathbf{D}_{X^{\prime}}.\] There is a morphism of functors \[f_{\dagger}\to f_{+}\] which is an isomorphism whenever \(f\) (as well as \(g\)) is proper [1, SS1.1.9]. Again, let me emphasize that these functors are all defined for morphisms of (strongly realisable) pairs, even if the notation only refers to the morphism \(f\colon X^{\prime}\to X\). There is also the compatibility \[f^{\dagger}(-\widetilde{\otimes}_{\mathcal{O}_{X}}-)\stackrel{{ \cong}}{{\longrightarrow}}f^{\dagger}(-)\widetilde{\otimes}_{ \mathcal{O}_{X^{\prime}}}f^{\dagger}(-)\] as in [1, SS1.1.9], as well as the projection formula \[f_{+}(f^{\dagger}(-)\widetilde{\otimes}_{\mathcal{O}_{X^{\prime}}}(-)) \stackrel{{\cong}}{{\longrightarrow}}(-)\widetilde{\otimes}_{ \mathcal{O}_{X}}f_{+}(-)\] by [1, Proposition A.6] Both \((f^{+},f_{+})\) and \((f_{\mathrm{f}},f^{\dagger})\) are adjoint pairs [1, Lemma 1.1.10], and if is a Cartesian morphism of pairs, with \(g\) proper, then by [1, Lemma 1.3.10] there is a natural isomorphism \[s^{\dagger}f_{+}\cong f^{\prime}_{+}s^{\prime\dagger}\] of functors \[\mathbf{D}^{b}_{\mathrm{hol},F}(X^{\prime},Y^{\prime})\to\mathbf{D}^{b}_{ \mathrm{hol},F}(U,V).\] There is of course a similar formalism for strongly realisable varieties, obtained by choosing a strongly realisable pair \((X,Y)\) with \(Y\) proper, and setting \(\mathbf{D}^{b}_{\mathrm{hol},F}(X)=\mathbf{D}^{b}_{\mathrm{hol},F}(X,Y)\). ### Relation with locally free isocrystals If \((X,Y,\mathfrak{P})\) is a frame, with \(\mathfrak{P}\) smooth over \(\mathcal{V}\) and \(X\) smooth over \(k\), then Caro defined in [1] a fully faithful functor \[\mathrm{sp}_{X+}\colon\mathbf{Isoc}_{F}(X,Y)\to\mathbf{D}^{b}(\mathscr{D}^{ \dagger}_{\mathfrak{P}\mathfrak{Q}}),\] and it is the main result of [1] that this lands inside \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\). In fact, for reasons explained in [1] (see also SS10 below) I will want to rename this functor \(\widetilde{\mathrm{sp}}_{X+}\), and then define \[\mathrm{sp}_{X+}:=\widetilde{\mathrm{sp}}_{X+}[-\dim X]\] to be the shifted version of Caro's functor. Caro also showed in [1] that \(\widetilde{\mathrm{sp}}_{X+}\) is compatible with duality and pullback, in the following sense. For compatibility with duality, let \((-)^{\vee}\) denote the dual functor for locally free isocrystals. It was proved in [1, Proposition 5.2.7] that there is a natural isomorphism Note that if \((X,Y,\mathfrak{P})\) is an l.p. frame, then \(\mathbf{R}\underline{\Gamma}^{\dagger}_{X}\circ\mathbf{D}_{\mathfrak{P}}\) is precisely the definition of \(\mathbf{D}_{X}\), but I do not want to make this assumption on the frame here. 
Compatibility with duality might therefore be slightly abusively written as For compatibility with pullback, suppose that is a morphism of frames, with \(\mathfrak{P},\mathfrak{P}^{\prime},X\) and \(X^{\prime}\) all smooth. Thanks to the compatibility with duality already quoted, it was proved in [1, Proposition 4.2.4], that there is a natural isomorphism Again, if both of these frames are l.p. frames, then \(\mathbf{R}\underline{\Gamma}^{\dagger}_{X^{\prime}}\circ u^{\dagger}\) is precisely the definition of \(f^{\dagger}\), but, again, I don't want to make this assumption here. The following definition of the 'dual' of \(\mathrm{sp}_{+}\) was given in [1]. #### 4.3.1. Definition Let \((X,Y,\mathfrak{P})\) be a frame with \(\mathfrak{P}\) smooth over \(\mathcal{V}\) and \(X\) smooth over \(k\). Then define \[\mathrm{sp}_{X!}:=\widetilde{\mathrm{sp}}_{X+}[\dim X]=\mathbf{R}\underline{ \Gamma}^{\dagger}_{X}\circ\mathbf{D}_{\mathfrak{P}}\circ\mathrm{sp}_{X+} \circ(-)^{\vee}.\] #### 4.3.2. Remark it may seem rather odd to have defined both \(\mathrm{sp}_{+}\) and \(\mathrm{sp}_{!}\) separately as shifts of Caro's functor \(\widetilde{\mathrm{sp}}_{+}\). The point is that the definitions are only this simple when \(X\) is smooth. Both \(\mathrm{sp}_{+}\) and \(\mathrm{sp}_{!}\) generalise to the case when \(X\) is not necessarily smooth, however, they are no longer just shifts of one another. Instead, it is the duality relation between \(\mathrm{sp}_{+}\) and \(\mathrm{sp}_{!}\) which persists. The analogy to bear in mind is that of a lisse \(\ell\)-adic sheaf \(\mathscr{F}\) on \(X\). If \(X\) is smooth, then the Verdier dual \(\mathbf{D}_{X}(\mathscr{F})\) is just a shift of \(\mathscr{F}^{\vee}\), however, this is no longer necessarily the case when \(X\) is singular. Thus compatibility with pullbacks can be (again, slightly abusively) written as \[f^{!}\circ\mathrm{sp}_{X!}\stackrel{{\cong}}{{\longrightarrow}} \mathrm{sp}_{X^{\prime}!}\circ f^{*}.\] As expected, everything in \(\mathbf{D}^{b}_{\mathrm{hol},F}(X,Y,\mathfrak{P})\) is generically in the essential image of \(\widetilde{\mathrm{sp}}_{X+}\). **4.3.3 Proposition**.: _Let \(\mathfrak{P}\) be a smooth formal scheme, and \(\mathcal{M}\in\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\) supported on a reduced closed subscheme \(Y\hookrightarrow P\). Then there exists a divisor \(D\subset P\) such that \(X:=Y\setminus D\) is smooth, non-empty, and each \(\mathcal{H}^{q}(\mathcal{M}(^{\dagger}D))\) is in the essential image of_ \[\widetilde{\mathrm{sp}}_{X+}\colon\mathbf{Isoc}_{F}(X,Y)\to\mathbf{D}^{b}_{ \mathrm{hol},F}(\mathfrak{P}).\] #### 4.3.4. Remark Implicit in the statement is the fact that \(\widetilde{\mathrm{sp}}_{X+}\) lands in the category of \(\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}^{\ast}}\) modules, not just complexes. Also note that since \(D\) is a divisor, the functor \((^{\dagger}D)\) is exact for the natural t-structure on \(\mathbf{D}^{b}_{\mathrm{hol}}(\mathfrak{P})\), and in fact \(\mathcal{H}^{q}(\mathcal{M}(^{\dagger}D))\stackrel{{\cong}}{{ \longrightarrow}}\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}}(^{\dagger}D) \otimes_{\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}}}\mathcal{H}^{q}( \mathcal{M})\). It's also worth pointing out that \(Y\) is not assumed to be irreducible, but \(X\) need not necessarily be dense in \(Y\) (just non-empty). Proof.: This follows from [1, Lemme 6.2.1]. This can be used to deduce the following important 'conservativity' result for extraordinary stalks. 
**4.3.5 Proposition**.: _Let \(\mathfrak{P}\) be a smooth formal scheme, and \(\mathcal{M}\in\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\). If \(\mathbf{R}\underline{\Gamma}^{\dagger}_{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \(\mathbf{Con}(X_{\mathrm{\acute{e}t}},\mathbb{Q}_{\ell})\), via \(\mathbf{D}_{X}\). This third t-structure appears less often in the literature than the other two, and is, in a sense, of lesser importance, since most of its relevant properties can be deduced from that of the constructible t-structure by dualising. However, it will be important for us, since it is the analogue of the _dual constructible_ t-structure on \(\mathbf{D}^{b}_{\mathrm{hol},F}\) that will match up with the natural t-structure on \(\mathbf{D}^{b}_{\mathrm{cons},F}\). These three t-structures all have analogues in the world of overlononomic \(\mathscr{D}^{\dagger}\)-modules, which I will now describe in the case of a smooth formal scheme \(\mathfrak{P}\). #### 4.4.1. Holonomic t-structure Let \(\mathfrak{P}\) be a smooth formal scheme. The first t-structure on \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\) is simply the natural one coming from the inclusion \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\hookrightarrow\mathbf{D}^{b}( \mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}})\). That this is indeed a t-structure follows from the fact that a complex of \(\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}}\)-modules is overlononomic iff its cohomology sheaves are. We denote by \(\mathbf{D}^{\geq q},\mathbf{D}^{\leq q}\) the full subcategories of objects concentrated in degrees \(\geq q\) and \(\leq q\) respectively, and by \(\tau^{\geq q},\tau^{\leq q}\) the truncation functors. By combining [1, Proposition 3.3 4)] with [12, SS4], we see that the holonomic t-structure is self-dual, in the sense that \[\mathcal{M}\in\mathbf{D}^{\geq 0}\iff\mathbf{D}_{X}\mathcal{M}\in\mathbf{D}^{ \leq 0}.\] I will denote the heart of the holonomic t-structure by \(\mathbf{Hol}_{F}(\mathfrak{P})\), and refer to its objects as holonomic modules on \(\mathfrak{P}\). Cohomology functors will be denoted by \[\mathcal{H}^{q}:\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\to\mathbf{Hol}_ {F}(\mathfrak{P}).\] These are just the restriction to \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\) of the natural cohomology functors \[\mathcal{H}^{q}\colon\mathbf{D}^{b}_{\mathrm{coh}}(\mathscr{D}^{\dagger}_{ \mathfrak{P}\mathbb{Q}})\to\mathbf{Mod}(\mathscr{D}^{\dagger}_{\mathfrak{P} \mathbb{Q}}).\] #### 4.4.2. 
Constructible t-structure The constructible t-structure on \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\) is analogous to the t-structure on the derived category \(\mathbf{D}^{b}_{\mathrm{rh}}(\mathscr{D}_{X})\) of regular holonomic \(\mathscr{D}\)-modules on a smooth complex variety \(X\) induced by the ordinary t-structure on \(\mathbf{D}^{b}_{c}(X^{\mathrm{an}},\mathbb{C})\) via the (covariant) Riemann-Hilbert correspondence \[\mathcal{M}\mapsto\omega_{X^{\mathrm{an}}/\mathbb{C}}\otimes^{\mathbb{L}}_{ \mathscr{D}_{X^{\mathrm{an}}}}\mathcal{M}^{\mathrm{an}}\cong\Omega^{\bullet}_ {X^{\mathrm{an}}}\otimes_{\mathcal{O}_{X^{\mathrm{an}}}}\mathcal{M}^{\mathrm{ an}}[\dim X].\] For overlononomic \(\mathscr{D}^{\dagger}\)-modules, this t-structure was defined for curves in [15] and in general in [1]. Concretely, if \(\mathcal{M}\in\mathbf{Hol}_{F}(\mathfrak{P})\), define the support \(\mathrm{Supp}(\mathcal{M})\) of \(\mathcal{M}\) to be the smallest closed subscheme \(Z\subset P\) such that \(\mathcal{M}|_{P\setminus Z}=0\). Then define a pair of subcategories \(({}^{\mathrm{c}}\mathbf{D}^{\geq 0},{}^{\mathrm{c}}\mathbf{D}^{\leq 0})\) of \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\) as follows: \[\mathcal{M}\in{}^{\mathrm{c}}\mathbf{D}^{\geq 0} \iff\dim\mathrm{Supp}\,\mathcal{H}^{n}(\mathcal{M})\leq n\;\; \forall n\geq 0,\;\;\mathcal{H}^{n}(\mathcal{M})=0\;\;\forall n<0\] \[\mathcal{M}\in{}^{\mathrm{c}}\mathbf{D}^{\leq 0} \iff\mathcal{H}^{-n}(\mathbf{R}\underline{\Gamma}^{\dagger}_{Z} \mathbf{D}_{\mathfrak{P}}\mathcal{M})=0\text{ for any closed subscheme }Z\hookrightarrow P\text{ with }\dim Z<n.\] I will postpone for now the proof that this is a t-structure, since I will deduce it from the corresponding claim for the dual constructible t-structure, which I will introduce next. #### 4.4.3. Dual constructible t-structure Whereas the holonomic t-structure is self-dual, the constructible t-structure is _not_. There is therefore a third t-structure \(({}^{\mathrm{dc}}\mathbf{D}^{\geq 0},{}^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) on \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\) by setting \[\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\geq 0} \iff\mathbf{D}_{\mathfrak{P}}\mathcal{M}\in{}^{\mathrm{c}} \mathbf{D}^{\leq 0}\] \[\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\leq 0} \iff\mathbf{D}_{\mathfrak{P}}\mathcal{M}\in{}^{\mathrm{c}} \mathbf{D}^{\geq 0}.\] Explicitly, \[\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\geq 0} \iff\mathcal{H}^{-n}(\mathbf{R}\underline{\Gamma}^{\dagger}_{Z} \mathcal{M})=0,\;\;\forall Z\hookrightarrow P,\;\;\dim Z<n\] \[\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\leq 0} \iff\dim\mathrm{Supp}\,\mathcal{H}^{-n}(\mathcal{M})\leq n\;\; \forall n\geq 0,\;\;\mathcal{H}^{-n}(\mathcal{M})=0\;\;\forall n<0.\] Note that the condition for \(\mathcal{M}\) to lie in \({}^{\mathrm{dc}}\mathbf{D}^{\geq 0}\) may be tested on irreducible \(Z\). To prove that this is indeed a t-structure, I follow the approach of [1, Proposition 1.3.3]. **4.4.4 Lemma**.: _Let \(Y\to P\) be a closed subscheme. Then the functors \(({}^{\dagger}Y)\) and \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\) both preserve \({}^{\mathrm{dc}}\mathbf{D}^{\geq 0}\) and \({}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\)._ Proof.: The claims for \({}^{\mathrm{dc}}\mathbf{D}^{\geq 0}\) are relatively straightforward. 
Indeed, the fact that \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\) preserves \({}^{\mathrm{dc}}\mathbf{D}^{\geq 0}\) simply follows from the fact that \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Z}\mathbf{R}\underline{\Gamma}^{ \dagger}_{Y}=\mathbf{R}\underline{\Gamma}^{\dagger}_{Y\cap Z}\), and \(\dim Y\cap Z\leq\dim Z\). For \(({}^{\dagger}Y)\), take \(Z\hookrightarrow P\) irreducible with \(\dim Z<n\), \(\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\geq 0}\), and consider the exact sequence \[\mathcal{H}^{-n}(\mathbf{R}\underline{\Gamma}^{\dagger}_{Z}\mathcal{M}) \rightarrow\mathcal{H}^{-n}(\mathbf{R}\underline{\Gamma}^{\dagger}_{Z} \mathcal{M}({}^{\dagger}Y))\rightarrow\mathcal{H}^{-n+1}(\mathbf{R} \underline{\Gamma}^{\dagger}_{Z\cap Y}\mathcal{M}).\] The left hand term here is zero, I need to show that so is the right hand term. If \(\dim Z\cap Y<\dim Z\), then \(\dim Z\cap Y<n-1\), and thus \(\mathcal{H}^{-n}(\mathbf{R}\underline{\Gamma}^{\dagger}_{Z}\mathcal{M}({}^{ \dagger}Y))=0\). Since \(Z\) is irreducible, the only way that this can fail to happen is if \(Z\subset Y\), in which case \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Z}\mathcal{M}({}^{\dagger}Y)=0\). The hardest part is to prove that \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\) preserves \({}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\). To show this, write \(Y\) as an intersection of divisors, this reduces to the case when \(Y\) itself is a divisor. In this case, the functor \(({}^{\dagger}Y)\) is t-exact for the ordinary t-structure, and hence \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\) has cohomological amplitude \([0,1]\). Write \(\mathcal{H}^{i,{\dagger}}_{Y}\) for the cohomology sheaves of \(\mathbf{R}\underline{\Gamma}^{\dagger}\). If \(\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\), it then follows that \(\mathcal{H}^{-n}(\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\mathcal{M})=0\) whenever \(n<-1\). When \(n\geq-1\), there is an exact sequence \[0\rightarrow\mathcal{H}^{1,{\dagger}}_{Y}(\mathcal{H}^{-(n+1)}(\mathcal{M})) \rightarrow\mathcal{H}^{-n}(\mathbf{R}\underline{\Gamma}^{\dagger}_{X} \mathcal{M})\rightarrow\mathcal{H}^{0,{\dagger}}_{Y}(\mathcal{H}^{-n}( \mathcal{M}))\to 0.\] Since \(\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\), we know that \[\dim\operatorname{Supp}\mathcal{H}^{-n}(\mathcal{M})\leq n,\quad\dim \operatorname{Supp}\mathcal{H}^{-(n+1)}(\mathcal{M}))\leq n+1\] and hence trivially \[\dim\operatorname{Supp}\mathcal{H}^{0,{\dagger}}_{Y}(\mathcal{H}^{-n}( \mathcal{M}))\leq n,\quad\dim\operatorname{Supp}\mathcal{H}^{1,{\dagger}}_{Y }(\mathcal{H}^{-(n+1)}(\mathcal{M}))\leq n+1.\] What is needed is to show that \[\dim\operatorname{Supp}\mathcal{H}^{1,{\dagger}}_{Y}(\mathcal{H}^{-(n+1)}( \mathcal{M}))\leq n\] (which should be interpreted as saying that \(\mathcal{H}^{1,{\dagger}}_{Y}(\mathcal{H}^{0}(\mathcal{M}))=0\) when \(n=-1\)). It is therefore enough to show that if \(\mathcal{N}\in\mathbf{Hol}_{F}(\mathfrak{P})\), then \[\dim\operatorname{Supp}(\mathcal{N})\leq n+1\implies\dim\operatorname{Supp }\mathcal{H}^{1,{\dagger}}_{Y}(\mathcal{N})\leq n,\] for all \(n\geq-1\) (again, meaning that \(\mathcal{H}^{1,{\dagger}}_{Y}(\mathcal{N})\)=0 when \(n=-1\)). To prove this, write \(\operatorname{Supp}(\mathcal{N})=Z_{1}\cup Z_{2}\), where \(Z_{1}\subset Y\), and no irreducible component of \(Z_{2}\) is contained in \(Y\). Then \(\dim Y\cap Z_{2}\leq n\), and I claim that \(\mathcal{H}^{1,{\dagger}}_{Y}(\mathcal{N})\) is supported on \(Y\cap Z_{2}\). 
Indeed, \(\mathcal{H}^{1,{\dagger}}_{Y}(\mathcal{N})\) is clearly supported on \(Y\), so it will suffice to show that it is zero on \(P\setminus Z_{2}\). But after restricting to \(P\setminus Z_{2}\), we have \(\operatorname{Supp}(\mathcal{N})\subset Y\), whence \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\mathcal{N}\xrightarrow{\cong} \mathcal{N}\), and so \(\mathcal{H}^{1,{\dagger}}_{Y}(\mathcal{N})=\mathcal{H}^{1}(\mathcal{N})=0\). Finally, the fact that \(({}^{\dagger}Y)\) preserves \({}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\) now follows from taking cohomology sheaves of the exact triangle \[\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\rightarrow\mathrm{id}\to({}^{ \dagger}Y)\xrightarrow{+1}\] and using the already proved result for \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\). **4.4.5 Theorem**.: _The pair of full subcategories \(({}^{\mathrm{dc}}\mathbf{D}^{\geq 0},{}^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) defines a t-structure on \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\)._ Proof.: To set up a Noetherian induction, I will actually prove that \(({}^{\mathrm{dc}}\mathbf{D}^{\geq 0},{}^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) defines a t-structure on \(\mathbf{D}^{b}_{\mathrm{hol},F}(Y,Y,\mathfrak{P})\subset\mathbf{D}^{b}_{ \mathrm{hol},F}(\mathfrak{P})\) for any closed subscheme \(Y\hookrightarrow P\), which I may as well assume to be reduced. I first reduce to the case that \(Y\) is irreducible. Indeed, if \(Y=Y_{1}\cup Y_{2}\) is a union of proper, non-empty closed subschemes, then any \(\mathcal{M}\in\mathbf{D}^{b}_{\mathrm{hol},F}(Y,Y,\mathfrak{P})\) sits in an exact triangle \[\mathbf{R}\underline{\Gamma}^{\dagger}_{Y_{1}\cap Y_{2}}\mathcal{M}\to \mathbf{R}\underline{\Gamma}^{\dagger}_{Y_{1}}\mathcal{M}\oplus\mathbf{R} \underline{\Gamma}^{\dagger}_{Y_{2}}\mathcal{M}\rightarrow\mathcal{M}\stackrel{{ +1}}{{\rightarrow}}.\] Assuming that \(({}^{\mathrm{dc}}\mathbf{D}^{\geq 0},{}^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) defines a t-structure on \(\mathbf{D}^{b}_{\mathrm{hol},F}(Y_{i},Y_{i},\mathfrak{P})\) for \(i=1,2\), I can check the axioms [1, Definition 1.3.1] explicitly as follows. First of all (ii) is clear, and (i) follows from the fact that if two out of the three morphisms in a morphism of exact triangles are zero, then so is the third. For (iii), I can appeal to [1, Proposition 1.1.11] to form the commutative diagram with all rows and columns exact triangles. It follows directly from the definitions that \(\mathcal{L}\in{}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\). Since the condition to lie in \({}^{\mathrm{dc}}\mathbf{D}^{>0}\) can be checked on irreducible closed subschemes \(Z\hookrightarrow P\), the fact that \(\mathcal{N}\in{}^{\mathrm{dc}}\mathbf{D}^{>0}\) follows from the fact that both \(\underline{\mathbf{R}}\Gamma^{\dagger}_{Y_{i}}\mathcal{N}\) and \(\underline{\mathbf{R}}\Gamma^{\dagger}_{Y_{2}}\mathcal{N}\) are in \({}^{\mathrm{dc}}\mathbf{D}^{>0}\). I may therefore assume that \(Y\) is irreducible. 
In this case, take a divisor \(D\subset P\), with smooth, non-empty complement \(X=Y\setminus D\) on \(Y\), and define \[T(Y,D):=\left\{\,\mathcal{M}\in\mathbf{D}^{b}_{\mathrm{hol},F}(Y,Y,\mathfrak{P })\;\big{|}\;\mathcal{H}^{q}((\mathord{\dagger}D)\mathcal{M})\in\widetilde{ \mathrm{sp}}_{+}(\mathbf{Isoc}_{F}(X,Y))\;\;\forall q\right\}.\] Then by Proposition 4.3.3, \[\mathbf{D}^{b}_{\mathrm{hol},F}(Y,Y,\mathfrak{P})=\operatorname*{colim}_{D}T(Y,D),\] and it therefore suffices to show that \(({}^{\mathrm{dc}}\mathbf{D}^{\geq 0},{}^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) defines a t-structure on \(T(Y,D)\). Now set set \(Z:=(Y\cap D)_{\mathrm{red}}\), by Noetherian induction I can assume that \(({}^{\mathrm{dc}}\mathbf{D}^{\geq 0},{}^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) defines a t-structure on \(T(Z):=\mathbf{D}^{b}_{\mathrm{hol},F}(Z,Z,\mathfrak{P})\). I now set \(T(X):=T(Y,D)\cap\mathbf{D}^{b}_{\mathrm{hol},F}(X,Y,\mathfrak{P})\), thus \[(\mathord{\dagger}D)=(\mathord{\dagger}Z)\colon T(Y,D)\to T(X)\] \[\mathbf{R}\underline{\Gamma}^{\dagger}_{D}=\mathbf{R}\underline{\Gamma}^{ \dagger}_{Z}\colon T(Y,D)\to T(Z),\] and the localisation triangle \[\mathbf{R}\underline{\Gamma}^{\dagger}_{D}\to\mathrm{id}\to(\mathord{\dagger }D)\stackrel{{+1}}{{\to}},\] together with Lemma 4.4.4, shows that \(\mathcal{M}\in T(Y,D)\) is in \({}^{\mathrm{dc}}\mathbf{D}^{\geq 0}\) or \({}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\) if and only if both \(\mathbf{R}\underline{\Gamma}^{\dagger}_{D}\mathcal{M}\) and \(\mathcal{M}(\mathord{\dagger}D)\) are. Moreover, since \(D\) is a divisor, and \(X\) is smooth, it follows that on \(T(X)\), \(({}^{\mathrm{dc}}\mathbf{D}^{\geq 0},{}^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) is simply the shift of the ordinary t-structure \((\mathbf{D}^{\geq 0},\mathbf{D}^{\leq 0})\) by the dimension of \(X\), and therefore defines a t-structure on \(T(X)\). I now check the axioms [1, Definition 1.3.1]. Of course (ii) is straightforward. To prove (i), the fact that \(({}^{\mathrm{dc}}\mathbf{D}^{\geq 0},{}^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) defines a t-structure on both \(T(X)\) and \(T(Z)\) means that I can apply the localisation triangle \[\mathbf{R}\underline{\Gamma}^{\dagger}_{D}\to\mathrm{id}\to(\mathord{ \dagger}D)\stackrel{{+1}}{{\to}},\] twice to reduce to showing that \[\mathrm{Hom}(\mathbf{R}\underline{\Gamma}^{\dagger}_{D}\mathcal{M},\mathcal{N }(\mathord{\dagger}D))=0,\quad\mathrm{Hom}(\mathcal{M}(\mathord{\dagger}D), \mathbf{R}\underline{\Gamma}^{\dagger}_{D}\mathcal{N})=0\] whenever \(\mathcal{M}\in T(Y,D)\cap{}^{\mathrm{dc}}\mathbf{D}^{<0}\) and \(\mathcal{N}\in T(Y,D)\cap{}^{\mathrm{dc}}\mathbf{D}^{\geq 0}\). The first is straightforward, since \[\mathrm{Hom}(\mathbf{R}\underline{\Gamma}^{\dagger}_{D}\mathcal{M},\mathcal{N }(\mathord{\dagger}D))=\mathrm{Hom}(\mathbf{R}\underline{\Gamma}^{\dagger}_{D }\mathcal{M},\mathbf{R}\underline{\Gamma}^{\dagger}_{D}\mathcal{N}(\mathord{ \dagger}D))=\mathrm{Hom}(\mathbf{R}\underline{\Gamma}^{\dagger}_{D}\mathcal{ M},0)=0.\] For the second, note that \(\mathcal{M}(\mathord{\dagger}D)\in T(X)\cap{}^{\mathrm{dc}}\mathbf{D}^{<0}\), and hence \(\mathcal{H}^{n}(\mathcal{M}(\mathord{\dagger}D))=0\) if \(n\geq-\dim X\). On the other hand, \(\mathbf{R}\underline{\Gamma}^{\dagger}_{D}\mathcal{N}\in T(Z)\cap{}^{ \mathrm{dc}}\mathbf{D}^{\geq 0}\), and hence \(\mathcal{H}^{n}(\mathbf{R}\underline{\Gamma}^{\dagger}_{D}\mathcal{N})=0\) if \(n<-\dim Z\). 
Thus \(\mathrm{Hom}(\mathcal{M}(\mathord{\dagger}D),\mathbf{R}\underline{\Gamma}^{ \dagger}_{D}\mathcal{N})=0\) as required. Finally, to prove (iii), I consider, for any \(\mathcal{M}\in T(Y,D)\), the shifted localisation triangle \[\mathcal{M}\to\mathcal{M}({}^{\dagger}D)\to\mathbf{R}\underline{\Gamma}_{D}^{ \dagger}\mathcal{M}[1]\xrightarrow{+1}.\] Since \(({}^{\mathrm{dc}}\mathbf{D}^{\geq 0},{}^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) defines a t-structure on both \(T(X)\) and \(T(Z)\), I can extend this to a diagram (4.4.6) Now consider the morphism \[\tau_{\leq 0}\mathcal{M}({}^{\dagger}D)\to\left(\tau_{>0}\mathbf{R}\underline{ \Gamma}_{D}^{\dagger}\mathcal{M}\right)[1] \tag{4.4.7}\] Since \(\tau_{\leq 0}\mathcal{M}({}^{\dagger}D)\in{}^{\mathrm{dc}}\mathbf{D}^{\leq 0} \cap T(X)\), it follows that \[\mathcal{H}^{n}(\tau_{\leq 0}\mathcal{M}({}^{\dagger}D))=0,\quad\forall n>- \dim X\] On the other hand, since \((\tau_{>0}\mathbf{R}\underline{\Gamma}_{D}^{\dagger}\mathcal{M})[1]\in{}^{ \mathrm{dc}}\mathbf{D}^{\geq 0}\cap T(Z)\) it follows that \[\mathcal{H}^{n}((\tau_{>0}\mathbf{R}\underline{\Gamma}_{D}^{\dagger}\mathcal{M })[1])=0,\quad\forall n<-\dim Z.\] Since \(Y\) is irreducible, \(\dim Z<\dim X\), and it immediately follows that \[\mathrm{Hom}(\tau_{\leq 0}\mathcal{M}({}^{\dagger}D),(\tau_{>0}\mathbf{R} \underline{\Gamma}_{D}^{\dagger}\mathcal{M})[1])=0.\] Hence the diagram (4.4.6) can be completed (and rotated) to obtain a diagram with all rows and columns exact triangles. Then \(\mathcal{L}\in{}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\) and \(\mathcal{N}\in{}^{\mathrm{dc}}\mathbf{D}^{>0}\), completing the proof. It follows that the pair \(({}^{\mathrm{c}}\mathbf{D}^{\geq 0},{}^{\mathrm{c}}\mathbf{D}^{\leq 0})\) also defines a t-structure on \(\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\). I will denote the hearts by of these two t-structures by \(\mathbf{Con}_{F}(\mathfrak{P})\) and \(\mathbf{DCon}_{F}(\mathfrak{P})\), and cohomology functors by \({}^{\mathrm{c}}\mathcal{H}^{q}\) and \({}^{\mathrm{dc}}\mathcal{H}^{q}\). The truncation functors will be \({}^{\mathrm{c}}\tau\) and \({}^{\mathrm{dc}}\tau\), and objects of the hearts will be called constructible and dual constructible modules on \(\mathfrak{P}\) respectively. _4.4.8_.: _Example._ Let \((X,Y,\mathfrak{P})\) be a frame, with \(\mathfrak{P}\) and \(X\) smooth, and suppose \(\mathscr{F}\in\mathbf{Isoc}_{F}(X,Y)\). Then \(\mathrm{sp}_{+}\mathscr{F}\in\mathbf{DCon}_{F}(\mathfrak{P})\). If there exists a divisor \(D\subset\mathfrak{P}\) such that \(X=Y\setminus D\), then \(\widetilde{\mathrm{sp}}_{+}\mathscr{F}\in\mathbf{Hol}_{F}(\mathfrak{P})\). If \(X=Y\) then \(\mathrm{sp}_{+}\mathscr{F}\in\mathbf{Con}_{F}(\mathfrak{P})\). As part of the proof that \((^{\mathrm{dc}}\mathbf{D}^{\geq 0},^{\mathrm{dc}}\mathbf{D}^{\leq 0})\) is indeed a t-structure, we saw that the functors \(\mathbf{R}\underline{\mathbf{L}}_{Y}^{\dagger}\) and \((^{\dagger}Y)\) for a closed subscheme \(Y\hookrightarrow P\) are t-exact for the dual constructible t-structure, I will abbreviate this as being 'det-exact'. If \(u\colon\mathfrak{P}\to\mathfrak{Q}\) is a closed immersion of smooth formal schemes, it is easy to check that \(u_{+}\) is dct-exact. **4.4.9 Proposition**.: _Let \(u\colon\mathfrak{P}\to\mathfrak{Q}\) be any morphism of smooth formal schemes. Then \(u^{!}\) is dct-exact._ _4.4.10 Remark_.: This is the dual of the fact that \(u^{+}\) is t-exact for the constructible t-structure. 
Proof.: If \(u\) is a closed immersion, then \(u_{+}u^{!}\stackrel{{\cong}}{{\longrightarrow}}\mathbf{R} \underline{\mathbf{L}}_{P}^{\dagger}\). Hence the dct-exactness of \(u^{!}\) follows from that of \(\mathbf{R}\underline{\mathbf{L}}_{P}^{\dagger}\) and \(u_{+}\). If \(K^{\prime}\subset K^{\prime\prime}\) are finite unramified extensions of \(K\), with rings of integers \(\mathcal{V}^{\prime}\subset\mathcal{V}^{\prime\prime}\), then the proposition clearly holds for the morphism \(u\colon\operatorname{Spf}\left(\mathcal{V}^{\prime\prime}\right)\to \operatorname{Spf}\left(\mathcal{V}^{\prime}\right)\). Hence the proposition holds whenever \(u\) is a \(\mathcal{V}^{\prime}\)-valued point of \(\mathfrak{Q}\), for any such \(\mathcal{V}^{\prime}\). But using Proposition 4.3.5, the general case follows from the particular case of \(\mathcal{V}^{\prime}\)-valued points for \(\mathcal{V}^{\prime}/\mathcal{V}\) unramified. Indeed, taking \(\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\geq 0}\) (resp. \(\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\)), to prove that \(u^{!}\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\geq 0}\) (resp. \(\mathcal{M}\in{}^{\mathrm{dc}}\mathbf{D}^{\leq 0}\)), it suffices to show that \({}^{\mathrm{dc}}\tau_{<0}u^{!}\mathcal{M}=0\) (resp. \({}^{\mathrm{dc}}\tau_{>0}u^{!}\mathcal{M}=0\)). But now, taking any \(\mathcal{V}^{\prime}\)-valued point \(i\colon\operatorname{Spf}\left(\mathcal{V}^{\prime}\right)\to\mathfrak{P}\), I can just calculate \[i^{\mathrm{dc}}\tau_{<0}u^{!}\mathcal{M}={}^{\mathrm{dc}}\tau_{<0}u(i)^{!} \mathcal{M}=u(i)^{\mathrm{dc}}\tau_{<0}\mathcal{M}=0\] (resp. \(i^{\mathrm{dc}}\tau_{>0}u^{!}\mathcal{M}=0\)), and therefore conclude using Proposition 4.3.5. A similar method to Proposition 4.4.9 can be used to prove that \(\widetilde{\otimes}_{\mathcal{O}_{\mathfrak{P}}}\) is dct-exact. Indeed, dct-exactness can be checked after taking extraordinary stalks, and \(\widetilde{\otimes}_{\mathcal{O}_{\mathfrak{P}}}\) commutes with extraordinary pullback. This therefore reduces to the trivial case when \(\mathfrak{P}=\operatorname{Spf}\left(\mathcal{V}^{\prime}\right)\) for \(\mathcal{V}^{\prime}/\mathcal{V}\) unramified. If \(u\colon\mathfrak{P}\to\operatorname{Spf}\left(\mathcal{V}\right)\) is the structure morphism, then \(\mathcal{O}_{\mathfrak{P}}^{*}:=u^{!}\mathcal{O}_{\mathcal{V}\mathbb{Q}}\in \mathbf{D}\mathbf{C}\mathbf{on}_{F}(\mathfrak{P})\). I will call this the _constant_ dual constructible module on \(\mathfrak{P}\). Be warned that \(\mathcal{O}_{\mathfrak{P}}^{*}\) is _not_ the \(\mathscr{D}_{\mathfrak{P}\mathbb{Q}}^{!}\)-module \(\mathcal{O}_{\mathfrak{P}\mathbb{Q}}\), instead it is a shift of this module by the dimension of \(\mathfrak{P}\). This shifted version of the 'constant' module will instead match up with the constant locally free isocrystal on \(\mathfrak{P}\) via the overconvergent Riemann-Hilbert correspondence. ### Dual constructible modules on pairs and varieties For pairs \((X,Y)\), I will only define the dual constructible t-structure. This is done by taking an l.p. frame \((X,Y,\mathfrak{P})\) and simply restricting the dual constructible t-structure on \(\mathbf{D}_{\mathrm{hol},F}^{b}(\mathfrak{P})\) to \(\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y,\mathfrak{P})\).The heart of this structure will be denoted \(\mathbf{D}\mathbf{C}\mathbf{on}_{F}(X,Y)\), and referred to as the category of dual constructible modules on \((X,Y)\). 
_4.5.1 Remark_.: The analogous definition is not the correct one for either the holonomic or constructible t-structures, and this fact largely explains why it is the dual constructible t-structure which matches up with the natural one on the category of constructible complexes. In fact, the increasing list of hypotheses in Example 4.4.8 can be removed by replacing \(\mathbf{Hol}_{F}(\mathfrak{P})\) and \(\mathbf{C}\mathbf{on}_{F}(\mathfrak{P})\) by appropriate analogues for the pair \((X,Y)\). If \((f,g)\colon(X,Y)\to(\operatorname{Spec}\left(k\right),\operatorname{Spec} \left(k\right))\) denotes the structure morphism, I define \[\mathcal{O}_{(X,Y)}^{*}:=f^{!}\mathcal{O}_{\operatorname{Spf}(\mathcal{V}) \mathbb{Q}}\] to be the constant dual constructible module on \((X,Y)\). If \((X,Y,\mathfrak{P})\) is an l.p. frame, then \(\mathcal{O}_{(X,Y)}^{*}=\mathbf{R}\underline{\mathbf{L}}_{X}^{\dagger}\mathcal{O }_{\mathfrak{P}}^{*}\). If \(Y\) is proper over \(k\), then \(\mathcal{O}_{(X,Y)}^{*}\in\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)=\mathbf{D}_{ \mathrm{hol},F}^{b}(X)\) is independent of the choice of \(Y\), in which case I will write it as \(\mathcal{O}_{X}^{*}\). Also be warned that this is _not_ the same as Caro's object defined in [10, SS4.23]. Indeed, his is a constant holonomic module (that is, it lies in the heart of the holonomic t-structure on \(\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y,\mathfrak{P})\), that we haven't defined), whereas ours is a constant dual constructible module. Our \(\mathcal{O}_{X}^{*}\) is the direct analogue of the Verdier dual \(\mathbf{D}_{X}(\underline{\mathbb{Q}}_{,X})\) of the constant sheaf in \(\ell\)-adic etale cohomology. **4.5.2 Lemma**.: _The object \(\mathcal{O}_{(X,Y)}^{*}\in\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\) is a unit for the tensor product \(\widetilde{\otimes}_{\mathcal{O}_{(X,Y)}}\)._ Proof.: Let \((X,Y,\mathfrak{P})\) be an l.p. frame. In the case \(X=Y=P\), it is clear that \(\mathcal{O}_{\mathfrak{P}}^{*}\) is a unit for the tensor product \(\widetilde{\otimes}_{\mathcal{O}_{\mathfrak{P}}}\). If \(X=Y\), then this provides a natural transformation \[\mathcal{O}_{(Y,Y)}^{*}\widetilde{\otimes}_{\mathcal{O}_{(Y,Y)}}(-)\to(-)\] of endofunctors of \(\mathbf{D}_{\mathrm{hol},F}^{b}(Y,Y)\). To check that this natural transformation is an isomorphism, it suffices to do so on extraordinary stalks, which reduces to the trivial case \[(Y,Y)=(\operatorname{Spec}\left(k^{\prime}\right),\operatorname{Spec}\left(k^ {\prime}\right))\] for \(k^{\prime}/k\) a finite extension. In general, this in turn induces a natural transformation \[(-)\to\mathcal{O}_{(X,Y)}^{*}\widetilde{\otimes}_{\mathcal{O}_{(X,Y)}}(-)\] of endofunctors of \(\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\) which is proved to be an isomorphism in the same way. As in the case of formal schemes, the tensor product \(\widetilde{\otimes}_{\mathcal{O}_{X}}\) is dct-exact. If \((f,g)\colon(X^{\prime},Y^{\prime})\to(X,Y)\) is a morphism of pairs, then \(f^{!}\) is always dct-exact. If \(g\) is a closed immersion, so \(f\) is a locally closed immersion, then \(f_{+}\) is dct-exact. In particular, if \((X,Y)\) is a pair, \[j\colon U\to X\gets Z\colon i\] are complementary open and closed subschemes, and \(\mathcal{M}\in\mathbf{DCon}_{F}(X,Y)\), then the localisation triangle \[i_{+}i^{!}\mathcal{M}\to\mathcal{M}\to j_{+}j^{+}\mathcal{M}\stackrel{{ +1}}{{\to}} \tag{4.5.3}\] can be viewed as a short exact sequence of dual constructible modules. 
There is then the following version of devissage for dual constructible modules. **4.5.4 Proposition**.: _Every \(\mathcal{M}\in\mathbf{DCon}_{F}(X,Y)\) admits a finite composition series_ \[0=\mathcal{M}_{0}\subset\mathcal{M}_{1}\subset\ldots\subset\mathcal{M}_{n}=\mathcal{M},\] _such that for each \(1\leq\alpha\leq n\), there exists a smooth locally closed subscheme \(i_{\alpha}\colon X_{\alpha}\to X\), with closure \(\overline{X}_{\alpha}\) in \(P\), a locally free isocrystal \(\mathscr{F}_{\alpha}\in\mathbf{Isoc}_{F}(X_{\alpha},\overline{X}_{\alpha})\), and an isomorphism_ \[\mathcal{M}_{\alpha}/\mathcal{M}_{\alpha-1}\stackrel{{\cong}}{{\longrightarrow}}i_{\alpha+}\mathrm{sp}_{X_{\alpha}!}\mathscr{F}_{\alpha}.\] Proof.: Thanks to the localisation exact sequence (4.5.3), this follows from Proposition 4.3.3. I will end this section with the following \(\mathscr{D}^{\dagger}\)-module analogue of Theorem 3.4.2. **4.5.5 Proposition**.: _Let \((f,g)\colon(X^{\prime},Y^{\prime})\to(X,Y)\) be a morphism of strongly realisable pairs, such that \(g\) is proper, \(f\) is finite etale, and \(X\) is smooth. Then the natural morphism \(f_{!}\to f_{+}\) of functors \(\mathbf{D}_{\mathrm{hol},F}^{b}(X^{\prime},Y^{\prime})\to\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\) is an isomorphism, and there exists a trace isomorphism_ \[f^{+}\stackrel{{\cong}}{{\longrightarrow}}f^{!}\] _of functors \(\mathbf{D}_{\mathrm{hol},F}^{b}(X,Y)\to\mathbf{D}_{\mathrm{hol},F}^{b}(X^{\prime},Y^{\prime})\). All four functors \((f^{+},f_{+},f_{!},f^{!})\) are dct-exact._ _4.5.6 Remark_.: The hypothesis that \(X\) is smooth is almost certainly unnecessary, but I will only need the result under this assumption, which makes the proof marginally simpler. Proof.: That \(f_{!}\stackrel{{\cong}}{{\longrightarrow}}f_{+}\) follows from the fact that \(f\) is proper. To construct the trace morphism \[f^{+}\to f^{!},\] I will construct the adjoint \[\operatorname{id}\to f_{+}f^{!}.\] Indeed, since \(\mathcal{O}^{*}_{(X,Y)}\) is a unit for the tensor product, the projection formula shows that \[f_{+}f^{!}\mathcal{M}\stackrel{{\cong}}{{\longrightarrow}}f_{+}f^{!}\mathcal{O}^{*}_{(X,Y)}\widetilde{\otimes}_{\mathcal{O}_{X}}\mathcal{M},\] so it suffices to construct \[f_{+}f^{!}\mathcal{O}^{*}_{(X,Y)}\to\mathcal{O}^{*}_{(X,Y)}.\] But now \(\mathcal{O}^{*}_{(X,Y)}\) clearly extends to an object of the _overconvergent_ category \(\mathbf{DCon}_{F}(X)\subset\mathbf{D}^{b}_{\operatorname{hol},F}(X)\), and so I can just restrict the trace morphism constructed in [1, §1.5] from the overconvergent to the partially overconvergent category. To prove that \(f^{+}\to f^{!}\) is an isomorphism, [1, Lemma 1.2.3] shows that I can replace \((X,Y)\) by \((X,X)\), in other words I can work in the convergent category. Thus, after localising on \(X\), I can assume that it lifts to a smooth formal scheme \(\mathfrak{X}\), and that the finite etale cover \(f\colon X^{\prime}\to X\) lifts to a finite etale cover \(u\colon\mathfrak{X}^{\prime}\to\mathfrak{X}\). In this case, it is a straightforward computation that \(u^{!}\circ\mathbf{D}_{\mathfrak{X}}\cong\mathbf{D}_{\mathfrak{X}^{\prime}}\circ u^{+}\), and the claim follows. For the dct-exactness claims, the case of \(f^{!}\cong f^{+}\) was handled in Proposition 4.4.9, and the case of \(f_{+}\cong f_{!}\) then follows because it is both a left and a right adjoint to \(f^{!}\cong f^{+}\). _4.5.7 Remark_.:
Since \(f^{!}=f^{+}\) and \(f_{!}=f_{+}\), it follows that any \(\mathcal{M}\in\mathbf{DCon}_{F}(X,Y)\) is a direct summand of \(f_{+}f^{+}\mathcal{M}\in\mathbf{DCon}_{F}(X,Y)\), and any \(\mathcal{N}\in\mathbf{DCon}_{F}(X^{\prime},Y^{\prime})\) is a direct summand of \(f^{+}f_{+}\mathcal{N}\in\mathbf{DCon}_{F}(X^{\prime},Y^{\prime})\). Needless to say, there are analogues of all of the results in §4.5 with the strongly realisable pair \((X,Y)\) replaced by a strongly realisable variety \(X\). ## 5. Quasi-coherent complexes and rigidification A key tool in the comparison between \(\mathbf{D}^{b}_{\operatorname{cons}}\) and \(\mathbf{D}^{b}_{\operatorname{hol}}\) will be a completed version of the module pullback functor along the specialisation morphism \[\operatorname{sp}\colon\mathfrak{P}_{K}\to\mathfrak{P}\] for \(\mathfrak{P}\) a flat formal scheme. This will only work for complexes which are quasi-coherent in the sense of Berthelot (I will recall the definition below), and the goal in this section is to describe this construction. ### Quasi-coherent complexes Let \(\mathfrak{P}\) be a flat formal scheme, and set \(P_{n}:=\mathfrak{P}\times_{\mathcal{V}}\mathcal{V}/\mathfrak{m}^{n+1}\). Thus \(P_{0}=P\). I will write \(\mathbf{D}_{\operatorname{qc}}(\mathcal{O}_{\mathfrak{P}})\) for the derived category of complexes of \(\mathcal{O}_{\mathfrak{P}}\)-modules which are _quasi-coherent_ in the sense of [1, §3.2]. Thus a complex of \(\mathcal{O}_{\mathfrak{P}}\)-modules \(\mathcal{M}\) is quasi-coherent iff: 1. \(\mathcal{O}_{P_{0}}\otimes^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M}\) is a quasi-coherent complex of \(\mathcal{O}_{P_{0}}\)-modules; 2. the natural map \(\mathcal{M}\to\mathbf{R}\lim_{n}\mathcal{O}_{P_{n}}\otimes^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M}\) is an isomorphism in \(\mathbf{D}(\mathcal{O}_{\mathfrak{P}})\). It follows by induction on \(n\) that \(\mathcal{M}_{n}:=\mathcal{O}_{P_{n}}\otimes^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M}\) is a quasi-coherent complex of \(\mathcal{O}_{P_{n}}\)-modules for all \(n\). In fact, Berthelot phrases the definition in terms of the topos \(\mathfrak{P}_{\bullet}\) of \(\mathbb{N}\)-indexed projective systems of sheaves on \(\mathfrak{P}\). This is ringed via the projective system \(\mathcal{O}_{P_{\bullet}}=\{\mathcal{O}_{P_{n}}\}_{n\in\mathbb{N}}\), and there is a morphism of ringed toposes \[l_{\mathfrak{P}}\colon(\mathfrak{P}_{\bullet},\mathcal{O}_{P_{\bullet}})\to(\mathfrak{P},\mathcal{O}_{\mathfrak{P}}),\] where the pushforward functor takes the inverse limit, and the (module) pullback functor tensors over \(\mathcal{O}_{\mathfrak{P}}\) with \(\mathcal{O}_{P_{\bullet}}\). A complex \(\mathcal{M}\) is then quasi-coherent iff \((\mathbf{L}l_{\mathfrak{P}}^{*}\mathcal{M})_{0}\in\mathbf{D}(\mathcal{O}_{P_{0}})\) is quasi-coherent, and the natural map \(\mathcal{M}\to\mathbf{R}l_{\mathfrak{P}*}\mathbf{L}l_{\mathfrak{P}}^{*}\mathcal{M}\) is an isomorphism. #### 5.1.1. Example Any (possibly unbounded) complex with coherent cohomology sheaves is quasi-coherent. Thus \(\mathbf{D}_{\mathrm{coh}}(\mathcal{O}_{\mathfrak{P}})\subset\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\) as a full subcategory. _5.1.2 Remark_.: If \(\mathcal{A}\) is an \(\mathcal{O}_{\mathfrak{P}}\)-algebra, a complex of \(\mathcal{A}\)-modules will be called quasi-coherent if it is so as a complex of \(\mathcal{O}_{\mathfrak{P}}\)-modules.
The (derived) category of quasi-coherent complexes of \(\mathcal{A}\)-modules will be denoted \(\mathbf{D}_{\mathrm{qc}}(\mathcal{A})\), and is viewed as a full subcategory of \(\mathbf{D}(\mathcal{A})\). Note that I do not assume that \(\mathcal{A}\) is itself quasi-coherent as a complex of \(\mathcal{O}_{\mathfrak{P}}\)-modules. For example, if \(\mathfrak{P}\) is smooth, I will later want to take \(\mathcal{A}=\widehat{\mathcal{D}}_{\mathfrak{P}}^{(m)}\) and \(\mathcal{A}=\mathscr{D}_{\mathfrak{P}}^{(m)}\), although of course in this case the forgetful functor \(\mathbf{D}_{\mathrm{qc}}(\widehat{\mathcal{D}}_{\mathfrak{P}}^{(m)})\to\mathbf{D}_{\mathrm{qc}}(\mathscr{D}_{\mathfrak{P}}^{(m)})\) is an equivalence. One important way of constructing quasi-coherent complexes is the following lemma. **5.1.3 Lemma**.: _Suppose that \(\left\{\mathcal{M}_{n}\right\}_{n\in\mathbb{N}}\) is an inverse system of complexes of \(\mathcal{O}_{\mathfrak{P}}\)-modules, such that:_ 1. _each_ \(\mathcal{M}_{n}\) _is a quasi-coherent complex of_ \(\mathcal{O}_{P_{n}}\)_-modules;_ 2. _for each_ \(n\)_, the induced map_ \[\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{P_{n+1}}}^{\mathbf{L}}\mathcal{M}_{n+1}\to\mathcal{M}_{n}\] _is an isomorphism in_ \(\mathbf{D}(\mathcal{O}_{P_{n}})\)_._ _Then \(\mathcal{M}:=\mathbf{R}\underset{n}{\lim}\mathcal{M}_{n}\in\mathbf{D}(\mathcal{O}_{\mathfrak{P}})\) is quasi-coherent._ Proof.: I need to show that the map \[\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{M}\to\mathcal{M}_{n}\] is an isomorphism in \(\mathbf{D}(\mathcal{O}_{P_{n}})\). By tensoring both sides over \(\mathcal{O}_{P_{n}}\) with the exact sequence \[0\to\mathcal{O}_{P_{n-1}}\stackrel{{\times\varpi}}{{\longrightarrow}}\mathcal{O}_{P_{n}}\to\mathcal{O}_{P_{0}}\to 0\] and using condition (2), I can argue by induction on \(n\) to reduce to the case \(n=0\). Since \(\mathcal{O}_{P_{0}}\) is a perfect complex of \(\mathcal{O}_{\mathfrak{P}}\)-modules, I can then calculate the LHS as \[\mathcal{O}_{P_{0}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathbf{R}\underset{n}{\lim}\mathcal{M}_{n}\cong\mathbf{R}\underset{n}{\lim}\left(\mathcal{O}_{P_{0}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{M}_{n}\right)\cong\mathbf{R}\underset{n}{\lim}\left(\mathcal{O}_{P_{0}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{P_{n}}}^{\mathbf{L}}\mathcal{M}_{n}\right)\cong\mathbf{R}\underset{n}{\lim}\left(\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{M}_{0}\right).\] Now each complex \(\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{M}_{0}\) is quasi-isomorphic to the mapping cone of \[\mathcal{M}_{0}\stackrel{{0}}{{\to}}\mathcal{M}_{0},\] and moreover the transition maps \(\mathcal{O}_{P_{n+1}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{M}_{0}\to\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{M}_{0}\) are realised, on these mapping cones, by the identity in degree \(0\) and by multiplication by \(\varpi\), that is by zero, in degree \(-1\). It therefore follows that \[\mathbf{R}\underset{n}{\lim}\left(\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{M}_{0}\right)\cong\mathcal{M}_{0}\] as required. Mapping complexes between quasi-coherent complexes have the following straightforward description. **5.1.4 Lemma**.: _Suppose that \(\mathcal{M},\mathcal{N}\in\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\).
Then the natural map_ \[\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{\mathfrak{P}}}(\mathcal{M},\mathcal{N})\rightarrow\mathbf{R}\underset{n}{\lim}\,\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{P_{n}}}(\mathcal{M}_{n},\mathcal{N}_{n})\] _is an isomorphism._ Proof.: This is a simple calculation: \[\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{\mathfrak{P}}}(\mathcal{M},\mathcal{N})=\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{\mathfrak{P}}}(\mathcal{M},\mathbf{R}\underset{n}{\lim}\,\mathcal{N}_{n})=\mathbf{R}\underset{n}{\lim}\,\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{\mathfrak{P}}}(\mathcal{M},\mathcal{N}_{n})=\mathbf{R}\underset{n}{\lim}\,\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{P_{n}}}(\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{M},\mathcal{N}_{n})=\mathbf{R}\underset{n}{\lim}\,\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{P_{n}}}(\mathcal{M}_{n},\mathcal{N}_{n}).\qed\] Following [1, §3.4], I can define a completed tensor product \[-\widehat{\otimes}_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}-:\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\times\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\rightarrow\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\] \[\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{N}:=\mathbf{R}\underset{n}{\lim}\,\left(\mathcal{M}_{n}\otimes_{\mathcal{O}_{P_{n}}}^{\mathbf{L}}\mathcal{N}_{n}\right);\] the result is a quasi-coherent complex by Lemma 5.1.3. Formally, the definition is \[\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{N}:=\mathbf{R}l_{\mathfrak{P}*}(\mathbf{L}l_{\mathfrak{P}}^{*}\mathcal{M}\otimes_{\mathcal{O}_{P_{\bullet}}}^{\mathbf{L}}\mathbf{L}l_{\mathfrak{P}}^{*}\mathcal{N}).\] Similarly, if \(\pi:\mathfrak{P}^{\prime}\rightarrow\mathfrak{P}\) is a morphism of flat formal schemes, with induced maps \(\pi_{n}:P^{\prime}_{n}\to P_{n}\) for each \(n\), there is a functor \[\mathbf{L}\hat{\pi}^{*}:\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\rightarrow\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}^{\prime}})\] \[\mathbf{L}\hat{\pi}^{*}\mathcal{M}:=\mathbf{R}\underset{n}{\lim}\,\mathbf{L}\pi_{n}^{*}\mathcal{M}_{n};\] again, it follows from Lemma 5.1.3 that this is indeed a quasi-coherent complex. The formal definition is given by extending \(\pi\) to a morphism \[\pi_{\bullet}\colon(\mathfrak{P}^{\prime}_{\bullet},\mathcal{O}_{P^{\prime}_{\bullet}})\rightarrow(\mathfrak{P}_{\bullet},\mathcal{O}_{P_{\bullet}})\] and then defining \[\mathbf{L}\hat{\pi}^{*}:=\mathbf{R}l_{\mathfrak{P}^{\prime}*}\circ\mathbf{L}\pi_{\bullet}^{*}\circ\mathbf{L}l_{\mathfrak{P}}^{*}.\] Note that \[\mathbf{L}\hat{\pi}^{*}\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{\mathfrak{P}^{\prime}}}^{\mathbf{L}}\mathbf{L}\hat{\pi}^{*}\mathcal{N}\stackrel{{\cong}}{{\longrightarrow}}\mathbf{L}\hat{\pi}^{*}(\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{N}).\] Moreover, letting \(\mathbf{L}\pi^{*}\) denote abstract module pullback along \(\pi\), the natural maps \[\mathbf{L}\pi^{*}\mathcal{M}\rightarrow\mathbf{L}\pi_{n}^{*}\mathcal{M}_{n}\] for \(n\geq 0\) induce a map \[\mathbf{L}\pi^{*}\mathcal{M}\rightarrow\mathbf{L}\hat{\pi}^{*}\mathcal{M}\] in \(\mathbf{D}(\mathcal{O}_{\mathfrak{P}^{\prime}})\). Of course, \(\mathbf{L}\pi^{*}\) won't preserve quasi-coherence in general.
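As a quick sanity check on these definitions (a hedged side computation, using nothing beyond Lemma 5.1.3 and the flatness of \(\mathfrak{P}^{\prime}\) over \(\mathcal{V}\)), the completed pullback of the structure sheaf is again the structure sheaf: \[\mathbf{L}\hat{\pi}^{*}\mathcal{O}_{\mathfrak{P}}=\mathbf{R}\underset{n}{\lim}\,\mathbf{L}\pi_{n}^{*}\mathcal{O}_{P_{n}}=\mathbf{R}\underset{n}{\lim}\,\mathcal{O}_{P^{\prime}_{n}}\cong\mathcal{O}_{\mathfrak{P}^{\prime}},\] the final isomorphism holding since the system \(\{\mathcal{O}_{P^{\prime}_{n}}\}_{n\in\mathbb{N}}\) consists of quasi-coherent sheaves with surjective transition maps.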
**5.1.5 Lemma**.: _The functor \(\mathbf{R}\pi_{*}:\mathbf{D}(\mathcal{O}_{\mathfrak{P}^{\prime}})\rightarrow\mathbf{D}(\mathcal{O}_{\mathfrak{P}})\) preserves quasi-coherence, and_ \[\mathbf{L}\hat{\pi}^{*}:\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\rightleftarrows\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}^{\prime}}):\mathbf{R}\pi_{*}\] _form an adjoint pair._ Proof.: Suppose that \(\mathcal{N}\in\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}^{\prime}})\). Then \[\mathbf{R}\pi_{*}\mathcal{N}\cong\mathbf{R}\pi_{*}\mathbf{R}\underset{n}{\lim}\,\mathcal{N}_{n}\cong\mathbf{R}\underset{n}{\lim}\,\mathbf{R}\pi_{n*}\mathcal{N}_{n}.\] Now, since each \(\mathcal{N}_{n}\) is quasi-coherent as an \(\mathcal{O}_{P_{n}^{\prime}}\)-module, each \(\mathbf{R}\pi_{n*}\mathcal{N}_{n}\) is quasi-coherent as an \(\mathcal{O}_{P_{n}}\)-module. Since \(\mathbf{R}\pi_{n*}\) has finite cohomological dimension, the base change formula \[\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{P_{n+1}}}^{\mathbf{L}}\mathbf{R}\pi_{n+1*}\mathcal{N}_{n+1}\stackrel{{\cong}}{{\longrightarrow}}\mathbf{R}\pi_{n*}\mathcal{N}_{n}\] can be proved by reducing to the case of bounded complexes and applying [14, IV, Proposition 3.1.0]. Thus I can apply Lemma 5.1.3 to deduce that \(\mathbf{R}\pi_{*}\mathcal{N}\) is quasi-coherent. Now, thanks to Lemma 5.1.4, the chain of identifications \[\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{\mathfrak{P}^{\prime}}}(\mathbf{L}\hat{\pi}^{*}\mathcal{M},\mathcal{N})=\mathbf{R}\underset{n}{\lim}\,\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{P_{n}^{\prime}}}(\mathbf{L}\pi_{n}^{*}\mathcal{M}_{n},\mathcal{N}_{n})=\mathbf{R}\underset{n}{\lim}\,\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{P_{n}}}(\mathcal{M}_{n},\mathbf{R}\pi_{n*}\mathcal{N}_{n})=\mathbf{R}\mathrm{Hom}_{\mathcal{O}_{\mathfrak{P}}}(\mathcal{M},\mathbf{R}\pi_{*}\mathcal{N})\] shows that \((\mathbf{L}\hat{\pi}^{*},\mathbf{R}\pi_{*})\) do indeed form an adjoint pair as claimed. For open immersions, the abstract module pullback is already complete. **5.1.6 Lemma**.: _Let \(j:\mathfrak{U}\to\mathfrak{P}\) be an open immersion of flat formal schemes. Then the natural morphism of functors \(j^{-1}\to\mathbf{L}\hat{j}^{*}\) is an isomorphism._ Proof.: First note that \(j^{-1}\) has a left adjoint \(j_{!}\), and thus commutes with limits. The fact that \(j_{!}\) is exact means that \(j^{-1}\) moreover commutes with derived limits, since \(j^{-1}\) preserves \(K\)-injective complexes in the category of inverse systems. I then simply compute \[j^{-1}\mathcal{M}\stackrel{{\cong}}{{\longrightarrow}}j^{-1}\mathbf{R}\underset{n}{\lim}\,\mathcal{M}_{n}=\mathbf{R}\underset{n}{\lim}\,j_{n}^{-1}\mathcal{M}_{n}\stackrel{{\cong}}{{\longrightarrow}}\mathbf{R}\underset{n}{\lim}\,\mathbf{L}j_{n}^{*}\mathcal{M}_{n}=\mathbf{L}\hat{j}^{*}\mathcal{M}\] as required. **5.1.7 Remark**.: The observation that \(j^{-1}\) commutes with derived limits (together with the analogous assertion that \(j^{-1}\) commutes with derived tensor products) implies that quasi-coherence of a complex \(\mathcal{M}\in\mathbf{D}(\mathcal{O}_{\mathfrak{P}})\) can be checked locally on \(\mathfrak{P}\). Another situation in which there is no need to complete is the following. **5.1.8 Lemma**.: _Let \(\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}\) be a finite morphism of flat formal schemes, which is an isomorphism on the underlying topological spaces.
Then the morphism \(\mathbf{L}\pi^{*}\to\mathbf{L}\hat{\pi}^{*}\) of functors \(\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\to\mathbf{D}(\mathcal{O}_{\mathfrak{P}^{\prime}})\) is an isomorphism._ **5.1.9 Remark**.: It seems reasonable to suppose that the lemma holds for more general finite morphisms, but I will only need this special case. Proof.: The lemma amounts to showing that, for any \(\mathcal{M}\in\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\), the natural morphism \[\mathcal{O}_{\mathfrak{P}^{\prime}}\otimes_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathbf{R}\underset{n}{\lim}\,\mathcal{M}_{n}\to\mathbf{R}\underset{n}{\lim}\,(\mathcal{O}_{P_{n}^{\prime}}\otimes_{\mathcal{O}_{P_{n}}}^{\mathbf{L}}\mathcal{M}_{n})\] is an isomorphism. This can be checked locally on \(\mathfrak{P}\), so I can assume that both \(\mathfrak{P}\) and \(\mathfrak{P}^{\prime}\) are affine. In this case, \(\mathcal{O}_{\mathfrak{P}^{\prime}}\) admits a (possibly infinite, but bounded above) resolution by finite free \(\mathcal{O}_{\mathfrak{P}}\)-modules. By truncating, and using the fact that \(\mathbf{R}\underset{n}{\lim}\) has finite cohomological dimension (at least for systems \(\{\mathcal{M}_{n}\}_{n\in\mathbb{N}}\) with each \(\mathcal{M}_{n}\) quasi-coherent) I can therefore replace \(\mathcal{O}_{\mathfrak{P}^{\prime}}\) by \(\mathcal{O}_{\mathfrak{P}}^{\oplus m}\), and each \(\mathcal{O}_{P_{n}^{\prime}}\) by \(\mathcal{O}_{P_{n}}^{\oplus m}\), in which case the claim is clear. Quasi-coherent complexes satisfy the following version of the projection formula. **5.1.10 Lemma**.: _Let \(\pi:\mathfrak{P}^{\prime}\to\mathfrak{P}\) be a morphism of flat formal schemes. Then for any \(\mathcal{M}\in\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\) the map_ \[\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathbf{R}\pi_{*}\mathcal{O}_{\mathfrak{P}^{\prime}}\to\mathbf{R}\pi_{*}\mathbf{L}\hat{\pi}^{*}\mathcal{M}\] _is an isomorphism._ Proof.: It suffices to show that the map \[\mathcal{M}_{n}\otimes^{\mathbf{L}}_{\mathcal{O}_{P_{n}}}\mathbf{R}\pi_{n*}\mathcal{O}_{P^{\prime}_{n}}\to\mathbf{R}\pi_{n*}\mathbf{L}\pi_{n}^{*}\mathcal{M}_{n}\] is an isomorphism, for each \(n\). This question is local on \(P_{n}\), which I can therefore assume to be affine. Now, the standard construction of \(K\)-flat resolutions gives a resolution of \(\mathcal{M}_{n}\) with terms direct sums of sheaves of the form \(j_{!}\mathcal{O}_{U}\) for an open subscheme \(j\colon U\to P_{n}\). But since \(\mathcal{M}_{n}\) is quasi-coherent, and \(P_{n}\) is affine, I can actually find a \(K\)-flat resolution whose terms are all free \(\mathcal{O}_{P_{n}}\)-modules. Now using the fact that \(\mathbf{R}\pi_{n*}\) has finite cohomological dimension and commutes with filtered colimits, I can reduce to the tautological case \(\mathcal{M}_{n}=\mathcal{O}_{P_{n}}\). I will let \(\mathbf{D}_{\mathrm{qc},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\) denote the isogeny category of quasi-coherent complexes, and \(\mathbf{D}^{b}_{\mathrm{coh},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\) its full subcategory spanned by bounded, coherent complexes. There is a functor \[\mathbf{D}_{\mathrm{qc},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\to\mathbf{D}(\mathcal{O}_{\mathfrak{P}\mathbb{Q}})\] defined on objects by \(\mathcal{M}\mapsto\mathcal{M}_{\mathbb{Q}}\). In general, this need not be fully faithful, but it will be after restricting to \(\mathbf{D}^{b}_{\mathrm{coh},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\).
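For orientation, a brief (and entirely standard) illustration of the isogeny category, nothing more: it has the same objects as \(\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\), with morphisms \[\mathrm{Hom}_{\mathbf{D}_{\mathrm{qc},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})}(\mathcal{M},\mathcal{N})=\mathrm{Hom}_{\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})}(\mathcal{M},\mathcal{N})\otimes_{\mathbb{Z}}\mathbb{Q}.\] In particular, multiplication by \(p\) on any \(\mathcal{M}\) becomes invertible in \(\mathbf{D}_{\mathrm{qc},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\), although it is almost never invertible in \(\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\); this is exactly the sense in which the map of Proposition 5.1.13 below is inverted.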
**5.1.11 Lemma**.: _Let \(\mathfrak{P}\) be a flat formal scheme, and \(\mathcal{M},\mathcal{N}\in\mathbf{Coh}(\mathcal{O}_{\mathfrak{P}})\) coherent \(\mathcal{O}_{\mathfrak{P}}\)-modules. Then the natural map_ \[\mathrm{Ext}^{q}_{\mathcal{O}_{\mathfrak{P}}}(\mathcal{M},\mathcal{N})\otimes_{\mathbb{Z}}\mathbb{Q}\to\mathrm{Ext}^{q}_{\mathcal{O}_{\mathfrak{P}\mathbb{Q}}}(\mathcal{M}_{\mathbb{Q}},\mathcal{N}_{\mathbb{Q}})\] _is an isomorphism, for all \(q\geq 0\)._ Proof.: Since \(\mathfrak{P}\) is quasi-compact, the question is local on \(\mathfrak{P}\), which I may therefore assume to be affine. In this case, \(\mathcal{M}\) admits a resolution (possibly unbounded below) by finite free \(\mathcal{O}_{\mathfrak{P}}\)-modules. This reduces to the case \(\mathcal{M}=\mathcal{O}_{\mathfrak{P}}\), which is clear. **5.1.12 Corollary**.: _Let \(\mathfrak{P}\) be a flat formal scheme. Then the functor_ \[\mathbf{D}^{b}_{\mathrm{coh},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\to\mathbf{D}^{b}(\mathcal{O}_{\mathfrak{P}\mathbb{Q}}),\] _defined on objects by \(\mathcal{M}\mapsto\mathcal{M}_{\mathbb{Q}}\), is fully faithful._ I can now deduce an invariance result for quasi-coherent complexes under admissible blowups. **5.1.13 Proposition**.: _Let \(\pi:\mathfrak{P}^{\prime}\to\mathfrak{P}\) be an admissible blowup of flat formal schemes. Then, for any \(\mathcal{M}\in\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\), the map_ \[\mathcal{M}\to\mathbf{R}\pi_{*}\mathbf{L}\hat{\pi}^{*}\mathcal{M}\] _is an isogeny. That is, it becomes invertible in \(\mathbf{D}_{\mathrm{qc},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\)._ Proof.: Thanks to the projection formula (Lemma 5.1.10 above), the given map can be identified with \[\mathcal{M}\to\mathcal{M}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathbf{R}\pi_{*}\mathcal{O}_{\mathfrak{P}^{\prime}}.\] Since isogenies are preserved by applying the functor \(\mathcal{M}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}-\), it suffices to treat the case \(\mathcal{M}=\mathcal{O}_{\mathfrak{P}}\). In this case, after tensoring with \(\mathbb{Q}\), \[\mathcal{O}_{\mathfrak{P}\mathbb{Q}}=\mathbf{R}\mathrm{sp}_{\mathfrak{P}*}\mathcal{O}_{\mathfrak{P}_{K}},\qquad\mathbf{R}\pi_{*}\mathcal{O}_{\mathfrak{P}^{\prime}\mathbb{Q}}=\mathbf{R}\pi_{*}\mathbf{R}\mathrm{sp}_{\mathfrak{P}^{\prime}*}\mathcal{O}_{\mathfrak{P}^{\prime}_{K}}=\mathbf{R}\mathrm{sp}_{\mathfrak{P}*}\mathbf{R}\pi_{*}\mathcal{O}_{\mathfrak{P}^{\prime}_{K}}.\] Since \(\pi_{K}:\mathfrak{P}^{\prime}_{K}\to\mathfrak{P}_{K}\) is an isomorphism, it follows that \[\mathcal{O}_{\mathfrak{P}}\to\mathbf{R}\pi_{*}\mathcal{O}_{\mathfrak{P}^{\prime}}\] is a morphism of bounded complexes on \(\mathfrak{P}\), with coherent cohomology sheaves, which becomes a quasi-isomorphism after applying \(-\otimes_{\mathbb{Z}}\mathbb{Q}\). It therefore follows from Corollary 5.1.12 that it is an isogeny. I will also need to consider inductive systems of quasi-coherent complexes, the formalism of which works exactly as in [3, §4.2]. Thus \(\mathfrak{P}^{(\bullet)}\) (resp. \(\mathfrak{P}^{(\bullet)}_{\bullet}\)) will denote the topos of \(\mathbb{N}\)-indexed inductive systems of sheaves on \(\mathfrak{P}\) (resp. on \(\mathfrak{P}_{\bullet}\)), as in §1.6, which is ringed via the constant ind-object \(\mathcal{O}_{\mathfrak{P}}\) (resp. \(\mathcal{O}_{P_{\bullet}}\)).
Berthelot then defines a double localisation \(\underline{\mathbf{LD}}_{\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\) of the derived category of \(\mathcal{O}_{\mathfrak{P}}\)-modules on \(\mathfrak{P}^{(\bullet)}\). Roughly speaking this corresponds to tensoring with \(\mathbb{Q}\) and then taking the colimit over the inductive system. If I need to emphasize the fact that I am considering categories of inductive systems of complexes, rather than just complexes, I will write the rings on \(\mathfrak{P}^{(\bullet)}\) and \(\mathfrak{P}^{(\bullet)}_{\bullet}\) as \(\mathcal{O}_{\mathfrak{P}^{(\bullet)}}\) and \(\mathcal{O}_{P^{(\bullet)}_{\bullet}}\) respectively; thus \(\underline{\mathbf{LD}}_{\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\) is a localisation of the category \(\mathbf{D}(\mathcal{O}_{\mathfrak{P}^{(\bullet)}})\). As in [3, §4.2], I will denote by \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\subset\underline{\mathbf{LD}}_{\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\) the full subcategory on objects which are levelwise quasi-coherent. Exactly as above, any morphism \(\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}\) of flat formal schemes gives rise to a commutative square \[\begin{CD}(\mathfrak{P}^{\prime(\bullet)}_{\bullet},\mathcal{O}_{P^{\prime}_{\bullet}})@>{l_{\mathfrak{P}^{\prime(\bullet)}}}>{}>(\mathfrak{P}^{\prime(\bullet)},\mathcal{O}_{\mathfrak{P}^{\prime}})\\ @V{\pi^{(\bullet)}_{\bullet}}V{}V@V{\pi^{(\bullet)}}V{}V\\ (\mathfrak{P}^{(\bullet)}_{\bullet},\mathcal{O}_{P_{\bullet}})@>{l_{\mathfrak{P}^{(\bullet)}}}>{}>(\mathfrak{P}^{(\bullet)},\mathcal{O}_{\mathfrak{P}})\end{CD}\] of ringed toposes, and \(\mathbf{R}l_{\mathfrak{P}^{\prime(\bullet)}*}\circ\mathbf{L}\pi^{(\bullet)*}_{\bullet}\circ\mathbf{L}l^{*}_{\mathfrak{P}^{(\bullet)}}\) descends to a functor \[\mathbf{L}\hat{\pi}^{*}\colon\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\to\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}^{\prime}})\] on the localised categories. Informally, this just applies \(\mathbf{L}\hat{\pi}^{*}\) levelwise, and then passes to the localisation. There is of course a similar definition of the completed tensor product \[-\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}-:\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\times\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\to\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}}).\] Note that the formula \[\mathbf{L}\hat{\pi}^{*}\mathcal{M}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}^{\prime}}}\mathbf{L}\hat{\pi}^{*}\mathcal{N}\stackrel{{\cong}}{{\longrightarrow}}\mathbf{L}\hat{\pi}^{*}(\mathcal{M}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{N})\] still holds for objects of \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}\). ### The rigidification functor The categories \(\mathbf{D}_{\mathrm{qc},\mathbb{Q}}\) and \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}\) are now the natural source of the functor of completed pullback along the specialisation map.
Roughly speaking, for any flat formal scheme \(\mathfrak{P}\), I define \[\mathbf{L}\hat{\mathrm{sp}}^{*}=\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}:\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\to\mathbf{D}(\mathcal{O}^{+}_{\mathfrak{P}_{K}})\] \[\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\mathcal{M}:=\operatorname*{colim}_{\pi:\mathfrak{P}^{\prime}\to\mathfrak{P}}\mathrm{sp}^{-1}_{\mathfrak{P}^{\prime}}\mathbf{L}\hat{\pi}^{*}\mathcal{M}\] where the colimit is over all admissible blowups \(\pi:\mathfrak{P}^{\prime}\to\mathfrak{P}\). Tensoring with \(\mathbb{Q}\) and then passing to the colimit gives a functor \[\mathbf{L}\hat{\mathrm{sp}}^{*}:\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\to\mathbf{D}(\mathcal{O}_{\mathfrak{P}_{K}}). \tag{5.2.1}\] As above, the formal definition of \(\mathbf{L}\hat{\mathrm{sp}}^{*}\) requires the use of sheaves on diagrams of spaces. I first let \(I_{\mathfrak{P}}\) denote the category of admissible blowups of \(\mathfrak{P}\), and \[\mathcal{B}_{\mathfrak{P}}\colon I_{\mathfrak{P}}\to\mathbf{FSch}\] the tautological diagram in the category of formal schemes. I then consider: * the topos \(\mathcal{B}^{(\bullet)}_{\mathfrak{P}}\) of \(\mathbb{N}\)-indexed inductive systems of sheaves on the diagram \(\mathcal{B}_{\mathfrak{P}}\), ringed via the sheaf \(\mathcal{O}_{\mathcal{B}_{\mathfrak{P}}}\) whose restriction to each \(\mathfrak{P}^{\prime}\in I_{\mathfrak{P}}\) is the constant inductive system \(\mathcal{O}_{\mathfrak{P}^{\prime}}\); * the topos \(\mathcal{B}^{(\bullet)}_{\mathfrak{P}_{\bullet}}\) of \(\mathbb{N}\)-indexed inductive systems of \(\mathbb{N}\)-indexed projective systems of sheaves on \(\mathcal{B}_{\mathfrak{P}}\), ringed via the sheaf \(\mathcal{O}_{\mathcal{B}_{P_{\bullet}}}\) whose restriction to each \(\mathfrak{P}^{\prime}\in I_{\mathfrak{P}}\) is the constant inductive system \(\mathcal{O}_{P^{\prime}_{\bullet}}\). As in the case of a single formal scheme \(\mathfrak{P}\), if I need to emphasize the fact that I am considering categories of inductive systems, I will sometimes write these two rings as \(\mathcal{O}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}}\) and \(\mathcal{O}_{\mathcal{B}^{(\bullet)}_{P_{\bullet}}}\) respectively, for example in considering the derived category \(\mathbf{D}(\mathcal{O}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}})\) (and in particular to distinguish it from \(\mathbf{D}(\mathcal{O}_{\mathcal{B}_{\mathfrak{P}}})\)).
There are then natural morphisms \[l_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}}\colon(\mathcal{B}^{(\bullet)}_{\mathfrak{P}_{\bullet}},\mathcal{O}_{\mathcal{B}_{P_{\bullet}}})\to(\mathcal{B}^{(\bullet)}_{\mathfrak{P}},\mathcal{O}_{\mathcal{B}_{\mathfrak{P}}})\] \[\pi\colon(\mathcal{B}_{\mathfrak{P}},\mathcal{O}_{\mathcal{B}_{\mathfrak{P}}})\to(\mathfrak{P},\mathcal{O}_{\mathfrak{P}})\] \[\pi^{(\bullet)}_{\bullet}\colon(\mathcal{B}^{(\bullet)}_{\mathfrak{P}_{\bullet}},\mathcal{O}_{\mathcal{B}_{P_{\bullet}}})\to(\mathfrak{P}^{(\bullet)}_{\bullet},\mathcal{O}_{P_{\bullet}}).\] Just as in the case of a single admissible blowup \(\mathfrak{P}^{\prime}\to\mathfrak{P}\), I can then define the completed pullback functor \[\mathbf{L}\hat{\pi}^{*}\colon\mathbf{D}(\mathcal{O}_{\mathfrak{P}^{(\bullet)}})\to\mathbf{D}(\mathcal{O}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}})\] as the composite \(\mathbf{R}l_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}*}\circ\mathbf{L}\pi^{(\bullet)*}_{\bullet}\circ\mathbf{L}l^{*}_{\mathfrak{P}^{(\bullet)}}\). Finally, there is a morphism \[\mathrm{sp}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}}\colon(\mathfrak{P}_{K},\mathcal{O}_{\mathfrak{P}_{K}})\to(\mathcal{B}^{(\bullet)}_{\mathfrak{P}},\mathcal{O}_{\mathcal{B}_{\mathfrak{P}}})\] whose inverse image functor takes the colimit over both \(I^{\mathrm{op}}_{\mathfrak{P}}\) and \(\mathbb{N}\). The functor (5.2.1) I am after is then defined by (easily!) checking that the composite functor \[\mathbf{L}\hat{\mathrm{sp}}^{*}=\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}:\mathbf{D}(\mathcal{O}_{\mathfrak{P}^{(\bullet)}})\to\mathbf{D}(\mathcal{O}_{\mathfrak{P}_{K}})\] \[\mathbf{L}\hat{\mathrm{sp}}^{*}:=\mathbf{L}\mathrm{sp}^{*}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}}\circ\mathbf{L}\hat{\pi}^{*}\] descends to the quotient \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\) of \(\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}^{(\bullet)}})\). Note that the functor \(\mathbf{L}\mathrm{sp}^{*}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}}\) appearing in the definition applies \(\mathrm{sp}^{-1}_{\mathfrak{P}^{\prime}}\) on each admissible blowup \(\mathfrak{P}^{\prime}\to\mathfrak{P}\), then takes the colimit over both \(I^{\mathrm{op}}_{\mathfrak{P}}\) and \(\mathbb{N}\), and then finally tensors with \(\mathbb{Q}\). In particular, all of the composite factors of \(\mathbf{L}\mathrm{sp}^{*}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}}\) are exact and derive trivially. The natural maps \[\mathbf{L}\pi^{*}\to\mathbf{L}\hat{\pi}^{*}\] defined for each \(\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}\) fit together to give a morphism of functors \[\mathbf{L}\mathrm{sp}^{*}\to\mathbf{L}\hat{\mathrm{sp}}^{*} \tag{5.2.2}\] with source the abstract module pullback along \(\mathrm{sp}\colon(\mathfrak{P}_{K},\mathcal{O}_{\mathfrak{P}_{K}})\to(\mathfrak{P},\mathcal{O}_{\mathfrak{P}})\). _5.2.3 Example_.: If \(\mathcal{M}\) is a coherent \(\mathcal{O}_{\mathfrak{P}}\)-module such that \(\mathcal{M}_{\mathbb{Q}}\) is a locally projective \(\mathcal{O}_{\mathfrak{P}\mathbb{Q}}\)-module, then the morphism (5.2.2) induces an isomorphism \[\mathrm{sp}^{*}\mathcal{M}\stackrel{{\cong}}{{\longrightarrow}}\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\] in \(\mathbf{D}(\mathcal{O}_{\mathfrak{P}_{K}})\). ### Functoriality of \(\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\) Now suppose that \(u\colon\mathfrak{P}\to\mathfrak{Q}\) is a morphism of flat formal schemes over \(\mathcal{V}\).
Then taking the strict transform induces a functor \[I_{\mathfrak{Q}}\to I_{\mathfrak{P}},\] and it is straightforward to check that the hypotheses of Lemma 1.6.2 are satisfied. This therefore gives rise to a morphism of (ringed) sites \[u\colon\mathcal{B}_{\mathfrak{P}}\to\mathcal{B}_{\mathfrak{Q}}.\] Of course, there are analogous morphisms for the sites of inductive and projective systems of sheaves on \(\mathfrak{P}\) and \(\mathfrak{Q}\). The resulting commutative squares now give rise to a base change morphism \[\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{Q}}\circ\mathbf{R}u_{*}\to\mathbf{R}u_{*}\circ\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}} \tag{5.3.1}\] of functors \[\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\to\mathbf{D}(\mathcal{O}_{\mathfrak{Q}_{K}}).\] When \(\mathfrak{Q}=\operatorname{Spf}\left(\mathcal{V}\right)\) is a point, \(u\colon\mathfrak{P}\to\operatorname{Spf}\left(\mathcal{V}\right)\) is the structure morphism, and \(\mathcal{M}\in\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\), I then define \[\mathbf{R}\Gamma(\mathfrak{P},\mathcal{M}):=\mathbf{L}\hat{\mathrm{sp}}^{*}_{\operatorname{Spf}\left(\mathcal{V}\right)}\mathbf{R}u_{*}\mathcal{M}\in\mathbf{D}(K).\] Concretely, if \(\mathcal{M}=\{\mathcal{M}^{(m)}\}_{m\in\mathbb{N}}\in\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\), then \[\mathbf{R}\Gamma(\mathfrak{P},\mathcal{M})=\mathbf{R}\Gamma(\mathfrak{P},\operatorname*{colim}_{m}\mathcal{M}^{(m)}\otimes_{\mathbb{Z}}\mathbb{Q})=\operatorname*{colim}_{m}\mathbf{R}\Gamma(\mathfrak{P},\mathcal{M}^{(m)})\otimes_{\mathbb{Z}}\mathbb{Q}.\] Before proving that (5.3.1) is an isomorphism, I need to show that \(\mathbf{L}\hat{\mathrm{sp}}^{*}\) is compatible with open immersions. **5.3.2 Lemma**.: _Let \(j:\mathfrak{U}\to\mathfrak{P}\) denote an open immersion of flat formal schemes. Then the natural map_ \[j^{-1}\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\to\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{U}}j^{-1}\] _is an isomorphism of functors \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\to\mathbf{D}(\mathcal{O}_{\mathfrak{U}_{K}})\)._ Proof.: It suffices to prove the analogous statement with \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\) replaced by \(\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\).
The functor \(\mathfrak{P}^{\prime}\mapsto\mathfrak{P}^{\prime}\times_{\mathfrak{P}}\mathfrak{U}\) from admissible blowups of \(\mathfrak{P}\) to those of \(\mathfrak{U}\) is cofinal, so I can compute \[\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{U}}j^{-1}\mathcal{M}=\operatorname*{colim}_{\pi\colon\mathfrak{U}^{\prime}\to\mathfrak{U}}\mathrm{sp}^{-1}_{\mathfrak{U}^{\prime}}\mathbf{L}\hat{\pi}^{*}j^{-1}\mathcal{M}=\operatorname*{colim}_{\pi\colon\mathfrak{U}^{\prime}\to\mathfrak{U}}\mathrm{sp}^{-1}_{\mathfrak{U}^{\prime}}j^{-1}\mathbf{L}\hat{\pi}^{*}\mathcal{M}=\operatorname*{colim}_{\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}}\mathrm{sp}^{-1}_{\pi^{-1}(\mathfrak{U})}j^{-1}\mathbf{L}\hat{\pi}^{*}\mathcal{M}=j^{-1}\operatorname*{colim}_{\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}}\mathrm{sp}^{-1}_{\mathfrak{P}^{\prime}}\mathbf{L}\hat{\pi}^{*}\mathcal{M}=j^{-1}\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\mathcal{M}\] using Lemma 5.1.6 for the second (canonical) isomorphism. **5.3.3 Theorem**.: _Let \(u\colon\mathfrak{P}\to\mathfrak{Q}\) be a flat morphism of flat formal schemes. Then_ \[\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{Q}}\circ\mathbf{R}u_{*}\to\mathbf{R}u_{*}\circ\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\] _is an isomorphism of functors \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\to\mathbf{D}(\mathcal{O}_{\mathfrak{Q}_{K}})\)._ Proof.: Since both \(\mathfrak{P}\) and \(\mathfrak{P}_{K}\) are quasi-compact, cohomology commutes with filtered colimits. It therefore suffices to prove the analogous statement with \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\) replaced by \(\mathbf{D}_{\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\). First suppose that \(\mathfrak{Q}=\operatorname{Spf}\left(\mathcal{V}\right)\) is a point. Then the base change morphism amounts to a morphism \[\mathbf{R}\Gamma(\mathfrak{P},\mathcal{M})\to\mathbf{R}\Gamma(\mathfrak{P}_{K},\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\mathcal{M})\] in \(\mathbf{D}(K)\). To prove that this is an isomorphism, recall from [10, §4] that, as topological spaces, \[\mathfrak{P}_{K}=\lim_{I_{\mathfrak{P}}}\mathfrak{P}^{\prime}.\] In this case, since each topological space \(\mathfrak{P}^{\prime}\) is coherent and sober, and the transition maps are quasi-compact, it follows from [10, Chapter 0, Proposition 3.1.19] that the map \[\operatorname*{colim}_{\mathfrak{P}^{\prime}}\mathbf{R}\Gamma(\mathfrak{P}^{\prime},\mathbf{L}\hat{\pi}^{*}\mathcal{M})\to\mathbf{R}\Gamma(\mathfrak{P}_{K},\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\mathcal{M})\] is an isomorphism. In fact, the result in [10] is stated for sheaves, rather than complexes, but it is straightforward to extend it to complexes using \(K\)-injective resolutions. It now simply suffices to observe that, by Proposition 5.1.13, the map \[\mathbf{R}\Gamma(\mathfrak{P},\mathcal{M})\to\mathbf{R}\Gamma(\mathfrak{P}^{\prime},\mathbf{L}\hat{\pi}^{*}\mathcal{M})\] is an isomorphism for all \(\mathfrak{P}^{\prime}\). For the general case, it suffices to show that the given map induces an isomorphism on derived global sections for any quasi-compact open \(V\subset\mathfrak{Q}_{K}\).
To prove this, I can replace \(\mathfrak{Q}\) by an admissible blowup without changing either side, thus I can assume that \(V=\mathfrak{V}_{K}\) for an open formal subscheme \(\mathfrak{V}\subset\mathfrak{Q}\). By Lemma 5.3.2, I can then replace \(\mathfrak{Q}\) by \(\mathfrak{V}\), and hence assume that \(V=\mathfrak{Q}_{K}\). In this case, the base change map fits into the following commutative diagram. The two horizontal maps are clearly isomorphisms, and the two vertical maps are isomorphisms by the already treated case \(\mathfrak{Q}=\operatorname{Spf}\left(\mathcal{V}\right)\). Hence the top left diagonal map is an isomorphism. It's worth explicitly stating the special case \(\mathfrak{Q}=\operatorname{Spf}\left(\mathcal{V}\right)\). **5.3.4 Corollary**.: _Let \(\mathfrak{P}\) be a flat formal scheme, and \(\mathcal{M}\in\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\). Then the natural map_ \[\mathbf{R}\Gamma(\mathfrak{P},\mathcal{M})\to\mathbf{R}\Gamma(\mathfrak{P}_{K},\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\mathcal{M})\] _is an isomorphism in \(\mathbf{D}(K)\)._ Eventually, I will want to upgrade Theorem 5.3.3 to include \(\mathscr{D}\)-module structures, and in order to do so I will need a slightly different description of the functor \(\mathbf{R}u_{*}\) that works under the additional assumption that \(u\) is flat. Restricting \(\mathcal{B}_{\mathfrak{P}}\) along the functor \(I_{\mathfrak{Q}}\to I_{\mathfrak{P}}\) gives rise to a diagram \(\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}\colon I_{\mathfrak{Q}}\to\mathbf{FSch}\) which by the flatness hypothesis on \(u\) sends an admissible blowup \(\mathfrak{Q}^{\prime}\to\mathfrak{Q}\) to the admissible blowup \(\mathfrak{Q}^{\prime}\times_{\mathfrak{Q}}\mathfrak{P}\to\mathfrak{P}\). This gives rise to a category \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}})\) of inductive systems of quasi-coherent complexes on \(\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}\) as above, and restriction along \(I_{\mathfrak{Q}}\to I_{\mathfrak{P}}\) induces a 'forgetful' functor \[\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathcal{B}_{\mathfrak{P}}})\to\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}}).\] Of course, the morphism of diagrams \(u\colon\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}\to\mathcal{B}_{\mathfrak{Q}}\) induces a functor \[\mathbf{R}u_{*}\colon\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}})\to\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathcal{B}_{\mathfrak{Q}}}).\] **5.3.5 Lemma**.: _The diagram commutes up to \(2\)-isomorphism._ Proof.: This is simply a matter of unwinding the definitions. I end this section with a brief discussion of the compatibility of \(\mathbf{L}\hat{\mathrm{sp}}^{*}\) with tensor products.
If \(\mathfrak{P}\) is a flat formal scheme, and \(\pi:\mathfrak{P}^{\prime}\to\mathfrak{P}\) is an admissible blowup, there is a natural map \[\mathbf{L}\hat{\pi}^{*}\mathcal{M}\otimes^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}^{\prime}}}\mathbf{L}\hat{\pi}^{*}\mathcal{N}\to\mathbf{L}\hat{\pi}^{*}\mathcal{M}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}^{\prime}}}\mathbf{L}\hat{\pi}^{*}\mathcal{N}=\mathbf{L}\hat{\pi}^{*}(\mathcal{M}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{N}).\] Applying \(\mathrm{sp}^{-1}_{\mathfrak{P}^{\prime}}\) gives \[\mathrm{sp}^{-1}_{\mathfrak{P}^{\prime}}\mathbf{L}\hat{\pi}^{*}\mathcal{M}\otimes^{\mathbf{L}}_{\mathrm{sp}^{-1}_{\mathfrak{P}^{\prime}}\mathcal{O}_{\mathfrak{P}^{\prime}}}\mathrm{sp}^{-1}_{\mathfrak{P}^{\prime}}\mathbf{L}\hat{\pi}^{*}\mathcal{N}\to\mathrm{sp}^{-1}_{\mathfrak{P}^{\prime}}\mathbf{L}\hat{\pi}^{*}(\mathcal{M}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{N}),\] and passing to the colimit in \(\mathfrak{P}^{\prime}\) gives a map \[\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\otimes^{\mathbf{L}}_{\mathcal{O}^{+}_{\mathfrak{P}_{K}}}\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{N}\to\mathbf{L}\hat{\mathrm{sp}}^{*}(\mathcal{M}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{N})\] in \(\mathbf{D}(\mathcal{O}^{+}_{\mathfrak{P}_{K}})\). Tensoring with \(\mathbb{Q}\) therefore gives, for any \(\mathcal{M},\mathcal{N}\in\mathbf{D}_{\mathrm{qc},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}})\), a map \[\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\otimes^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}_{K}}}\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{N}\to\mathbf{L}\hat{\mathrm{sp}}^{*}(\mathcal{M}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{N}) \tag{5.3.6}\] in \(\mathbf{D}(\mathcal{O}_{\mathfrak{P}_{K}})\). Passing to the colimit gives a similar map whenever \(\mathcal{M},\mathcal{N}\in\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\); I leave it to the reader to give a precise construction of this map. In general (5.3.6) won't be an isomorphism; however, I will show later on that this will be the case whenever \(\mathcal{M},\mathcal{N}\) underlie overholonomic \(\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}}\)-modules. ### Functions with overconvergent singularities Now suppose that \(\mathfrak{P}\) is smooth, and consider a divisor \(D\subset P\). In this situation, Berthelot has defined in [2, §4.4] the \(\mathcal{O}_{\mathfrak{P}}\)-algebra \(\widehat{\mathcal{B}}_{\mathfrak{P}}(D,r)\) for any \(r\in\mathbb{N}\). Locally, if there exists some \(f\in\mathcal{O}_{\mathfrak{P}}\) such that \(D=P\cap V(f)\), then \[\widehat{\mathcal{B}}_{\mathfrak{P}}(D,r)=\frac{\mathcal{O}_{\mathfrak{P}}\langle X\rangle}{(f^{r}X-p)}.\] **5.4.1 Lemma**.: \(\widehat{\mathcal{B}}_{\mathfrak{P}}(D,r)\) _is quasi-coherent as a complex of \(\mathcal{O}_{\mathfrak{P}}\)-modules._ Proof.: Quasi-coherence can be checked locally, by Remark 5.1.7, so I can assume that \(\mathfrak{P}\) is affine and that \(D\subset P\) is the vanishing locus of a single function \(f\in\mathcal{O}_{\mathfrak{P}}\), which is a non-zero divisor in \(\mathcal{O}_{P}\). In this case \[\widehat{\mathcal{B}}_{\mathfrak{P}}(D,r)=\frac{\mathcal{O}_{\mathfrak{P}}\langle X\rangle}{(f^{r}X-p)}\] is \(p\)-torsion-free, and hence flat over \(\mathcal{V}\).
Thus \[\mathcal{O}_{P_{n}}\otimes^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\frac{\mathcal{O}_{\mathfrak{P}}\langle X\rangle}{(f^{r}X-p)}\cong\mathcal{O}_{P_{n}}\otimes_{\mathcal{O}_{\mathfrak{P}}}\frac{\mathcal{O}_{\mathfrak{P}}\langle X\rangle}{(f^{r}X-p)}=\frac{\mathcal{O}_{P_{n}}[X]}{(f^{r}X-p)}\] is quasi-coherent, and the transition maps \(\frac{\mathcal{O}_{P_{n+1}}[X]}{(f^{r}X-p)}\to\frac{\mathcal{O}_{P_{n}}[X]}{(f^{r}X-p)}\) are surjective. Hence \[\mathbf{R}\underset{n}{\lim}\frac{\mathcal{O}_{P_{n}}[X]}{(f^{r}X-p)}\cong\underset{n}{\lim}\frac{\mathcal{O}_{P_{n}}[X]}{(f^{r}X-p)}\cong\frac{\mathcal{O}_{\mathfrak{P}}\langle X\rangle}{(f^{r}X-p)}\] as required. Berthelot then defines \(\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}}(D):=\widehat{\mathcal{B}}_{\mathfrak{P}}(D,p^{m+1})\), which form an inductive system as \(m\) varies. This gives rise to an object \[\widehat{\mathcal{B}}^{(\bullet)}_{\mathfrak{P}}(D)\in\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}}).\] **5.4.2 Theorem**.: _Let \(\mathfrak{P}\) be a smooth formal scheme, \(D\subset P\) a divisor, and \(U:=P\setminus D\). Suppose that \(\mathcal{M}\in\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}})\). Then the morphism_ \[\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\to\mathbf{L}\hat{\mathrm{sp}}^{*}(\widehat{\mathcal{B}}^{(\bullet)}_{\mathfrak{P}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M})\] _induced by \(\mathcal{M}\to\widehat{\mathcal{B}}^{(\bullet)}_{\mathfrak{P}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M}\) factors through an isomorphism_ \[j^{\dagger}_{U}\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\stackrel{{\cong}}{{\longrightarrow}}\mathbf{L}\hat{\mathrm{sp}}^{*}(\widehat{\mathcal{B}}^{(\bullet)}_{\mathfrak{P}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M})\] _in \(\mathbf{D}(\mathcal{O}_{\mathfrak{P}_{K}})\)._ Proof.: Set \(r_{m}=p^{m+1}\), \(\lambda_{m}:=p^{-1/r_{m}}\), let \(]D[_{\lambda_{m}}\subset\mathfrak{P}_{K}\) be the open tube of radius \(\lambda_{m}\), \(\overline{D}_{m}\) its closure in \(\mathfrak{P}_{K}\), and write \(V_{m}:=\mathfrak{P}_{K}\setminus\overline{D}_{m}\). Let \(j_{m}:V_{m}\to\mathfrak{P}_{K}\) denote the inclusion. Thus, if \(D\) is defined by \(f=0\) inside \(P\), for some \(f\in\mathcal{O}_{\mathfrak{P}}\), then \(V_{m}\) is defined inside \(\mathfrak{P}_{K}\) by \(\left\lvert f\right\rvert\geq\lambda_{m}\). Therefore, for any complex \(\mathscr{K}\) on \(\mathfrak{P}_{K}\), \[j^{\dagger}_{U}\mathscr{K}\xrightarrow{\cong}\operatorname*{colim}_{m}\mathbf{R}j_{m*}j_{m}^{-1}\mathscr{K}.\] The map \[\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\to\mathbf{L}\hat{\mathrm{sp}}^{*}(\widehat{\mathcal{B}}^{(\bullet)}_{\mathfrak{P}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M})\] is the colimit of the maps \[\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\to\mathbf{L}\hat{\mathrm{sp}}^{*}(\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M}).\] It therefore suffices to show that \[\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\to\mathbf{L}\hat{\mathrm{sp}}^{*}(\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M})\] factors through an isomorphism \[\mathbf{R}j_{m*}j_{m}^{-1}\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\to\mathbf{L}\hat{\mathrm{sp}}^{*}(\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M})\] (which is then necessarily unique).
To prove this, consider the admissible blowup \[\pi\colon\mathfrak{P}^{\prime}:=\mathrm{Bl}_{(p,f^{r_{m}})}\mathfrak{P}\to\mathfrak{P}.\] There exists a unique open formal subscheme \(j_{m}\colon\mathfrak{V}_{m}^{\prime}\to\mathfrak{P}^{\prime}\) such that \(V_{m}=\mathrm{sp}_{\mathfrak{P}^{\prime}}^{-1}(\mathfrak{V}_{m}^{\prime})\). Now set \(\mathcal{M}^{\prime}:=\mathbf{L}\hat{\pi}^{*}\mathcal{M}\) and \(\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D):=\mathbf{L}\hat{\pi}^{*}\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}}(D)\), thus what I need to show is that the map \[\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}^{\prime}}\mathcal{M}^{\prime}\to\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}^{\prime}}(\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}^{\prime}}}\mathcal{M}^{\prime})\] factors through an isomorphism \[\mathbf{R}j_{m*}j_{m}^{-1}\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}^{\prime}}\mathcal{M}^{\prime}\to\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}^{\prime}}(\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}^{\prime}}}\mathcal{M}^{\prime})\] (again, this isomorphism is then necessarily unique). Applying Theorem 5.3.3, it therefore suffices to prove that the map \[\mathcal{M}^{\prime}\to\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}^{\prime}}}\mathcal{M}^{\prime}\] factors through an isomorphism \[\mathbf{R}j_{m*}j_{m}^{-1}\mathcal{M}^{\prime}\to\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}^{\prime}}}\mathcal{M}^{\prime}\] in \(\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\mathcal{O}_{\mathfrak{P}^{\prime}})\). By Lemma 5.1.10 I can reduce to the case \(\mathcal{M}^{\prime}=\mathcal{O}_{\mathfrak{P}^{\prime}}\), in other words I need to show that \[\mathcal{O}_{\mathfrak{P}^{\prime}}\to\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\] factors through an isomorphism \[\mathbf{R}j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}\to\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\] in \(\mathbf{D}_{\mathrm{qc},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}^{\prime}})\). To see this, I will construct the inverse isogeny \[\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\to\mathbf{R}j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}.\] Locally on \(\mathfrak{P}\), \[\mathfrak{P}^{\prime}=\mathbf{Proj}_{\mathcal{O}_{\mathfrak{P}}}\left(\frac{\mathcal{O}_{\mathfrak{P}}[X,Y]}{(f^{r_{m}}X-pY)}\right),\] where \(D=P\cap V(f)\). This is covered by the two open formal subschemes \[\mathfrak{V}^{\prime}_{m}=\mathrm{Spf}\left(\frac{\mathcal{O}_{\mathfrak{P}}\langle X\rangle}{(f^{r_{m}}X-p)}\right),\qquad\mathfrak{U}=\mathrm{Spf}\left(\frac{\mathcal{O}_{\mathfrak{P}}\langle Y\rangle}{(f^{r_{m}}-pY)}\right).\] In particular, locally on \(\mathfrak{P}\), \(\mathfrak{V}^{\prime}_{m}=D(Y)\) is the complement of a hypersurface in \(\mathfrak{P}^{\prime}\), and hence \(\mathbf{R}j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}=j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}\) is a module (and not just a complex). Thus I am allowed to work locally on \(\mathfrak{P}\), and therefore assume that there is indeed such an \(f\).
I can then calculate \[\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\left|{}_{\mathfrak{V}^{\prime}_{m}}\right.=\mathbf{R}\underset{n}{\lim}\frac{\mathcal{O}_{P_{n}}[T]}{(f^{r_{m}}T-p)}\otimes^{\mathbf{L}}_{\mathcal{O}_{P_{n}}}\frac{\mathcal{O}_{P_{n}}[X]}{(f^{r_{m}}X-p)}.\] Since \(f\) is not a zero divisor in \(\mathcal{O}_{P_{n}}\), a direct calculation then shows that \[\mathcal{O}_{P_{n}}[X]\stackrel{{f^{r_{m}}X-p}}{{\longrightarrow}}\mathcal{O}_{P_{n}}[X]\] is a flat resolution of \(\frac{\mathcal{O}_{P_{n}}[X]}{(f^{r_{m}}X-p)}\). Thus \[\frac{\mathcal{O}_{P_{n}}[T]}{(f^{r_{m}}T-p)}\otimes^{\mathbf{L}}_{\mathcal{O}_{P_{n}}}\frac{\mathcal{O}_{P_{n}}[X]}{(f^{r_{m}}X-p)}=\left[\frac{\mathcal{O}_{P_{n}}[X,T]}{(f^{r_{m}}T-p)}\stackrel{{f^{r_{m}}X-p}}{{\longrightarrow}}\frac{\mathcal{O}_{P_{n}}[X,T]}{(f^{r_{m}}T-p)}\right].\] Since each \(\frac{\mathcal{O}_{P_{n}}[X,T]}{(f^{r_{m}}T-p)}\) is a quasi-coherent \(\mathcal{O}_{P_{n}}\)-module, and the transition maps \[\frac{\mathcal{O}_{P_{n+1}}[X,T]}{(f^{r_{m}}T-p)}\to\frac{\mathcal{O}_{P_{n}}[X,T]}{(f^{r_{m}}T-p)}\] are surjective, \[\mathbf{R}\underset{n}{\lim}\frac{\mathcal{O}_{P_{n}}[X,T]}{(f^{r_{m}}T-p)}=\underset{n}{\lim}\frac{\mathcal{O}_{P_{n}}[X,T]}{(f^{r_{m}}T-p)}=\frac{\mathcal{O}_{\mathfrak{P}}\langle X,T\rangle}{(f^{r_{m}}T-p)}.\] Thus \[\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\left|{}_{\mathfrak{V}^{\prime}_{m}}\right.=\left[\frac{\mathcal{O}_{\mathfrak{P}}\langle X,T\rangle}{(f^{r_{m}}T-p)}\stackrel{{f^{r_{m}}X-p}}{{\longrightarrow}}\frac{\mathcal{O}_{\mathfrak{P}}\langle X,T\rangle}{(f^{r_{m}}T-p)}\right].\] A direct calculation shows that this map is injective, thus \[\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\left|{}_{\mathfrak{V}^{\prime}_{m}}\right.=\frac{\mathcal{O}_{\mathfrak{P}}\langle X,T\rangle}{(f^{r_{m}}T-p,f^{r_{m}}X-p)}.\] On \(\mathfrak{V}^{\prime}_{m}\), the factorisation \[\mathcal{O}_{\mathfrak{P}^{\prime}}\to\mathbf{R}j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}\to\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\] I am after is simply the inclusion \[\frac{\mathcal{O}_{\mathfrak{P}}\langle X\rangle}{(f^{r_{m}}X-p)}\to\frac{\mathcal{O}_{\mathfrak{P}}\langle T,X\rangle}{(f^{r_{m}}T-p,f^{r_{m}}X-p)}.\] The inverse map \[\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\to\mathbf{R}j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}\] is then the map \[\frac{\mathcal{O}_{\mathfrak{P}}\langle T,X\rangle}{(f^{r_{m}}T-p,f^{r_{m}}X-p)}\to\frac{\mathcal{O}_{\mathfrak{P}}\langle X\rangle}{(f^{r_{m}}X-p)}\] given by \(T\mapsto X\), which is surjective, with kernel \((T-X)\) annihilated by \(p\). Similarly, on \(\mathfrak{U}\), \[\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)|_{\mathfrak{U}}=\mathbf{R}\underset{n}{\lim}\frac{\mathcal{O}_{P_{n}}[T]}{(f^{r_{m}}T-p)}\otimes^{\mathbf{L}}_{\mathcal{O}_{P_{n}}}\frac{\mathcal{O}_{P_{n}}[Y]}{(f^{r_{m}}-pY)}=\frac{\mathcal{O}_{\mathfrak{P}}\langle T,Y\rangle}{(f^{r_{m}}T-p,f^{r_{m}}-pY)}.\] The inverse map \[\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\to\mathbf{R}j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}\] is therefore the map \[\frac{\mathcal{O}_{\mathfrak{P}}\langle T,Y\rangle}{(f^{r_{m}}T-p,f^{r_{m}}-pY)}\to\frac{\mathcal{O}_{\mathfrak{P}}\langle Y,Y^{-1}\rangle}{(f^{r_{m}}-pY)}\] given by \(T\mapsto Y^{-1}\). Again, this map is surjective, with kernel \((1-TY)\) annihilated by \(p\).
Putting this together, I have constructed a map \[\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\to j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}\] which is surjective, and whose kernel is annihilated by \(p\). It follows that it is an isogeny, and hence induces an isomorphism \[\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\stackrel{{\cong}}{{\longrightarrow}}\mathbf{R}j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}\] in \(\mathbf{D}_{\mathrm{qc},\mathbb{Q}}(\mathcal{O}_{\mathfrak{P}^{\prime}})\). To identify this with the inverse of an isomorphism \[\mathbf{R}j_{m*}\mathcal{O}_{\mathfrak{V}^{\prime}_{m}}\stackrel{{\cong}}{{\longrightarrow}}\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D)\] factoring the natural map \[\mathcal{O}_{\mathfrak{P}^{\prime}}\to\widehat{\mathcal{B}}^{(m)}_{\mathfrak{P}^{\prime}}(D),\] I can therefore restrict to \(\mathfrak{V}^{\prime}_{m}\), in which case everything has already been worked out explicitly. ## 6. Rigidification of \(\mathscr{D}^{\dagger}\)-modules and constructibility Suppose now that the formal scheme \(\mathfrak{P}\) is smooth over \(\mathcal{V}\). Berthelot defined in [1, §4.2] the category \[\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\widehat{\mathcal{D}}^{(\bullet)}_{\mathfrak{P}})\] of inductive systems of quasi-coherent complexes of \(\widehat{\mathcal{D}}^{(\bullet)}_{\mathfrak{P}}\)-modules, up to isogeny. In this section, my goal is to upgrade \(\mathbf{L}\hat{\mathrm{sp}}^{*}\) to a functor \[\mathbf{L}\hat{\mathrm{sp}}^{*}\colon\underline{\mathbf{LD}}_{\mathbb{Q},\mathrm{qc}}(\widehat{\mathcal{D}}^{(\bullet)}_{\mathfrak{P}})\to\mathbf{D}(\mathscr{D}_{\mathfrak{P}_{K}}),\] and prove that if \(\mathcal{M}\) is an overholonomic complex of Frobenius type, then \(\mathbf{L}\hat{\mathrm{sp}}^{*}\mathcal{M}\) is a constructible complex on \(\mathfrak{P}_{K}\). ### \(\boldsymbol{\mathscr{D}^{(k,0)}}\)-modules on admissible blowups The functor \(\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\) passes through the category of quasi-coherent complexes of \(\mathcal{O}\)-modules on the diagram of all admissible blowups of \(\mathfrak{P}\), and the same will therefore need to be true for its \(\mathscr{D}\)-module enhancement. In trying to do this directly, one quickly runs into the following problem: if \(\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}\) is an admissible blowup of a smooth formal scheme \(\mathfrak{P}\), then \(\pi^{-1}\mathscr{D}^{(0)}_{\mathfrak{P}}\) does not necessarily act on \(\mathcal{O}_{\mathfrak{P}^{\prime}}\). Thus \(\pi^{*}\mathscr{D}^{(0)}_{\mathfrak{P}}\) does not have any natural ring structure over which the pullback \(\pi^{*}\mathcal{M}\) of a \(\mathscr{D}^{(0)}_{\mathfrak{P}}\)-module \(\mathcal{M}\) could be a module. To get around this problem, I will need to use the rings of differential operators with congruence level from [13]. In that article, the authors define, for any \(k\geq 0\), a subring \[\mathscr{D}^{(k,0)}_{\mathfrak{P}}\subset\mathscr{D}^{(0)}_{\mathfrak{P}}\] consisting of 'differential operators of congruence level \(k\)'.
In terms of local co-ordinates \(z_{1},\ldots,z_{d}\), with corresponding derivations \(\partial_{1},\ldots,\partial_{d}\), there is the usual description \[\mathscr{D}_{\mathfrak{P}}^{(0)}=\left\{\left.\sum_{\underline{i}=(i_{1},\ldots,i_{d})\in\mathbb{N}^{d}}a_{\underline{i}}\partial_{1}^{i_{1}}\ldots\partial_{d}^{i_{d}}\ \right|\ a_{\underline{i}}\in\mathcal{O}_{\mathfrak{P}},\text{ only finitely many }\neq 0\right\},\] and then \(\mathscr{D}_{\mathfrak{P}}^{(k,0)}\) consists of those elements for which \(a_{\underline{i}}\in\mathfrak{m}^{k|\underline{i}|}=\mathfrak{m}^{k(i_{1}+\ldots+i_{d})}\). In particular, \(\mathscr{D}^{(k,0)}_{\mathfrak{P}\mathbb{Q}}=\mathscr{D}^{(0)}_{\mathfrak{P}\mathbb{Q}}\). If \(\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}\) is an admissible blowup, then \(\pi^{-1}\mathscr{D}^{(k,0)}_{\mathfrak{P}\mathbb{Q}}=\pi^{-1}\mathscr{D}^{(0)}_{\mathfrak{P}\mathbb{Q}}\) acts on \(\mathcal{O}_{\mathfrak{P}^{\prime}\mathbb{Q}}\). Since \(\mathfrak{P}^{\prime}\) is flat over \(\mathcal{V}\), it therefore makes sense to ask whether or not this induces an action of \(\pi^{-1}\mathscr{D}^{(k,0)}_{\mathfrak{P}}\) on \(\mathcal{O}_{\mathfrak{P}^{\prime}}\). In [11, Corollary 2.1.15] the authors show that this will be the case for all sufficiently large \(k\). In particular, this will be true if \(\mathfrak{P}^{\prime}=\operatorname{Bl}_{\mathcal{I}}(\mathfrak{P})\) and \(\varpi^{k}\in\mathcal{I}\).

**6.1.1 Definition**.: If \(\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}\) is an admissible blowup, with \(\mathfrak{P}\) smooth, define \(k_{\mathfrak{P}^{\prime}}:=\min\{k\mid\pi^{-1}\mathscr{D}^{(k,0)}_{\mathfrak{P}}\text{ acts on }\mathcal{O}_{\mathfrak{P}^{\prime}}\}\). If \(\rho\colon\mathfrak{P}^{\prime\prime}\to\mathfrak{P}^{\prime}\) is a morphism of admissible blowups, define \(k_{\rho}:=\max\{k_{\mathfrak{P}^{\prime}},k_{\mathfrak{P}^{\prime\prime}}\}\).

Thus, for any \(k\geq k_{\mathfrak{P}^{\prime}}\) it is possible to form the sheaf of rings \(\mathscr{D}^{(k,0)}_{\mathfrak{P}^{\prime}}:=\pi^{*}\mathscr{D}^{(k,0)}_{\mathfrak{P}}\). If \(\mathcal{M}\) is a \(\mathscr{D}^{(0)}_{\mathfrak{P}}\)-module, then \(\pi^{*}\mathcal{M}\) is naturally a \(\mathscr{D}^{(k,0)}_{\mathfrak{P}^{\prime}}\)-module, for all \(k\geq k_{\mathfrak{P}^{\prime}}\). More generally, if \(\rho\colon\mathfrak{P}^{\prime\prime}\to\mathfrak{P}^{\prime}\) is a morphism of admissible blowups, and \(\mathcal{M}\) is a \(\mathscr{D}^{(k_{\mathfrak{P}^{\prime}},0)}_{\mathfrak{P}^{\prime}}\)-module, then \(\rho^{*}\mathcal{M}\) is naturally a \(\mathscr{D}^{(k,0)}_{\mathfrak{P}^{\prime\prime}}\)-module, for all \(k\geq k_{\rho}\).

**6.1.2 Definition**.: Let \(\mathcal{B}_{\mathfrak{P}}\) denote the diagram of admissible blowups of \(\mathfrak{P}\). A (left) \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}}}\)-module consists of:

* for each admissible blowup \(\mathfrak{P}^{\prime}\to\mathfrak{P}\), a (left) \(\mathscr{D}^{(k_{\mathfrak{P}^{\prime}},0)}_{\mathfrak{P}^{\prime}}\)-module \(\mathcal{M}_{\mathfrak{P}^{\prime}}\);
* for each morphism \(\rho\colon\mathfrak{P}^{\prime\prime}\to\mathfrak{P}^{\prime}\) of admissible blowups, a morphism \(\chi_{\rho}\colon\rho^{*}\mathcal{M}_{\mathfrak{P}^{\prime}}\to\mathcal{M}_{\mathfrak{P}^{\prime\prime}}\) of (left) \(\mathscr{D}^{(k_{\rho},0)}_{\mathfrak{P}^{\prime\prime}}\)-modules,

subject to the condition that, for any commutative triangle of admissible blowups, \(\chi_{\rho\circ\tau}=\chi_{\tau}\circ\tau^{*}\chi_{\rho}\).
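To illustrate Definition 6.1.1 in the simplest possible case (a toy example of my own, assuming \(\mathfrak{m}=(\varpi)\), and not needed in what follows): let \(\mathfrak{P}=\widehat{\mathbb{A}}^{1}_{\mathcal{V}}\) with co-ordinate \(z\), and let \(\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}\) be the blowup of \(\mathcal{I}=(\varpi^{k},z)\). Since \(\mathscr{D}^{(k,0)}_{\mathfrak{P}}\) is generated as a sub-\(\mathcal{O}_{\mathfrak{P}}\)-algebra of \(\mathscr{D}^{(0)}_{\mathfrak{P}}\) by \(\varpi^{k}\partial_{z}\), it suffices to check that this single operator preserves \(\mathcal{O}_{\mathfrak{P}^{\prime}}\). On the chart with co-ordinates \(z\) and \(t=\varpi^{k}/z\), \[(\varpi^{k}\partial_{z})(t)=-\frac{\varpi^{2k}}{z^{2}}=-t^{2}\in\mathcal{O}_{\mathfrak{P}^{\prime}},\] while on the chart with co-ordinate \(u=z/\varpi^{k}\) one simply has \(\varpi^{k}\partial_{z}=\partial_{u}\). So \(\pi^{-1}\mathscr{D}^{(k,0)}_{\mathfrak{P}}\) does act on \(\mathcal{O}_{\mathfrak{P}^{\prime}}\), that is \(k_{\mathfrak{P}^{\prime}}\leq k\), in line with the criterion \(\varpi^{k}\in\mathcal{I}\) quoted above.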
Note that the condition \(\chi_{\rho\circ\tau}=\chi_{\tau}\circ\tau^{*}\chi_{\rho}\) can be checked on the underlying \(\mathcal{O}\)-modules, hence does not depend on the congruence level of any of the terms. There is, of course, an analogous category of \(\mathscr{D}^{(0)}_{\mathcal{B}_{P_{\bullet}}}\)-modules, consisting of projective systems of \(\mathscr{D}^{(k_{\mathfrak{P}^{\prime}},0)}_{P^{\prime}_{\bullet}}\)-modules on each \(\mathfrak{P}^{\prime}\). The major difficulty in working with this category comes from the fact that the varying rings \(\mathscr{D}^{(k_{\mathfrak{P}^{\prime}},0)}_{\mathfrak{P}^{\prime}}\) do _not_ form a sheaf of rings on \(\mathcal{B}_{\mathfrak{P}}\), thus a \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}}}\)-module is not, strictly speaking, a module over a sheaf of rings on a site in any natural way. Despite this, it is nevertheless easy to extend the formalism of \(K\)-flat and \(K\)-injective resolutions to the category of \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}}}\)-modules, and therefore do homological algebra as if we were just working in the category of modules over a sheaf of rings on \(\mathcal{B}_{\mathfrak{P}}\).
For example:

**6.1.3 Lemma**.: _The category of left \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}}}\)-modules is an abelian category with enough injectives._

I will denote by \(\mathbf{D}(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}}})\) the derived category of \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}}}\)-modules, and \(\mathbf{D}_{\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}}})\) its full subcategory on quasi-coherent objects, that is, objects \(\mathcal{M}\) such that each \(\mathcal{M}_{\mathfrak{P}^{\prime}}\) is quasi-coherent as a complex of \(\mathcal{O}_{\mathfrak{P}^{\prime}}\)-modules. For such objects, the maps \(\chi_{\rho}\) extend to morphisms \[\mathbf{L}\hat{\rho}^{*}\mathcal{M}_{\mathfrak{P}^{\prime}}\to\mathcal{M}_{\mathfrak{P}^{\prime\prime}}\] of quasi-coherent complexes of \(\mathscr{D}^{(k_{\rho},0)}_{\mathfrak{P}^{\prime\prime}}\)-modules. There is of course a similar category \(\mathbf{D}_{\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}})\) of inductive systems of such objects, as well as its localisation \(\underline{\mathbf{L}\mathbf{D}}_{\mathbb{Q},\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}}})\). Exactly as in the case of \(\mathcal{O}_{\mathfrak{P}}\)-modules, I can then define a functor \[\mathbf{L}\hat{\pi}^{*}\colon\mathbf{D}_{\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathfrak{P}^{(\bullet)}})\to\mathbf{D}_{\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}}).\] Since \(\mathscr{D}^{(k_{\mathfrak{P}^{\prime}},0)}_{\mathfrak{P}^{\prime}\mathbb{Q}}=\mathrm{sp}_{\mathfrak{P}^{\prime}*}\mathscr{D}_{\mathfrak{P}_{K}}\), I can also define a functor \[\mathbf{L}\mathrm{sp}^{*}\colon\mathbf{D}_{\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}})\to\mathbf{D}(\mathscr{D}_{\mathfrak{P}_{K}}),\] which, roughly speaking, applies \(\mathrm{sp}_{\mathfrak{P}^{\prime}}^{-1}\) on each admissible blowup \(\mathfrak{P}^{\prime}\), then takes the colimit over both \(I^{\mathrm{op}}_{\mathfrak{P}}\) and \(\mathbb{N}\), and then finally tensors with \(\mathbb{Q}\).

**6.1.4 Definition**.: The functor \[\mathbf{L}\hat{\mathrm{sp}}^{*}=\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\colon\mathbf{D}_{\mathrm{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}})\to\mathbf{D}(\mathscr{D}_{\mathfrak{P}_{K}})\] is defined to be the composite \[\mathbf{D}_{\mathrm{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}})\stackrel{\mathrm{forget}}{\longrightarrow}\mathbf{D}_{\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathfrak{P}^{(\bullet)}})\stackrel{\mathbf{L}\hat{\pi}^{*}}{\longrightarrow}\mathbf{D}_{\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}})\stackrel{\mathbf{L}\mathrm{sp}^{*}}{\longrightarrow}\mathbf{D}(\mathscr{D}_{\mathfrak{P}_{K}}).\] I then define the shifted functor \(\mathrm{sp}^{!}=\mathrm{sp}^{!}_{\mathfrak{P}}:=\mathbf{L}\hat{\mathrm{sp}}^{*}[-\dim\mathfrak{P}]\). It is straightforward to check that \(\mathbf{L}\hat{\mathrm{sp}}^{*}\) (and therefore the shifted version \(\mathrm{sp}^{!}\)) descends to the localisation \(\underline{\mathbf{L}\mathbf{D}}_{\mathbb{Q},\mathrm{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}})\) of \(\mathbf{D}_{\mathrm{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}})\).

### Compatibility with de Rham pushforwards

Now let \(u\colon\mathfrak{P}\to\mathfrak{Q}\) be a smooth morphism of smooth formal \(\mathcal{V}\)-schemes.
Since \(u\) is in particular flat, the strict transform of an admissible blowup \(\mathfrak{Q}^{\prime}\to\mathfrak{Q}\) is just the fibre product \(\mathfrak{Q}^{\prime}\times_{\mathfrak{Q}}\mathfrak{P}\).

**6.2.1 Lemma**.: _Let \(u\colon\mathfrak{P}\to\mathfrak{Q}\) be a smooth morphism of smooth formal schemes, and \(\mathfrak{Q}^{\prime}\to\mathfrak{Q}\) an admissible blowup. Then \(k_{\mathfrak{Q}^{\prime}\times_{\mathfrak{Q}}\mathfrak{P}}\leq k_{\mathfrak{Q}^{\prime}}\)._

Proof.: Straightforward calculation.

I now consider the functor \(I_{\mathfrak{Q}}\to I_{\mathfrak{P}}\) given by taking the fibre product over \(\mathfrak{Q}\) with \(\mathfrak{P}\), and restrict the diagram \(\mathcal{B}_{\mathfrak{P}}\) along this functor to produce a diagram \(\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}\colon I_{\mathfrak{Q}}\to\mathbf{FSch}\) taking \(\mathfrak{Q}^{\prime}\) to \(\mathfrak{Q}^{\prime}\times_{\mathfrak{Q}}\mathfrak{P}\). I then consider the following variant of Definition 6.1.2.

**6.2.2 Definition**.: A (left) \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}}\)-module consists of:

* for each admissible blowup \(\mathfrak{Q}^{\prime}\to\mathfrak{Q}\), a (left) \(\mathscr{D}^{(k_{\mathfrak{Q}^{\prime}},0)}_{\mathfrak{Q}^{\prime}\times_{\mathfrak{Q}}\mathfrak{P}}\)-module \(\mathcal{M}_{\mathfrak{Q}^{\prime}}\);
* for each morphism \(\rho\colon\mathfrak{Q}^{\prime\prime}\to\mathfrak{Q}^{\prime}\), a morphism \(\chi_{\rho}\colon\rho^{*}\mathcal{M}_{\mathfrak{Q}^{\prime}}\to\mathcal{M}_{\mathfrak{Q}^{\prime\prime}}\) of (left) \(\mathscr{D}^{(k_{\rho},0)}_{\mathfrak{Q}^{\prime\prime}\times_{\mathfrak{Q}}\mathfrak{P}}\)-modules,

subject to the condition that, for any commutative triangle of admissible blowups, \(\chi_{\rho\circ\tau}=\chi_{\tau}\circ\tau^{*}\chi_{\rho}\).

I can then define the transfer module \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}}\to\mathcal{B}_{\mathfrak{Q}}}\) as a left \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}}\)-module by taking its restriction to each \(\pi\colon\mathfrak{Q}^{\prime}\times_{\mathfrak{Q}}\mathfrak{P}\to\mathfrak{P}\) to be \(\pi^{*}\mathscr{D}^{(k_{\mathfrak{Q}^{\prime}},0)}_{\mathfrak{P}\to\mathfrak{Q}}\). The usual side switching operations therefore give a right \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}}\)-module \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{Q}}\leftarrow\mathcal{B}_{\mathfrak{P}}}\), which also comes with the usual structure of a 'left \(u^{-1}\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{Q}}}\)-module on \(\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}\)', in a sense that I leave to the reader to make precise. I can therefore define a functor \[u_{+}\colon\mathbf{D}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}/\mathfrak{Q}}})\to\mathbf{D}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{Q}}}),\qquad\mathcal{M}\mapsto\mathbf{R}u_{*}(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{Q}}\leftarrow\mathcal{B}_{\mathfrak{P}}}\otimes^{\mathbf{L}}_{\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}}}\mathcal{M})\] in the usual way. Finally, I define \[u_{+}\colon\mathbf{D}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}})\to\mathbf{D}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{Q}}})\] by composing with the 'restriction' functor \(\mathbf{D}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}}})\to\mathbf{D}(\mathscr{D}^{(0)}_{\mathcal{B}^{(\bullet)}_{\mathfrak{P}/\mathfrak{Q}}})\), which exists by Lemma 6.2.1.
Since \(u\) is smooth, the Spencer resolutions for \(\mathscr{D}^{(0)}_{\mathfrak{Q}\leftarrow\mathfrak{P}}\) and \(\widehat{\mathscr{D}}^{(m)}_{\mathfrak{Q}\leftarrow\mathfrak{P}}\) show that the natural map \[\mathscr{D}^{(0)}_{\mathfrak{Q}\leftarrow\mathfrak{P}}\otimes_{\mathscr{D}^{(0)}_{\mathfrak{P}}}\widehat{\mathscr{D}}^{(m)}_{\mathfrak{P}}\to\widehat{\mathscr{D}}^{(m)}_{\mathfrak{Q}\leftarrow\mathfrak{P}}\] is an isogeny [2, §3.5.5]. Hence the diagram commutes up to natural isomorphism. Now using the facts that \(\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{Q}}\leftarrow\mathcal{B}_{\mathfrak{P}},\mathbb{Q}}=\pi^{*}\mathscr{D}^{(0)}_{\mathfrak{Q}\leftarrow\mathfrak{P},\mathbb{Q}}\), and \(\operatorname{sp}^{-1}_{\mathcal{B}_{\mathfrak{P}/\mathfrak{Q}}}\mathscr{D}^{(0)}_{\mathcal{B}_{\mathfrak{Q}}\leftarrow\mathcal{B}_{\mathfrak{P}},\mathbb{Q}}=\mathscr{D}_{\mathfrak{Q}_{K}\leftarrow\mathfrak{P}_{K}}\), I obtain a natural base change map \[\mathbf{L}\hat{\operatorname{sp}}^{*}_{\mathfrak{Q}}\circ u^{(0)}_{+}\to u_{+}\circ\mathbf{L}\hat{\operatorname{sp}}^{*}_{\mathfrak{P}}\] of functors \(\underline{\mathbf{L}\mathbf{D}}_{\mathbb{Q},\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathfrak{P}})\to\mathbf{D}(\mathscr{D}_{\mathfrak{Q}_{K}})\). Note that since \(u\) is smooth, say of relative dimension \(d\), the functor \(u_{+}\) on the RHS here is what I have previously called \(\mathbf{R}u_{\mathrm{dR}*}[d]\).

**6.2.3 Proposition**.: _Let \(u\colon\mathfrak{P}\to\mathfrak{Q}\) be a smooth morphism of smooth formal schemes of relative dimension \(d\), and \(\mathcal{M}\in\underline{\mathbf{L}\mathbf{D}}_{\mathbb{Q},\mathrm{qc}}(\mathscr{D}^{(0)}_{\mathfrak{P}})\). Then the base change map_ \[\mathbf{L}\hat{\operatorname{sp}}^{*}_{\mathfrak{Q}}u^{(0)}_{+}\mathcal{M}\to\mathbf{R}u_{\mathrm{dR}*}\mathbf{L}\hat{\operatorname{sp}}^{*}_{\mathfrak{P}}\mathcal{M}[d]\] _is an isomorphism. Thus the diagram commutes up to natural isomorphism._

Proof.: By taking the stupid filtration on \[\mathscr{D}^{(0)}_{\mathfrak{Q}\leftarrow\mathfrak{P}}\otimes^{\mathbf{L}}_{\mathscr{D}^{(0)}_{\mathfrak{P}}}\mathcal{M}=\Omega^{\bullet}_{\mathfrak{P}/\mathfrak{Q}}\otimes_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M},\] I can reduce to showing that the base change map \[\mathbf{L}\hat{\operatorname{sp}}^{*}_{\mathfrak{Q}}\mathbf{R}u_{*}(\Omega^{n}_{\mathfrak{P}/\mathfrak{Q}}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M})\to\mathbf{R}u_{*}\mathbf{L}\hat{\operatorname{sp}}^{*}_{\mathfrak{P}}(\Omega^{n}_{\mathfrak{P}/\mathfrak{Q}}\widehat{\otimes}^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{M})\] is an isomorphism for each fixed \(n\). Thanks to Lemma 5.3.5, this follows from Theorem 5.3.3.

For example, taking \(\mathfrak{Q}=\operatorname{Spf}\left(\mathcal{V}\right)\) shows that the natural map \[\mathbf{R}\Gamma_{\operatorname{dR}}(\mathfrak{P},\mathcal{M})\to\mathbf{R}\Gamma_{\operatorname{dR}}(\mathfrak{P}_{K},\mathbf{L}\hat{\operatorname{sp}}^{*}_{\mathfrak{P}}\mathcal{M})\] is an isomorphism in \(\mathbf{D}(K)\). Of course, this case could equally well be deduced slightly more directly from Theorem 5.3.3.

### Constructibility of \(\operatorname{sp}^{!}\)

I now come to the first main result concerning \(\operatorname{sp}^{!}\), namely that it sends dual constructible overholonomic complexes on \(\mathfrak{P}\) to constructible isocrystals on \(\mathfrak{P}_{K}\).

**6.3.1 Theorem**.: _Let \(\mathfrak{P}\) be a smooth formal scheme, and \(\mathcal{M}\in\mathbf{DCon}_{F}(\mathfrak{P})\subset\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\)._
_Then \(\operatorname{sp}^{!}\mathcal{M}\in\mathbf{Isoc}_{\operatorname{cons},F}(\mathfrak{P})\subset\mathbf{D}(\mathscr{D}_{\mathfrak{P}_{K}})\) is a constructible isocrystal on \(\mathfrak{P}\). If \(\mathcal{M}\) is supported on some locally closed subscheme \(X\hookrightarrow P\), then so is \(\operatorname{sp}^{!}\mathcal{M}\)._

Before diving into the proof, I need a preparatory lemma, and a special case.

**6.3.2 Lemma**.: _Let \((f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) be a morphism of l.p. frames such that:_

* \(u\) _is smooth and proper, of relative dimension_ \(d\)_;_ \(g\) _is proper, and_ \(f\) _is finite etale;_
* \(X\) _is locally (on_ \(Y\)_) the complement of a hypersurface in_ \(Y\)_._

_Let \(\mathcal{M}\in\mathbf{DCon}_{F}(\mathfrak{P})\) be a dual constructible module supported on \(X\). If \(\operatorname{sp}^{!}_{\mathfrak{P}^{\prime}}\mathbf{R}\underline{\Gamma}^{\dagger}_{X^{\prime}}u^{!}\mathcal{M}\) is a constructible isocrystal supported on \(X^{\prime}\), then \(\operatorname{sp}^{!}_{\mathfrak{P}}\mathcal{M}\) is a constructible isocrystal supported on \(X\)._

**6.3.3 Remark**.: Of course, if I am considering \(\mathcal{M}\) as an object of \(\mathbf{DCon}_{F}(X,Y)\), then \(\mathbf{R}\underline{\Gamma}^{\dagger}_{X^{\prime}}u^{!}\mathcal{M}\) is simply \(f^{!}\mathcal{M}\in\mathbf{DCon}_{F}(X^{\prime},Y^{\prime})\).

Proof.: The question is local on \(\mathfrak{P}\), so I can suppose that \(\mathfrak{P}\) is affine and \(X=D(s)\) for some \(s\in\Gamma(Y,\mathcal{O}_{Y})\). I am therefore in the situation of Theorem 3.4.2. Write \(\mathcal{N}:=\mathbf{R}\underline{\Gamma}^{\dagger}_{X^{\prime}}u^{!}\mathcal{M}\); this is a dual constructible module on \(\mathfrak{P}^{\prime}\). By Proposition 6.2.3, \[\operatorname{sp}^{!}_{\mathfrak{P}}u_{+}\mathcal{N}=\mathbf{R}u_{\operatorname{dR}*}\operatorname{sp}^{!}_{\mathfrak{P}^{\prime}}\mathcal{N}[2d]\] where \(\operatorname{sp}^{!}_{\mathfrak{P}^{\prime}}\mathcal{N}\) is a constructible isocrystal on \(\mathfrak{P}^{\prime}\), supported on \(X^{\prime}\). I then consider the commutative diagram. Since \(u\) is proper, I can identify \(\mathbf{R}u_{\operatorname{dR}!}=\mathbf{R}u_{\operatorname{dR}*}\) as functors \[\mathbf{D}^{b}(\mathscr{D}_{\mathfrak{P}^{\prime}_{K}})\to\mathbf{D}^{b}(\mathscr{D}_{\mathfrak{P}_{K}}).\] Setting \(\mathscr{F}:=\operatorname{sp}^{!}_{\mathfrak{P}^{\prime}}\mathcal{N}|_{]X^{\prime}[_{\mathfrak{P}^{\prime}}}\in\mathbf{Isoc}_{\operatorname{cons}}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\), I therefore have \[\mathbf{R}u_{\operatorname{dR}*}\operatorname{sp}^{!}_{\mathfrak{P}^{\prime}}\mathcal{N}[2d]=i_{!}\mathbf{R}]f[_{\operatorname{dR}!}\,\mathscr{F}[2d].\] Now applying Theorem 3.4.2 I deduce that \(\operatorname{sp}^{!}_{\mathfrak{P}}u_{+}\mathcal{N}\) is a constructible isocrystal supported on \(X\). Since \(\mathcal{M}\) is a direct summand of \(u_{+}\mathcal{N}\) (see Remark 4.5.7), I can conclude.

**6.3.4 Lemma**.: _Let \(\mathfrak{P}\) be a smooth formal scheme, \(D\subset P\) a divisor, and let \(j\colon U:=P\setminus D\to P\) be the complementary open immersion. Suppose that \(\mathscr{F}\in\mathbf{Isoc}_{F}(U,P)\), and let \(\mathcal{M}=\mathrm{sp}_{U!}\mathscr{F}\in\mathbf{DCon}_{F}(\mathfrak{P})\).16 Then \(\mathrm{sp}^{!}\mathcal{M}\cong j_{*}\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons}}(\mathfrak{P})\)._

Footnote 16: See §4.3 and Example 4.4.8.
Proof.: Given the shifts involved, an equivalent way of stating the conclusion is that \[\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\widetilde{\mathrm{sp}}_{U+}\mathscr{F}\cong j_{*}\mathscr{F}\] in \(\mathbf{D}^{b}(\mathscr{D}_{\mathfrak{P}_{K}})\). Also, since \(j_{*}\mathscr{F}\) is a module (not just a complex) the claim is local on \(\mathfrak{P}\), which I may therefore assume to be affine. I can also assume that there exists \(f\in\mathcal{O}_{\mathfrak{P}}\) which is a non-zero divisor in \(\mathcal{O}_{P}\), such that \(D=V(f)\cap P\). Let \(j_{m}\colon V_{m}\to\mathfrak{P}_{K}\) be as in the proof of Theorem 5.4.2; recall that \(V_{m}\) is affinoid. For \(m\) large enough, \(\mathscr{F}\) extends to a vector bundle with integrable connection \(\mathscr{F}_{m}\) on \(V_{m}\), and \[j_{*}\mathscr{F}\xrightarrow{\cong}\operatorname{colim}_{m}j_{m*}\mathscr{F}_{m}\xrightarrow{\cong}\operatorname{colim}_{m}\mathbf{R}j_{m*}\mathscr{F}_{m}.\] In this case, \(\widetilde{\mathrm{sp}}_{U+}\mathscr{F}\) is explicitly defined in [10, §4.4] by showing that each \[\mathrm{sp}_{\mathfrak{P}*}j_{m*}\mathscr{F}_{m}\xrightarrow{\cong}\mathbf{R}\mathrm{sp}_{\mathfrak{P}*}\mathbf{R}j_{m*}\mathscr{F}_{m}\] is a coherent \(\hat{\mathcal{B}}^{(m)}_{\mathfrak{P}}(D)\)-module, for which the \(\mathscr{D}^{(0)}_{\mathfrak{P}}\)-module structure extends uniquely to a continuous \(\widehat{\mathscr{D}}^{(m)}_{\mathfrak{P}}\)-module structure compatible with that on \(\hat{\mathcal{B}}^{(m)}_{\mathfrak{P}}(D)\). Then \[\widetilde{\mathrm{sp}}_{U+}\mathscr{F}=\{\mathrm{sp}_{\mathfrak{P}*}j_{m*}\mathscr{F}_{m}\}_{m\in\mathbb{N}}\in\mathbf{L}\mathbf{D}_{\mathbb{Q},\mathrm{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}}).\] It therefore suffices to show that \[\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\mathbf{R}\mathrm{sp}_{\mathfrak{P}*}\mathbf{R}j_{m*}\mathscr{F}_{m}\xrightarrow{\cong}\mathbf{R}j_{m*}\mathscr{F}_{m}=j_{m*}\mathscr{F}_{m} \tag{6.3.5}\] in \(\mathbf{D}^{b}(\mathscr{D}_{\mathfrak{P}_{K}})\). Indeed, if this is the case then everything in sight is just a module, rather than a complex, and so showing compatibility in \(m\) is straightforward. To prove (6.3.5), let \(\pi\colon\mathfrak{P}^{\prime}\to\mathfrak{P}\) be an admissible blowup such that the open immersion \(j_{m}\colon V_{m}\to\mathfrak{P}_{K}\) arises from an open immersion \(j_{m}\colon\mathfrak{P}^{\prime}_{m}\to\mathfrak{P}^{\prime}\) of formal schemes. I then claim that the natural morphism \[\mathbf{L}\hat{\pi}^{*}\mathbf{R}\mathrm{sp}_{\mathfrak{P}*}\mathbf{R}j_{m*}\mathscr{F}_{m}\to\mathbf{R}\mathrm{sp}_{\mathfrak{P}^{\prime}*}\mathbf{R}j_{m*}\mathscr{F}_{m}\] is an isogeny. To see this, since \(V_{m}\) is affinoid, and \(\mathscr{F}_{m}\) is a vector bundle on \(V_{m}\), it is in fact a direct summand of a trivial vector bundle. Hence I can reduce to the case \(\mathscr{F}_{m}=\mathcal{O}_{V_{m}}\). In this case, the given map can be identified with \[\mathbf{L}\hat{\pi}^{*}\hat{\mathcal{B}}^{(m)}_{\mathfrak{P}}(D)\to\mathbf{R}j_{m*}\mathcal{O}_{\mathfrak{P}^{\prime}_{m}},\] which was shown to be an isogeny during the proof of Theorem 5.4.2.
I can now compute \[\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}}\mathbf{R}\mathrm{sp}_{\mathfrak{P}*}\mathbf{R}j_{m*}\mathscr{F}_{m}\xrightarrow{\cong}\mathbf{L}\mathrm{sp}^{*}_{\mathfrak{P}^{\prime}}\mathbf{L}\hat{\pi}^{*}\mathbf{R}\mathrm{sp}_{\mathfrak{P}*}\mathbf{R}j_{m*}\mathscr{F}_{m}\xrightarrow{\cong}\mathbf{L}\mathrm{sp}^{*}_{\mathfrak{P}^{\prime}}\mathbf{R}\mathrm{sp}_{\mathfrak{P}^{\prime}*}\mathbf{R}j_{m*}\mathscr{F}_{m}\xrightarrow{\cong}\mathbf{R}j_{m*}\mathscr{F}_{m}\] where the first isomorphism follows simply from the definitions, the second is the isogeny proved above, the third follows from Theorem 5.3.3, and the last from the fact that \(\mathscr{F}_{m}\) is a locally free \(\mathcal{O}_{V_{m}}\)-module.

#### 6.3.6. Remark

The particular isomorphism \(\operatorname{sp}^{!}\mathcal{M}\cong j_{*}\mathscr{F}\) constructed can be identified as follows: if we set \(\mathfrak{U}=\mathfrak{P}\setminus D\), then on \(\mathfrak{U}_{K}\) it is simply the shift of the counit \(\operatorname{sp}^{*}_{\mathfrak{U}}\operatorname{sp}_{\mathfrak{U}*}\mathscr{F}\to\mathscr{F}\) of the adjunction between \(\operatorname{sp}^{*}\) and \(\operatorname{sp}_{*}\) (note that since \(\mathcal{M}|_{\mathfrak{U}}\) is locally projective, \(\mathbf{L}\operatorname{sp}^{*}_{\mathfrak{U}}=\operatorname{sp}^{*}_{\mathfrak{U}}\) in this case).

Proof of Theorem 6.3.1.: The question is local on \(\mathfrak{P}\), which I can therefore assume to be affine; in particular \(\mathfrak{P}\) admits a locally closed immersion into a smooth and proper formal \(\mathcal{V}\)-scheme. This then means that every frame encountered during the proof will be an l.p. frame. By devissage, that is, by Proposition 4.3.3, every \(\mathcal{M}\in\mathbf{DCon}_{F}(\mathfrak{P})\) is an iterated extension of objects of the form \(\operatorname{sp}_{X!}\mathscr{F}\) with \(X\hookrightarrow P\) a smooth locally closed subscheme, with closure \(Y\) in \(P\), and \(\mathscr{F}\in\mathbf{Isoc}_{F}(X,Y)\). I can therefore assume that \(\mathcal{M}\) is of this form. Now, by de Jong's alterations [1], there exists a projective, generically etale morphism \(g\colon Y^{\prime}\to Y\) with \(Y^{\prime}\) smooth. By Noetherian induction on \(X\), I can therefore assume that \(g^{-1}(X)\to X\) is finite etale, and that \(X\) is the complement of a hypersurface in \(Y\). I then have a morphism of frames with left hand square Cartesian, \(Y^{\prime}\) smooth, \(f\) finite etale, and \(u\) the projection. Appealing to Lemma 6.3.2, I can replace \((X,Y,\mathfrak{P})\) by \((X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\), in other words I can also assume that \(Y\) is smooth. Of course, I have now lost the assumption that \(\mathfrak{P}\) is affine, but I can then further localise on \(\mathfrak{P}\) to restore it. Since \(\mathfrak{P}\) is affine, and \(Y\) is smooth, there exists a closed immersion of smooth formal schemes \(\mathfrak{Y}\hookrightarrow\mathfrak{P}\) lifting the closed immersion \(Y\hookrightarrow P\). Now applying Lemma 6.3.2 to the morphism of frames (where the diagonal map \(Y\to\mathfrak{Y}\times_{\mathcal{V}}\mathfrak{P}\) is the product of the two given immersions, and the right hand vertical map is the second projection), it suffices to prove the claim with \(\mathfrak{P}\) replaced by \(\mathfrak{Y}\times_{\mathcal{V}}\mathfrak{P}\). In other words, I can assume that the closed immersion \(\mathfrak{Y}\to\mathfrak{P}\) is a section of a smooth morphism \(\pi\colon\mathfrak{P}\to\mathfrak{Y}\).
Let \(c\) be the codimension of \(\mathfrak{Y}\) in \(\mathfrak{P}\). By further localising on \(\mathfrak{P}\) if necessary, I can choose functions \(t_{1},\ldots,t_{c}\in\Gamma(\mathfrak{P},\mathcal{O}_{\mathfrak{P}})\) such that \(\mathfrak{Y}=V(t_{1},\ldots,t_{c})\). For any subset \(I\subset\{1,\ldots,c\}\) set \(\mathfrak{U}_{I}:=\cap_{i\in I}D(t_{i})\). I also set \(\mathcal{N}:=\pi^{!}\pi_{+}\mathcal{M}\in\mathbf{DCon}_{F}(\mathfrak{P})\); thus, by the discussion in §4.3, \(\mathcal{N}=\operatorname{sp}_{\pi^{-1}(X)!}\pi^{*}\mathscr{F}\) where \(\pi^{*}\mathscr{F}\in\mathbf{Isoc}_{F}(\pi^{-1}(X),P)\). Let \(j\) also denote the inclusion \(\pi^{-1}(X)\to P\), and write \(\mathscr{G}:=j_{*}\pi^{*}\mathscr{F}\in\mathbf{Isoc}_{\operatorname{cons},F}(\mathfrak{P})\). There is then the following exact sequence of dual constructible \(\mathscr{D}^{\dagger}\)-modules on \(\mathfrak{P}\): \[0\to\mathcal{M}\to\mathcal{N}\to\bigoplus_{\#I=1}\mathcal{N}(^{\dagger}Z_{I})\to\bigoplus_{\#I=2}\mathcal{N}(^{\dagger}Z_{I})\to\ldots\to\mathcal{N}(^{\dagger}Z_{\{1,\ldots,c\}})\to 0,\] where \(Z_{I}\) denotes the closed complement of \(U_{I}\) in \(P\). Applying \(\operatorname{sp}^{!}\) termwise, and using Theorem 5.4.2 to identify \(\operatorname{sp}^{!}(\mathcal{N}(^{\dagger}Z_{I}))\) with \(j^{\dagger}_{U_{I}}\mathscr{G}\), yields a complex of \(\mathscr{D}_{\mathfrak{P}_{K}}\)-modules. I therefore deduce that \[\operatorname{sp}^{!}_{\mathfrak{P}}\mathcal{M}\stackrel{\cong}{\longrightarrow}\left[\mathscr{G}\to\bigoplus_{i}j^{\dagger}_{U_{i}}\mathscr{G}\to\ldots\to j^{\dagger}_{U_{\{1,\ldots,c\}}}\mathscr{G}\right]\stackrel{\cong}{\longrightarrow}i_{!}i^{*}\mathscr{G},\] by using [1, Proposition 2.1.8]. This completes the proof.

### Consequences

I now gather several important consequences of Theorem 6.3.1, starting with:

**6.4.1 Corollary**.: _Let \(\mathfrak{P}\) be a smooth formal scheme. Then \(\operatorname{sp}^{!}\) induces a functor_ \[\operatorname{sp}^{!}\colon\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\to\mathbf{D}^{b}_{\operatorname{cons},F}(\mathfrak{P})\] _which is t-exact for the dual constructible t-structure on the source, and the natural t-structure on the target._

Examining the proof of Theorem 6.3.1, I have also shown the following.

**6.4.2 Corollary**.: _Let \(\mathfrak{P}\) be a smooth formal scheme, \(i\colon X\to P\) a smooth, locally closed subscheme, with closure \(Y\), \(\mathscr{F}\in\mathbf{Isoc}_{F}(X,Y)\), and \(\mathcal{M}=\operatorname{sp}_{X!}\mathscr{F}\in\mathbf{DCon}_{F}(\mathfrak{P})\)._
_Then \(\operatorname{sp}^{!}\mathcal{M}\stackrel{\cong}{\longrightarrow}i_{!}\mathscr{F}\in\mathbf{Isoc}_{\operatorname{cons},F}(\mathfrak{P})\)._

_6.4.3 Remark_.: It is not straightforward to do explicitly, but the particular isomorphism \(\operatorname{sp}^{!}\mathcal{M}\stackrel{\cong}{\longrightarrow}i_{!}\mathscr{F}\in\mathbf{Isoc}_{\operatorname{cons},F}(\mathfrak{P})\) thus constructed can be identified by following through the various steps in the proof of Theorem 6.3.1 and reducing to the case considered in Lemma 6.3.4.

The functor \(\operatorname{sp}^{!}\) is compatible with many of the cohomological functors introduced so far. Explicitly, if \(U\hookrightarrow P\hookleftarrow Z\) are complementary open and closed immersions, and \(\mathcal{M}\in\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\), then repeatedly applying Theorem 5.4.2 gives rise to an isomorphism \[j^{\dagger}_{U}\operatorname{sp}^{!}\mathcal{M}\stackrel{\cong}{\longrightarrow}\operatorname{sp}^{!}\mathcal{M}(^{\dagger}Z)\] in \(\mathbf{D}^{b}_{\operatorname{cons},F}(\mathfrak{P})\), which is natural in \(\mathcal{M}\). Moreover, this is compatible with the unit of the two adjunctions, in the sense that the corresponding diagram commutes. Similarly, there exists an isomorphism natural in \(\mathcal{M}\), which is again compatible with the counit of the two adjunctions in the sense that the corresponding diagram commutes. This implies that for any locally closed subscheme \(i\colon X\to P\), there is a canonical isomorphism \[\operatorname{sp}^{!}\mathbf{R}\underline{\Gamma}^{\dagger}_{X}\mathcal{M}\cong i_{!}i^{-1}\operatorname{sp}^{!}\mathcal{M}\] in \(\mathbf{D}^{b}_{\operatorname{cons},F}(\mathfrak{P})\). If now \(u\colon\mathfrak{P}\to\mathfrak{Q}\) is a smooth and proper morphism of smooth formal schemes, of relative dimension \(d\), and \(\mathcal{M}\in\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\), then Proposition 6.2.3 provides a canonical isomorphism \[\operatorname{sp}^{!}_{\mathfrak{Q}}u_{+}\mathcal{M}\stackrel{\cong}{\longrightarrow}\mathbf{R}u_{\operatorname{dR}*}\operatorname{sp}^{!}_{\mathfrak{P}}\mathcal{M}[2d]\] inside \(\mathbf{D}^{b}(\mathscr{D}_{\mathfrak{Q}_{K}})\), in particular the RHS is in fact in \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{Q})\). Compatibility of \(\mathrm{sp}^{!}\) with \(u^{!}\) (and therefore with tensor product) will need to wait until later on, but I can at least record a special case for now. Consider a morphism of l.p. frames such that \(u\) is smooth and proper, of relative dimension \(d\), \(g\) is proper, and \(f\) is finite etale.
By the support claim in Theorem 6.3.1, I can view \(\operatorname{sp}^{!}_{\mathfrak{P}^{\prime}}f^{!}\mathcal{M}\), for any \(\mathcal{M}\in\mathbf{DCon}_{F}(X,Y)\), as an object of \(\mathbf{Isoc}_{\operatorname{cons},F}(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\).

### Log convergent isocrystals

The general theory of log convergent isocrystals was developed in [10]; I will recall here some of the fundamental definitions and results that I need. Throughout I will equip the bases \(\operatorname{Spec}{(k)}\) and \(\operatorname{Spf}{(\mathcal{V})}\) with the trivial log structure. A fine log variety (over \(k\)) will mean a fine log scheme over \(k\) whose underlying \(k\)-scheme is a variety. If \(X\) is a smooth variety over \(k\), and \(D\subset X\) is a normal crossings divisor, I will let \(M(D)\) denote the log structure on \(X\) associated to \(D\), and \((X,M(D))\) the corresponding log scheme. I will use similar notation \((\mathfrak{X},M(\mathfrak{D}))\) in the case of a relative normal crossings divisor \(\mathfrak{D}\subset\mathfrak{X}\) inside a smooth formal scheme over \(\mathcal{V}\).

**7.1.1 Definition**.: Let \((X,M)\) be a fine log variety. Then an _enlargement_ of \((X,M)\) is a fine log scheme \((Z,M_{Z})\to(X,M)\), together with an exact closed immersion \((Z,M_{Z})\to(\mathfrak{T},M_{\mathfrak{T}})\) into a fine log formal scheme over \(\mathcal{V}\), such that \(T_{\mathrm{red}}\subset Z\). There is the obvious notion of a morphism of enlargements of \((X,M)\).

**7.1.2 Definition**.: A convergent log isocrystal on \((X,M)\) consists of the following data:

1. for every enlargement \((\mathfrak{T},M_{\mathfrak{T}})\) of \((X,M)\), a coherent \(\mathcal{O}_{\mathfrak{T}\mathbb{Q}}\)-module \(\mathscr{F}_{\mathfrak{T}}\);
2. for every morphism of enlargements \(g\colon(\mathfrak{T}^{\prime},M_{\mathfrak{T}^{\prime}})\to(\mathfrak{T},M_{\mathfrak{T}})\), a 'transition' isomorphism \(g^{*}\mathscr{F}_{\mathfrak{T}}\to\mathscr{F}_{\mathfrak{T}^{\prime}}\) of \(\mathcal{O}_{\mathfrak{T}^{\prime}\mathbb{Q}}\)-modules.

The transition isomorphisms are required to satisfy an appropriate cocycle condition. In fact, Shiho considers two variations, where \(\mathscr{F}\) is considered as a sheaf on \(\mathfrak{T}\) with respect to the Zariski and etale topologies. He then shows in [10, Proposition 2.1.21] that these give rise to equivalent theories. There is an obvious notion of a morphism of convergent log isocrystals on \((X,M)\), and I will denote the resulting category by \(\mathbf{Isoc}^{\circ}(X,M)\).

**7.1.3 Remark**.: My choice of notation here reflects the fact that I am using \(\mathbf{Isoc}(X)\), and not \(\mathbf{Isoc}^{\dagger}(X)\), to denote the category of overconvergent isocrystals on a variety \(X\).
Thus, when the log structure \(M\) is trivial, \(\mathbf{Isoc}^{\circ}(X)\) is the category of convergent isocrystals on \(X\), which has previously been denoted \(\mathbf{Isoc}(X,X)\). As with the case of ordinary (that is, non-logarithmic) convergent isocrystals, these objects can be described in terms of their 'realisations' in the world of rigid analytic geometry. Following Shiho, suppose that there is an exact closed immersion \[(X,M)\to(\mathfrak{P},L)\] of fine log formal schemes over \(\mathcal{V}\), such that \((\mathfrak{P},L)\) is log smooth over \(\mathcal{V}\).17 Then the tube \(]X[_{\mathfrak{P}}\) admits a natural log structure coming from the log structure induced by \(L\) on \(\mathfrak{P}_{K}\), and is log smooth over \(K\). In [10, §2.1, §2.2] Shiho constructs a functor \[\mathbf{Isoc}^{\circ}(X,M)\to\mathbf{Coh}_{\nabla}(]X[_{\mathfrak{P}},L),\qquad\mathscr{F}\mapsto\mathscr{F}_{]X[_{\mathfrak{P}}}\] from the category of log convergent isocrystals on \((X,M)\) to the category \(\mathbf{Coh}_{\nabla}(]X[_{\mathfrak{P}},L)\) of coherent \(\mathcal{O}_{]X[_{\mathfrak{P}}}\)-modules with integrable log connection.

Footnote 17: In fact, Shiho works in much greater generality than this, but this special case is technically simpler and will suffice for me.

In general, it is not clear whether or not this 'realisation on \((\mathfrak{P},L)\)' functor is fully faithful; in order to obtain full faithfulness it is necessary to impose local freeness conditions on convergent isocrystals.

**7.1.4 Definition**.: Let \((X,M)\) be a fine log variety over \(k\), and \(\mathscr{F}\) a convergent log isocrystal on \((X,M)\). Then \(\mathscr{F}\) is locally free if, for all enlargements \((\mathfrak{T},M_{\mathfrak{T}})\) of \((X,M)\), the coherent \(\mathcal{O}_{\mathfrak{T}\mathbb{Q}}\)-module \(\mathscr{F}_{\mathfrak{T}}\) is locally projective.

It is then a consequence of [14, Corollary 2.3.9] that the restriction of the realisation functor to locally free isocrystals is fully faithful. Moreover, if we are given \(\mathscr{F}\in\mathbf{Isoc}^{\circ}(X,M)\) we can actually test whether or not it is locally free by taking its realisation on \((\mathfrak{P},L)\) (this follows from [14, Proposition 2.2.7]).

#### 7.1.5. Frobenius structures

Since log convergent isocrystals are clearly functorial in \((X,M)\), there is a well-defined Frobenius pullback functor \(F^{*}\colon\mathbf{Isoc}^{\circ}(X,M)\to\mathbf{Isoc}^{\circ}(X,M)\), which makes it possible to talk about Frobenius structures on convergent log isocrystals. The category of convergent log \(F\)-isocrystals on \((X,M)\), that is, the category of convergent log isocrystals on \((X,M)\) equipped with a Frobenius structure, will be denoted \(F\)-\(\mathbf{Isoc}^{\circ}(X,M)\). In certain cases, the presence of a Frobenius structure is enough to guarantee local freeness.

**7.1.6 Lemma**.: _Let \(X\) be a smooth variety, and \(D\subset X\) a normal crossings divisor. Then every convergent log \(F\)-isocrystal \(\mathscr{F}\) on \((X,M(D))\) is locally free._

Proof.: It suffices to prove the claim over some etale cover of \(X\), so I can assume that the pair \((X,D)\) lifts to a pair \((\mathfrak{X},\mathfrak{D})\) consisting of a smooth formal scheme \(\mathfrak{X}\) over \(\mathcal{V}\) and a strict relative normal crossings divisor \(\mathfrak{D}\subset\mathfrak{X}\).
In fact, I can assume that \(\mathfrak{X}=\operatorname{Spf}\left(R\right)\) is affine and connected, and that there exists an etale morphism \(\mathfrak{X}\to\widehat{\mathbb{A}}_{\mathcal{V}}^{d}\), with co-ordinates \(z_{1},\ldots,z_{d}\) on the target, such that \(\mathfrak{D}\) is (the pullback of) the strict normal crossings divisor \(V(z_{1}\ldots z_{c})\) for some \(c\leq d\). In particular, the standard lift of Frobenius on \(\widehat{\mathbb{A}}_{\mathcal{V}}^{d}\) extends to an endomorphism of \(\mathfrak{X}\) lifting the absolute (\(q\)-power) Frobenius on \(X\). Let \(\sigma\colon R\to R\) denote the ring homomorphism induced by the Frobenius lift. It is then enough to prove the local freeness of the realisation of \(\mathscr{F}\) on \(\mathfrak{X}_{K}\). The proof of this is a standard Fitting ideal argument: it suffices to show that the only ideals \(I\subset R_{K}\) stable under \(\sigma\), in the sense that \(\sigma(I)R_{K}=I\), are the zero and unit ideals. Setting \(I^{+}=I\cap R\), this is a \(\pi\)-adically saturated ideal of \(R\), such that \(\sigma(I^{+})R\subset I^{+}\). Since \(\sigma\) is finite flat, it follows that \(\sigma(I^{+})R\) is also saturated, hence in fact \(\sigma(I^{+})R=I^{+}\). Again, it suffices to show that the only such ideals are the zero and unit ideals. Let \(I_{0}=I^{+}\otimes_{\mathcal{V}}k\) be the reduction of \(I^{+}\) modulo \(\pi\); this is therefore a Frobenius stable ideal in \(R_{k}\), in the sense that \(F(I_{0})R_{k}=I_{0}\), and it is enough to show that the only such ideals are the zero and unit ideals. But if \(F(I_{0})R_{k}=I_{0}\) then \(I_{0}=I_{0}^{n}\) for all \(n\geq 1\), and so \(I_{0}=\bigcap_{n}I_{0}^{n}\). Since \(R_{k}\) is smooth, connected and Noetherian, it follows from Krull's intersection theorem [10, Corollary 5.4] that either \(I_{0}=\{0\}\) or \(I_{0}=R_{k}\).

### A vanishing result in log rigid cohomology

The main result I will need in log rigid cohomology is a certain vanishing result, appearing as Theorem 7.2.1 below. It will take a little bit of effort to set up and formulate this result, but this is mostly an issue of keeping track of notation; the underlying geometry is actually rather simple. Anyway, Theorem 7.2.1 will apply in the situation when I have a smooth formal scheme \(\mathfrak{P}\), together with a strict relative normal crossings divisor \(\mathfrak{D}\subset\mathfrak{P}\). In this case, I will let \(\mathfrak{P}^{\sharp}\) denote the log scheme \((\mathfrak{P},M(\mathfrak{D}))\). Its generic fibre \(\mathfrak{P}^{\sharp}_{K}\) is therefore a logarithmic analytic variety over \(K\), and I will let \(\mathscr{D}_{\mathfrak{P}^{\sharp}_{K}}\subset\mathscr{D}_{\mathfrak{P}_{K}}\) denote the corresponding sheaf of logarithmic (algebraic) differential operators. For example, if \(z_{1},\ldots,z_{d}\) are local co-ordinates on \(\mathfrak{P}\), such that \(\mathfrak{D}\) is defined by \(z_{1}\ldots z_{c}=0\), and \(\partial_{1},\ldots,\partial_{d}\) are the corresponding derivations, then \(\mathscr{D}_{\mathfrak{P}^{\sharp}_{K}}\) is generated as a sub-\(\mathcal{O}_{\mathfrak{P}_{K}}\)-algebra of \(\mathscr{D}_{\mathfrak{P}_{K}}\) by \(z_{1}\partial_{1},\ldots,z_{c}\partial_{c},\partial_{c+1},\ldots,\partial_{d}\). In particular, any constructible isocrystal on \(\mathfrak{P}\) can be viewed as a \(\mathscr{D}_{\mathfrak{P}^{\sharp}_{K}}\)-module via restriction of scalars along \(\mathscr{D}_{\mathfrak{P}^{\sharp}_{K}}\to\mathscr{D}_{\mathfrak{P}_{K}}\).
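Before introducing the necessary notation, the following toy computation (included purely for orientation, and not used anywhere below) illustrates the mechanism behind the vanishing result: take \(c=d=1\), so that \(\mathfrak{D}=V(z)\). On \(\mathcal{O}_{\mathfrak{P}_{K}}\), the log operator \(z\partial_{z}\) kills the constant section \(1\), so the residue along \(\mathfrak{D}_{K}\) is \(0\) (in particular nilpotent). On the twist \(\mathcal{O}_{\mathfrak{P}_{K}}(\mathfrak{D}_{K})\), with local basis \(z^{-1}\), the canonical log connection instead gives \[z\partial_{z}(z^{-1})=-z^{-1},\] so the residue is \(-1\), which is invertible. It is exactly this invertibility of residues after twisting that will drive the vanishing in Theorem 7.2.1 below (see Remark 7.2.2).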
Let \(\mathfrak{D}_{1},\ldots,\mathfrak{D}_{c}\) denote the irreducible components of \(\mathfrak{D}\), with special fibres \(D_{1},\ldots,D_{c}\). For each \(J\subset\{1,\ldots,c\}\), set \[\mathfrak{D}_{J}=\cap_{j\in J}\mathfrak{D}_{j},\quad D_{J}=\mathfrak{D}_{J,k}=\cap_{j\in J}D_{j},\] \[\mathfrak{D}^{(J)}=\cup_{j\notin J}\mathfrak{D}_{j},\quad D^{(J)}=\mathfrak{D}_{k}^{(J)}=\cup_{j\notin J}D_{j}.\] The log structure \(M(\mathfrak{D})\) on \(\mathfrak{P}\) can be pulled back to each \(\mathfrak{D}_{J}\), giving rise to a log formal scheme \(\mathfrak{D}_{J}^{\sharp}\). Similarly, I let \(]D_{J}[^{\sharp}_{\mathfrak{P}}\) denote the tube of \(D_{J}\) equipped with the log structure obtained by pulling back the log structure \(M(\mathfrak{D}_{K})\) from \(\mathfrak{P}_{K}\). I will let \(\mathscr{D}_{]D_{J}[^{\sharp}_{\mathfrak{P}}}\) denote the ring of (algebraic) log differential operators on \(]D_{J}[^{\sharp}_{\mathfrak{P}}\), or in other words the restriction of \(\mathscr{D}_{\mathfrak{P}^{\sharp}_{K}}\) to \(]D_{J}[_{\mathfrak{P}}\). There is also a second natural log structure on \(\mathfrak{D}_{J}\), namely that induced by the strict normal crossings divisor \[\mathfrak{D}_{J}\cap\mathfrak{D}^{(J)}\subset\mathfrak{D}_{J}.\] I will denote the resulting log formal scheme by \(\mathfrak{D}_{J}^{\flat}=(\mathfrak{D}_{J},M(\mathfrak{D}_{J}\cap\mathfrak{D}^{(J)}))\), and denote the ring of (algebraic) log differential operators on its generic fibre \(\mathfrak{D}_{J,K}^{\flat}\) by \(\mathscr{D}_{\mathfrak{D}^{\flat}_{J,K}}\). Now, if \(\mathscr{F}\) is a locally free log convergent isocrystal on the special fibre \(P^{\sharp}\) of \(\mathfrak{P}^{\sharp}\), I can take its realisation as a locally free \(\mathcal{O}_{\mathfrak{P}_{K}}\)-module with integrable log connection, which I will also denote by \(\mathscr{F}\). There is then an induced integrable log connection on the twist \(\mathscr{F}(\mathfrak{D}_{K})\) of \(\mathscr{F}\), coming from the canonical log connection on the line bundle \(\mathcal{O}_{\mathfrak{P}_{K}}(\mathfrak{D}_{K})\). In particular, \(\mathscr{F}(\mathfrak{D}_{K})\) is a \(\mathscr{D}_{\mathfrak{P}^{\sharp}_{K}}\)-module, which can then be restricted to any tube \(]D_{J}[_{\mathfrak{P}}\).

**7.2.1 Theorem**.: _Let \(\mathscr{F}\in F\)-\(\mathbf{Isoc}^{\circ}(P^{\sharp})\) be a log convergent \(F\)-isocrystal on \(P^{\sharp}\), and let \(\mathscr{G}\in\mathbf{Isoc}_{\mathrm{cons}}(\mathfrak{P})\) be a constructible isocrystal on \(\mathfrak{P}\). Then, for any \(J\subset\{1,\dots,c\}\),_ \[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]D_{J}[^{\sharp}_{\mathfrak{P}}}}(\mathscr{F}(\mathfrak{D}_{K}),\mathscr{G})=0.\]

#### 7.2.2. Remark

The crucial observation of the proof is that the presence of a Frobenius structure forces \(\mathscr{F}\) to have nilpotent residues along \(\mathfrak{D}_{K}\), which then implies that the residues of the twist \(\mathscr{F}(\mathfrak{D}_{K})\) along \(\mathfrak{D}_{K}\) are in fact _isomorphisms_.

Proof.: The question is local on \(\mathfrak{P}\), so I can assume that there exists an etale map \(\mathfrak{P}\to\widehat{\mathbb{A}}_{\mathcal{V}}^{d}\), with co-ordinates \(z_{1},\dots,z_{d}\) on the target, such that \(\mathfrak{D}=V(z_{1}\dots z_{c})\) for some \(c\leq d\), and \(\mathfrak{D}_{J}=V(z_{1},\dots,z_{b})\) for \(b=\#J\leq c\).
The functions \(z_{b+1},\dots,z_{d}\) induce an etale map \(\mathfrak{D}_{J}\to\widehat{\mathbb{A}}_{\mathcal{V}}^{d-b}\), and adding in the remaining functions \(z_{1},\dots,z_{b}\) gives an etale map \(\mathfrak{D}_{J}\times_{\mathcal{V}}\widehat{\mathbb{A}}_{\mathcal{V}}^{b}\to\widehat{\mathbb{A}}_{\mathcal{V}}^{d}\). I now let \(\mathfrak{P}^{\prime}\) be the fibre product \(\mathfrak{P}\times_{\widehat{\mathbb{A}}_{\mathcal{V}}^{d}}(\mathfrak{D}_{J}\times_{\mathcal{V}}\widehat{\mathbb{A}}_{\mathcal{V}}^{b})\). The closed immersion \(\mathfrak{D}_{J}\to\mathfrak{P}\) canonically extends to a closed immersion \(\mathfrak{D}_{J}\to\mathfrak{P}^{\prime}\) whose composition with the projection \(\mathfrak{P}^{\prime}\to\mathfrak{D}_{J}\times_{\mathcal{V}}\widehat{\mathbb{A}}_{\mathcal{V}}^{b}\) is the identity on the first factor, and the zero section on the second. This gives rise to a section of the etale map \(u^{-1}(\mathfrak{D}_{J})\to\mathfrak{D}_{J}\), so there exists an open subspace of \(\mathfrak{P}^{\prime}\) on which \(u^{-1}(\mathfrak{D}_{J})\xrightarrow{\cong}\mathfrak{D}_{J}\). The upshot of all of this is that after replacing \(\mathfrak{P}\) by this open subspace of \(\mathfrak{P}^{\prime}\), I can assume that there exists a smooth morphism \(\mathfrak{P}\to\mathfrak{D}_{J}\) of which the canonical inclusion is a section. Hence by the weak fibration theorem, there exists an isomorphism \[]D_{J}[_{\mathfrak{P}}\xrightarrow{\cong}\mathbb{D}_{\mathfrak{D}_{J,K}}^{b}(0;1^{-})\] under which the inclusion \(\mathfrak{D}_{J,K}\to\,]D_{J}[_{\mathfrak{P}}\) is identified with the zero section of \(\mathbb{D}_{\mathfrak{D}_{J,K}}^{b}(0;1^{-})\). In fact, this isomorphism can be upgraded to include log structures as follows. As above, let \(\mathfrak{D}_{J}^{\flat}\) denote the log formal scheme given by \(\mathfrak{D}_{J}\) equipped with the log structure coming from the strict normal crossings divisor \(\mathfrak{D}_{J,K}\cap\mathfrak{D}_{K}^{(J)}\), and let \(\mathfrak{D}_{J,K}^{\flat}\) denote its generic fibre. If \(z_{1},\dots,z_{b}\) are co-ordinates on \(\mathbb{D}_{K}^{b}(0;1^{-})\) then \(\mathbb{D}^{b}_{K}(0;1^{-})\) is equipped with the log structure coming from the strict normal crossings divisor \(V(z_{1}\dots z_{b})\). Then the above isomorphism can be upgraded to an isomorphism \[]D_{J}[^{\sharp}_{\mathfrak{P}}\;\stackrel{\cong}{\longrightarrow}\mathfrak{D}^{\flat}_{J,K}\times_{K}\mathbb{D}^{b}_{K}(0;1^{-})\] of log analytic varieties. Let \(\pi\) denote the first projection \(]D_{J}[^{\sharp}_{\mathfrak{P}}\to\mathfrak{D}^{\flat}_{J,K}\). I now want to apply [10, Proposition 2.12] to show that \(\mathscr{F}|_{]D_{J}[_{\mathfrak{P}}}\) is an iterated extension of pullbacks of locally free log integrable connections on \(\mathfrak{D}^{\flat}_{J,K}\) along \(\pi\), at least after possibly localising to ensure that \(\mathfrak{D}_{J}\) is affine and connected. For Shiho's result to apply, I need to check two conditions: firstly, that the supremum norm on \(\Gamma(\mathfrak{D}_{J,K},\mathcal{O}_{\mathfrak{D}_{J,K}})\) is multiplicative, and secondly, that \(\mathscr{F}\) has nilpotent residues along \(\mathfrak{D}_{K}\). The first of these simply follows from the fact that \(\mathfrak{D}_{J}\) is smooth over \(\mathcal{V}\), and the second from the fact that \(\mathscr{F}\) has a Frobenius structure. Therefore, I can assume that there exists a locally free log integrable connection \(\mathscr{F}_{0}\) on \(\mathfrak{D}^{\flat}_{J,K}\) such that \(\mathscr{F}|_{]D_{J}[_{\mathfrak{P}}}=\pi^{*}\mathscr{F}_{0}\).
In a similar vein, it follows from Theorem 3.2.1 that there exists a constructible isocrystal \(\mathscr{G}_{0}\) on \(\mathfrak{D}_{J}\) such that \[\mathscr{G}|_{]D_{J}[_{\mathfrak{P}}}=\pi^{*}\mathscr{G}_{0}.\] I now set \(\mathscr{H}_{0}:=\mathscr{G}_{0}\otimes_{\mathcal{O}_{\mathfrak{D}_{J,K}}}\mathscr{F}_{0}^{\vee}(-\mathfrak{D}_{J,K}\cap\mathfrak{D}^{(J)}_{K})\), which is thus a \(\mathscr{D}_{\mathfrak{D}^{\flat}_{J,K}}\)-module, and write \(\mathscr{T}\) for the divisor \(V(z_{1}\dots z_{b})\subset\,]D_{J}[_{\mathfrak{P}}\) arising via pullback from \(\mathbb{D}^{b}_{K}(0;1^{-})\). Now I can calculate \[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]D_{J}[^{\sharp}_{\mathfrak{P}}}}(\mathscr{F}(\mathfrak{D}_{K}),\mathscr{G})=\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]D_{J}[^{\sharp}_{\mathfrak{P}}}}(\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(\mathscr{T}),\pi^{*}\mathscr{H}_{0})\] \[=\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]D_{J}[^{\sharp}_{\mathfrak{P}}}}(\mathcal{O}_{]D_{J}[_{\mathfrak{P}}},\pi^{*}\mathscr{H}_{0}\otimes_{\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}}\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(-\mathscr{T}))\] \[=\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{D}^{\flat}_{J,K}}}(\mathcal{O}_{\mathfrak{D}_{J,K}},\mathbf{R}\pi_{\mathrm{dR}*}(\pi^{*}\mathscr{H}_{0}\otimes_{\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}}\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(-\mathscr{T}))).\] I will show that in fact \[\mathbf{R}\pi_{\mathrm{dR}*}(\pi^{*}\mathscr{H}_{0}\otimes_{\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}}\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(-\mathscr{T}))=0,\] which clearly suffices. To show this, note that since \(\mathscr{H}_{0}\) is a constructible locally free \(\mathcal{O}_{\mathfrak{D}_{J,K}}\)-module, Proposition 2.1.6 implies that, after replacing \[\pi\colon\,]D_{J}[_{\mathfrak{P}}=\mathbb{D}^{b}_{\mathfrak{D}_{J,K}}(0;1^{-})\to\mathfrak{D}_{J,K}\] by the projection \[\pi_{\rho}\colon\mathbb{D}^{b}_{\mathfrak{D}_{J,K}}(0;\rho)\to\mathfrak{D}_{J,K}\] from the closed polydisc of radius \(\rho<1\), the projection formula \[\mathbf{R}\pi_{\rho\mathrm{dR}*}(\pi^{*}\mathscr{H}_{0}\otimes_{\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}}\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(-\mathscr{T}))=\mathscr{H}_{0}\otimes^{\mathbf{L}}_{\mathcal{O}_{\mathfrak{D}_{J,K}}}\mathbf{R}\pi_{\rho\mathrm{dR}*}\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(-\mathscr{T})\] holds. Since \[\mathbf{R}\pi_{\mathrm{dR}*}(\pi^{*}\mathscr{H}_{0}\otimes_{\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}}\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(-\mathscr{T}))=\mathbf{R}\lim_{\rho}\mathbf{R}\pi_{\rho\mathrm{dR}*}(\pi^{*}\mathscr{H}_{0}\otimes_{\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}}\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(-\mathscr{T})),\] it is enough to show that the pro-system \[\mathbf{R}\pi_{\rho\mathrm{dR}*}\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(-\mathscr{T})\] has zero transition maps. Letting \[\pi_{\rho^{-}}\colon\mathbb{D}^{b}_{\mathfrak{D}_{J,K}}(0;\rho^{-})\to\mathfrak{D}_{J,K}\] denote the projection from the open polydisc, it is then enough to show that \[\mathbf{R}\pi_{\rho^{-}\mathrm{dR}*}\mathcal{O}_{]D_{J}[_{\mathfrak{P}}}(-\mathscr{T})=0.\] To see this, I can replace the log structure on \(\mathfrak{D}_{J,K}\) by the trivial one, and the log structure on \(\mathbb{D}^{b}_{\mathfrak{D}_{J,K}}(0;\rho^{-})\) by that coming from the strict normal crossings divisor \(\mathscr{T}=V(z_{1}\dots z_{b})\).
Replacing \(\mathfrak{D}_{J,K}\) by an arbitrary smooth analytic variety \(V\) makes it possible to argue via induction and reduce to the case \(b=1\), that is, the case of the projection \[\pi_{\rho^{-}}\colon\mathbb{D}_{V}^{1}(0;\rho^{-})\to V\] for a smooth analytic variety \(V\). The target here has the trivial log structure, and the source the log structure coming from the zero section. Thus, arguing locally on \(V\), and appealing to Theorems A and B for non-Archimedean quasi-Stein spaces [10, 11], I can reduce to calculating the cohomology of the complex \[zR\left\{\rho^{-1}z\right\}\xrightarrow{z\partial_{z}}zR\left\{\rho^{-1}z\right\}\] where \(R\) is an affinoid algebra over \(K\), and \(R\left\{\rho^{-1}z\right\}\) is the ring of functions on the open disc of radius \(\rho\) over \(R\). It is then a straightforward calculation that this cohomology vanishes: \(z\partial_{z}\) acts on \(z^{n}\) as multiplication by \(n\neq 0\), so it is injective, and since \(|1/n|\) grows at most polynomially in \(n\), dividing coefficients by \(n\) preserves convergence on every disc of radius \(\rho^{\prime}<\rho\), giving surjectivity.

### Rigidification of log \(\boldsymbol{\mathscr{D}^{\dagger}}\)-modules

For the logarithmic analogue of the \(\mathscr{D}^{\dagger}\)-module side of the picture, I will stick to log formal schemes coming from strict normal crossings divisors on smooth formal schemes. Thus I will let \(\mathfrak{P}\) be a smooth formal scheme, \(\mathfrak{D}\subset\mathfrak{P}\) a strict normal crossings divisor relative to \(\mathcal{V}\), and \(\mathfrak{P}^{\sharp}=(\mathfrak{P},M(\mathfrak{D}))\) the corresponding log formal scheme. There is therefore the logarithmic analogue \(\mathscr{D}^{\dagger}_{\mathfrak{P}^{\sharp}\mathbb{Q}}\) of Berthelot's ring of overconvergent differential operators, as defined in [13] and [11]. As in the non-logarithmic case, \[\mathscr{D}^{\dagger}_{\mathfrak{P}^{\sharp}\mathbb{Q}}:=\operatorname{colim}_{m}\widehat{\mathscr{D}}^{(m)}_{\mathfrak{P}^{\sharp}}\otimes_{\mathbb{Z}}\mathbb{Q}\] where \(\widehat{\mathscr{D}}^{(m)}_{\mathfrak{P}^{\sharp}}\) is the \(p\)-adic completion of the sheaf of level \(m\) differential operators on \(\mathfrak{P}^{\sharp}\). The isogeny category \(\underline{\mathbf{L}\mathbf{D}}_{\mathbb{Q},\mathrm{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}^{\sharp}})\) of ind-complexes of \(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}^{\sharp}}\)-modules is defined exactly as in the non-logarithmic case, and actually taking the colimit induces an equivalence \[\underline{\mathbf{L}\mathbf{D}}^{b}_{\mathbb{Q},\mathrm{coh}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}^{\sharp}})\xrightarrow{\cong}\mathbf{D}^{b}_{\mathrm{coh}}(\mathscr{D}^{\dagger}_{\mathfrak{P}^{\sharp}\mathbb{Q}}),\] see [12, §1.2]. The method of §6 works _mutatis mutandis_ in the logarithmic case to define \[\mathrm{sp}^{!}=\mathrm{sp}^{!}_{\mathfrak{P}^{\sharp}}=\mathbf{L}\hat{\mathrm{sp}}^{*}_{\mathfrak{P}^{\sharp}}[-\dim\mathfrak{P}]\colon\underline{\mathbf{L}\mathbf{D}}_{\mathbb{Q},\mathrm{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}^{\sharp}})\to\mathbf{D}(\mathscr{D}_{\mathfrak{P}^{\sharp}_{K}}),\] where \(\mathfrak{P}^{\sharp}_{K}\) is the generic fibre of \(\mathfrak{P}^{\sharp}\), considered as a log analytic variety. It follows from the construction that \(\mathrm{sp}^{!}\) commutes with pullback along the morphism \(\mathfrak{P}^{\sharp}\to\mathfrak{P}\). The approach of §6 can also be followed through in the logarithmic case to prove the following.
**7.3.1 Proposition**.: _For any \(\mathcal{M}\in\underline{\mathbf{L}\mathbf{D}}_{\mathbb{Q},\mathrm{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{P}^{\sharp}})\), \(\mathbf{L}\mathrm{sp}^{*}_{\mathfrak{P}^{\sharp}}\) induces a base change isomorphism_
\[\mathbf{R}\Gamma_{\mathrm{log-dR}}(\mathfrak{P}^{\sharp},\mathcal{M})\xrightarrow{\cong}\mathbf{R}\Gamma_{\mathrm{log-dR}}(\mathfrak{P}^{\sharp}_{K},\mathbf{L}\mathrm{sp}^{*}_{\mathfrak{P}^{\sharp}}\mathcal{M})\]
_which is natural in \(\mathcal{M}\)._

**7.3.2 Remark**.: The definition of \(\mathbf{R}\Gamma_{\mathrm{log-dR}}(\mathfrak{P}^{\sharp},\mathcal{M})\) is analogous to the non-logarithmic case, namely if \(\mathcal{M}=\{\mathcal{M}^{(m)}\}_{m\in\mathbb{N}}\) then
\[\mathbf{R}\Gamma_{\mathrm{log-dR}}(\mathfrak{P}^{\sharp},\mathcal{M})=\operatorname{colim}_{m}\mathbf{R}\Gamma_{\mathrm{log-dR}}(\mathfrak{P}^{\sharp},\mathcal{M}^{(m)})\otimes_{\mathbb{Z}}\mathbb{Q}.\]

I will also need a very rudimentary logarithmic version of the specialisation functors considered in §4.3. Suppose, then, that \(\mathscr{F}\) is a locally free convergent log isocrystal on the special fibre \(P^{\sharp}\) of \(\mathfrak{P}^{\sharp}\), viewed as a module with integrable logarithmic connection on \(\mathfrak{P}^{\sharp}_{K}\). Then \(\mathrm{sp}_{\mathfrak{P}*}\mathscr{F}=\mathbf{R}\mathrm{sp}_{\mathfrak{P}*}\mathscr{F}\) is naturally an \(\mathcal{O}_{\mathfrak{P}\mathbb{Q}}\)-coherent \(\mathscr{D}^{\dagger}_{\mathfrak{P}^{\sharp}\mathbb{Q}}\)-module, and is moreover coherent as a \(\mathscr{D}^{\dagger}_{\mathfrak{P}^{\sharp}\mathbb{Q}}\)-module by [12, Théorème 4.15]. As in the non-logarithmic case, it is straightforward to verify that there is a canonical isomorphism
\[\mathbf{L}\mathrm{sp}^{*}_{\mathfrak{P}^{\sharp}}\mathrm{sp}_{\mathfrak{P}*}\mathscr{F}\xrightarrow{\cong}\mathscr{F}\]
of \(\mathscr{D}_{\mathfrak{P}^{\sharp}_{K}}\)-modules.

## 8. The overconvergent Riemann-Hilbert correspondence

I can now finally prove the main result of this article.

**8.0.1 Theorem**.: _Let \(\mathfrak{P}\) be a smooth formal scheme. Then the functor_
\[\operatorname{sp}^{!}\colon\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\to\mathbf{D}^{b}_{\operatorname{cons},F}(\mathfrak{P})\]
_is an equivalence of categories._

As in §3.2, this reduces, via dévissage, to the following derived full faithfulness assertion.

**8.0.2 Theorem**.: _Let \(\mathfrak{P}\) be a smooth formal scheme, \(j\colon X\to P\) a locally closed subscheme, smooth over \(k\), with closure \(Y\) in \(P\). Suppose \(\mathscr{F}\in F\text{-}\mathbf{Isoc}(X,Y)\), and set \(\mathcal{M}:=\operatorname{sp}_{X!}\mathscr{F}\in\mathbf{DCon}_{F}(\mathfrak{P})\). Then, for any \(\mathcal{N}\in\mathbf{DCon}_{F}(\mathfrak{P})\), the map_
\[\mathbf{R}\mathrm{Hom}_{\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}}}(\mathcal{M},\mathcal{N})\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}}}(\operatorname{sp}^{!}\mathcal{M},\operatorname{sp}^{!}\mathcal{N})\]
_is an isomorphism in \(\mathbf{D}(K)\)._

Proof.: To begin with, the question is local on \(\mathfrak{P}\), which I can therefore assume to be affine. As in the proof of Theorem 6.3.1, one consequence of this is that every formal scheme appearing in the proof will admit a locally closed embedding into a smooth and proper formal scheme.
The fact that \(\mathcal{M}\) is supported on \(Y\), together with compatibility of \(\operatorname{sp}^{!}\) with the natural morphism \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\to\operatorname{id}\), means that there is a commutative diagram. Hence, replacing \(\mathcal{N}\) with \(\mathbf{R}\underline{\Gamma}^{\dagger}_{Y}\mathcal{N}\), I can assume that \(\mathcal{N}\) is supported on \(Y\). Now, the semistable reduction theorem for \(F\)-isocrystals [11, Theorem 2.4.4] shows that there exists a morphism of pairs \((f,g)\colon(\widetilde{X},\widetilde{Y})\to(X,Y)\) such that:

* \(\widetilde{Y}\) is smooth;
* \(g\) is projective and generically étale;
* \(\widetilde{X}=g^{-1}(X)\);
* \(\widetilde{D}:=\widetilde{Y}\setminus\widetilde{X}\) is a strict normal crossings divisor in \(\widetilde{Y}\);
* \(f^{*}\mathscr{F}\in F\text{-}\mathbf{Isoc}(\widetilde{X},\widetilde{Y})\) extends to a convergent log \(F\)-isocrystal on the log smooth log scheme \((\widetilde{Y},M(\widetilde{D}))\).

Since \(g\) is projective, I can extend \((f,g)\) to a morphism of frames in which \(u\) is smooth and projective. By Noetherian induction on \(X\), I am free to replace \(X\) with an open subscheme, hence I can assume moreover that \(f\) is finite étale. Now, by Proposition 4.5.5, \(\mathcal{N}\in\mathbf{D}^{b}_{\mathrm{hol},F}(X,Y,\mathfrak{P})\subset\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\) is a direct summand of \(f_{+}f^{!}\mathcal{N}\), so I can replace \(\mathcal{N}\) by \(f_{+}f^{!}\mathcal{N}\). Then, by Proposition 6.4.4, combined with Proposition 6.2.3 and the preceding discussion, there is a commutative diagram whose top row is
\[\mathbf{R}\mathrm{Hom}_{\mathscr{D}^{\dagger}_{\mathfrak{P}\mathbb{Q}}}(\mathcal{M},f_{+}f^{!}\mathcal{N})\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}}}(\mathrm{sp}^{!}_{\mathfrak{P}}\mathcal{M},\mathrm{sp}^{!}_{\mathfrak{P}}f_{+}f^{!}\mathcal{N})\xrightarrow{\cong}\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}}}(\mathrm{sp}^{!}_{\mathfrak{P}}\mathcal{M},\mathbf{R}]f[_{\mathrm{dR}*}\,\mathrm{sp}^{!}f^{!}\mathcal{N}).\]
It therefore suffices to show that the two maps
\[\begin{aligned}\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}}}(j_{*}\mathscr{F},\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})&\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}^{\sharp}}}(\mathscr{F}^{\sharp}(\mathfrak{D}_{K}),\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})\\\mathbf{R}\mathrm{Hom}_{\mathscr{D}^{\dagger}_{\mathfrak{P}^{\sharp}\mathbb{Q}}}(\mathcal{M}^{\sharp}(\mathfrak{D}),\mathcal{N})&\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}^{\sharp}}}(\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{M}^{\sharp}(\mathfrak{D}),\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})\end{aligned}\]
are isomorphisms. The second is a relatively straightforward consequence of the fact that the shift \(\mathcal{M}^{\sharp}(\mathfrak{D})[-d]\) is a locally projective \(\mathcal{O}_{\mathfrak{P}\mathbb{Q}}\)-module. Indeed, letting \(\mathcal{M}^{*}\) denote the \(\mathcal{O}_{\mathfrak{P}\mathbb{Q}}\)-dual of \(\mathcal{M}^{\sharp}(\mathfrak{D})\), then \(\mathcal{M}^{*}[d]\) is a locally projective \(\mathcal{O}_{\mathfrak{P}\mathbb{Q}}\)-module, and there is a commutative diagram. The bottom arrow is an isomorphism by Proposition 7.3.1, and therefore the top arrow is also an isomorphism.

Finally, to see that
\[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}}}(j_{*}\mathscr{F},\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}^{\sharp}}}(\mathscr{F}^{\sharp}(\mathfrak{D}_{K}),\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})\]
is an isomorphism, I will (after some set-up) apply Theorem 7.2.1. First, note that the support \(]X[_{\mathfrak{P}}\) of \(j_{*}\mathscr{F}\) is disjoint from the divisor \(\mathfrak{D}_{K}\). It therefore follows that the 'restriction' map
\[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}}}(j_{*}\mathscr{F},\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})\to\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}^{\sharp}}}(j_{*}\mathscr{F},\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})\]
is an isomorphism. Now write \(i\colon D\to P\) for the natural inclusion; then the localisation exact sequence
\[0\to i_{!}i^{*}\mathscr{F}^{\sharp}(\mathfrak{D}_{K})\to\mathscr{F}^{\sharp}(\mathfrak{D}_{K})\to j_{*}\mathscr{F}\to 0\]
reduces the problem to proving that
\[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{\mathfrak{P}_{K}^{\sharp}}}(i_{!}i^{*}\mathscr{F}^{\sharp}(\mathfrak{D}_{K}),\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})=\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]D[_{\mathfrak{P}}^{\sharp}}}(\mathscr{F}^{\sharp}(\mathfrak{D}_{K}),\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})=0,\]
where I have written \(]D[_{\mathfrak{P}}^{\sharp}\) for \(]D[_{\mathfrak{P}}\) equipped with the log structure given by pulling back the given log structure on \(\mathfrak{P}_{K}^{\sharp}\). Let \(\mathfrak{D}_{1},\dots,\mathfrak{D}_{n}\) denote the irreducible components of \(\mathfrak{D}\), with special fibres \(D_{1},\dots,D_{n}\). For each \(J\subset\{1,\dots,n\}\) set
\[\begin{aligned}\mathfrak{D}_{J}&=\cap_{j\in J}\mathfrak{D}_{j},&\quad D_{J}&=\mathfrak{D}_{J,k}=\cap_{j\in J}D_{j},\\\mathfrak{D}^{(J)}&=\cup_{j\notin J}\mathfrak{D}_{j},&\quad D^{(J)}&=\mathfrak{D}_{k}^{(J)}=\cup_{j\notin J}D_{j}.\end{aligned}\]
Using the open cover of \(]D[_{\mathfrak{P}}\) given by the various tubes \(]D_{j}[_{\mathfrak{P}}\), I can therefore reduce to showing that
\[\mathbf{R}\mathrm{Hom}_{\mathscr{D}_{]D_{J}[_{\mathfrak{P}}^{\sharp}}}(\mathscr{F}^{\sharp}(\mathfrak{D}_{K}),\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N})=0\]
for each \(J\subset\{1,\dots,n\}\).
Theorem 6.3.1 says that \(\mathrm{sp}_{\mathfrak{P}}^{\dagger}\mathcal{N}\) is a constructible isocrystal on \(\mathfrak{P}\), and \(\mathscr{F}^{\sharp}\) is a log convergent \(F\)-isocrystal on \(P^{\sharp}\) by assumption. Hence the required vanishing is precisely the content of Theorem 7.2.1.

Combining this with Proposition 6.2.3, I obtain the following.

**8.0.3 Corollary**.: _Let \(u\colon\mathfrak{P}\to\mathfrak{Q}\) be a smooth and proper morphism of smooth formal schemes. Then the functor \(\mathbf{R}u_{\mathrm{dR}*}\) maps \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\) into \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{Q})\)._

## 9. Cohomological operations for constructible isocrystals

I can now show compatibility of \(\operatorname{sp}^{!}\) with the various cohomological functors for \(\mathscr{D}^{\dagger}\)-modules and constructible isocrystals. This will then enable me to deduce an analogue of Theorem 8.0.1 for pairs and for varieties.

### The case of formal schemes

I have already proved the compatibility of \(\operatorname{sp}^{!}\) with de Rham pushforward along smooth morphisms \(u\colon\mathfrak{P}\to\mathfrak{Q}\) of smooth formal schemes in Proposition 6.2.3. Similarly, if \(\mathfrak{P}\) is a smooth formal scheme, and \(i\colon X\to P\) is a locally closed immersion, I have already shown in §6.4 that the diagram commutes up to natural isomorphism, and that this is moreover compatible with the natural 'adjunctions' when \(X\) is either open or closed in \(P\). There are similar results for pullbacks and tensor products.

For the case of pullbacks, suppose that \(u\colon\mathfrak{P}\to\mathfrak{Q}\) is a (not necessarily smooth) morphism of smooth formal schemes, and that \(\mathcal{M}\in\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{Q})\). Then I can construct a natural (in \(\mathcal{M}\)) morphism
\[\psi_{u}\colon u^{*}\operatorname{sp}^{!}_{\mathfrak{Q}}\mathcal{M}\to\operatorname{sp}^{!}_{\mathfrak{P}}u^{!}\mathcal{M}\]
as follows. I first use the usual trick of taking the graph to divide into two separate cases: the first when \(u\) is smooth, and the second when \(u\) is a section of a smooth morphism \(\pi\colon\mathfrak{Q}\to\mathfrak{P}\). In the first case, when \(u\) is smooth, of relative dimension \(d\), say, giving a morphism
\[u^{*}\operatorname{sp}^{!}_{\mathfrak{Q}}\mathcal{M}\to\operatorname{sp}^{!}_{\mathfrak{P}}u^{!}\mathcal{M}\]
is equivalent to giving a morphism
\[\operatorname{sp}^{!}_{\mathfrak{Q}}\mathcal{M}\to\mathbf{R}u_{\operatorname{dR}*}\operatorname{sp}^{!}_{\mathfrak{P}}u^{!}\mathcal{M}=\operatorname{sp}^{!}_{\mathfrak{Q}}u_{+}u^{!}\mathcal{M}[-2d].\]
Since I am not assuming that \(u\) is proper, the functor \(u_{+}\) appearing on the RHS should be viewed as taking values in \(\underline{\mathbf{L}\mathbf{D}}^{b}_{\mathbb{Q},\operatorname{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{Q}})\). It therefore suffices to construct a morphism
\[\mathcal{M}\to u_{+}u^{!}\mathcal{M}[-2d]\]
in \(\underline{\mathbf{L}\mathbf{D}}^{b}_{\mathbb{Q},\operatorname{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{Q}})\). Via the projection formula (which in the generality required here follows from [10, Proposition 1.2.27] as in [10, §2.1.4]) it suffices to consider the case \(\mathcal{M}=\mathcal{O}_{\mathfrak{Q}\mathbb{Q}}\).
In this case \(u^{!}\mathcal{M}=\mathcal{O}_{\mathfrak{P}\mathbb{Q}}[d]\) and the morphism I seek is one of the form
\[\mathcal{O}_{\mathfrak{Q}\mathbb{Q}}\to u_{+}\mathcal{O}_{\mathfrak{P}\mathbb{Q}}[-d].\]
After actually taking the colimit in the level \(m\), and tensoring with \(\mathbb{Q}\) to land inside \(\mathbf{D}(\mathscr{D}^{\dagger}_{\mathfrak{Q}\mathbb{Q}})\), this will simply be the natural morphism
\[\mathcal{O}_{\mathfrak{Q}\mathbb{Q}}\to\mathbf{R}u_{\operatorname{dR}*}\mathcal{O}_{\mathfrak{P}\mathbb{Q}}\]
into the zeroth relative de Rham cohomology group of \(\mathcal{O}_{\mathfrak{P}\mathbb{Q}}\). To see that this morphism lifts to \(\underline{\mathbf{L}\mathbf{D}}^{b}_{\mathbb{Q},\operatorname{qc}}(\widehat{\mathscr{D}}^{(\bullet)}_{\mathfrak{Q}})\) simply amounts to making the same construction levelwise in \(m\).

In the second case, suppose that \(u\) is a section of a smooth morphism \(\pi\colon\mathfrak{Q}\to\mathfrak{P}\), of relative dimension \(d\), say. Then I can calculate
\[\begin{aligned}\mathrm{sp}^{!}_{\mathfrak{P}}u^{!}\mathcal{M}&=\mathrm{sp}^{!}_{\mathfrak{P}}\pi_{+}u_{+}u^{!}\mathcal{M}\\&=\mathrm{sp}^{!}_{\mathfrak{P}}\pi_{+}\underline{\mathbf{R}}\underline{\Gamma}^{!}_{P}\mathcal{M}\\&=\mathbf{R}\pi_{\mathrm{dR}*}\mathrm{sp}^{!}_{\mathfrak{Q}}\underline{\mathbf{R}}\underline{\Gamma}^{!}_{P}\mathcal{M}[2d]\\&=\mathbf{R}\pi_{\mathrm{dR}*}\underline{\mathbf{R}}\underline{\Gamma}^{!}_{P}\mathrm{sp}^{!}_{\mathfrak{Q}}\mathcal{M}[2d]\end{aligned}\]
by compatibility of \(\mathrm{sp}^{!}\) with both de Rham pushforwards and the functor \(\underline{\mathbf{R}}\underline{\Gamma}^{!}_{P}\) of sections with overconvergent support. It therefore suffices to construct a morphism
\[u^{*}\to\mathbf{R}\pi_{\mathrm{dR}*}\underline{\mathbf{R}}\underline{\Gamma}^{!}_{P}[2d]\]
of functors from \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{Q})\) to \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\). Let
\[]\mathrm{id}[_{u}\colon\mathfrak{P}_{K}\to\,]P[_{\mathfrak{Q}}\,,\quad]u[_{\mathrm{id}}\colon\,]P[_{\mathfrak{Q}}\to\mathfrak{Q}_{K}\]
denote the natural inclusions, and
\[]\mathrm{id}[_{\pi}\colon\,]P[_{\mathfrak{Q}}\to\mathfrak{P}_{K}\]
the restriction of \(\pi\). Thus \(\underline{\mathbf{R}}\underline{\Gamma}^{!}_{P}=\,]u[_{\mathrm{id}!}\circ\,]u[_{\mathrm{id}}^{*}\).

**9.1.1 Lemma**.: _The 'forget supports' map \(\mathbf{R}\,]\mathrm{id}[_{\pi!}\to\mathbf{R}\pi_{*}\circ\,]u[_{\mathrm{id}!}\) is an isomorphism._

Proof.: Explicitly, if \(\mathscr{I}\) is an injective sheaf on \(]P[_{\mathfrak{Q}}\), then \(]\mathrm{id}[_{\pi!}\mathscr{I}\) consists of sections of \(\mathscr{I}\) with compact support over \(\mathfrak{P}_{K}\), and \(\pi_{*}\circ\,]u[_{\mathrm{id}!}\mathscr{I}\) consists of sections with compact support over \(\mathfrak{Q}_{K}\). To show that \(]\mathrm{id}[_{\pi!}\mathscr{I}=\pi_{*}\circ\,]u[_{\mathrm{id}!}\mathscr{I}\) therefore amounts to showing that if \(V\subset\mathfrak{P}_{K}\) is open, and \(S\subset\,]\mathrm{id}[_{\pi}^{-1}(V)\) is a closed subset, then \(S\to V\) is proper if and only if \(S\to\pi^{-1}(V)\) is proper. Since \(S\to V\) is always partially proper, and \(\pi^{-1}(V)\to V\) is quasi-compact, this is clear. To complete the proof, then, I need to show that \(]u[_{\mathrm{id}!}\mathscr{I}\) is a flasque sheaf on \(\mathfrak{Q}_{K}\).
But since \(]u[_{\mathrm{id}}\) is partially proper, this was demonstrated in the proof of [1, Corollary 3.4.7] (note that the additional hypothesis there that the second morphism \(g\) is partially proper is not used to show that \(f_{!}\mathscr{I}\) is flasque).

I am therefore looking to construct a morphism
\[u^{*}\to\mathbf{R}\,]\mathrm{id}[_{\pi\mathrm{dR}!}\,]u[_{\mathrm{id}}^{*}\,[2d],\]
and in fact an isomorphism between these two functors is provided by Remark 3.2.8. This therefore gives rise to an _isomorphism_
\[u^{*}\mathrm{sp}^{!}_{\mathfrak{Q}}\mathcal{M}\xrightarrow{\cong}\mathrm{sp}^{!}_{\mathfrak{P}}u^{!}\mathcal{M},\]
which is natural in \(\mathcal{M}\).

**9.1.2 Proposition**.: _Let \(u\colon\mathfrak{P}\to\mathfrak{Q}\) be a morphism of smooth formal schemes, and \(\mathcal{M}\in\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{Q})\). Then the morphism_
\[\psi_{u}\colon u^{*}\mathrm{sp}^{!}_{\mathfrak{Q}}\mathcal{M}\to\mathrm{sp}^{!}_{\mathfrak{P}}u^{!}\mathcal{M}\]
_is an isomorphism in \(\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\)._

Proof.: To show that \(\psi_{u}\) is an isomorphism, I can check that it is so after taking the pullback along any closed immersion \(\mathrm{Spf}(\mathcal{V}^{\prime})\to\mathfrak{P}\), where \(\mathcal{V}^{\prime}\) is the ring of integers in a finite extension of \(K\). After possibly making a finite base change (for which the result can be easily checked), I can in fact take such closed immersions with \(\mathcal{V}^{\prime}=\mathcal{V}\). Thus the general case reduces to the special case when \(u\) is a section of a smooth morphism, in which case the given morphism has already been shown to be an isomorphism.

To show compatibility of \(\operatorname{sp}^{!}\) with tensor product, recall that in §5.2 I constructed, for \(\mathcal{M},\mathcal{N}\in\underline{\mathbf{L}\mathbf{D}}_{\mathbb{Q},\mathrm{qc}}(\widehat{\mathscr{D}}_{\mathfrak{P}}^{(\bullet)})\), a morphism
\[\mathbf{L}\mathrm{sp}_{\mathfrak{P}}^{*}\mathcal{M}\otimes_{\mathcal{O}_{\mathfrak{P}_{K}}}^{\mathbf{L}}\mathbf{L}\mathrm{sp}_{\mathfrak{P}}^{*}\mathcal{N}\to\mathbf{L}\mathrm{sp}_{\mathfrak{P}}^{*}(\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{\mathfrak{P}}}^{\mathbf{L}}\mathcal{N})\]
in \(\mathbf{D}(\mathcal{O}_{\mathfrak{P}_{K}})\). It is straightforward, though rather tedious, to upgrade this to a morphism in \(\mathbf{D}(\mathscr{D}_{\mathfrak{P}_{K}})\). For \(\mathcal{M},\mathcal{N}\in\mathbf{D}_{\mathrm{hol},F}^{b}(\mathfrak{P})\), I can therefore view it as a morphism
\[\operatorname{sp}^{!}_{\mathfrak{P}}\mathcal{M}\otimes_{\mathcal{O}_{\mathfrak{P}_{K}}}\operatorname{sp}^{!}_{\mathfrak{P}}\mathcal{N}\to\operatorname{sp}^{!}_{\mathfrak{P}}(\mathcal{M}\widehat{\otimes}_{\mathcal{O}_{\mathfrak{P}}}\mathcal{N})\tag{9.1.3}\]
in \(\mathbf{D}_{\mathrm{cons},F}^{b}(\mathfrak{P})\).

**9.1.4 Corollary**.: _The morphism (9.1.3) is an isomorphism._

Proof.: Again, this can be checked on stalks, in which case it follows from Proposition 9.1.2 together with the fact that both pullback functors \(u^{!}\) and \(u^{*}\) commute with the relevant tensor products.

### The case of pairs and varieties

In Corollary 8.0.3, the fact that \(u\) is proper implies that \(\mathbf{R}u_{\mathrm{dR}*}=\mathbf{R}u_{\mathrm{dR}!}\), so I could have equally well phrased the result in terms of compactly supported de Rham cohomology. It turns out that this version is the one that generalises to the case of pairs.
In fact, I can use the extra flexibility of constructible isocrystals to remove the need for the formal schemes involved to be immersible in smooth and proper ones. To obtain a functor that is independent of the frame, I also need to include a shift.

**9.2.1 Corollary**.: _Let \((f,g,u)\colon(X^{\prime},Y^{\prime},\mathfrak{P}^{\prime})\to(X,Y,\mathfrak{P})\) be a morphism of frames, such that \(g\) is proper, \(\mathfrak{P}\) is smooth around \(X\), and \(u\) is smooth around \(X^{\prime}\), of relative dimension \(d\). Then the functor \(\mathbf{R}]f[_{\mathrm{dR}!}\,[2d]\) maps \(\mathbf{D}_{\mathrm{cons},F}^{b}(X^{\prime},Y^{\prime})\) into \(\mathbf{D}_{\mathrm{cons},F}^{b}(X,Y)\). The resulting functor only depends on the morphism of pairs \(g\colon(X^{\prime},Y^{\prime})\to(X,Y)\). If \(Y\) is proper, then it only depends on the morphism of varieties \(f\colon X^{\prime}\to X\)._

Proof.: Let me first consider the claim that \(\mathbf{R}]f[_{\mathrm{dR}!}\,[2d]\) preserves constructible isocrystals. The key point is that I can use Theorem 3.2.1 repeatedly to replace our morphism of frames by one that is covered by Corollary 8.0.3. Indeed, the question is local on \(X^{\prime}\), which I can therefore assume to be affine; in particular \(f\) is quasi-projective. By Chow's lemma, there is a morphism of varieties \(g^{\prime}\colon Y^{\prime\prime}\to Y^{\prime}\) such that \(g^{\prime-1}(X^{\prime})\xrightarrow{\cong}X^{\prime}\), and \(g\circ g^{\prime}\) is projective (therefore \(g^{\prime}\) is also projective). Alternately embedding \(Y^{\prime\prime}\) into either \(\widehat{\mathbb{P}}^{N}_{\mathfrak{P}}\) or \(\widehat{\mathbb{P}}^{N^{\prime}}_{\mathfrak{P}^{\prime}}\), and using Theorem 3.2.1 repeatedly, I can replace the original morphism of frames by one to which Corollary 8.0.3 applies. Now the independence of \(u\) (and of \(g\) when \(Y\) is proper) is a simple consequence of Theorem 3.2.1, using transitivity of \(\mathbf{R}]f[_{\mathrm{dR}!}\,[2d]\) and the usual trick of taking products of frames.

**9.2.2 Definition**.: If \((f,g)\colon(X^{\prime},Y^{\prime})\to(X,Y)\) is a morphism of weakly realisable pairs, I denote by
\[\mathbf{R}f_{!}\colon\mathbf{D}_{\mathrm{cons},F}^{b}(X^{\prime},Y^{\prime})\to\mathbf{D}_{\mathrm{cons},F}^{b}(X,Y)\]
the functor implicitly given by Corollary 9.2.1. If \(f\colon X^{\prime}\to X\) is a morphism of weakly realisable varieties, I similarly denote by
\[\mathbf{R}f_{!}\colon\mathbf{D}^{b}_{\operatorname{cons},F}(X^{\prime})\to\mathbf{D}^{b}_{\operatorname{cons},F}(X)\]
the functor obtained by choosing a suitable compactification of \(f\).

_9.2.3 Remark_.: Note that whenever \(g\) is a closed immersion, and \(Y\hookrightarrow\mathfrak{P}\) is a closed immersion into a formal scheme which is smooth around \(X\), then \(\mathbf{R}f_{!}\) can be identified with the functor \(f_{!}\) of extension by zero along the induced morphism of tubes
\[]f[\colon\,]X^{\prime}[_{\mathfrak{P}}\to\,]X[_{\mathfrak{P}}\,.\]

If \((X,Y)\) is a strongly realisable pair, and \((X,Y,\mathfrak{P})\) is an l.p. frame enclosing it, I can also deduce from Proposition 6.2.3 and Theorem 8.0.1 that the composite (equivalence!)
\[\mathbf{D}^{b}_{\operatorname{hol},F}(X,Y)\subset\mathbf{D}^{b}_{\operatorname{hol},F}(\mathfrak{P})\xrightarrow{\ \operatorname{sp}^{!}\ }\mathbf{D}^{b}_{\operatorname{cons},F}(\mathfrak{P})\]
takes values in \(\mathbf{D}^{b}_{\operatorname{cons},F}(X,Y)\), and induces an equivalence onto it, which I will denote \(\operatorname{sp}^{!}_{X}\). For a morphism \((f,g)\) of strongly realisable pairs, the resulting diagram relating \(\operatorname{sp}^{!}\) to pullbacks commutes up to natural isomorphism; if \(g\) is proper, then so does the corresponding diagram for pushforwards.

Proof.: The statement for pushforwards follows from Proposition 6.2.3. For pullbacks, it follows from Proposition 9.1.2 together with the fact that, for any smooth formal scheme \(\mathfrak{P}\), and any locally closed immersion \(i\colon X\to\mathfrak{P}\), there is an isomorphism
\[i_{!}i^{-1}\mathrm{sp}^{!}_{\mathfrak{P}}\xrightarrow{\cong}\mathrm{sp}^{!}_{\mathfrak{P}}\mathbf{R}\underline{\Gamma}^{!}_{X}.\qed\]

There is a similar statement for morphisms of strongly realisable varieties. Finally, I can show compatibility with tensor product using Corollary 9.1.4.

**9.2.7 Corollary**.: _Let \((X,Y)\) be a strongly realisable pair._
_Then there is a natural isomorphism_
\[\mathrm{sp}^{!}_{X}(-)\otimes_{\mathcal{O}_{X}}\mathrm{sp}^{!}_{X}(-)\xrightarrow{\cong}\mathrm{sp}^{!}_{X}(-\,\widetilde{\otimes}_{\mathcal{O}_{X}}-)\]
_of functors_
\[\mathbf{D}^{b}_{\mathrm{hol},F}(X,Y)\times\mathbf{D}^{b}_{\mathrm{hol},F}(X,Y)\to\mathbf{D}^{b}_{\mathrm{cons},F}(X,Y).\]

Naturally, there is an analogous statement for varieties.

_9.2.8 Remark_.: Via the analogy between \(\mathbf{Isoc}_{\mathrm{cons},F}(X,Y)\) and \(\mathbf{DCon}(X_{\mathrm{\acute{e}t}},\mathbb{Q}_{\ell})\), the standard tensor product on constructible isocrystals corresponds to the dual tensor product
\[\mathscr{F}\,\widetilde{\otimes}_{\mathbb{Q}_{\ell}}\,\mathscr{G}:=\mathbf{D}_{X}(\mathbf{D}_{X}(\mathscr{F})\otimes_{\mathbb{Q}_{\ell}}\mathbf{D}_{X}(\mathscr{G}))\]
for dual constructible \(\ell\)-adic sheaves. Note that this is t-exact for the dual constructible t-structure on \(\mathbf{D}^{b}_{c}(X_{\mathrm{\acute{e}t}},\mathbb{Q}_{\ell})\).

### Verdier duality

Of the six cohomological functors \(f_{+},f^{+},f^{!},f_{!},\mathbf{D}_{X},\widetilde{\otimes}_{\mathcal{O}_{X}}\) defined on \(\mathbf{D}^{b}_{\mathrm{hol},F}\), we have seen how to interpret three in terms of \(\mathbf{D}^{b}_{\mathrm{cons},F}\), namely \(f_{+}\), \(f^{!}\) and \(\widetilde{\otimes}_{\mathcal{O}_{X}}\). This raises the question of how to interpret \(f^{+}\), \(f_{!}\) and \(\mathbf{D}_{X}\) in the theory of constructible isocrystals. Via the usual relations amongst \(f_{!}\), \(f_{+}\), \(f^{!}\) and \(f^{+}\), this amounts to describing the duality functor \(\mathbf{D}_{X}\). This is a subtle question that I don't know how to answer at the moment. The main reason that I don't yet have a good answer is the following: if \(\mathfrak{P}\) is a smooth and proper formal \(\mathcal{V}\)-scheme, say, then the composite functor
\[\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\xleftarrow{\cong}\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\xrightarrow{\mathbf{D}_{\mathfrak{P}}}\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\xrightarrow{\cong}\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\]
_cannot_ be of a local nature on \(\mathfrak{P}_{K}\). Indeed, if I denote by
\[\mathbf{D}_{\mathfrak{P}}\colon\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\to\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\]
the above composition, then clearly \(\mathbf{D}_{\mathfrak{P}}(\mathcal{O}_{\mathfrak{P}_{K}})=\mathcal{O}_{\mathfrak{P}_{K}}[-2\dim\mathfrak{P}]\). On the other hand, if \(i\colon X\to P\) is a smooth closed subscheme, then similarly \(\mathbf{D}_{\mathfrak{P}}(i_{!}\mathcal{O}_{]X[_{\mathfrak{P}}})\cong i_{!}\mathcal{O}_{]X[_{\mathfrak{P}}}[-2\dim X]\). Since \(\mathcal{O}_{\mathfrak{P}_{K}}|_{]X[_{\mathfrak{P}}}=\mathcal{O}_{]X[_{\mathfrak{P}}}\), it follows that \(\mathbf{D}_{\mathfrak{P}}\) is not local on \(\mathfrak{P}_{K}\). For example, this rules out any construction of the form \(\mathbf{R}\underline{\mathrm{Hom}}_{\mathcal{A}}(-,\mathscr{K})\) where \(\mathcal{A}\) is a sheaf of rings on \(\mathfrak{P}_{K}\) and \(\mathscr{K}\) is a complex of \(\mathcal{A}\)-modules.

## 10. Rigid cohomology of varieties

I will finish this article with a comparison theorem between rigid and \(\mathscr{D}^{\dagger}\)-module cohomology, at least for varieties \(X\) which admit an immersion \(X\hookrightarrow\mathfrak{P}\) into a smooth and proper formal scheme.
In order to be able to state it properly, I need to extend (the shifted version of) Caro's functor
\[\mathrm{sp}_{X+}\colon\mathbf{Isoc}_{F}(X)\to\mathbf{D}^{b}_{\mathrm{hol},F}(X)\subset\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\]
from the case where \(X\) is smooth to the case of arbitrary \(X\). In fact, this was achieved in [1, §3.8], by combining descent with the commutation of \(\mathrm{sp}_{+}\) with pullback. Note that the functor that I have been denoting \(\mathrm{sp}_{+}\) is denoted \(\rho\) in [1]. I then define
\[\mathrm{sp}_{X!}:=\mathbf{D}_{X}\circ\mathrm{sp}_{X+}\circ(-)^{\vee}\colon\mathbf{Isoc}_{F}(X)\to\mathbf{DCon}_{F}(X)\subset\mathbf{DCon}_{F}(\mathfrak{P})\]
where the functor \((-)^{\vee}\) is simply the ordinary dual of locally free isocrystals.18 Again, by using the fact that \(\mathrm{sp}^{\dagger}\) is compatible with pullback, I can then remove the smoothness hypothesis on \(X\) appearing in Corollary 6.4.2.

Footnote 18: Note that, since I am only considering locally free isocrystals here, this functor coincides with that of the same name defined in [1] by [1, Theorem 7.1.1].

**10.0.1 Lemma**.: _Let \(\mathfrak{P}\) be a smooth formal scheme, \(i\colon X\to P\) a locally closed subscheme, with closure \(Y\), \(\mathscr{F}\in\mathbf{Isoc}_{F}(X,Y)\), and \(\mathcal{M}=\mathrm{sp}_{X!}\mathscr{F}\in\mathbf{DCon}_{F}(\mathfrak{P})\). Then \(\mathrm{sp}^{\dagger}\mathcal{M}\xrightarrow{\cong}i_{!}\mathscr{F}\in\mathbf{Isoc}_{\mathrm{cons},F}(\mathfrak{P})\)._

I can now state and prove my comparison result between rigid and \(\mathscr{D}^{\dagger}\)-module cohomology.

**10.0.2 Theorem**.: _Let \(f\colon X\to\mathrm{Spec}\,(k)\) be a strongly realisable \(k\)-variety,19 and \(\mathscr{F}\in\mathbf{Isoc}_{F}(X)\) a locally free isocrystal on \(X\) of Frobenius type.20 Then \(\mathrm{sp}^{\dagger}_{X}\) induces an isomorphism_
\[\mathbf{R}\Gamma_{\mathrm{rig}}(X,\mathscr{F})\cong f_{+}\mathrm{sp}_{X+}\mathscr{F}\]
_in \(\mathbf{D}^{b}(K)\)._

Footnote 19: Recall that this means \(X\) admits an immersion into a smooth and proper formal \(\mathcal{V}\)-scheme.

Footnote 20: See Definition 3.3.1.

Proof.: Choose an immersion \(i\colon X\hookrightarrow\mathfrak{P}\) of \(X\) into a smooth and proper formal scheme. Set \(d=\dim\mathfrak{P}\), and let \(u\colon\mathfrak{P}\to\mathrm{Spf}\,(\mathcal{V})\) denote the structure morphism. Note that the claimed result is _not_ an immediate consequence of Proposition 6.2.3, since the functor \(i_{!}\colon\mathbf{D}^{b}_{\mathrm{cons},F}(X)\hookrightarrow\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\) does not preserve rigid cohomology. Instead I will need to argue via duality.21

Footnote 21: Another explanation for why this is necessary is given in Remark 9.2.5.

To start with, a direct calculation shows that \(u^{!}\mathcal{O}_{\mathrm{Spf}(\mathcal{V})\mathbb{Q}}=\mathcal{O}_{\mathfrak{P}\mathbb{Q}}[d]\). Setting \(\mathcal{M}=\mathrm{sp}_{X+}\mathscr{F}\in\mathbf{D}^{b}_{\mathrm{hol},F}(X)\subset\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})\), then \(\mathrm{sp}^{\dagger}_{\mathfrak{P}}(\mathbf{D}_{X}\mathcal{M})=i_{!}\mathscr{F}^{\vee}\in\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})\) by Lemma 10.0.1.
I can therefore calculate
\[\begin{aligned}\mathbf{R}\Gamma_{\mathrm{rig}}(X,\mathscr{F})&=\mathbf{R}\mathrm{Hom}_{\mathbf{D}^{b}_{\mathrm{cons},F}(X)}(\mathscr{F}^{\vee},\mathcal{O}_{]X[_{\mathfrak{P}}})\\&=\mathbf{R}\mathrm{Hom}_{\mathbf{D}^{b}_{\mathrm{cons},F}(\mathfrak{P})}(i_{!}\mathscr{F}^{\vee},i_{!}i^{!}\mathcal{O}_{\mathfrak{P}_{K}})\\&=\mathbf{R}\mathrm{Hom}_{\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})}(\mathbf{D}_{X}\mathcal{M},\mathbf{R}\underline{\Gamma}^{\dagger}_{X}\mathcal{O}_{\mathfrak{P}\mathbb{Q}}[d])\\&=\mathbf{R}\mathrm{Hom}_{\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})}(\mathbf{D}_{X}\mathcal{M},\mathbf{R}\underline{\Gamma}^{\dagger}_{X}u^{!}\mathcal{O}_{\mathrm{Spf}(\mathcal{V})\mathbb{Q}})\end{aligned}\]
using Theorem 8.0.1. Now using the six operations formalism for overholonomic \(\mathscr{D}^{\dagger}\)-modules on varieties, I can rewrite this as
\[\begin{aligned}\mathbf{R}\mathrm{Hom}_{\mathbf{D}^{b}_{\mathrm{hol},F}(\mathfrak{P})}(\mathbf{D}_{X}\mathcal{M},\mathbf{R}\underline{\Gamma}^{\dagger}_{X}u^{!}\mathcal{O}_{\mathrm{Spf}(\mathcal{V})\mathbb{Q}})&=\mathbf{R}\mathrm{Hom}_{\mathbf{D}^{b}_{\mathrm{hol},F}(X)}(\mathbf{D}_{X}\mathcal{M},f^{!}\mathcal{O}^{\dagger}_{\mathrm{Spec}(k)})\\&=\mathbf{R}\mathrm{Hom}_{\mathbf{D}^{b}_{\mathrm{hol},F}(X)}(f^{+}\mathcal{O}^{\dagger}_{\mathrm{Spec}(k)},\mathcal{M})\\&=\mathbf{R}\mathrm{Hom}_{\mathbf{D}^{b}_{\mathrm{hol},F}(\mathrm{Spec}(k))}(\mathcal{O}^{\dagger}_{\mathrm{Spec}(k)},f_{+}\mathcal{M})\\&=f_{+}\mathcal{M}.\qed\end{aligned}\]

_10.0.3 Remark_.: The analogous calculation for a lisse \(\ell\)-adic sheaf \(\mathscr{F}\in\mathbf{Loc}(X_{\mathrm{\acute{e}t}},\mathbb{Q}_{\ell})\) would be that
\[\begin{aligned}\mathbf{R}\Gamma_{\mathrm{\acute{e}t}}(X_{\bar{k}},\mathscr{F})&=\mathbf{R}\mathrm{Hom}_{X_{\bar{k}}}(\mathscr{F}^{\vee},\mathbb{Q}_{\ell,X})\\&=\mathbf{R}\mathrm{Hom}_{X_{\bar{k}}}(\mathbf{D}_{X}(\mathbb{Q}_{\ell,X}),\mathbf{D}_{X}(\mathscr{F}^{\vee})),\end{aligned}\]
see Remark 9.2.5.
2308.16359
Recognition and constructive membership for purely hyperbolic groups acting on trees
We present an algorithm which takes as input a finite set $X$ of automorphisms of a simplicial tree, and outputs a generating set $X'$ of $\langle X \rangle$ such that either $\langle X \rangle$ is purely hyperbolic and $X'$ is a free basis of $\langle X \rangle$, or $X'$ contains a non-trivial elliptic element. As a special case, the algorithm decides whether a finitely generated group acting on a locally finite tree is discrete and free. This algorithm, which is based on Nielsen's reduction method, works by repeatedly applying Nielsen transformations to $X$ to minimise the generators of $X'$ with respect to a given pre-well-ordering. We use this algorithm to solve the constructive membership problem for finitely generated purely hyperbolic automorphism groups of trees. We provide a Magma implementation of these algorithms, and report its performance.
Ari Markowitz
2023-08-30T23:19:07Z
http://arxiv.org/abs/2308.16359v1
# Recognition and constructive membership for purely hyperbolic groups acting on trees

###### Abstract.

We present an algorithm which takes as input a finite set \(X\) of automorphisms of a simplicial tree, and outputs a generating set \(X^{\prime}\) of \(\langle X\rangle\) such that either \(\langle X\rangle\) is purely hyperbolic and \(X^{\prime}\) is a free basis of \(\langle X\rangle\), or \(X^{\prime}\) contains a non-trivial elliptic element. As a special case, the algorithm decides whether a finitely generated group acting on a locally finite tree is discrete and free. This algorithm, which is based on Nielsen's reduction method, works by repeatedly applying Nielsen transformations to \(X\) to minimise the generators of \(X^{\prime}\) with respect to a given pre-well-ordering. We use this algorithm to solve the constructive membership problem for finitely generated purely hyperbolic automorphism groups of trees. We provide a Magma implementation of these algorithms, and report its performance.

## 1. Introduction

There is a rich theory of groups acting on geometric spaces. Of interest are groups which are discrete and free; examples are Schottky subgroups of \(\mathrm{PSL}_{2}(\mathbb{R})\) and \(\mathrm{PSL}_{2}(\mathbb{C})\) acting on hyperbolic space [1, 14], and of \(\mathrm{PSL}_{2}(K)\) for a non-archimedean local field \(K\) acting on the Bruhat-Tits tree [11, 15]. For a topological group \(G\) acting on such a space, some natural problems are the following:

1. Decide whether \(G\) is discrete and free, and if so find a free basis for \(G\).
2. _The constructive membership problem._ Suppose \(G\) is a subgroup of a group \(H\). Given \(g\in H\), decide whether \(g\in G\), and if so write \(g\) as a word in a specified generating set of \(G\).

The first problem was solved for \(2\)-generator subgroups of \(\mathrm{PSL}_{2}(\mathbb{R})\) by Purzitsky [16]. The second problem was solved for discrete free \(2\)-generator subgroups of \(\mathrm{PSL}_{2}(\mathbb{R})\) by Eick, Kirschmer, and Leedham-Green [5], and was later generalised to discrete \(2\)-generator subgroups of \(\mathrm{PSL}_{2}(\mathbb{R})\) by Kirschmer and Ruther [10]. For larger numbers of generators, the problems remain open. Conder [4] solved (1) for \(2\)- and \(3\)-generator groups of automorphisms of locally finite trees, and conjectures an algorithm for every finite number of generators [4, Conjecture 2.3].

In this paper, we solve (1) for all finitely generated groups of automorphisms of locally finite trees. More generally, we provide an algorithm to decide whether a finitely generated group of automorphisms of a simplicial tree is purely hyperbolic (that is, every vertex has trivial stabiliser). We also solve (2) for finitely generated purely hyperbolic subgroups of automorphism groups of trees. The standard approach to these problems, as done in [4, 5, 7], uses the interactions between translation axes and translation lengths of the generators. This is difficult to generalise to higher numbers of generators, since the number of possible interactions grows rapidly. In Section 2 we prove the following theorem:

**Theorem A**.: _Let \(T\) be a simplicial tree. Let \(X\) be a finite subset of \(\mathrm{Aut}(T)\) generating a group \(G\)._
_There exists an algorithm that, given \(X\), outputs a basis \(X^{\prime}\) of \(G\) that either contains an elliptic element or is a free basis for \(G\)._

We use an analogue of Nielsen's method of finding a free basis for a finitely generated subgroup of a free group [12, Chapter I.2]. We define a _strongly N-reduced_ basis, and show that if \(G\) is purely hyperbolic (in particular, if \(T\) is locally finite and \(G\) is discrete and free), then a strongly N-reduced basis exists. This provides an algorithmic version of a theorem of Weidmann [18] which, while non-constructive, uses similar methods to show that such a basis exists. As a consequence, we obtain the following:

**Theorem B**.: _Let \(T\) be a locally finite simplicial tree. Let \(X\) be a finite subset of \(\operatorname{Aut}(T)\) generating a group \(G\). There exists an algorithm that, given \(X\), decides whether \(G\) is discrete and free._

In Section 3 we prove that if \(G\) is purely hyperbolic, then it has a unique strongly N-reduced basis. In Section 4 we associate to \(G\) a fundamental domain \(\Gamma(G)\), and provide an algorithm that takes as input a vertex \(v\) of \(T\) and outputs the unique \(g\in G\) such that \(gv\in\Gamma(G)\). With this we prove the following:

**Theorem C**.: _Every finitely generated purely hyperbolic subgroup of \(\operatorname{Aut}(T)\) has solvable constructive membership problem._

In Section 5, we discuss our implementation of these algorithms in Magma [2] for finitely generated subgroups of \(\operatorname{PGL}_{2}(K)\) acting on the Bruhat-Tits tree, where \(K\) is a \(p\)-adic field, and report its performance.

### Related work

The application of Nielsen reduction to trees is not new. Weidmann [18] uses similar methods to prove results relating to finitely generated groups acting on trees. In particular, he provides conditions under which a finite set of automorphisms of a tree generates an amalgamated product where each factor is either free or has a global fixed point [18, Theorem 2]. In another generalisation, Kapovich and Weidmann [9] prove that a finite generating set of a group \(G\) acting on a \(\delta\)-thin hyperbolic space is Nielsen equivalent to a generating set of \(G\) that is either a free basis for \(G\) or contains a generator of "small" translation length. Both results provide a non-constructive version of Theorem A as a special case. The method works by iteratively replacing a generator with an element of \(G\) that can be obtained via Nielsen transformations and is smaller with respect to a given pre-order, until no further replacements are possible. However, the methods in these papers are non-constructive, as they do not provide a bound on the word length of such an element. We show that if such a replacement exists, then one exists with word length \(2\). This allows us to obtain a practical algorithm to find a free basis, and to solve the constructive membership problem.

There is also a generalisation of Nielsen's methods to groups with a length function, as detailed by Hoare [8] and by Lyndon and Schupp [12, Chapter I.9]. Depending on the axioms chosen, \(\operatorname{Aut}(T)\) can be endowed with a length function. However, to our knowledge none of the results on length functions specialise to the results in this paper.

## 2. Nielsen reduction

For the remainder of this paper we fix the following notation:

* \(T\) is a simplicial tree with vertex set \(V(T)\).
We identify \(T\) with its geometric realisation, so that a point \(w\in T\) may be a vertex or lie on an edge.
* \(X\) is a finite subset of \(\operatorname{Aut}(T)\) generating a group \(G\) equipped with the compact-open topology [6].
* \(\dot{v}\) is a distinguished vertex of \(T\). It may be chosen arbitrarily, but once chosen it remains fixed.
* If \(u\) and \(w\) are vertices of \(T\), then \([u,w]\) is the unique path (without backtracking) from \(u\) to \(w\). We identify this path with the corresponding line segment in the geometric realisation of \(T\).
* Given a path \(p\) on \(T\) and \(C\subseteq T\), if \(p\cap C\) is the geometric realisation of a subpath \(q\) of \(p\), then we identify \(p\cap C\) with \(q\).
* If \(p\) is a path or a walk (that is, a path with possible backtracking) on \(T\), then \(|p|\) is the length of \(p\). For a walk, this is the number of terms in the corresponding sequence of edges.
* \(X^{-}=\{g^{-1}:g\in X\}\) and \(X^{\pm}=X\cup X^{-}\).
* Given \(g\in G\), define \(|g|=d(\dot{v},g\dot{v})\), where \(d\) is the graph distance on \(T\).
* Given \(g\in G\), the _translation length_ of \(g\) is \(l(g)=\min\{d(w,gw):w\in T\}\). The _minimum translation set_ of \(g\) is \(\operatorname{Min}(g)=\{w\in T:d(w,gw)=l(g)\}\).

Recall that \(g\) is _elliptic_ if \(l(g)=0\), and _hyperbolic_ if \(l(g)>0\). Elliptic elements are typically distinguished from _inversions_, which invert an edge [6]. By our definition, inversions are elliptic: if \(g\) inverts an edge \(e\), then \(l(g)=0\) and \(\operatorname{Min}(g)\) is the midpoint of \(e\). If all non-trivial elements of \(G\) are hyperbolic, then \(G\) is _purely hyperbolic_.

This section closely follows Nielsen's proof of his Subgroup Theorem, which states that a finitely generated subgroup of a free group is free [12, Chapter I.2]. In particular, Nielsen uses cancellation of words in a free group; we use cancellation of paths on a tree. The use of conditions N1, N2, and N3 (see Definition 2.5), and the pre-well-ordering defined on \(G\) (see Definition 2.18), remains essentially the same. In our case, an added complication is the possible existence of elliptic elements.

**Definition 2.1**.: If \(g,h\in G\), then \(\delta(g,h)=(|g|+|h|-|g^{-1}h|)/2\).

Note that \(\delta(g,h)=\delta(h,g)\), but \(\delta(g,h)\neq\delta(g^{-1},h^{-1})\) in general.

**Proposition 2.2**.: _For all \(g,h\in G\), \(\delta(g,h)=|[\dot{v},g\dot{v}]\cap[\dot{v},h\dot{v}]|\)._

Proof.: Let \(w\) be the vertex of \(T\) such that \([\dot{v},g\dot{v}]\cap[\dot{v},h\dot{v}]=[\dot{v},w]\). Now \([g\dot{v},h\dot{v}]=[g\dot{v},w]\cup[w,h\dot{v}]\); see Figure 1. We deduce that
\[\begin{aligned}\delta(g,h)&=(d(\dot{v},g\dot{v})+d(\dot{v},h\dot{v})-d(\dot{v},g^{-1}h\dot{v}))/2\\&=(d(\dot{v},g\dot{v})+d(\dot{v},h\dot{v})-d(g\dot{v},h\dot{v}))/2\\&=(d(\dot{v},g\dot{v})+d(\dot{v},h\dot{v})-d(w,g\dot{v})-d(w,h\dot{v}))/2\\&=d(\dot{v},w)\\&=|[\dot{v},g\dot{v}]\cap[\dot{v},h\dot{v}]|.\qed\end{aligned}\]

**Proposition 2.3**.: _For all \(g\in G\), \(l(g)=|g^{2}|-|g|=|g|-2\delta(g^{-1},g)\)._

Proof.: By [17, Chapter I.6.4, Propositions 23 and 24], the midpoint of \([\dot{v},g\dot{v}]\) lies in \(\operatorname{Min}(g)\). Let \(w=\operatorname{proj}_{\operatorname{Min}(g)}(\dot{v})\). Now,
\[\begin{aligned}d(\dot{v},g^{2}\dot{v})&=d(\dot{v},w)+d(w,g^{2}w)+d(g^{2}w,g^{2}\dot{v})\\&=d(\dot{v},w)+2d(w,gw)+d(g\dot{v},gw)\\&=d(\dot{v},g\dot{v})+l(g).\end{aligned}\]
This proves the first equality; the second follows from Definition 2.1.

Let \(g=a_{1}a_{2}\ldots a_{n}\) be a reduced word in \(X\).
Each \(a_{i}\) may be identified with the path \(p_{i}=[b_{i}\dot{v},b_{i}a_{i}\dot{v}]\), where \(b_{i}=a_{1}\ldots a_{i-1}\). We say that \(p_{i}\) is the _path of_ \(a_{i}\) in \(g\). Note that a walk from \(\dot{v}\) to \(g\dot{v}\) is formed by the concatenation of the \(p_{i}\), each of which is isometric to \([\dot{v},a_{i}\dot{v}]\). If a subpath \(q\) of \(p_{i}\) of nonzero length is disjoint from the interior of \([\dot{v},g\dot{v}]\), then it is _cancelled_ in \(g\). If \(q\) is contained in \([\dot{v},g\dot{v}]\), then it is _uncancelled_ in \(g\).

**Proposition 2.4**.: _If \(x,y\in X^{\pm}\), then \([\dot{v},x\dot{v}]\cup[x\dot{v},xy\dot{v}]=[\dot{v},xy\dot{v}]\cup p\), where \(p=[\dot{v},x\dot{v}]\cap[x\dot{v},xy\dot{v}]\) is a (possibly empty) path of length \(\delta(x^{-1},y)\)._

Proof.: By Proposition 2.2, \(\delta(x^{-1},y)=|[\dot{v},x^{-1}\dot{v}]\cap[\dot{v},y\dot{v}]|=|[\dot{v},x\dot{v}]\cap[x\dot{v},xy\dot{v}]|\). We see from Figure 2 that \([\dot{v},x\dot{v}]\cup[x\dot{v},xy\dot{v}]=[\dot{v},xy\dot{v}]\cup p\).

**Definition 2.5**.: Following the notation of [12, Chapter I.2], \(X\) is _Nielsen-reduced_ (or _N-reduced_ for short) if it has no non-trivial elliptic elements and the following are satisfied:

N1. \(X\cap X^{-}=\varnothing\).
N2. For all \(x,y\in X^{\pm}\), if \(x\neq y^{-1}\), then \(|xy|\geq\max\{|x|,|y|\}\).
N3. For all \(x,y,z\in X^{\pm}\), if \(x\neq y^{-1}\) and \(y\neq z^{-1}\), then \(|xyz|>|x|+|z|-|y|\).

**Remark 2.6**.: We use a stronger definition of N1 than [12, Chapter I.2]; that definition allows both \(x\) and \(x^{-1}\) to be in \(X\).

**Lemma 2.7**.: _Let \(x,y,z\in X^{\pm}\). Let \(p=[\dot{v},x\dot{v}]\cap[x\dot{v},xy\dot{v}]\) and \(q=[x\dot{v},xy\dot{v}]\cap[xy\dot{v},xyz\dot{v}]\). Let \(\Delta=|y|-\delta(x^{-1},y)-\delta(y^{-1},z)\). If \(\Delta>0\), then \(p\) and \(q\) do not intersect and \(d(p,q)=\Delta\). If \(\Delta\leq 0\), then \(p\cap q\) is a path of length \(-\Delta\)._

Figure 2. The path \(p\) is cancelled in \(xy\), while \([\dot{v},w]\) and \([w,xy\dot{v}]\) are uncancelled
**Remark 2.10**.: The conditions N2\({}^{\prime}\) and N3\({}^{\prime}\) can be interpreted geometrically: N2\({}^{\prime}\) states that at least half of the path of \(x\) (similarly \(y\)) must be uncancelled in \(xy\); N3\({}^{\prime}\) states that there must be a subpath of the path of \(y\) uncancelled in \(xyz\). **Definition 2.11**.: A _Nielsen transformation_ of \(X\) is a composition of the following operations: 1. Remove some \(g\in X\), where both \(g\) and \(g^{-1}\) are in \(X\). 2. Replace some \(g\in X\) with \(g^{-1}\). 3. Replace some \(g\in X\) by \(g^{\epsilon_{1}}h^{\epsilon_{2}}\) or \(h^{\epsilon_{2}}g^{\epsilon_{1}}\), where \(h\in X\smallsetminus\{g\}\) and \(\epsilon_{1},\epsilon_{2}\in\{1,-1\}\). This definition of a Nielsen transformation is equivalent to the usual definition (as seen in [12, Chapter I.2]), but is more useful for this paper. We sometimes refer to a replacement of \(g\) by \(h\) where \(g\in X^{\pm}\). This denotes either a replacement of \(g\) by \(h\) or of \(g^{-1}\) by \(h^{-1}\), depending on whether \(g\) or \(g^{-1}\) is in \(X\). Note that if \(X^{\prime}\) is obtained from \(X\) via a Nielsen transformation, then \(\langle X^{\prime}\rangle=\langle X\rangle\). The following lemma demonstrates a _local-to-global_ phenomenon that can be similarly observed in Nielsen's algorithm: A path cancelled in a word \(g\) in an N-reduced set is isometric to a path cancelled in a subword of length \(2\). This allows us to express \(|g|\) in terms of the lengths of cancellations between consecutive terms. **Lemma 2.12**.: _Suppose \(X\) is N-reduced. Let \(g=a_{1}a_{2}\dots a_{n}\) be a reduced word in \(X\). Define \(g_{i}=a_{1}\dots a_{i}\)._ 1. _If_ \(n\geq 2\)_, then_ \([\overset{\cdot}{v},g_{n-2}\overset{\cdot}{v}]\cap[g_{n-1}\overset{\cdot}{v}, g\overset{\cdot}{v}]=\varnothing\)_._ 2. _If_ \(n\geq 2\)_, then_ \([\overset{\cdot}{v},g_{n-1}\overset{\cdot}{v}]\cap[g_{n-1}\overset{\cdot}{v},g\overset{\cdot}{v}]=[g_{n-2}\overset{\cdot}{v},g_{n-1}\overset{\cdot}{v}] \cap[g_{n-1}\overset{\cdot}{v},g\overset{\cdot}{v}]\)_, and in particular_ \(\delta(g_{n-1}^{-1},a_{n})=\delta(a_{n-1}^{-1},a_{n})\)_._ 3. \(|g|=\sum\limits_{i=1}^{n}|a_{i}|-2\sum\limits_{i=1}^{n-1}\delta(a_{i}^{-1},a_{ i+1})\)_._ Proof.: We proceed by induction on \(n\). The cases \(n\leq 2\) are trivial. Suppose \(n>2\), as shown in Figure 4. By (2) and N3\({}^{\prime}\), \[|a_{n-1}|-\delta(g_{n-2}^{-1},a_{n-1})-\delta(a_{n-1}^{-1},a_{n})=|a_{n-1}|- \delta(a_{n-2}^{-1},a_{n-1})-\delta(a_{n-1}^{-1},a_{n})>0.\] By Lemma 2.7, \[[\overset{\cdot}{v},a_{n-2}\overset{\cdot}{v}]\cap[a_{n-2}a_{n-1}\overset{ \cdot}{v},a_{n-2}a_{n-1}a_{n}\overset{\cdot}{v}]=\varnothing.\] If \(n=3\), then this proves (1). If \(n>3\), then by the induction hypothesis on (2), \[[\overset{\cdot}{v},g_{n-2}\overset{\cdot}{v}]\cap[g_{n-2} \overset{\cdot}{v},g_{n-1}\overset{\cdot}{v}]\cap[g_{n-1}\overset{\cdot}{v}, g\overset{\cdot}{v}]\] \[=[g_{n-3}\overset{\cdot}{v},g_{n-2}\overset{\cdot}{v}]\cap[g_{n- 2}\overset{\cdot}{v},g_{n-1}\overset{\cdot}{v}]\cap[g_{n-1}\overset{\cdot}{v},g\overset{\cdot}{v}]\] \[=\varnothing.\] We therefore have the situation shown in Figure 4, proving (1) and (2). By Definition 2.1, \[|g|=|g_{n-1}|+|a_{n}|-2\delta(g_{n-1}^{-1},a_{n})=|g_{n-1}|+|a_{n}|-2\delta(a_{ n-1}^{-1},a_{n}),\] from which (3) follows by induction. From this we obtain a key result: **Proposition 2.13**.: _If \(X\) is N-reduced, then \(G\) is purely hyperbolic, and \(X\) is a free basis for \(G\)._ Figure 4. 
Proof.: Let \(g=a_{1}\dots a_{n}\) be a non-trivial reduced word in \(X\). Since \(X\) contains no elliptic elements, we may assume that \(n>1\). Define \(a_{0}=a_{n}\) and \(a_{n+1}=a_{1}\). By Lemma 2.12 (3) and N3′,
\[\begin{aligned}|g^{2}|-|g|&=\sum_{i=1}^{n}|a_{i}|-2\sum_{i=1}^{n-1}\delta(a_{i}^{-1},a_{i+1})-2\delta(a_{n}^{-1},a_{1})\\&=\sum_{i=1}^{n}\big{(}|a_{i}|-\delta(a_{i-1}^{-1},a_{i})-\delta(a_{i}^{-1},a_{i+1})\big{)}\\&>0.\end{aligned}\]
By Proposition 2.3, \(g\) is hyperbolic and therefore represents a non-trivial element of \(G\). By [12, Chapter I, Proposition 1.9], \(X\) is a free basis for \(G\).

In the remainder of this section we describe and prove the correctness of the reduction algorithm that obtains an N-reduced set. A straightforward approach to such an algorithm is to apply Nielsen transformations to \(X\), replacing \(x\) with \(xy\) such that \(|xy|<|x|\) when possible. However, a "tie-breaker" is needed when \(|x|=|xy|\), in particular when \(X\) satisfies N2 but violates N3. To this end, we construct a pre-order on \(G\) such that for every word \(xyz\) in \(X^{\pm}\) in which the path of \(y\) is fully cancelled, either the replacement of \(x\) by \(xy\) or of \(z\) by \(yz\) will reduce the respective generator under this pre-order.

**Definition 2.14**.: If \(g\in G\), then the _initial half_ of \(g\) is the subpath \(H(g)\) of \([\dot{v},g\dot{v}]\) with initial vertex \(\dot{v}\) of length \(\lfloor|g|/2\rfloor\).

**Lemma 2.15**.: _Let \(x,y\in X\). If \(\delta(x^{-1},y)\leq\min\{|x|/2,|y|/2\}\) and \(|xy|=|x|\), then \(H(x)=H(xy)\)._

Proof.: Let \(w\) be a vertex of \(T\) such that \([w,x\dot{v}]=[\dot{v},x\dot{v}]\cap[x\dot{v},xy\dot{v}]\), as in Figure 5. Then
\[d(\dot{v},w)=|x|-\delta(x^{-1},y)\geq|x|/2=|xy|/2,\]
so \(H(x)=H(xy)\subseteq[\dot{v},w]\).

Let \(P\) be the set of paths in \(T\) with initial vertex \(\dot{v}\), let \(P_{0}=\{(\dot{v})\}\), and for each \(i\in\mathbb{N}\) let \(P_{i}\) be the set of paths in \(P\) of length \(i\). We choose a lexicographic well-ordering \(<\) on \(P\), via the following construction: Let \(<_{0}\) be the trivial ordering on \(P_{0}\). Let \(v_{1}=w_{1}=\dot{v}\). For each \(i\in\mathbb{N}\), choose a well-ordering \(<_{i}\) on \(P_{i}\) such that \((v_{1},\dots,v_{i+1})<_{i}(w_{1},\dots,w_{i+1})\) if \((v_{1},\dots,v_{j+1})<_{j}(w_{1},\dots,w_{j+1})\) for some \(j<i\).

**Definition 2.16**.: For \(p,q\in P\), define \(p<q\) if either \(|p|<|q|\), or \(|p|=|q|\) and \(p<_{|p|}q\).

Figure 5. The initial half of \(x\) and \(xy\)
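To make Definition 2.16 concrete, the following Python sketch compares two paths from \(\dot{v}\), given a fixed well-ordering of the edges at each vertex; as Remark 2.17 below explains, such local edge orderings suffice to determine the orderings \(<_{i}\). The representation of paths as tuples of vertices and the function `edge_rank` are illustrative assumptions, not part of the paper's Magma implementation.

```python
def compare_paths(p, q, edge_rank):
    """Compare paths p and q (tuples of vertices, both starting at the
    distinguished vertex) under the ordering of Definition 2.16.

    edge_rank(u, w) is assumed to return the position of the edge (u, w)
    in a chosen well-ordering of the edges incident to u.
    Returns -1 if p < q, 1 if q < p, and 0 if p == q.
    """
    # Shorter paths come first.
    if len(p) != len(q):
        return -1 if len(p) < len(q) else 1
    # Equal length: compare at the first vertex where the paths diverge,
    # using the edge ordering at the last common vertex (p[i-1] == q[i-1]).
    for i in range(1, len(p)):
        if p[i] != q[i]:
            r_p = edge_rank(p[i - 1], p[i])
            r_q = edge_rank(q[i - 1], q[i])
            return -1 if r_p < r_q else 1
    return 0
```

Comparing the pairs \(\{H(g),H(g^{-1})\}\) and \(\{H(h),H(h^{-1})\}\) as in Definition 2.18 below then reduces to a small number of calls to this function.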
* Define a well-ordering \(\prec\) on \(\overline{P}\) such that \(A\prec B\) if and only if \[\min_{<}\left((A\cup B)\smallsetminus(A\cap B)\right)\in A.\]
* Define a partial order \(\prec\) on \(G\) such that \(g\prec h\) if and only if \(|g|=|h|\) and \(\{H(g),H(g^{-1})\}\prec\{H(h),H(h^{-1})\}\).
* Define a pre-order \(\leq^{\star}\) on \(G\) such that \(g\leq^{\star}h\) if and only if \(\{H(g),H(g^{-1})\}\preceq\{H(h),H(h^{-1})\}\).

Note that \(g<^{\star}h\) if and only if either \(|g|<|h|\) or \(g\prec h\). Also, \(g\) and \(h\) are in the same connected component of \(\leq^{\star}\) if and only if \(\{H(g),H(g^{-1})\}=\{H(h),H(h^{-1})\}\). Hence the induced ordering on the connected components of \(\leq^{\star}\) is isomorphic to the ordering \(\prec\) on \(\overline{P}\). Therefore \(\leq^{\star}\) is a pre-well-ordering of \(G\).

**Lemma 2.19**.: _Suppose \(X\) satisfies \(\mathrm{N2}\). If there exist \(x,y,z\in X^{\pm}\) such that \(y\notin\{x^{-1},z^{-1}\}\) and \(|xyz|\leq|x|+|z|-|y|\), then \(\delta(x^{-1},y)=\delta(y^{-1},z)=|y|/2\)._

Proof.: Since \(X\) violates \(\mathrm{N3}\), it violates \(\mathrm{N3}^{\prime}\); in particular, \(\delta(x^{-1},y)+\delta(y^{-1},z)\geq|y|\). In addition, by \(\mathrm{N2}^{\prime}\), \(\max\{\delta(x^{-1},y),\delta(y^{-1},z)\}\leq|y|/2\). Therefore \(\delta(x^{-1},y)=\delta(y^{-1},z)=|y|/2\).

Algorithm 1 takes as input \(X\), and outputs a generating set \(X^{\prime}\) of \(G\) that is either N-reduced or contains a non-trivial elliptic element; it thereby establishes Theorem A.

```
Data: Finite subset \(X\) of \(\operatorname{Aut}(T)\)
Output: \((\mathit{flag},X^{\prime})\) where \(\langle X^{\prime}\rangle=\langle X\rangle=G\); \(\mathit{flag}=\texttt{True}\) if \(G\) is purely hyperbolic and \(X^{\prime}\) is a free basis of \(G\), and \(\mathit{flag}=\texttt{False}\) if \(X^{\prime}\) contains a non-trivial elliptic element

\(X^{\prime}\gets X\)
loop
    if there exists a non-trivial elliptic \(x\in X^{\prime}\) then
        return \((\texttt{False},X^{\prime})\)
    else if there exists \(x\in X^{\prime}\cap X^{\prime-1}\) then
        \(X^{\prime}\gets X^{\prime}\smallsetminus\{x\}\)
    else if there exist \(x,y\in X^{\prime\pm}\) such that \(x\neq y^{-1}\) and \(xy<^{\star}x\) then
        \(X^{\prime}\gets X^{\prime}\smallsetminus\{x,x^{-1}\}\cup\{xy\}\)
    else
        return \((\texttt{True},X^{\prime})\)
    end if
end loop
```

**Algorithm 1** Decide whether a finitely generated subgroup of \(\operatorname{Aut}(T)\) is purely hyperbolic
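The loop above translates directly into code. Here is a minimal Python sketch, assuming callbacks `mul`, `inv`, `is_identity`, `is_elliptic` and the pre-order comparison `star_less` (for \(<^{\star}\), Definition 2.18) are supplied by the ambient implementation; the paper's actual implementation, in Magma, is described in Section 5.

```python
from itertools import product

def nielsen_reduce(X, mul, inv, is_identity, is_elliptic, star_less):
    """Sketch of Algorithm 1: returns (flag, Xp) with <Xp> = <X>.

    star_less(a, b) is an assumed callback deciding a <* b; the other
    callbacks implement the group operations and the elliptic test.
    """
    Xp = list(X)
    while True:
        # Case: a non-trivial elliptic element was found.
        if any(is_elliptic(x) and not is_identity(x) for x in Xp):
            return (False, Xp)
        # N1: if both x and x^{-1} occur (as distinct entries), remove one copy.
        redundant = next((x for x in Xp
                          if any(y is not x and is_identity(mul(x, y)) for y in Xp)),
                         None)
        if redundant is not None:
            Xp.remove(redundant)
            continue
        # N2/N4: replace x by xy whenever xy <* x.
        sym = [g for x in Xp for g in (x, inv(x))]
        replaced = False
        for x, y in product(sym, repeat=2):
            if is_identity(mul(x, y)):      # skip y = x^{-1}
                continue
            xy = mul(x, y)
            if star_less(xy, x):
                # drop {x, x^{-1}} from the generating set, insert xy
                Xp = [g for g in Xp
                      if not (is_identity(mul(g, inv(x))) or is_identity(mul(g, x)))]
                Xp.append(xy)
                replaced = True
                break
        if not replaced:
            return (True, Xp)
```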
Proof of correctness of Algorithm 1.: Let \(X_{i}\) be the value of \(X^{\prime}\) during the \(i\)-th iteration of the loop. If \(X_{i}\) has a non-trivial elliptic element, then we terminate. Suppose \(X_{i}\) has no non-trivial elliptic elements.

1. If \(X_{i}\) violates N1, then there is some \(x\in X_{i}\cap X_{i}^{-1}\). In this case, we assign \(X_{i+1}=X_{i}\smallsetminus\{x\}\).
2. If \(X_{i}\) satisfies N1 and violates N2, then there are \(x,y\in X_{i}^{\pm}\) with \(x\neq y^{-1}\) such that \(|xy|<|x|\) (as in Figure 6). Since \(x\) is hyperbolic, by Proposition 2.3 \(x\neq y\), so we replace \(x\) by \(xy\).
3. Suppose \(X_{i}\) satisfies N1 and N2, and violates N3. Let \(x,y,z\in X_{i}^{\pm}\) be such that \(y\notin\{x^{-1},z^{-1}\}\) and \(|xyz|\leq|x|+|z|-|y|\). Define \[p=[\hat{v},y\hat{v}]\cap[\hat{v},x^{-1}\hat{v}],\qquad q=[\hat{v},y^{-1}\hat{v}]\cap[\hat{v},z\hat{v}],\] so that \(|p|=\delta(x^{-1},y)\) and \(|q|=\delta(y^{-1},z)\), as in Figure 7. Note that the right diagram is isometric to the left diagram via multiplication by \(y^{-1}\), and contains an isometric copy of Figure 5. By Lemma 2.19 and N2\({}^{\prime}\), \[|p|=|q|=|y|/2\leq\min\left\{\left\lfloor\frac{|x|}{2}\right\rfloor,\left\lfloor\frac{|z|}{2}\right\rfloor\right\}.\] From Figure 7 we see that \(|xy|=|x|\) and \(|yz|=|z|\). Hence \(xp=[\hat{v},x\hat{v}]\cap[x\hat{v},xy\hat{v}]\) and \(xyq=[x\hat{v},xy\hat{v}]\cap[xy\hat{v},xyz\hat{v}]\) intersect at a unique point \(w\). Suppose \(y=x\). Then \(|p|=|x|/2\). By Proposition 2.3, \(|x^{2}|=2|x|-2|p|=|x|\), hence \(x\) is elliptic. Similarly, if \(y=z\), then \(z\) is elliptic. If \(p=q\), then \(x^{-1}w=(xy)^{-1}w\), so \(y\) is elliptic. Thus we may assume that \(y\notin\{x,z\}\) and \(p\neq q\). By Lemma 2.15, \(H(x)=H(xy)\) and \(H(z^{-1})=H((yz)^{-1})\). Note also that \(p\) is an initial segment of both \(H(x^{-1})\) and \(H(yz)\), and \(q\) is an initial segment of both \(H((xy)^{-1})\) and \(H(z)\). If \(q<p\), then \(H((xy)^{-1})<H(x^{-1})\) so \(xy\prec x\), and we replace \(x\) by \(xy\), as in Figure 8. Similarly, if \(p<q\), then \(H(yz)<H(z)\) and \(yz\prec z\), in which case we replace \(z\) by \(yz\).

A transformation performed in case 1, 2 or 3 is denoted _type_ 1, 2 or 3 respectively. Case 1 reduces \(|X_{i}|\), so only finitely many transformations are type 1. Cases 2 and 3 reduce some element of \(X_{i}\) with respect to \(<^{\star}\), and do not increase \(|X_{i}|\). Therefore each generator may be replaced only finitely many times, so finitely many transformations can be types 2 and 3. By Proposition 2.13, we either find a non-trivial elliptic element of \(G\), or a generating set \(X^{\prime}\) of \(G\) that is N-reduced and therefore a free basis for \(G\).

Figure 6. A type 2 transformation \(x\mapsto x^{\prime}\)

Figure 7. The paths \(p\) and \(q\)

To prove Theorem B we use a generalisation of Ihara's theorem [17, Chapter II.1.5, Theorem 4]. This proves [4, Theorem 2.5] independently of [4, Conjecture 2.3].

**Proposition 2.20**.: _Let \(G\) be a subgroup of \(\operatorname{Aut}(T)\)._

1. _If_ \(T\) _is locally finite and_ \(G\) _is discrete and free, then_ \(G\) _is purely hyperbolic._
2. _If_ \(G\) _is purely hyperbolic, then_ \(G\) _is discrete and free._

Proof.: (1) is proved in [3, Proposition 5.1]. To prove (2): Freeness is given by the Bass-Serre Theorem [17, Chapter I.3.3, Theorem 4]. Let \((g_{i})_{i\in\mathbb{N}}\) be a sequence of elements of \(G\) converging to \(1\). Since the stabiliser of a vertex is an open neighbourhood of \(1\), all but finitely many \(g_{i}\) are trivial, hence \(G\) is discrete.

**Remark 2.21**.: Here is a counterexample to the converse of Proposition 2.20 (2), showing the need for local finiteness in (1): Suppose \(T\) is regular and \(\hat{v}\) has neighbourhood set \(\{v_{i}:i\in\mathbb{Z}\}\). Let \(g\in\operatorname{Aut}(T)\) be such that \(g\hat{v}=\hat{v}\) and \(gv_{i}=v_{i+1}\) for all \(i\in\mathbb{Z}\). Now \(g\) is elliptic, but \(\langle g\rangle\) is discrete and free.

Proof of Theorem B.: Suppose \(T\) is locally finite. Then \(G\) is discrete and free if and only if \(G\) is purely hyperbolic, so Algorithm 1 decides whether \(G\) is discrete and free.

## 3. Strong N-reduction

For the rest of the paper we fix a choice of well-orderings \((<_{i})_{i\in\mathbb{N}}\), and therefore the orderings in Definitions 2.16 and 2.18.
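Following the implementation note in Remark 2.17, the well-ordering \(<\) of Definition 2.16 can be realised from a choice of well-ordering of the edges at each vertex. A minimal Python sketch; the callback `edge_rank` is an assumption, standing in for the chosen per-vertex edge orderings:

```python
def path_less(p, q, edge_rank):
    """Decide p < q for paths given as vertex tuples starting at the
    base vertex, following Definition 2.16: shorter paths come first,
    and paths of equal length are compared at the first vertex where
    they diverge, using the supplied ordering of incident edges.

    edge_rank(u, v) is an assumed callback returning a sortable key
    for the edge from u to v, as suggested by Remark 2.17.
    """
    if len(p) != len(q):
        return len(p) < len(q)
    for j in range(1, len(p)):
        if p[j] != q[j]:
            return edge_rank(p[j - 1], p[j]) < edge_rank(q[j - 1], q[j])
    return False  # equal paths: not strictly less
```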
**Definition 3.1**.: A free basis \(X\) for \(G\) is _strongly N-reduced_ (with respect to \(\hat{v}\) and \(\leq^{\star}\)) if \(X\) satisfies N1 and:

(N4) For all \(x,y\in X^{\pm}\), if \(x\neq y^{-1}\), then \(x<^{\star}xy\).

We could equivalently write \(y<^{\star}xy\), since \(y<^{\star}xy\) if and only if \(y^{-1}<^{\star}y^{-1}x^{-1}\). Note that a free basis returned by Algorithm 1 is strongly N-reduced. In particular, if \(X\) is strongly N-reduced with respect to \(\hat{v}\) and \(\leq^{\star}\), then it is N-reduced with respect to \(\hat{v}\), since Algorithm 1 finds an N-reduced basis.

The following lemmas show that a word in a strongly N-reduced set is bounded from below (with respect to \(\leq^{\star}\)) by its subwords.

**Lemma 3.2**.: _Suppose \(X\) is strongly N-reduced. Let \(g=a_{1}\ldots a_{n}\) be a reduced word in \(X\). For all \(1\leq i\leq j\leq n\), \(|g|\geq|a_{i}\ldots a_{j}|\)._

Proof.: The case \(n=1\) is trivial. We may assume \(i=1\) and \(j=n-1\): Applying this case to \(g^{-1}=a_{n}^{-1}\ldots a_{1}^{-1}\) proves the case \(i=2\) and \(j=n\), from which the rest follows by transitivity. Define \(h=a_{1}\dots a_{n-1}\). By Lemma 2.12 (2) and N2\({}^{\prime}\),

\[|g|=|h|+|a_{n}|-2\delta(h^{-1},a_{n})=|h|+|a_{n}|-2\delta(a_{n-1}^{-1},a_{n})\geq|h|.\qed\]

Figure 8. A type 3 transformation \(x\mapsto x^{\prime}\)

**Lemma 3.3**.: _Suppose \(X\) is strongly N-reduced. Let \(g=a_{1}\ldots a_{n}\) be a reduced word in \(X\). For all \(1\leq i<n\), \(H(a_{1}\dots a_{i})\subseteq H(g)\)._

Proof.: By transitivity, we assume \(i=n-1\). Let \(h=a_{1}\dots a_{n-1}\). By Lemma 2.12 (2),

\[|[\hat{v},h\hat{v}]\cap[\hat{v},g\hat{v}]|=|h|-\delta(h^{-1},a_{n})=|h|-\delta(a_{n-1}^{-1},a_{n}).\]

By N2\({}^{\prime}\) and Lemma 3.2,

\[|h|-\delta(a_{n-1}^{-1},a_{n})\geq|h|-\frac{|a_{n-1}|}{2}\geq\frac{|h|}{2}.\]

Thus \(H(h)\subseteq[\hat{v},h\hat{v}]\cap[\hat{v},g\hat{v}]\subseteq[\hat{v},g\hat{v}]\). Since \(|H(h)|\leq|H(g)|\), we conclude that \(H(h)\subseteq H(g)\).

**Lemma 3.4**.: _Suppose \(X\) is strongly N-reduced. Let \(g=a_{1}\dots a_{n}\) be a reduced word in \(X\). Let \(h=a_{1}\dots a_{i}\) for some \(1\leq i<n\). Suppose \(|h|=|g|=s\)._

1. _For all_ \(i\leq j\leq n\)_,_ \(|a_{1}\dots a_{j}|=s\)_._
2. \(|a_{n}|\leq|a_{n-1}|\)_, with strict inequality when_ \(i<n-1\)_._
3. _For all_ \(i<j\leq n\)_,_ \(h\prec a_{1}\dots a_{j}\preceq g\)_._

Proof.: (1) follows directly from Lemma 3.2. We now prove (2). By Lemma 2.12 (3),

\[0=|a_{1}\dots a_{n}|-|a_{1}\dots a_{n-1}|=|a_{n}|-2\delta(a_{n-1}^{-1},a_{n}),\]

so \(\delta(a_{n-1}^{-1},a_{n})=|a_{n}|/2\). By N2\({}^{\prime}\), \(|a_{n}|\leq|a_{n-1}|\). If \(i<n-1\) then, by similar reasoning,

\[\delta(a_{n-2}^{-1},a_{n-1})=|a_{n-1}|/2\geq\delta(a_{n-1}^{-1},a_{n}).\]

By N3\({}^{\prime}\),

\[|a_{n}|=2\delta(a_{n-1}^{-1},a_{n})\leq\delta(a_{n-2}^{-1},a_{n-1})+\delta(a_{n-1}^{-1},a_{n})<|a_{n-1}|.\]

Lastly we prove (3). By transitivity, we may assume \(i=n-1\) and \(j=n\). Let \(w\in V(T)\) be such that \([\hat{v},w]=[\hat{v},h\hat{v}]\cap[\hat{v},g\hat{v}]\), as in Figure 9. By Lemma 2.12 (2) and N2\({}^{\prime}\),

\[d(\hat{v},w)=|h|-\delta(h^{-1},a_{n})=|h|-\delta(a_{n-1}^{-1},a_{n})\geq\frac{|h|}{2},\]

so \(H(h)=H(g)\). We now show that \(H(h^{-1})<H(g^{-1})\). By Lemma 3.3, \(H(a_{n-1}^{-1})\subseteq H(h^{-1})\) and \(H(a_{n}^{-1})\subseteq H(g^{-1})\).
Additionally, by Definition 2.1,

\[|a_{n-1}a_{n}|=|a_{n-1}|+|a_{n}|-2\delta(a_{n-1}^{-1},a_{n})=|a_{n-1}|.\]

By Lemma 2.15, \(H(a_{n-1})=H(a_{n-1}a_{n})\). By N4, \(a_{n-1}\prec a_{n-1}a_{n}\). Thus \(H(a_{n-1}^{-1})<H((a_{n-1}a_{n})^{-1})\). By Lemma 3.3, \(H((a_{n-1}a_{n})^{-1})\subseteq H(g^{-1})\), hence \(H(h^{-1})<H(g^{-1})\). Therefore \(h\prec g\).

Figure 9. The construction for Lemma 3.4

**Theorem 3.5**.: _Let \(X\) and \(Y\) be finite generating sets of a purely hyperbolic subgroup of \(\operatorname{Aut}(T)\). If \(X\) and \(Y\) are strongly N-reduced (with respect to \(\hat{v}\) and \(<^{\star}\)), then \(X^{\pm}=Y^{\pm}\)._

Proof.: We prove this by induction on \(n=\max\{|g|:g\in X\cup Y\}\). Define

\[X_{<n}=\{g\in X:|g|<n\},\qquad X_{n}=\{g\in X:|g|=n\},\]

and define \(Y_{<n}\) and \(Y_{n}\) similarly. By Lemma 3.2, given some \(y\in Y_{<n}\), the reduced word for \(y\) in \(X\) cannot contain an element of \(X_{n}^{\pm}\), and hence is a word in \(X_{<n}\). Similarly, every \(x\in X_{<n}\) can be written as a reduced word in \(Y_{<n}\). It follows that \(\langle X_{<n}\rangle=\langle Y_{<n}\rangle\). Note also that \(X_{<n}\) and \(Y_{<n}\) are strongly N-reduced. If \(n=1\), then \(X_{<n}=Y_{<n}=\varnothing\). Otherwise, by the induction hypothesis \(X_{<n}^{\pm}=Y_{<n}^{\pm}\).

Let \(x\in X_{n}\) have reduced word \(y_{1}\ldots y_{m}\) in \(Y\). Since \(x\) is not represented by a word in \(Y_{<n}\), there must be some \(y_{j}\in Y_{n}^{\pm}\). Suppose for a contradiction that \(y_{k}\in X_{<n}^{\pm}\) for some \(k>j\). By Lemma 3.2, \(|y_{j}\ldots y_{k}\ldots y_{m}|=n\), hence it follows from Lemma 3.4 (2) that \(y_{m}\in X_{<n}^{\pm}\). But Lemma 3.4 (3) implies that \(xy_{m}^{-1}\prec x\), contradicting N4. Applying the same argument to \(x^{-1}\), we find that \(y_{k}\in Y_{n}^{\pm}\) for all \(1\leq k\leq m\). By applying Lemma 3.4 (2) again to \(x\), we conclude that \(m\leq 2\).

Suppose \(m=2\). Write \(x=yy^{\prime}\), where \(y,y^{\prime}\in Y_{n}^{\pm}\). Since the words of \(y\) and \(y^{\prime}\) in \(X_{n}\) have length at most \(2\), and \(yy^{\prime}\) reduces to an element of \(X_{n}^{\pm}\), we may assume (up to inverting \(x\)) that \(y=x^{\prime}\) and \(y^{\prime}=x^{\prime-1}x\) for some \(x^{\prime}\in X_{n}^{\pm}\). By N4 applied in \(X\), \(x\prec x^{\prime-1}x=y^{\prime}\); but N4 applied in \(Y\) gives \(y^{\prime}<^{\star}yy^{\prime}=x\), which is a contradiction. Therefore \(m=1\) and \(x\in Y^{\pm}\). By symmetry, \(X^{\pm}=Y^{\pm}\).

As a consequence, we can decide equality of purely hyperbolic subgroups of \(\operatorname{Aut}(T)\).

**Corollary 3.6**.: _Let \(X\) and \(Y\) be finite subsets of \(\operatorname{Aut}(T)\) generating purely hyperbolic subgroups \(G\) and \(H\) respectively. There exists an algorithm to decide whether \(G=H\)._

Proof.: We use Algorithm 1 to find strongly N-reduced free bases \(X^{\prime}\) and \(Y^{\prime}\) for \(G\) and \(H\) respectively. By Theorem 3.5, \(G=H\) if and only if \(X^{\prime\pm}=Y^{\prime\pm}\).

## 4. The constructive membership problem

In this section we present a solution to the constructive membership problem for finitely generated purely hyperbolic subgroups of \(\operatorname{Aut}(T)\), by taking advantage of the properties of strongly N-reduced bases. Recall that we have fixed a tree \(T\), a vertex \(\hat{v}\), a lexicographic well-ordering \(<\) on the set of paths on \(T\) starting from \(\hat{v}\) as in Definition 2.16, and a finite subset \(X\) of \(\operatorname{Aut}(T)\) generating a group \(G\).

**Definition 4.1**.: Let \(g\in G\) be hyperbolic.
The _translation axis_ of \(g\) is \(\gamma(g)=\operatorname{Min}(g)\). Since \(g\) acts on \(\gamma(g)\) by translation, there is a natural direction associated with the action of \(g\) on \(\gamma(g)\) such that \(g\) and \(g^{-1}\) translate in opposite directions. Let \(x\) and \(y\) be distinct elements of \(\gamma(g)\). If \(gy\) and \(x\) lie in the same connected component of \(\gamma(g)\smallsetminus\{y\}\), then \(g\) translates \(y\) _towards_ \(x\). Otherwise, \(g\) translates \(y\) _away from_ \(x\).

The Ping-Pong Lemma is a well-known tool for proving the freeness of a group. We use the following incarnation [4, Lemma 3.1]:

**Lemma 4.2** (The Ping Pong Lemma).: _Let \(X\) be a finite set of hyperbolic automorphisms of \(T\). Suppose that for each \(g\in X\) there is an open segment \(P_{g}\subseteq\gamma(g)\) of length \(l(g)\) such that_

\[\bigcup_{h\in X\smallsetminus\{g\}}\operatorname{proj}_{\gamma(g)}(\gamma(h))\subseteq P_{g}.\]

_Then the group \(G\) generated by \(X\) is free and \(X\) is a free basis for \(G\)._

Given some hyperbolic \(g\in\operatorname{Aut}(T)\), let \(U_{g}^{0}\) be the half-open interval \((u_{g}^{-},u_{g}^{+}]\) of \(\gamma(g)\) with radius \(l(g)/2\) centred at \(\operatorname{proj}_{\gamma(g)}(\hat{v})\), such that \([\hat{v},u_{g}^{+}]<[\hat{v},u_{g}^{-}]\). Let \(U_{g}^{-}\) and \(U_{g}^{+}\) be the components of \(\gamma(g)\smallsetminus U_{g}^{0}\) such that \(u_{g}^{-}\in U_{g}^{-}\). Define \(U_{g}^{\pm}=U_{g}^{+}\cup U_{g}^{-}\). Define \(U_{g^{-1}}^{0}=U_{g}^{0}\), \(u_{g^{-1}}^{+}=u_{g}^{+}\), and other values similarly. Note that \(U_{g}^{+}\), \(U_{g}^{-}\), \(u_{g}^{+}\), and \(u_{g}^{-}\) are independent of the direction along which \(g\) translates \(\gamma(g)\).

**Lemma 4.3**.: _Let \(g\in\operatorname{Aut}(T)\) be hyperbolic._

1. _There is a unique_ \(u_{g}\in\{u_{g}^{+},u_{g}^{-}\}\cap[\hat{v},g\hat{v}]\)_. Furthermore,_ \(|g|=2d(\hat{v},u_{g})\)_._
2. \([\hat{v},g\hat{v}]\cap U_{g}^{\pm}\neq\varnothing\)_._

Proof.: Let \(a=\operatorname{proj}_{\gamma(g)}(\hat{v})\), as in Figure 10. We may write \([\hat{v},g\hat{v}]=[\hat{v},a]\cup[a,ga]\cup[ga,g\hat{v}]\). Let \(u_{g}\) be the midpoint of \([a,ga]\). Now \(d(a,u_{g})=d(u_{g},ga)=l(g)/2\), and \(u_{g}\) is the unique point of \([a,ga]\) with this property. Hence \(\{u_{g}^{+},u_{g}^{-}\}\cap[\hat{v},g\hat{v}]=\{u_{g}\}\). Also,

\[|g|=2d(\hat{v},a)+2d(a,u_{g})=2d(\hat{v},u_{g}),\]

proving (1). Since \(u_{g}\) lies in the boundary of \(U_{g}^{\pm}\), it follows that \(ga\in U_{g}^{\pm}\), proving (2).

**Definition 4.4**.: Define \(U=\bigcup_{g\in X}\{U_{g}^{-},U_{g}^{0},U_{g}^{+}\}\). If \(X\) has no elliptic elements and \(\operatorname{proj}_{\gamma(g)}(\gamma(h))\subseteq U_{g}^{0}\) for all \(h\in X\smallsetminus\{g\}\), then \(U\) is the _fundamental system_ of \(X\).

It is clear that if \(X\) admits a fundamental system, then \(X\) satisfies the hypotheses of the Ping-Pong Lemma, and is therefore a free basis of \(G\). The following lemmas show how paths on \(T\) interact with the translation axes of hyperbolic elements of \(\operatorname{Aut}(T)\).

**Lemma 4.5**.: _Let \(w\in T\) and let \(g\in\operatorname{Aut}(T)\) be hyperbolic. Define_

\[x=\operatorname{proj}_{\gamma(g)}(\hat{v}),\quad y=\operatorname{proj}_{\gamma(g)}(w),\quad p=[\hat{v},w]\cap\gamma(g),\quad q=[\hat{v},gw]\cap\gamma(g).\]

1. _If_ \(p=\varnothing\)_, then_ \(q=[x,gx]\) _and_ \(d(\hat{v},gw)>d(\hat{v},w)+l(g)\)_._
2. _If_ \(p=[x,y]\) _and_ \(gy=x\)_, then_ \(q\in\{\varnothing,\{x\}\}\) _and_ \(d(\hat{v},gw)\leq d(\hat{v},w)-l(g)\)_._
3. _If_ \(p=[x,y]\) _and either_ \(x=y\) _or_ \(g\) _translates_ \(y\) _away from_ \(x\)_, then_ \(q=[x,gy]\) _and_ \(d(\hat{v},gw)=d(\hat{v},w)+l(g)\)_._
4. _If_ \(p=[x,y]\) _where_ \(x\notin\{y,gy\}\) _and_ \(g\) _translates_ \(y\) _towards_ \(x\)_, then_ \(q=[x,gy]\) _and_ \(d(\hat{v},gw)=d(\hat{v},w)+\big{|}d(x,y)-l(g)\big{|}-d(x,y)\)_._

Proof.: Let \(P\) be the walk \([\hat{v},x]\oplus p\oplus[y,w]\), where \(\oplus\) denotes concatenation. Let \(Q\) be the walk \([\hat{v},x]\oplus q\oplus[gy,gw]\). We see that \(|Q|-|P|=|q|-|p|\). Since \(g\) translates \(y\) along \(\gamma(g)\) which is isometric to \(\mathbb{R}\), we find that \(|q|=\big{|}d(x,y)+a\big{|}\), where \(a=l(g)\) if \(g\) translates \(y\) away from \(x\), and \(a=-l(g)\) if \(g\) translates \(y\) towards \(x\). Therefore

\[|Q|=|P|+\big{|}d(x,y)+a\big{|}-d(x,y).\]

Note that \(|P|\geq d(\hat{v},w)\) and \(|Q|\geq d(\hat{v},gw)\), with equality if and only if \(P=[\hat{v},w]\) and \(Q=[\hat{v},gw]\) respectively. Figure 11 depicts each case. We achieve the formulae in cases (1)-(4) by the following substitutions:

* In case (1), \(p=\varnothing\) so \(P\) backtracks. Thus \(|P|>d(\hat{v},w)\), \(Q=[\hat{v},gw]\), and \(d(x,y)=0\).
* In case (2), \(q\) is either empty or a point. It follows that \(|Q|\geq d(\hat{v},gw)\), \(P=[\hat{v},w]\), and \(d(x,y)=l(g)=-a\).
* In cases (3) and (4), \(P=[\hat{v},w]\) and \(Q=[\hat{v},gw]\). The cases are distinguished by whether \(a\) is positive or negative.

Figure 11. The cases of Lemma 4.5

**Lemma 4.6**.: _Let \(w\in T\) and let \(g\in\operatorname{Aut}(T)\) be hyperbolic. Suppose \([\hat{v},w]\cap\gamma(g)\) is a path \([x,y]\) for some distinct \(x\) and \(y\), and that \(g\) translates \(y\) towards \(x\)._

* _If_ \(d(x,y)>l(g)/2\)_, then_ \(d(\hat{v},w)>d(\hat{v},gw)\)_._
* _If_ \(d(x,y)=l(g)/2\)_, then_ \(d(\hat{v},w)=d(\hat{v},gw)\)_._
* _If_ \(d(x,y)<l(g)/2\)_, then_ \(d(\hat{v},w)<d(\hat{v},gw)\)_._

Proof.: If \(d(x,y)>l(g)\), then by Lemma 4.5 (4),

\[d(\hat{v},gw)=d(\hat{v},w)+\big{|}d(x,y)-l(g)\big{|}-d(x,y)=d(\hat{v},w)-l(g)<d(\hat{v},w).\]

If \(d(x,y)=l(g)\), then by Lemma 4.5 (2), \(d(\hat{v},gw)<d(\hat{v},w)\). If \(d(x,y)<l(g)\), then by Lemma 4.5 (4),

\[d(\hat{v},gw)=d(\hat{v},w)+\big{|}d(x,y)-l(g)\big{|}-d(x,y)=d(\hat{v},w)+l(g)-2d(x,y).\]

We see that \(d(\hat{v},w)\leq d(\hat{v},gw)\) if and only if \(d(x,y)\leq l(g)/2\), with equality when \(d(x,y)=l(g)/2\).

**Lemma 4.7**.: _Let \(w\in T\) and let \(g\in\operatorname{Aut}(T)\) be hyperbolic. The following are equivalent:_

1. \(\operatorname{proj}_{\gamma(g)}(w)\in U_{g}^{\pm}\)_._
2. \([\hat{v},w]\cap U_{g}^{\pm}\neq\varnothing\)_._
3. \([\hat{v},g^{\epsilon}w]<[\hat{v},w]\) _for some_ \(\epsilon\in\{1,-1\}\)_._

Proof.: (1) \(\Rightarrow\) (2): Suppose \(y=\operatorname{proj}_{\gamma(g)}(w)\in U_{g}^{\pm}\). Let \(x=\operatorname{proj}_{\gamma(g)}(\hat{v})\). Since \(x\in U_{g}^{0}\), \([\hat{v},x]\) and \([y,w]\) are disjoint, so \([\hat{v},w]=[\hat{v},x]\cup[x,y]\cup[y,w]\).

(2) \(\Rightarrow\) (3): Suppose \([\hat{v},w]\cap U_{g}^{\pm}\neq\varnothing\). Then \([\hat{v},w]\cap\gamma(g)\) is a path \([x,y]\), where \(x=\operatorname{proj}_{\gamma(g)}(\hat{v})\) and \(y\in U_{g}^{\pm}\). It follows that \(d(x,y)\geq l(g)/2\). We may assume that \(g\) translates \(y\) towards \(x\). By Lemma 4.6, \(d(\hat{v},gw)\leq d(\hat{v},w)\). If \(d(\hat{v},gw)=d(\hat{v},w)\), then \(d(x,y)=l(g)/2\), so \(y=u_{g}^{-}\). Therefore \([\hat{v},gw]<[\hat{v},w]\).
(3) \(\Rightarrow\) (1): Inverting \(g\) if necessary, suppose \([\hat{v},gw]<[\hat{v},w]\). Now case (2) or (4) of Lemma 4.5 must hold, that is, \([\hat{v},w]\cap\gamma(g)\) is a path \([x,y]\) such that \(g\) translates \(y\) towards \(x\). By Lemma 4.6, \(d(x,y)\geq l(g)/2\). If \(d(x,y)=l(g)/2\), then \(y\in\{u_{g}^{+},u_{g}^{-}\}\) and \([\hat{v},gy]<[\hat{v},y]\), so \(y=u_{g}^{-}\). In either case, \(\operatorname{proj}_{\gamma(g)}(w)=y\in U_{g}^{\pm}\).

**Lemma 4.8**.: _Suppose \(X\) contains no elliptic elements and satisfies_ N1_. Then \(X\) admits a fundamental system if and only if for all \(g,h\in X^{\pm}\), if \(g\neq h^{-1}\), then \([\hat{v},h\hat{v}]<[\hat{v},gh\hat{v}]\)._

Proof.: Suppose \([\hat{v},gh\hat{v}]\leq[\hat{v},h\hat{v}]\) for some \(g,h\in X^{\pm}\) with \(g\neq h^{-1}\). If \([\hat{v},gh\hat{v}]=[\hat{v},h\hat{v}]\), then \(h^{-1}gh\hat{v}=\hat{v}\), so \(g\) is elliptic. Hence \([\hat{v},gh\hat{v}]<[\hat{v},h\hat{v}]\). Let \(a=\operatorname{proj}_{\gamma(g)}(h\hat{v})\). By Lemma 4.7, \(a\in[\hat{v},h\hat{v}]\cap U_{g}^{\pm}\). Let \(b\in[\hat{v},h\hat{v}]\cap U_{h}^{\pm}\). We see that either \(a\in[\hat{v},b]\) or \(b\in[\hat{v},a]\). By Lemma 4.7, either \(\operatorname{proj}_{\gamma(g)}(b)\in U_{g}^{\pm}\) or \(\operatorname{proj}_{\gamma(h)}(a)\in U_{h}^{\pm}\). Therefore \(X\) does not admit a fundamental system.

Conversely, suppose \(X\) does not admit a fundamental system. By Lemma 4.7, there exist distinct \(g,h\in X\) and \(a\in U_{g}^{\pm}\), \(b\in U_{h}^{\pm}\) such that \(a\in[\hat{v},b]\), for example as in Figure 12. Inverting \(g\) and \(h\) if necessary, let

\[a^{\prime}\in[\hat{v},b]\cap[\hat{v},g\hat{v}]\cap U_{g}^{\pm},\qquad b^{\prime}\in[\hat{v},b]\cap[\hat{v},h\hat{v}]\cap U_{h}^{\pm}.\]

Both \(a^{\prime}\) and \(b^{\prime}\) must exist, since both \([\hat{v},b]\) and \([\hat{v},g\hat{v}]\) intersect the boundary of \(U_{g}^{\pm}\), and similarly for \(h\). We may assume (swapping \(g\) and \(h\) if necessary) that \(a^{\prime}\in[\hat{v},b^{\prime}]\). Then \(a^{\prime}\in[\hat{v},h\hat{v}]\) so, by Lemma 4.7, \([\hat{v},g^{\epsilon}h\hat{v}]<[\hat{v},h\hat{v}]\) for some \(\epsilon\in\{1,-1\}\).

Figure 12. One construction for Lemma 4.8

**Theorem 4.9**.: \(X\) _is strongly N-reduced if and only if \(X\) admits a fundamental system._

Proof.: We may assume \(X\) contains no elliptic elements and satisfies N1. Suppose \(X\) is strongly N-reduced. Let \(g,h\in X^{\pm}\) be such that \(g\neq h^{-1}\). By Lemma 4.8, it suffices to show that \([\hat{v},h\hat{v}]<[\hat{v},gh\hat{v}]\). This holds by definition if \(|h|<|gh|\), so suppose otherwise. By N4, \(|h|=|gh|\). We apply Lemma 2.15 to \(h^{-1}\) and \(g^{-1}\). From N2\({}^{\prime}\) and this application we conclude that \(H(h^{-1})=H(h^{-1}g^{-1})\). By N4, \(H(h)<H(gh)\). Therefore \([\hat{v},h\hat{v}]<[\hat{v},gh\hat{v}]\).

Conversely, suppose \(X\) is not strongly N-reduced. If there exist \(g,h\in X^{\pm}\) such that \(g\neq h^{-1}\) and \(|gh|<|h|\), then \([\hat{v},gh\hat{v}]<[\hat{v},h\hat{v}]\). Otherwise, suppose \(X\) satisfies N2 and violates N4. Then there exist \(g,h\in X^{\pm}\) such that \(|gh|=|h|\) and \(gh\prec h\). By a similar argument to the forward direction, \(H(gh)<H(h)\), so \([\hat{v},gh\hat{v}]<[\hat{v},h\hat{v}]\). By Lemma 4.8, \(X\) does not admit a fundamental system.
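Theorem 4.9 and Lemma 4.8 together give a finite, directly checkable criterion for strong N-reduction. Here is a minimal Python sketch under the same assumptions as before: group elements support `==`, `vertex_path(g)` is an assumed callback returning the vertex tuple of \([\hat{v},g\hat{v}]\), and `path_less` is the comparison sketched in Section 3.

```python
from itertools import product

def admits_fundamental_system(gens, mul, inv, vertex_path, path_less):
    """Check the criterion of Lemma 4.8 / Theorem 4.9: X admits a
    fundamental system iff [v, h v] < [v, gh v] for all g, h in X^±
    with g != h^{-1}."""
    sym = [g for x in gens for g in (x, inv(x))]
    for g, h in product(sym, repeat=2):
        if g == inv(h):
            continue
        if not path_less(vertex_path(h), vertex_path(mul(g, h))):
            return False
    return True
```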
**Corollary 4.10**.: _Every finitely generated purely hyperbolic subgroup of \(\operatorname{Aut}(T)\) admits a unique fundamental system._

Proof.: Existence follows from Algorithm 1 and Theorem 4.9; uniqueness follows from Theorem 3.5.

A _fundamental domain_ of \(G\) is a subtree of \(T\) containing exactly one vertex from every orbit of \(G\) [17, Chapter I.4].

**Definition 4.11**.: Suppose \(G\) admits a fundamental system \(U\). Define \(\Gamma(G)\) to be the subtree of \(T\) with vertex set

\[\{w\in V(T):\operatorname{proj}_{\gamma(g)}(w)\in U^{0}_{g}\text{ for all }g\in X\}.\]

We show that \(\Gamma(G)\) is a fundamental domain of \(G\); see Corollary 4.14.

**Lemma 4.12**.: _Suppose \(X\) is strongly N-reduced. For each vertex \(w\) of \(T\) there is at most one \(g\in X\) such that \(\operatorname{proj}_{\gamma(g)}(w)\notin U^{0}_{g}\)._

Proof.: Let \(g,h\in X\) be distinct. Let \(a=\operatorname{proj}_{\gamma(g)}(w)\) and \(b=\operatorname{proj}_{\gamma(h)}(w)\). Suppose \(a\in U^{\pm}_{g}\) and \(b\in U^{\pm}_{h}\). By Lemma 4.7, \(a\) and \(b\) are both in \([\hat{v},w]\), so either \(a\in[\hat{v},b]\) or \(b\in[\hat{v},a]\). By Lemma 4.7 and Theorem 4.9, \(X\) is not strongly N-reduced, a contradiction.

**Proposition 4.13**.: _Suppose \(X\) is strongly N-reduced. Let \(w\in V(T)\). The following are equivalent:_

1. \(w\in\Gamma(G)\)_._
2. \([\hat{v},w]<[\hat{v},xw]\) _for all_ \(x\in X^{\pm}\)_._
3. \([\hat{v},w]<[\hat{v},gw]\) _for all_ \(g\in G\smallsetminus\{1\}\)_._

Proof.: The equivalence of (1) and (2) is given directly by Lemma 4.7, and (3) \(\Rightarrow\) (2) is trivial. We now prove (2) \(\Rightarrow\) (3). Suppose \([\hat{v},w]<[\hat{v},xw]\) for all \(x\in X^{\pm}\), or equivalently \(w\in\Gamma(G)\). Let \(g=x^{\epsilon_{n}}_{n}\ldots x^{\epsilon_{1}}_{1}\), where each \(x_{i}\in X\), \(\epsilon_{i}\in\mathbb{Z}\smallsetminus\{0\}\), and \(x_{i}\neq x_{i+1}\). Let \(h=x^{\epsilon_{n-1}}_{n-1}\ldots x^{\epsilon_{1}}_{1}\) and if \(n\geq 2\), then let \(h^{\prime}=x^{\epsilon_{n-2}}_{n-2}\ldots x^{\epsilon_{1}}_{1}\). By transitivity, it suffices to show that \([\hat{v},hw]<[\hat{v},gw]\). We proceed by induction on \(n\).

First we show that \(\operatorname{proj}_{\gamma(x_{n})}(hw)\in U^{0}_{x_{n}}\). If \(n=1\), then \(h\) is trivial, so the statement holds by Lemma 4.7. Suppose \(n>1\). Note that \(U^{0}_{x_{i}}\subseteq U^{0}_{x^{\epsilon_{i}}_{i}}\) and \(U^{\pm}_{x^{\epsilon_{i}}_{i}}\subseteq U^{\pm}_{x_{i}}\) for all \(i\), since \(\gamma(x_{i})=\gamma(x^{\epsilon_{i}}_{i})\) and \(U^{0}_{x^{\epsilon_{i}}_{i}}\) has length \(l(x^{\epsilon_{i}}_{i})\geq l(x_{i})\). Additionally, by the induction hypothesis \([\hat{v},h^{\prime}w]<[\hat{v},hw]\). By Lemma 4.7,

\[\operatorname{proj}_{\gamma(x_{n-1})}(hw)\in U^{\pm}_{x^{\epsilon_{n-1}}_{n-1}}\subset U^{\pm}_{x_{n-1}}.\]

By Lemma 4.12, \(\operatorname{proj}_{\gamma(x_{n})}(hw)\in U^{0}_{x_{n}}\), as desired. It follows that \(\operatorname{proj}_{\gamma(x_{n})}(hw)\in U^{0}_{x^{\epsilon_{n}}_{n}}\). By Lemma 4.7, \([\hat{v},hw]<[\hat{v},gw]\), completing the proof.

**Corollary 4.14**.: \(\Gamma(G)\) _has vertex set \(\{w\in V(T):[\hat{v},w]<[\hat{v},gw]\text{ for all }g\in G\smallsetminus\{1\}\}\). In particular, \(\Gamma(G)\) is a fundamental domain of \(G\)._

Proof.: Let \(u\in V(T)\). There must be some unique \(g\in G\) minimising \([\hat{v},gu]\), since \(<\) is a well-ordering. By Proposition 4.13, \(gu\) is the unique element of \(Gu\) in \(\Gamma(G)\).
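Proposition 4.13 (2) makes membership in \(\Gamma(G)\) checkable by finitely many path comparisons. A minimal Python sketch, with the same assumed callbacks as before; `act(x, w)` applies an automorphism to a vertex and `path_to(w)` returns the vertex tuple of \([\hat{v},w]\):

```python
def in_fundamental_domain(w, sym_gens, act, path_to, path_less):
    """Test w in Gamma(G) via Proposition 4.13 (2): w belongs to the
    fundamental domain iff [v, w] < [v, x w] for every x in X^±."""
    return all(path_less(path_to(w), path_to(act(x, w))) for x in sym_gens)
```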
**Remark 4.15**.: \(\Gamma(G)\) is closely related to the _Dirichlet fundamental domain_ [11, Definition 1.8]. However, a Dirichlet fundamental domain is not necessarily a fundamental domain, since it also contains each vertex \(w\) of \(T\) such that \(\operatorname{proj}_{\gamma(g)}(w)=u_{g}^{+}\) for some \(g\in X\).

Given \(w\in V(T)\), Algorithm 2 finds the unique \(g\in G\) such that \(gw\in\Gamma(G)\).

```
Data: Finite strongly N-reduced subset \(X\) of \(\operatorname{Aut}(T)\), \(w\in V(T)\)
Output: The unique \(g\in G=\langle X\rangle\) such that \(gw\in\Gamma(G)\)

\(g\gets 1\)
while there exists \(x\in X^{\pm}\) such that \([\hat{v},xgw]<[\hat{v},gw]\) do
    \(g\gets xg\)
end while
return \(g\)
```

**Algorithm 2** Find \(g\in G\) mapping a vertex to the fundamental domain

Proof of correctness of Algorithm 2.: Suppose \(w\notin\Gamma(G)\). By Proposition 4.13, there is some \(x\in X^{\pm}\) such that \([\hat{v},xw]<[\hat{v},w]\). We thus obtain a sequence \(w=w_{0},w_{1},\dots\), where \(w_{i}=x_{i}w_{i-1}\) for some \(x_{i}\in X^{\pm}\) and \([\hat{v},w_{i}]<[\hat{v},w_{i-1}]\) for each \(i\). Since \(<\) is a well-ordering, there is no infinite strictly decreasing sequence of paths, so this sequence must eventually terminate: we find some \(w_{n}=x_{n}\dots x_{1}w\) such that \([\hat{v},w_{n}]<[\hat{v},xw_{n}]\) for all \(x\in X^{\pm}\). By Proposition 4.13, \(w_{n}\in\Gamma(G)\).

As a corollary, we obtain the following:

**Theorem C**.: _Every finitely generated purely hyperbolic subgroup of \(\operatorname{Aut}(T)\) has solvable constructive membership problem._

Proof.: Let \(g\in\operatorname{Aut}(T)\). We wish to decide whether \(g\in G\), and if so write \(g\) as a word in \(X\). We may assume \(X\) is strongly N-reduced, since by keeping track of the transformations involved in Algorithm 1 we may write each word in a strongly N-reduced basis of \(G\) as a word in \(X\). Using Algorithm 2, we find the unique \(h\in G\), and a reduced word of \(h\) in \(X\), such that \(hg\hat{v}\in\Gamma(G)\). It follows that \(g\in G\) if and only if \(g=h^{-1}\).

**Remark 4.16**.: The proof of Theorem C provides an alternative algorithm for Corollary 3.6: If \(X\) and \(Y\) are strongly N-reduced, then \(\langle X\rangle=\langle Y\rangle\) if and only if \(x\in\langle Y\rangle\) and \(y\in\langle X\rangle\) for all \(x\in X\) and \(y\in Y\). However, we included the first algorithm to show that deciding equality of these groups is easier than Theorem C would imply.

## 5. Implementation and performance

We implemented in Magma [2] our algorithms for \(\operatorname{PGL}_{2}(K)\), where \(K\) is a \(p\)-adic field, acting on the Bruhat-Tits tree [13]. This package implements Algorithms 1, 2, and the algorithms given in the proofs of Corollary 3.6 and Theorem C. In our implementation, \(X\) is input as a sequence rather than a set, since this simplifies the code.

Table 1 shows the runtime of Algorithm 1, and the average time per iteration of the loop, for different values of \(|X|\). The trials are run with randomly chosen elements of \(\operatorname{SL}_{2}(\mathbb{Q}_{5})\) with 1000 digits of precision, where each entry has valuation at most 10. Changing the choice of prime \(p\) and the precision did not seem to significantly affect the runtime. The times shown are averaged over 1000 trials. Increasing the maximum valuation of entries of the generators rapidly leads to the algorithm losing precision, so we do not run trials over a range of maximum valuations.

Tables 2 and 3 show runtimes of Algorithm 2.
Recall that Algorithm 2 takes as input a strongly N-reduced subset of \(\operatorname{Aut}(T)\) and a vertex of \(T\). We record the average runtime over all pairings of 100 strongly N-reduced generating sets and 100 vertices. For Table 2 we vary \(|X|\) and choose \(w\) such that \(d(w,\hat{v})=100\), and for Table 3 we vary \(d(w,\hat{v})\) and set \(|X|=5\). The code used for these tests is provided in [13]. The trials were run using Magma V2.28-2 on a 2.6 GHz machine.
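For readers without access to Magma, the greedy loop of Algorithm 2 is short enough to sketch in Python. This is only an illustration under the same assumed callbacks as the earlier sketches, not the package of [13]:

```python
def to_fundamental_domain(w, sym_gens, act, path_to, path_less, mul, identity):
    """Sketch of Algorithm 2: greedily multiply by generators that
    shorten [v, g w] in the well-ordering <, until g w lies in the
    fundamental domain. Termination holds because < is a well-ordering."""
    g, gw = identity, w
    progress = True
    while progress:
        progress = False
        for x in sym_gens:
            xgw = act(x, gw)
            if path_less(path_to(xgw), path_to(gw)):
                g, gw = mul(x, g), xgw
                progress = True
                break
    return g
```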
2303.00649
Density of continuous functions in Sobolev spaces with applications to capacity
We show that capacity can be computed with locally Lipschitz functions in locally complete and separable metric spaces. Further, we show that if $(X,d,\mu)$ is a locally complete and separable metric measure space, then continuous functions are dense in the Newtonian space $N^{1,p}(X)$. Here the measure $\mu$ is Borel and is finite and positive on all metric balls. In particular, we don't assume properness of $X$, doubling of $\mu$ or any Poincar\'e inequalities. These resolve, partially or fully, questions posed by a number of authors, including J. Heinonen, A. Bj\"orn and J. Bj\"orn. In contrast to much of the past work, our results apply to locally complete spaces $X$ and dispenses with the frequently used regularity assumptions: doubling, properness, Poincar\'e inequality, Loewner property or quasiconvexity.
Sylvester Eriksson-Bique, Pietro Poggi-Corradini
2023-03-01T16:51:43Z
http://arxiv.org/abs/2303.00649v3
# Density of continuous functions in Sobolev spaces with applications to capacity

###### Abstract.

We show that capacity can be computed with locally Lipschitz functions in locally complete and separable metric spaces. Further, we show that if \((X,d,\mu)\) is a locally complete and separable metric measure space, then continuous functions are dense in the Newtonian space \(N^{1,p}(X)\). Here the measure \(\mu\) is Borel and is finite and positive on all metric balls. In particular, we don't assume properness of \(X\), doubling of \(\mu\) or any Poincaré inequalities. These resolve, partially or fully, questions posed by a number of authors, including J. Heinonen, A. Björn and J. Björn. In contrast to much of the past work, our results apply to _locally complete_ spaces \(X\) and dispense with the frequently used regularity assumptions: doubling, properness, Poincaré inequality, Loewner property or quasiconvexity.

The first author was partially supported by Finnish Academy Grants n. 345005 and n. 356861. The second author is partially supported by NSF DMS n. 2154032.

## 1. Introduction

Let \((X,d,\mu)\) be a locally complete and separable metric measure space. Among our main results, Theorem 1.7 states that the Sobolev capacity \(\operatorname{Cap}_{p}\) is an outer capacity: for every \(E\subset X\),

\[\operatorname{Cap}_{p}(E)=\inf_{E\subset O}\operatorname{Cap}_{p}(O),\]

where the infimum is taken over open subsets of \(X\) containing \(E\). This improves on prior work by removing the assumption of properness and density used in [3, Corollary 1.3]. The proof involves both Theorem 1.6 and an observation in Proposition 2.15 on lower semicontinuity involving certain "good" functions. (These are used to handle the case when \(\operatorname{Cap}_{p}(E)=0\); see Theorem 3.2.) As a corollary, we show that Sobolev capacity is a Choquet capacity, under very weak assumptions. See Section 5.1 for a definition of a Choquet capacity.

**Theorem 1.8**.: _If \((X,d,\mu)\) is a locally complete and separable metric measure space and \(p\in(1,\infty)\), then the map \(E\mapsto\operatorname{Cap}_{p}(E)\), for \(E\subset X\), is a Choquet capacity._

**Remark 1.9**.: In much of the literature, see e.g. [12, 16], a neighbourhood capacity is defined:

\[\overline{\operatorname{Cap}_{p}}(E):=\inf\{\|u\|_{N^{1,p}(X)}^{p}:u|_{O}\geq 1,\text{ for an open set }O\text{ with }E\subset O\}.\]

An advantage of this definition is that it is automatically outer regular and a Choquet capacity without further assumptions, see [16]. Using Theorem 1.7 it is easy to show that \(\operatorname{Cap}_{p}(E)=\overline{\operatorname{Cap}_{p}}(E)\) for locally complete and separable metric measure spaces. This gives another way of proving Theorem 1.8. Much of the literature is split on which definition, \(\operatorname{Cap}_{p}\) or \(\overline{\operatorname{Cap}_{p}}\), they employ. Theorem 1.7 shows that very generally the two coincide, and one can use either definition and obtain an equivalent theory.

Recall that the notion of having zero capacity is a finer notion than having zero measure. Indeed, a set of capacity zero must be of measure zero. However, a set of capacity zero will usually be of smaller Hausdorff dimension. We say that a function \(f:X\to\mathbb{R}\cup\{\infty,-\infty\}\) is _quasicontinuous_ if for every \(\epsilon>0\) there exists an open set \(O\) with \(\operatorname{Cap}_{p}(O)<\epsilon\) and so that \(f|_{X\setminus O}\) is continuous. Our last result addresses quasicontinuity.
**Theorem 1.10**.: _If \((X,d,\mu)\) is a locally complete and separable metric measure space, then every \(f\in N^{1,p}(X)\) is quasicontinuous._

This strengthens the main result in [3, Theorem 1.1] in two ways: first, one does not need to switch representatives of \(f\), and second, the assumptions are much weaker.

In conclusion, we discuss the ways in which we improve on [3] and how we execute this technically. First, Theorem 1.6 rests on a new approximation inspired by the authors' prior work in [9, 10]. This approximation is built by solving an extension problem. Let \(K\subset X\) be compact such that \(f|_{K}\) is continuous. Theorem 3.8 describes how, and under which assumptions, we are able to extend \(f|_{K}\) to a continuous function \(\tilde{f}\in N^{1,p}(X)\). See Equation (3.19) for the precise formulation of this extension. This construction ought to be thought of as a discretized and adapted version of the more familiar construction used in Proposition 2.15. The discretization yields continuity without assuming the existence of curves. It plays a crucial role in allowing us to dispense with the geodesic assumption employed in [15], and the quasiconvexity assumption employed in [13]. This approach to approximating Sobolev functions by discretizations is novel. Indeed, prior methods fell short of being able to handle the case of complete and separable metric spaces.

A second technical contribution of this paper concerns removing the properness assumption in [3]. This is somewhat subtle, and involves the inner regularity of the measure \(\mu\), i.e. that for any bounded Borel set \(A\subset X\) and \(\epsilon>0\) there is a compact set \(K\subset A\) with \(\mu(A\setminus K)<\epsilon\). The main argument here is in Proposition 2.15, where a slight modification of the notion of "good function" allows for the usual arguments in [3, Section 3] to go through. Indeed, this yields Theorem 3.2 and the more general capacity results. A refinement of this notion, "a good sequence of functions", plays a role in the proof of Theorem 1.6.

In Section 2 we set the notation and prove some useful lemmas about good functions, discrete paths and almost upper gradients. The results in that section are new and have been written so that they may be useful in future work. In Section 3, we establish our main results in the complete setting. In Section 4, we extend these results to the locally complete setting using partitions of unity and localization. Finally, in Section 5, we discuss the Choquet property and the equivalence of different definitions of capacity.

**Acknowledgements:** The authors thank Nages Shanmugalingam, and Anders and Jana Björn for useful conversations on these topics.

## 2. Notation and preliminaries

### Modulus and Sobolev spaces

Throughout the paper \(X\) will be a separable metric space and \(\mu\) any Borel measure on \(X\) which is finite and positive on each ball, that is \(\mu(B(x,r))\in(0,\infty)\) for each ball \(B(x,r)\subset X\). Such measures are Radon when \(X\) is (locally) complete and separable, see [4, Theorem 7.1.7, Definition 7.1.1]. (In the reference, the claim is stated only for complete metric spaces. However, by an extension of the measure to the completion, following [20], we obtain the claim for locally complete spaces.) In particular, the measures \(\mu\) in this paper are inner and outer regular.

By convention, we denote open balls by \(B(x,r)=\{y\in X:d(x,y)<r\}\).
The value \(r\) is called the radius of the ball (which may be non-unique), and any ball of radius \(r\) is referred to as an \(r\)_-ball_. The distance between two sets \(A,B\subset X\) is defined as \(d(A,B)=\inf_{a\in A,b\in B}d(a,b)\). For a single point \(x\in X\), we adopt the convention \(d(x,A)=d(\{x\},A)\). The characteristic function of a set \(A\subset X\) is denoted \(\mathbb{1}_{A}\). Generally, we will assume that \(X\) is either complete or locally complete. In the latter case, we will also consider its completion \(\hat{X}\). If \(X\) is locally complete, then \(X\) is an open subset in \(\hat{X}\).

The spaces of \(L^{p}\)-integrable functions with respect to \(\mu\) for \(p\in[1,\infty)\) will be denoted by \(L^{p}(X)\). The \(L^{p}\)-norm of a function \(f\) is denoted \(\|f\|_{L^{p}(X)}\). The space of continuous functions on \(X\) is denoted \(C(X)\). We do not need a topology on this space, and thus consider it only as a set.

To discuss Newtonian spaces and capacities we next recall some classical terminology. These are covered in more detail in [14], as well as [21, 3]. A curve \(\gamma\) is a continuous map \(\gamma:[0,1]\to X\) (or, in specific instances, any continuous map \(\gamma:I\to X\), where \(I=[a,b]\subset\mathbb{R}\) is a bounded interval). The length of a rectifiable curve is denoted \(\operatorname{Len}(\gamma)\). The _speed_ of an absolutely continuous curve, which exists for a.e. \(t\in[0,1]\), is defined as

\[|\gamma^{\prime}(t)|=\lim_{h\to 0}\frac{d(\gamma(t+h),\gamma(t))}{h}. \tag{2.1}\]

Every rectifiable curve has a unique constant-speed parametrization, where \(|\gamma^{\prime}(t)|=\operatorname{Len}(\gamma)\) for a.e. \(t\in[0,1]\), [14, Sec. 5.1]. If \(\tilde{\gamma}:[0,1]\to X\) is the constant-speed parametrization of \(\gamma\), we define the path integral with respect to \(\gamma\) as:

\[\int_{\gamma}g\,ds:=\int_{0}^{1}g(\tilde{\gamma}(t))|\tilde{\gamma}^{\prime}(t)|\,dt=\operatorname{Len}(\gamma)\int_{0}^{1}g(\tilde{\gamma}(t))\,dt \tag{2.2}\]

when \(g\) is any Borel function for which the right hand-side is defined. We will mostly only consider rectifiable curves and, unless otherwise specified, allow constant curves. We write \(\gamma\subset A\) for a subset \(A\subset X\) if \(\gamma([0,1])\subset A\). If \(x\in X\) is any point, we write \(\gamma:A\rightsquigarrow x\) to denote that \(\gamma(0)\in A\) and \(\gamma(1)=x\), i.e. that \(\gamma\) connects \(A\) to \(x\). The diameter of a curve is denoted \(\operatorname{diam}(\gamma):=\operatorname{diam}(\operatorname{Image}(\gamma))=\sup_{s,t\in[0,1]}d(\gamma(s),\gamma(t))\).

Let \(\Gamma\) be a collection of rectifiable curves. A non-negative Borel function \(\rho:X\to[0,\infty]\) is called admissible for \(\Gamma\), denoted \(\rho\in\operatorname{Adm}(\Gamma)\), if \(\int_{\gamma}\rho\,ds\geq 1\) for each \(\gamma\in\Gamma\). Here, \(\int_{\gamma}g\,ds\) is the path integral defined in (2.2). Modulus is defined by

\[\operatorname{Mod}_{p}(\Gamma)=\inf_{\rho\in\operatorname{Adm}(\Gamma)}\|\rho\|_{L^{p}(X)}^{p}.\]

A property is said to hold for \(p\)-a.e. curve \(\gamma\) if it holds for each rectifiable \(\gamma\not\in\Gamma\) for some collection \(\Gamma\) with \(\operatorname{Mod}_{p}(\Gamma)=0\). Given two sets \(E,F\subset X\) we will denote by \(\Gamma(E,F)\) the family of all rectifiable curves \(\gamma\) in \(X\) with \(\gamma(0)\in E\) and \(\gamma(1)\in F\).
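To fix intuition for the modulus, recall a standard computation (classical, and not specific to this paper): in \(\mathbb{R}^{2}\) with Lebesgue measure, let \(\Gamma\) be the family of horizontal segments joining the two vertical sides of the rectangle \(R=[0,L]\times[0,H]\). The density \(\rho=\frac{1}{L}\mathbb{1}_{R}\) is admissible, since each \(\gamma\in\Gamma\) has length \(L\), and a Fubini and Hölder argument shows it is extremal, so

\[\operatorname{Mod}_{p}(\Gamma)=\int_{R}L^{-p}\,dm_{2}=\frac{H}{L^{p-1}}.\]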
Recall that a non-negative Borel function \(g:X\to[0,\infty]\) is called an _upper gradient_ for \(f:X\to[-\infty,\infty]\), if for every rectifiable \(\gamma:[0,1]\to X\), we have

\[|f(\gamma(1))-f(\gamma(0))|\leq\int_{\gamma}g\,ds. \tag{2.3}\]

Here, the left hand side is interpreted to be infinity if the expression gives \(|\infty-\infty|\) or \(|-\infty-(-\infty)|\). The collection of upper gradients for \(f\) is denoted by \(\mathcal{D}(f)\). We define the Newtonian space \(N^{1,p}(X)\) as the collection of all functions \(f\in L^{p}(X)\) that admit an upper gradient \(g\in L^{p}(X)\). A seminorm on \(N^{1,p}(X)\) is given by

\[\|f\|_{N^{1,p}(X)}=\left(\|f\|_{L^{p}(X)}^{p}+\inf_{g\in\mathcal{D}(f)}\|g\|_{L^{p}(X)}^{p}\right)^{1/p}.\]

Then, if we identify \(f\sim g\) for \(f,g\in N^{1,p}(X)\) whenever \(\|f-g\|_{N^{1,p}(X)}=0\), we obtain a Banach space; see [21]. Thus, while formally \(N^{1,p}(X)\) consists of equivalence classes of functions, we will always consider pointwise representatives for a given class. A function \(g\) is a (\(p\)-)_weak upper gradient_, if Inequality (2.3) holds for \(p\)-a.e. rectifiable curve \(\gamma:[0,1]\to X\). A function \(f\in N^{1,p}(X)\) always admits a minimal \(p\)-weak upper gradient \(g_{f}\), for which \(\|g_{f}\|_{L^{p}(X)}=\inf_{g\in\mathcal{D}(f)}\|g\|_{L^{p}(X)}\). See [14, Theorem 6.3.20] for further details.

The following is a classical statement following from the Vitali-Carathéodory theorem.

**Lemma 2.4**.: _If \(g\in L^{p}(X)\) is any \(p\)-weak upper gradient for \(f\), then for any \(\epsilon>0\), there exists a lower semicontinuous \(g_{\epsilon}\geq g\) so that \(g_{\epsilon}\) is an upper gradient for \(f\) and \(\int_{X}g_{\epsilon}^{p}\,d\mu\leq\int_{X}g^{p}\,d\mu+\epsilon\)._

Proof.: Let \(g\in L^{p}(X)\) be a weak upper gradient. If \(\Gamma\) is the family of rectifiable curves for which Inequality (2.3) does not hold, then \(\operatorname{Mod}_{p}(\Gamma)=0\). Hence, by [14, Lemma 5.2.8], for any \(\epsilon>0\), there is an \(h\) so that \(\int_{X}h^{p}\,d\mu\leq\epsilon/2\) and \(\int_{\gamma}h\,ds=\infty\) for each \(\gamma\in\Gamma\). Applying [14, Vitali-Carathéodory theorem, p. 108] to \(\max(g,h)\) we obtain a function \(g_{\epsilon}\) such that \(g_{\epsilon}\geq\max(g,h)\), with \(\int_{X}g_{\epsilon}^{p}\,d\mu\leq\int_{X}g^{p}\,d\mu+\epsilon\). Finally, we verify that Inequality (2.3) holds for \(g_{\epsilon}\) and for every rectifiable path \(\gamma\). Indeed, if \(\gamma\in\Gamma\), then (2.3) follows from \(\infty=\int_{\gamma}h\,ds\leq\int_{\gamma}g_{\epsilon}\,ds\). While, for \(\gamma\not\in\Gamma\), Inequality (2.3) is satisfied since it holds for \(g\) and \(g\leq g_{\epsilon}\).

If \(E\subset X\), denote by \(\Gamma_{E}\) the set of non-constant rectifiable curves that intersect \(E\). A set \(E\) is called \(p\)-exceptional if \(\operatorname{Mod}_{p}(\Gamma_{E})=0\). We will need a version of [14, Lemma 6.3.14] which we state next.

**Lemma 2.5**.: _Suppose that \(f\in N^{1,p}(X)\) has \(g\in L^{p}(X)\) as upper gradient, and suppose \(f|_{A}=c\) for some \(c\in\mathbb{R}\), and for some Borel set \(A\subset X\). Then, the function \(g_{A}=g\mathbb{1}_{X\setminus A}\) is a \(p\)-weak upper gradient for \(f\). In particular, the minimal \(p\)-weak upper gradient \(g_{f}\) satisfies \(g_{f}(x)=0\) for \(\mu\)-almost every \(x\in A\)._

We say that \(g\) is an upper gradient for \(f\) in a set \(A\), if Inequality (2.3) holds for every curve \(\gamma\) with \(\gamma\subset A\).
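As a basic sanity check (standard, and not specific to this paper): if \(f\) is \(L\)-Lipschitz, then the constant function \(g\equiv L\) is an upper gradient for \(f\) on all of \(X\), since for any rectifiable curve \(\gamma\),

\[|f(\gamma(1))-f(\gamma(0))|\leq L\,d(\gamma(0),\gamma(1))\leq L\operatorname{Len}(\gamma)=\int_{\gamma}L\,ds.\]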
The following proposition is useful when extending Sobolev functions. Our starting point is a function \(f\) and its upper gradient \(g\). Another function \(\tilde{f}\) agrees with \(f\) on a set \(K\), and we _a priori_ know that \(g\) is also an upper gradient for \(\tilde{f}\) on \(X\setminus K\). When \(\tilde{f}\) is continuous this information can be patched together to conclude that \(g\) is an upper gradient for \(\tilde{f}\) on all of \(X\).

**Proposition 2.6**.: _Let \(f\in N^{1,p}(X)\) and let \(g\in L^{p}(X)\) be an upper gradient for \(f\). Let \(\tilde{f}\in L^{p}(X)\) be a continuous function such that \(f|_{K}=\tilde{f}|_{K}\) for some closed set \(K\). If \(g\) is an upper gradient for \(\tilde{f}\) on \(X\setminus K\), then \(g\) is also an upper gradient for \(\tilde{f}\) on all of \(X\). In particular, \(\tilde{f}\in N^{1,p}(X)\)._

Proof.: Let \(\gamma:[0,1]\to X\) be any non-constant rectifiable curve. The upper gradient inequality (2.3) is invariant under reparametrizations. Hence, for convenience, we will assume that \(\gamma\) has the constant-speed parametrization. There are essentially three cases to consider, when verifying Inequality (2.3) for \(\tilde{f}\) in place of \(f\). Since (2.3) is clear when \(\int_{\gamma}g\,ds=\infty\), we can assume that \(\int_{\gamma}g\,ds<\infty\).

**1. Assume \(\gamma(0),\gamma(1)\in K\).** Since \(g\) is an upper gradient for \(f\) and \(f|_{K}=\tilde{f}|_{K}\), the Inequality (2.3) for \(\gamma\) and \(\tilde{f}\) is identical to that for \(f\).

**2. Assume \(\gamma(0)\in K\) but \(\gamma(1)\not\in K\) (or the reverse).** The reverse case of \(\gamma(1)\in K\) and \(\gamma(0)\not\in K\) is symmetrical and can be reduced to this by considering the curve \(\tilde{\gamma}(t)=\gamma(1-t)\). Thus take \(\gamma(0)\in K\) and \(\gamma(1)\not\in K\). Let \(t=\sup\gamma^{-1}(K)\). We have \(t<1\) since \(K\) is closed. Consider \(\gamma_{1}=\gamma|_{[0,t]}\) (which may be constant) and \(\gamma_{2,\epsilon}=\gamma|_{[t+\epsilon,1]}\) for \(\epsilon\in[0,1-t)\). By case **1.**, we have \(\int_{\gamma_{1}}g\,ds\geq|\tilde{f}(\gamma_{1}(0))-\tilde{f}(\gamma_{1}(t))|\). For \(0<\epsilon<1-t\), we have that \(\gamma_{2,\epsilon}\subset X\setminus K\) and since \(g\) is an upper gradient for \(\tilde{f}\) on \(X\setminus K\) we have

\[|\tilde{f}(\gamma_{2,\epsilon}(t+\epsilon))-\tilde{f}(\gamma_{2,\epsilon}(1))|\leq\int_{\gamma_{2,\epsilon}}g\,ds \tag{2.7}\]

Thus, we get

\[|\tilde{f}(\gamma(0))-\tilde{f}(\gamma(1))| \leq|\tilde{f}(\gamma(0))-\tilde{f}(\gamma(t))|+|\tilde{f}(\gamma(t))-\tilde{f}(\gamma(1))|\] \[\leq\int_{\gamma_{1}}g\,ds+\lim_{\epsilon\to 0}|\tilde{f}(\gamma_{2,\epsilon}(t+\epsilon))-\tilde{f}(\gamma_{2,\epsilon}(1))|\qquad\text{ (by item \bf 1. and continuity)}\] \[\leq\int_{\gamma_{1}}g\,ds+\lim_{\epsilon\to 0}\int_{\gamma_{2,\epsilon}}g\,ds\qquad\text{(by (2.7))}\] \[=\int_{\gamma}g\,ds.\]

In the second to last line, we rewrite the integrals using \(g(\gamma(t))\) multiplied by the characteristic function of \([0,t]\cup[t+\epsilon,1]\), and then we conclude using monotone convergence.

**3. Assume \(\gamma(0),\gamma(1)\not\in K\).** If \(\gamma\subset X\setminus K\), then the claim follows since \(g\) is an upper gradient for \(\tilde{f}\) in \(X\setminus K\). Otherwise there is some \(t\in[0,1]\) so that \(\gamma(t)\in K\). Now, apply the second case to \(\gamma|_{[0,t]}\) and to \(\gamma|_{[t,1]}\) together with the triangle inequality to get Inequality (2.3).
Let \(N^{1,p}_{b}(X)\subset N^{1,p}(X)\) consist of those functions \(f\in N^{1,p}(X)\) with bounded support which are bounded in \(X\). More precisely, \(N^{1,p}_{b}(X)\) consists of those \(f\in N^{1,p}(X)\) for which there are constants \(M,R>0\) and a point \(x_{0}\in X\), so that \(f|_{X\setminus B(x_{0},R)}=0\) almost everywhere and \(f(x)\in[-M,M]\) for almost every \(x\in X\). An important first step will be to reduce the approximation to such functions.

**Lemma 2.8**.: \(N^{1,p}_{b}(X)\) _is dense in \(N^{1,p}(X)\)._

Proof.: Fix \(x_{0}\in X\) and consider \(M>0\). Let \(\psi_{M}(x)=\max(\min(2-d(x_{0},x)/M,1),0)\), which can be seen to be \(1/M\)-Lipschitz. Define

\[f_{M}=\psi_{M}\min(\max(f,-M),M).\]

We have \(|f_{M}|\leq\min(|f|,M)\), and \(\lim_{M\to\infty}f_{M}=f\) pointwise and in \(L^{p}(X)\). Further, using the Leibniz rule for Sobolev functions (see [14, Proposition 6.3.28]) one can show that \(\frac{|f|}{M}+g_{f}\) is a \(p\)-weak upper gradient for \(f_{M}\), so \(g_{f_{M}}\leq\frac{|f|}{M}+g_{f}\). So, \(f_{M}\in N^{1,p}(X)\). Also \(f_{M}-f=0\) on the set \(A_{M}=B(x_{0},M)\cap\{|f|\leq M\}\). By Lemma 2.5, the function \(f_{M}-f\) has a weak upper gradient \(g_{f_{M}-f}\leq\mathbb{1}_{X\setminus A_{M}}(2g_{f}+\frac{|f|}{M})\). So \(g_{f_{M}-f}\to 0\) in \(L^{p}\) by dominated convergence, since \(\mu(X\setminus\bigcup_{M\in\mathbb{N}}A_{M})=0\). Thus

\[\lim_{M\to\infty}\|f_{M}-f\|_{N^{1,p}(X)}^{p}=\lim_{M\to\infty}\|f-f_{M}\|_{L^{p}(X)}^{p}+\|g_{f-f_{M}}\|_{L^{p}(X)}^{p}=0.\]

### Good functions

We will mostly consider curve families \(\Gamma\) which are invariant under re-parametrization. That is, if \(\gamma\in\Gamma\), then any reparametrization of the curve is in \(\Gamma\) as well. With this in mind, we say that a collection \(\Gamma\) of curves is pre-compact, if every sequence \(\{\gamma_{i}\}_{i=1}^{\infty}\subset\Gamma\), where \(\gamma_{i}:[0,1]\to X\) is parametrized by constant speed, has a uniformly convergent subsequence.

Many arguments regarding modulus rely on extracting convergent subsequences. The basic requirement is some form of compactness, and the following formulation of Arzelà-Ascoli captures this.

**Lemma 2.9**.: _(Arzelà-Ascoli) Suppose that \(X\) is complete and that \(L\geq 1\). A collection \(\Gamma\) of curves of length at most \(L\) is pre-compact, if given the constant-speed parametrizations \(\gamma:[0,1]\to X\), the set_

\[A_{t}=\{\gamma(t):\gamma\in\Gamma,\gamma\text{ parametrized by constant speed}\}\]

_is pre-compact in \(X\), for every \(t\in[0,1]\)._

The proof is standard, see for instance the argument in [19, Theorem 4.25]. When \(X\) is a proper space, pre-compactness is the same as boundedness. However, to work in the case when \(X\) is simply a complete space, we introduce a notion of "good function", which allows us to circumvent the lack of properness.

Given a collection \(\Gamma\) of curves and a set \(A\), the collection of curves \(\Gamma^{A}\) contained in \(A\) is defined by \(\Gamma^{A}=\{\gamma\in\Gamma:\gamma\subset A\}\). If further \(\delta,L>0\) and \(g\in L^{p}(X)\), we define a subcollection by

\[\Gamma^{A}_{\delta,L}(g)=\{\gamma\in\Gamma^{A}:\int_{\gamma}g\,ds\leq L,\ \operatorname{diam}(\gamma)\geq\delta\}. \tag{2.10}\]

**Definition 2.11**.: A Borel function \(g:X\to[0,\infty]\) is called a _good function_, if the set of curves \(\Gamma^{A}_{\delta,L}(g)\) defined in (2.10) is pre-compact, for any family \(\Gamma\) of rectifiable curves, any bounded Borel set \(A\), and for all \(\delta,L>0\).

**Example 2.12**.: Suppose that \(X\) is compact.
For any \(\epsilon>0\), any Borel function \(g:X\to[\epsilon,\infty]\) is a good function. Indeed, let \(\Gamma\) be an arbitrary family of rectifiable curves and let \(A\subset X\) be a bounded Borel set. In this case, for any \(\delta,L>0\) we have \(\Gamma^{A}_{\delta,L}(g)\subset\{\gamma\subset A:\operatorname{Len}(\gamma)\leq L/\epsilon\}\). Then, \(A_{t}\subset X\), and \(A_{t}\) is automatically pre-compact as a subset of a compact space \(X\). Therefore, by Lemma 2.9 the collection \(\Gamma^{A}_{\delta,L}(g)\) is pre-compact.

The next lemma strengthens the Vitali-Carathéodory Lemma 2.4 by showing that any \(L^{p}\)-function can be slightly modified to become a lower semicontinuous good function.

**Lemma 2.13**.: _Assume \(X\) is complete. If \(g\in L^{p}(X)\) is non-negative, then for any \(\epsilon>0\) there exists a lower semicontinuous good function \(g_{\epsilon}\geq g\) so that \(\|g_{\epsilon}\|_{L^{p}(X)}\leq\|g\|_{L^{p}(X)}+\epsilon\)._

Proof.: Fix \(g,\epsilon\) as in the statement. By an application of Lemma 2.4, we can take \(g\) to be lower semicontinuous. Fix any point \(x_{0}\in X\). Choose compact sets \(\tilde{K}_{i}\subset B(x_{0},i)\), so that \(\mu(B(x_{0},i)\setminus\tilde{K}_{i})\leq\epsilon^{p}2^{-(i+1)p}\). This is possible since \(\mu(B(x_{0},i))<\infty\) and since the measure is Radon. Define \(K_{i}=\bigcup_{j=1}^{i}\tilde{K}_{j}\), so that \(K_{n}\subset K_{m}\) for \(n\leq m\) and so that \(\mu(B(x_{0},i)\setminus K_{i})\leq\epsilon^{p}2^{-(i+1)p}\). Define

\[g_{\epsilon}=g+\sum_{i=1}^{\infty}\left(\mathbb{1}_{B(x_{0},i)\setminus K_{i}}+\frac{\epsilon}{2^{i+1}\mu(B(x_{0},i))^{1/p}}\mathbb{1}_{B(x_{0},i)}\right).\]

This function is lower semicontinuous, since it is a sum of lower semicontinuous functions. Moreover,

\[\|g_{\epsilon}\|_{L^{p}(X)}\leq\|g\|_{L^{p}(X)}+\sum_{i=1}^{\infty}\left(\left\|\mathbb{1}_{B(x_{0},i)\setminus K_{i}}\right\|_{L^{p}(X)}+\left\|\frac{\epsilon}{2^{i+1}\mu(B(x_{0},i))^{\frac{1}{p}}}\mathbb{1}_{B(x_{0},i)}\right\|_{L^{p}(X)}\right)\leq\|g\|_{L^{p}(X)}+\epsilon.\]

We show that \(g_{\epsilon}\) is a good function. Let \(\Gamma\) be any family of rectifiable curves and let \(A\) be any bounded Borel set. There exists some \(j\in\mathbb{N}\) so that \(A\subset B(x_{0},j)\). By increasing the set \(A\), we get a larger collection of curves, and thus it suffices to consider \(A=B(x_{0},j)\).

The idea is to use the Arzelà-Ascoli Lemma 2.9 to show that the collection \(\Gamma^{A}_{\delta,L}(g_{\epsilon})\) defined in (2.10), for given \(\delta,L>0\), is pre-compact. First note that, if \(\gamma\in\Gamma^{A}_{\delta,L}(g_{\epsilon})\), then \(L\geq\int_{\gamma}g_{\epsilon}\,ds\). Since \(A=B(x_{0},j)\), we get \(g_{\epsilon}>\frac{\epsilon}{2^{j+1}\mu(B(x_{0},j))^{\frac{1}{p}}}\mathbb{1}_{A}\). Thus, since \(\gamma\subset A\),

\[\operatorname{Len}(\gamma)\leq\frac{2^{j+1}\mu(B(x_{0},j))^{\frac{1}{p}}}{\epsilon}\int_{\gamma}g_{\epsilon}\,ds\leq\frac{2^{j+1}\mu(B(x_{0},j))^{\frac{1}{p}}L}{\epsilon}=:L^{\prime}.\]

Therefore all the curves in \(\Gamma^{A}_{\delta,L}(g_{\epsilon})\) have length at most \(L^{\prime}\). Now, consider curves \(\gamma\in\Gamma^{A}_{\delta,L}(g_{\epsilon})\) which are parametrized by constant speed. It is enough to show that for each \(t\in[0,1]\) the set

\[A_{t}:=\{\gamma(t):\gamma\in\Gamma^{A}_{\delta,L}(g_{\epsilon})\text{ is parametrized by constant speed}\}\]

is pre-compact in \(X\). Since \(X\) is complete, it suffices to show that \(A_{t}\) is totally bounded. Fix \(\eta\in(0,\delta/2)\) for this purpose.
Choose \(N=j+\lfloor 4L/\eta\rfloor+1\). We claim that \(A_{t}\subset\{y\in X:d(y,K_{N})<\eta/4\}\). Assume, for the sake of contradiction, that \(d(\gamma(t),K_{N})\geq\eta/4\) for some \(\gamma\in\Gamma^{A}_{\delta,L}(g_{\epsilon})\). Since \(\operatorname{diam}(\gamma)\geq\delta>2\eta\), we must have a segment of \(\gamma\) of length at least \(\eta/4\) contained in \(A\setminus K_{N}\subset B(x_{0},j)\setminus K_{N}\). Thus, since \(K_{i}\subset K_{N}\) for each \(i\) with \(1\leq i\leq N\), and \(A\subset B(x_{0},i)\) for each \(j\leq i\leq N\), we get

\[\int_{\gamma}g_{\epsilon}\,ds\geq\sum_{i=j}^{N}\int_{\gamma}\mathbb{1}_{B(x_{0},i)\setminus K_{i}}\,ds\geq(N-j)\int_{\gamma}\mathbb{1}_{B(x_{0},j)\setminus K_{N}}\,ds\geq(\lfloor 4L/\eta\rfloor+1)\,\frac{\eta}{4}>L,\]

which is a contradiction. Thus, \(A_{t}\subset\{y\in X:d(y,K_{N})<\eta/4\}\). By covering \(K_{N}\) by finitely many balls of radius \(\eta/2\) and inflating these balls by a factor of \(2\), we get a finite covering of \(A_{t}\) by \(\eta\)-balls. This shows that the set \(A_{t}\) is totally bounded and thus pre-compact.

We will need the following lower semicontinuity of curve integrals. The result is classical, and its proof has appeared in many places, such as [15, Proposition 4]. For the reader's convenience we give a short proof.

**Lemma 2.14**.: _Let \(\gamma_{i}:[0,1]\to X\) be a sequence of rectifiable curves parametrized by constant speed, with \(\operatorname{Len}(\gamma_{i})\leq L\) for some \(L\in[0,\infty]\) and all \(i\in\mathbb{N}\), and suppose that \(\gamma_{i}\) converges uniformly to a curve \(\gamma:[0,1]\to X\). If \(g:X\to[0,\infty]\) is a lower semicontinuous function, then_

\[\int_{\gamma}g\,ds\leq\liminf_{i\to\infty}\int_{\gamma_{i}}g\,ds.\]

Proof.: By passing to a subsequence, we can assume that \(\lim_{i\to\infty}\int_{\gamma_{i}}g\,ds\) exists, and that \(\operatorname{Len}(\gamma_{i})\to L^{\prime}\) for some \(L^{\prime}\leq L\). Since \(\gamma_{i}\) are parametrized by constant speed, see (2.1), \(|\gamma_{i}^{\prime}|(t)=\operatorname{Len}(\gamma_{i})=:L_{i}\) for each \(i\in\mathbb{N}\) and every \(t\in[0,1]\), and \(\gamma_{i}\) is \(L_{i}\)-Lipschitz. Therefore, as a uniform limit, \(\gamma\) is \(L^{\prime}\)-Lipschitz, and \(|\gamma^{\prime}|(t)\leq L^{\prime}\) for a.e. \(t\in[0,1]\). Hence

\[\int_{\gamma}g\,ds=\int_{0}^{1}g(\gamma(t))|\gamma^{\prime}|(t)dt \leq\int_{0}^{1}g(\gamma(t))L^{\prime}dt\]
\[\leq\lim_{i\to\infty}\int_{0}^{1}g(\gamma_{i}(t))L_{i}dt\qquad\qquad\text{(by l.s.c. and Fatou's Lemma)}\]
\[=\lim_{i\to\infty}\int_{\gamma_{i}}g\,ds.\]

The next result is a well-known general method to construct a function with a given upper gradient, but in this generality it is crucial to use the notion of a good function.

**Proposition 2.15**.: _Let \(X\) be complete. Assume that \(V\subset X\) is a bounded open set and that \(g:X\to[0,\infty]\) is a good function. Then, the function \(u:X\to\mathbb{R}\) given by_

\[u(x):=\min\left(\inf_{\gamma:X\setminus V\rightsquigarrow x}\int_{\gamma}g\,ds,1\right)\]

_is in \(L^{p}(X)\) for \(1\leq p\leq\infty\), and has \(g\) as upper gradient. Moreover, \(u\) is lower semicontinuous._

**Remark 2.16**.: We adopt the usual conventions for the infimum. If \(x\in X\setminus V\), then the constant curve is allowed, hence \(u(x)=0\). If there are no curves \(\gamma:X\setminus V\rightsquigarrow x\), then \(u(x)=1\), since the infimum over an empty set is \(\infty\).

Proof of Proposition 2.15.: If we show that \(u\) is lower semicontinuous, then measurability will follow and we will have \(u\in L^{p}(X)\), because, by Remark 2.16, \(u\leq 1_{V}\).
Proving that \(g\) is an upper gradient for \(u\) is a classical argument, see [3, Lemma 3.1]. We recall this argument now. If \(\gamma:[0,1]\to X\) and \(\gamma_{0}:X\setminus V\rightsquigarrow\gamma(0)\) are any rectifiable curves, then we can form a curve \(\gamma_{1}:X\setminus V\rightsquigarrow\gamma(1)\) by concatenating them. Thus, \(\int_{\gamma_{1}}g\,ds=\int_{\gamma_{0}}g\,ds+\int_{\gamma}g\,ds\). By the definition of \(u\), we get \(u(\gamma(1))\leq\int_{\gamma_{1}}g\,ds\leq\int_{\gamma_{0}}g\,ds+\int_{\gamma}g\,ds\). Taking the infimum over \(\gamma_{0}\) yields \(u(\gamma(1))\leq\inf_{\gamma_{0}}\int_{\gamma_{0}}g\,ds+\int_{\gamma}g\,ds.\) Since \(u(\gamma(1))\leq 1\), we have \(u(\gamma(1))\leq u(\gamma(0))+\int_{\gamma}g\,ds\). By reversing the curve, we obtain \(u(\gamma(0))\leq u(\gamma(1))+\int_{\gamma}g\,ds\). From these two inequalities, we get inequality (1.1).

Thus the more important part of the proof is to show that \(u\) is lower semicontinuous. Arguing by contradiction, assume that we can find some sequence \(x_{i}\) converging to \(x\) in \(X\), with the property \(\lim_{i\to\infty}u(x_{i})<u(x)-\Delta\) for some \(\Delta>0\). Since \(u\) is non-negative, we must have that \(u(x)>\Delta\), hence \(x\in V\). Since \(V\) is open, there is some ball \(B(x,2\delta)\subset V\), with \(\delta>0\). By convergence, we can throw away finitely many terms and assume that \(x_{i}\in B(x,\delta)\) for all \(i\in\mathbb{N}\). We can also pass to a subsequence so that \(u(x_{i})<u(x)-\Delta\) for every \(i\in\mathbb{N}\). In particular, we have \(u(x_{i})<1\). Hence, we can choose curves \(\gamma_{i}:X\setminus V\rightsquigarrow x_{i}\) with

\[\int_{\gamma_{i}}g\,ds\leq u(x)-\Delta\leq 1. \tag{2.17}\]

Assume \(\gamma_{i}:[0,1]\to X\) has the constant-speed parametrization and let

\[t_{i}:=\sup\left\{t\in[0,1]:d(\gamma_{i}(t),X\setminus V)\leq\frac{\delta}{2i}\right\},\]

which is the last time \(\gamma_{i}(t)\) is within \(\frac{\delta}{2i}\) of \(X\setminus V\). Let \(\tilde{\gamma}_{i}\) be the constant speed parametrization of \(\gamma_{i}|_{[t_{i},1]}\) on \([0,1]\), so that \(\tilde{\gamma}_{i}\subset V\), for each \(i\in\mathbb{N}\). By (2.17) and since \(\tilde{\gamma}_{i}\) is a subcurve of \(\gamma_{i}\), we have \(\int_{\tilde{\gamma}_{i}}g\,ds\leq 1\). Since \(d(x_{i},X\setminus V)\geq\delta\), we have \(\operatorname{diam}(\tilde{\gamma}_{i})\geq\delta/2\). Thus \(\tilde{\gamma}_{i}\in\Gamma^{V}_{\delta/2,1}(g)\), where \(\Gamma\) is the collection of all rectifiable curves. Since \(g\) is a good function, and since the \(\tilde{\gamma}_{i}\) are parametrized by constant speed, there is a subsequence \(\{\tilde{\gamma}_{i_{k}}\}_{k\in\mathbb{N}}\) converging uniformly to some continuous function \(\gamma:[0,1]\to X\). Next, by Lemma 2.14, we have

\[\int_{\gamma}g\,ds\leq\liminf_{k\to\infty}\int_{\tilde{\gamma}_{i_{k}}}g\,ds\leq u(x)-\Delta.\]

By construction, \(d(\tilde{\gamma}_{i_{k}}(0),X\setminus V)\leq\frac{\delta}{2i_{k}}\), and \(\tilde{\gamma}_{i_{k}}(1)=x_{i_{k}}\). Also, \(\lim_{k\to\infty}\tilde{\gamma}_{i_{k}}(0)=\gamma(0)\) and \(\lim_{k\to\infty}\tilde{\gamma}_{i_{k}}(1)=\gamma(1)\). Therefore, sending \(k\to\infty\) and using \(x_{i_{k}}\to x\), we get \(\gamma(0)\in X\setminus V\) and \(\gamma(1)=x\), namely, \(\gamma:X\setminus V\rightsquigarrow x\). This leads to

\[u(x)\leq\int_{\gamma}g\,ds\leq u(x)-\Delta,\]

which is a contradiction.

### Discrete paths

We will be considering discrete path approximations to curves.
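To make the preceding construction concrete, here is a simple illustration of our own (it is not part of the original argument). Take \(X=\mathbb{R}\) with the Euclidean metric, \(V=(-1,1)\), and \(g\equiv 1\); constant functions bounded away from zero are good functions here, since bounded subsets of \(\mathbb{R}\) are pre-compact (compare Example 2.12 and Lemma 2.9). Every curve \(\gamma:X\setminus V\rightsquigarrow x\) satisfies \(\int_{\gamma}g\,ds=\operatorname{Len}(\gamma)\geq d(x,X\setminus V)\), and straight segments attain this bound, so the function of Proposition 2.15 is the truncated distance function

\[u(x)=\min\big(d(x,X\setminus V),1\big)=\max(0,1-|x|),\]

which is lower semicontinuous (indeed Lipschitz) and has \(g\equiv 1\) as an upper gradient, as the proposition predicts. The discrete constructions below produce such truncated distance functions using paths instead of curves.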
A _(discrete) path_ is a sequence \(P=(p_{0},\ldots,p_{n})\), with \(n\geq 0\), and which does not repeat. We identify \(P\) sometimes with the image set, for example in writing \(p\in P\) to state that a point \(p\) lies on the path. When \(n\geq 1\) define its _mesh size_ by \(\operatorname{Mesh}(P)=\max_{k=0,\ldots n-1}d(p_{k},p_{k+1})\) and its _length_ by \(\operatorname{Len}(P)=\sum_{k=0}^{n-1}d(p_{k},p_{k+1})\). The _diameter_, \(\operatorname{diam}(P)\), is the diameter as a set of points. Note that the path \(P=(p_{0})\) consisting of only one point, and with diameter, length and mesh equal to zero, is permitted. Given a function \(g:X\to\mathbb{R}\) we define the _discrete integral_ of \(g\) along \(P\) by

\[\int_{P}g:=\sum_{k=0}^{n-1}g(p_{k})d(p_{k},p_{k+1}).\]

Again, if \(P\) is a single point, then \(\int_{P}g=0\).

Discrete paths can be extended to curves after we pass to a larger ambient space. For such arguments, we introduce an isometric Kuratowski embedding \(\iota:X\to\ell_{\infty}(\mathbb{N})\) into the sequence space \(\ell_{\infty}(\mathbb{N})\). We will fix such an embedding for the remainder of this subsection. Given a discrete path \(P\) we call a curve \(\gamma:[0,1]\to\ell_{\infty}(\mathbb{N})\) its _linearly interpolating curve_ if it is constructed as follows. If \(\operatorname{Len}(P)=0\), i.e. when the discrete path consists of a single point \(P=(p_{0})\), then define \(\gamma(t)=p_{0}\) for all \(t\in[0,1]\). Next, we define the interpolant when \(\operatorname{Len}(P)>0\). Let \(t_{0}=0\) and define

\[t_{l}:=\sum_{k=0}^{l-1}\frac{d(p_{k},p_{k+1})}{\operatorname{Len}(P)}, \tag{2.18}\]

for \(l=1,\ldots,n\). We refer to \(t_{0},\ldots,t_{n}\) as the time-partition points associated to \(P\), which are only defined if \(\operatorname{Len}(P)>0\). Define \(\gamma(t_{l})=p_{l}\) and define

\[\gamma(t)=\frac{t_{l+1}-t}{t_{l+1}-t_{l}}p_{l}+\frac{t-t_{l}}{t_{l+1}-t_{l}}p_{l+1}, \tag{2.19}\]

for \(t\in(t_{l},t_{l+1})\). Note that since the path is simple, \(p_{l}\neq p_{l+1}\) and \(t_{l}\neq t_{l+1}\) for all \(l=0,\ldots,n-1\). With this construction, we have the following lemma.

**Lemma 2.20**.: _Let \(P\) be a discrete path and \(\gamma\) its linearly interpolating curve. We have the following._

1. \(\operatorname{Len}(\gamma)=\operatorname{Len}(P)\)_._
2. \(d(\gamma(t),X)\leq\operatorname{Mesh}(P)\) _for every_ \(t\in[0,1]\)_._
3. \(\gamma\) _is parametrized by constant speed._

Proof.: Since \(\gamma\) is piecewise linear, we can compute its length by adding up the linear segments, and \(\operatorname{Len}(\gamma)=\operatorname{Len}(P)\) follows. If \(\operatorname{Len}(P)=0\), then \(\gamma\) is a constant curve and \(d(\gamma,X)=0=\operatorname{Mesh}(P)\); further, \(\gamma\) has constant (zero) speed. Suppose then that \(\operatorname{Len}(P)>0\) and let \(t_{l}\), for \(l=0,\ldots,n\), be the time-partition points. For every \(t\in[0,1]\), there is an \(l=0,\ldots,n-1\) for which \(t\in[t_{l},t_{l+1}]\). Hence, by (2.19),

\[d(\gamma(t),X)\leq\min(d(\gamma(t),p_{l}),d(\gamma(t),p_{l+1}))\leq d(p_{l},p_{l+1})\leq\operatorname{Mesh}(P),\]

for each \(t\in[0,1]\). Next, if \(t\in(t_{l},t_{l+1})\) for some \(l=0,\ldots,n-1\), then \(\gamma\) is linear in a neighborhood of \(t\), and has speed \(d(p_{l},p_{l+1})/(t_{l+1}-t_{l})=\operatorname{Len}(P)\), by (2.18) and (2.19). Therefore, \(\gamma\) has speed \(\operatorname{Len}(P)\) at all points \(t\not\in\{t_{0},\ldots,t_{n}\}\), in other words, it is parametrized by constant speed.
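A toy computation (ours, added for orientation) may help fix the conventions just introduced. In \(X=\mathbb{R}^{2}\), consider the discrete path \(P=\big((0,0),(1,0),(1,1)\big)\). Both jumps have length \(1\), so

\[\operatorname{Mesh}(P)=1,\qquad\operatorname{Len}(P)=2,\qquad\int_{P}g=g(0,0)+g(1,0);\]

note that the terminal point \((1,1)\) does not contribute to the discrete integral. The time-partition points from (2.18) are \(t_{0}=0\), \(t_{1}=1/2\), \(t_{2}=1\), and the linearly interpolating curve traverses the two segments at constant speed \(\operatorname{Len}(P)=2\), in accordance with Lemma 2.20.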
If \(P_{i}=(p_{0}^{i},\ldots,p_{n(i)}^{i})\) is a sequence of discrete paths, we say that it converges to a curve \(\gamma:[0,1]\to X\), if \(\lim_{i\to\infty}\operatorname{Mesh}(P_{i})=0\) and the linear interpolation curves \(\gamma_{i}\) converge to \(\gamma\) uniformly in \(\ell_{\infty}(\mathbb{N})\), in the sense that:

\[\lim_{i\to\infty}\sup_{t\in[0,1]}\|\gamma_{i}(t)-\gamma(t)\|_{\infty}=0. \tag{2.21}\]

Moreover, one can show that this notion of convergence does not depend on the embedding into \(\ell_{\infty}(\mathbb{N})\).

We will need the following variant of Lemma 2.14, in the context of discrete paths.

**Lemma 2.22**.: _Assume that \(P_{i}=(p_{0}^{i},\ldots,p_{n(i)}^{i})\) is a sequence of discrete paths converging to a rectifiable curve \(\gamma:[0,1]\to X\), in the sense of (2.21). Also, assume that \(\liminf_{i\to\infty}\operatorname{Len}(P_{i})<\infty\). Then, for any lower semicontinuous non-negative function \(g:X\to[0,\infty]\) we have_

\[\int_{\gamma}g\,ds\leq\liminf_{i\to\infty}\int_{P_{i}}g.\]

Proof.: If \(\operatorname{Len}(\gamma)=0\), then the inequality is trivial. So we may assume that \(\operatorname{Len}(\gamma)>0\), which, by (2.21), implies that \(\operatorname{Len}(P_{i})>0\) for all but finitely many \(i\in\mathbb{N}\). For convenience, we pass to a subsequence so that \(\operatorname{Len}(P_{i})>0\) for all \(i\in\mathbb{N}\).

We first show that if the Lemma has been proven for \(g\) continuous, then it follows for lower semicontinuous \(g\). Indeed, we can find an increasing sequence \(\{g_{l}\}_{l\in\mathbb{N}}\) of non-negative continuous functions converging pointwise to \(g\) (see [14, Corollary 4.2.3]). For each \(l\) we have

\[\int_{\gamma}g_{l}\,ds\leq\liminf_{i\to\infty}\int_{P_{i}}g_{l}\leq\liminf_{i\to\infty}\int_{P_{i}}g.\]

Send \(l\to\infty\) and use monotone convergence to conclude the lemma for all lower semicontinuous \(g\).

Hence, assume that \(g\) is continuous. Denote the interpolating curves for \(P_{i}\) by \(\gamma_{i}\), and use superscripts of the form \(t_{l}^{i}\) when defining \(\gamma_{i}\) as in (2.18) and (2.19). Fix \(\epsilon>0\). Extend \(g\) to be continuous on \(\ell_{\infty}(\mathbb{N})\) using the Tietze extension theorem; see for example [18]. Since the image of \(\gamma\) is compact, and \(g\) continuous, we can find a \(\delta>0\) so that if \(x,y\in\ell_{\infty}(\mathbb{N})\) and \(\max\left(d(x,\gamma),d(y,\gamma),d(x,y)\right)<\delta\) then \(|g(x)-g(y)|<\epsilon\). Choose an \(N\) so that for \(i\geq N\), we have \(\operatorname{Mesh}(P_{i})<\delta/2\) and \(d(\gamma_{i}(t),\gamma(t))<\delta/2\) for all \(t\in[0,1]\). Next, let \(i\geq N\) be arbitrary. For every \(t\in[t_{l}^{i},t_{l+1}^{i}]\), we have \(d(\gamma_{i}(t),\gamma_{i}(t_{l}^{i}))\leq\operatorname{Mesh}(P_{i})<\delta\) and \(\gamma_{i}(t_{l}^{i})=p_{l}^{i}\in P_{i}\subset X\). Thus, by the choice of \(\delta\), we get \(g(\gamma_{i}(t))\leq g(\gamma_{i}(t_{l}^{i}))+\epsilon\).
Integrating this inequality and using (2.18), we get

\[\int_{t_{l}^{i}}^{t_{l+1}^{i}}g(\gamma_{i}(t))\operatorname{Len}(P_{i})dt\leq\operatorname{Len}(P_{i})(t_{l+1}^{i}-t_{l}^{i})(g(p_{l}^{i})+\epsilon)=d(p_{l}^{i},p_{l+1}^{i})(g(p_{l}^{i})+\epsilon).\]

Finally, by summing over \(l=0,\ldots,n(i)-1\), Lemma 2.20 gives

\[\int_{\gamma_{i}}g\,ds=\int_{0}^{1}g(\gamma_{i}(t))\operatorname{Len}(\gamma_{i})dt\leq\sum_{l=0}^{n(i)-1}d(p_{l}^{i},p_{l+1}^{i})(g(p_{l}^{i})+\epsilon)=\int_{P_{i}}g+\epsilon\operatorname{Len}(P_{i}).\]

By taking a limit inferior with \(i\to\infty\), we get

\[\liminf_{i\to\infty}\int_{\gamma_{i}}g\leq\liminf_{i\to\infty}\int_{P_{i}}g+\epsilon\liminf_{i\to\infty}\operatorname{Len}(P_{i})\,. \tag{2.23}\]

Next, by Lemma 2.20, each \(\gamma_{i}\) is parametrized by constant speed \(\operatorname{Len}(P_{i})\). Thus, the \(\gamma_{i}\) are \(\operatorname{Len}(P_{i})\)-Lipschitz. Let \(L=\liminf_{i\to\infty}\operatorname{Len}(P_{i})\). Then, from uniform convergence, we get that \(\gamma\) is \(L\)-Lipschitz. Together with the fact that the functions \(h_{i}(t)=g(\gamma_{i}(t))\) converge uniformly to the function \(g(\gamma(t))\), since \(g\) is continuous, we get

\[\int_{\gamma}g\,ds \leq\int_{0}^{1}g(\gamma(t))Ldt\]
\[=\lim_{i\to\infty}\int_{0}^{1}g(\gamma_{i}(t))Ldt\]
\[\leq\liminf_{i\to\infty}\int_{0}^{1}g(\gamma_{i}(t))\mathrm{Len}(P_{i})dt\]
\[=\liminf_{i\to\infty}\int_{\gamma_{i}}g\,ds\leq\liminf_{i\to\infty}\int_{P_{i}}g+\epsilon L\,.\]

Since \(\epsilon>0\) was arbitrary, the claim follows.

We will need the following compactness statement for discrete paths.

**Lemma 2.24**.: _If \(\{P_{i}\}_{i\in\mathbb{N}}\) is a sequence of paths in a complete metric space \(X\) satisfying_

1. \(\lim_{i\to\infty}\mathrm{Mesh}(P_{i})=0\)_;_
2. \(\mathrm{Len}(P_{i})\leq S\) _for some_ \(S\in(0,\infty)\) _and all_ \(i\in\mathbb{N}\)_; and_
3. _for any_ \(\tau>0\) _there is a compact set_ \(K_{\tau}\subset X\) _such that_ \(\max_{p\in P_{i}}d(p,K_{\tau})\leq\tau\) _for all_ \(i\in\mathbb{N}\)_,_

_then a subsequence of \(P_{i}\) converges to a curve \(\gamma:[0,1]\to X\) in the sense of (2.21)._

Proof.: For each \(i\in\mathbb{N}\) let \(\gamma_{i}:[0,1]\to\ell_{\infty}(\mathbb{N})\) be the curve linearly interpolating \(P_{i}\). Lemma 2.20 states that we have \(\mathrm{Len}(\gamma_{i})\leq S\) and that the curves \(\gamma_{i}\) are parametrized by constant speed.

First, we show that a subsequence of \((\gamma_{i})_{i\in\mathbb{N}}\) converges uniformly to some curve \(\gamma:[0,1]\to\ell_{\infty}(\mathbb{N})\). Fix \(t\in[0,1]\). Let \(A_{t}=\{\gamma_{i}(t):i\in\mathbb{N}\}\). The claim follows from Lemma 2.9, if we show that \(A_{t}\) is pre-compact. Since \(\ell_{\infty}(\mathbb{N})\) is complete, it suffices to show that \(A_{t}\) is totally bounded. Fix \(\tau>0\) and choose \(N\in\mathbb{N}\) so that \(\mathrm{Mesh}(P_{i})\leq\tau/8\) for all \(i\geq N\), and a compact set \(K_{\tau/8}\) as in the statement. Then, for \(i\geq N\), we have

\[d(\gamma_{i}(t),K_{\tau/8})\leq\mathrm{Mesh}(P_{i})+\max_{p\in P_{i}}d(p,K_{\tau/8})\leq\tau/4.\]

Set \(K^{\prime}=K_{\tau/8}\cup\bigcup_{j=1}^{N}\gamma_{j}\), which is compact. Since \(K^{\prime}\) is compact, it can be covered by a finite collection \(\mathcal{B}\) of balls of radius \(\tau/2\). Every point \(\gamma_{i}(t)\in A_{t}\) has \(d(\gamma_{i}(t),K^{\prime})\leq\tau/4\), and thus by inflating each ball in \(\mathcal{B}\) by a factor of two we can cover \(A_{t}\) by finitely many balls of radius \(\tau\).
Therefore, \(A_{t}\) is totally bounded and pre-compact as desired. Thus, a subsequence \(\gamma_{i_{k}}\) converges uniformly to some curve \(\gamma:[0,1]\to\ell_{\infty}(\mathbb{N})\). Further, for any \(t\in[0,1]\) we have \(d(\gamma(t),X)=\lim_{k\to\infty}d(\gamma_{i_{k}}(t),X)\leq\lim_{k\to\infty}\mathrm{Mesh}(P_{i_{k}})=0\) by Lemma 2.20. Thus, the image of \(\gamma\) is contained in \(X\) and the claim follows.

Discrete paths can be used to conveniently define functions which have given upper gradients, in the spirit of Proposition 2.15.

**Proposition 2.25**.: _Suppose \(X\) is a metric space. Let \(\delta,M>0\) and let \(E\subset X\) be a non-empty subset. Let \(g:X\to[0,\infty)\) be a continuous function and \(h:E\to\mathbb{R}\) a bounded function. Define_

\[f(y):=\min\left(\inf_{P}\left\{h(p_{0})+\int_{P}g\right\},M\right),\]

_where the infimum is taken over all discrete paths \(P=(p_{0},\dots,p_{n})\) with \(p_{0}\in E,p_{n}=y\) and \(\mathrm{Mesh}(P)\leq\delta\)._

_Then, \(f\) is locally Lipschitz, and \(g\) is an upper gradient of \(f\). Moreover, if \(h\) is constant and less than or equal to \(M\), then \(f\equiv h\) on \(E\)._

Proof.: Fix \(x,y\in X\) with \(d(x,y)\leq\delta\). Let \(P=(p_{0},\dots,p_{n})\) be a discrete path with \(p_{0}\in E,p_{n}=y\) and \(\mathrm{Mesh}(P)\leq\delta\). Let \(p_{n+1}=x\) and set \(P^{\prime}=(p_{0},\dots,p_{k})\), where \(k=\min\{j\geq 0:p_{j}=x\}\). Thus, \(P^{\prime}\) is a discrete path with \(p_{0}\in E\), \(p_{k}=x\), and \(\mathrm{Mesh}(P^{\prime})\leq\delta\). In particular,

\[f(x)\leq h(p_{0})+\int_{P^{\prime}}g\leq h(p_{0})+\int_{P}g+g(y)d(x,y).\]

Taking the infimum over \(P\) and comparing with \(M\) we get

\[f(x)\leq f(y)+g(y)d(x,y). \tag{2.26}\]

By switching the roles of \(x\) and \(y\), we find that

\[|f(y)-f(x)|\leq\max(g(x),g(y))d(x,y), \tag{2.27}\]

whenever \(d(x,y)\leq\delta\). Since \(g\) is continuous, it is also locally bounded. Hence, (2.27) implies that \(f\) is locally Lipschitz.

Next, we want to show that \(g\) is an upper gradient for \(f\). Let \(\gamma:[0,1]\to X\) be a curve with constant speed and length \(L\). Fix a partition \(s_{0}=0<s_{1}<\cdots<s_{k}=1\), fine enough that \(d(\gamma(s_{j-1}),\gamma(s_{j}))\leq\delta\) for each \(j\), so that (2.26) applies. Then,

\[\sum_{j=1}^{k}g(\gamma(s_{j-1}))L|s_{j}-s_{j-1}| =\sum_{j=1}^{k}g(\gamma(s_{j-1}))\operatorname{Len}\left(\gamma|_{[s_{j-1},s_{j}]}\right)\]
\[\geq\sum_{j=1}^{k}g(\gamma(s_{j-1}))d\left(\gamma(s_{j-1}),\gamma(s_{j})\right)\]
\[\geq\sum_{j=1}^{k}\left(f(\gamma(s_{j}))-f(\gamma(s_{j-1}))\right)\qquad\text{(by (2.26))}\]
\[=f(\gamma(s_{k}))-f(\gamma(s_{0})).\]

Taking the limit as the mesh of the partition goes to zero, we find that

\[f(\gamma(1))-f(\gamma(0))\leq\int_{\gamma}g\,ds.\]

Running \(\gamma\) in reverse, substituting \(s\) with \(1-s\), we find by a similar argument that

\[|f(\gamma(1))-f(\gamma(0))|\leq\int_{\gamma}g\,ds.\]

This shows that \(g\) is an upper gradient for \(f\).

Finally, assume that \(h\) is constant and less than or equal to \(M\). Then, for \(y\in E\), we may choose the constant discrete path \(P=(y)\), and get that \(f(y)\leq h(y)=h\). Conversely, for any non-constant discrete path \(P=(p_{0},\ldots,p_{n})\) with \(p_{0}\in E\), \(p_{n}=y\), and \(\operatorname{Mesh}(P)\leq\delta\), we have

\[h(p_{0})+\int_{P}g\geq h(p_{0})=h.\]

Taking the infimum over such paths, and then the minimum with \(M\), we find that \(f(y)\geq h\). Therefore, \(f(y)=h\).

### Good sequences of functions

For sequences of discrete paths it often becomes convenient to construct a "good sequence of functions", which approximates a given function.
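To preview the definition below with a minimal example of our own (it does not appear in the original): if \(X\) is compact and \(\eta>0\), then the constant sequence \(g_{i}\equiv\eta\) is a good sequence. The monotonicity and positivity requirements below are immediate, and for the compactness requirement note that any discrete paths \(P_{i}\) with \(\int_{P_{i}}g_{i}\leq L\) satisfy

\[\operatorname{Len}(P_{i})=\frac{1}{\eta}\int_{P_{i}}\eta\leq\frac{L}{\eta},\]

so that Lemma 2.24, applied with \(K_{\tau}=X\), yields a subsequence converging to a curve in the sense of (2.21). The content of Proposition 2.29 below is that such sequences can be produced, increasing to a controlled majorant of a given function, without assuming compactness of \(X\).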
**Definition 2.28**.: We say that a sequence of continuous functions \((g_{i})_{i\in\mathbb{N}}\), \(g_{i}:X\to[0,\infty)\), is a _good sequence of functions_, if it satisfies the following properties:

1. Increasing: \(g_{i}(x)\leq g_{j}(x)\) for each \(i\leq j\) and all \(x\in X\).
2. Positivity: For any bounded set \(V\subset X\), there exists an \(\eta_{V}>0\) so that \(g_{i}(x)\geq\eta_{V}\) for every \(i\in\mathbb{N}\) and every \(x\in V\).
3. "Goodness": For any bounded set \(V\subset X\), any \(\delta,L>0\), and any sequence \((P_{i})_{i\in\mathbb{N}}\) of discrete paths \(P_{i}\subset V\) with \(\lim_{i\to\infty}\operatorname{Mesh}(P_{i})=0\), and such that
   1. \(\int_{P_{i}}g_{i}\leq L\),
   2. \(\operatorname{diam}(P_{i})\geq\delta\),
   there is a subsequence converging to a curve \(\gamma\) in the sense of (2.21).

**Proposition 2.29**.: _Let \((X,d,\mu)\) be a complete separable metric measure space, where \(\mu\) is a Borel measure that is positive and finite on \(r\)-balls with \(0<r<\infty\). Assume \(p\in[1,\infty)\) and let \(g\) be a given non-negative lower semicontinuous function. For every \(\epsilon>0\), there exists a good sequence of bounded Lipschitz continuous functions \((\tilde{g}_{i})_{i\in\mathbb{N}}\) converging pointwise to a function \(\tilde{g}\) that is a lower semicontinuous good function, and so that for every bounded set \(A\subset X\), there exists some \(\eta_{A}>0\) so that_

\[\tilde{g}(x)\geq g(x)+\eta_{A},\qquad\text{for all }x\in A, \tag{2.30}\]

_and_

\[\int_{X}\tilde{g}^{p}d\mu\leq\int_{X}g^{p}d\mu+\epsilon. \tag{2.31}\]

_Moreover, if \(K\subset X\) is a compact set on which \(g|_{K}\) is bounded, then we can choose \(\tilde{g}\) so that \(\tilde{g}|_{K}\) is bounded._

Proof.: Let \(x_{0}\in X\) be arbitrary, and let \(\epsilon\in(0,1)\). For simplicity, if \(K\) is provided, scale the metric so that \(K\subset B(x_{0},1)\). Let \(\psi_{i}(x)=\max(0,\min(i+1-d(x_{0},x),1))\) so that \(\psi_{i}|_{B(x_{0},i)}=1\) and \(\psi_{i}|_{X\setminus B(x_{0},i+1)}=0\). One directly observes that \(\psi_{i}\) is Lipschitz for every \(i\in\mathbb{N}\). Define \(E_{i}\) to be an increasing sequence of compact sets so that \(\mu(B(x_{0},i+1)\setminus E_{i})\leq\epsilon^{p}2^{-4pi}\). If the set \(K\) is provided as in the 'Moreover' part of the statement, then we choose \(E_{i}\) so that \(K\subset E_{i}\) for each \(i\). These sets \(E_{i}\) can be constructed since \(\mu\) is Radon.

By lower semicontinuity, we may choose an increasing sequence of Lipschitz continuous bounded functions \(g_{i}\) converging to \(g\). The standard construction is to let \(g_{i}(x):=\inf\{g(y)+id(x,y):y\in X\}\), see [14, Proposition 4.2.2]. We modify these functions as follows:

\[\tilde{g}_{i}(x):=g_{i}(x)+\sum_{n=1}^{i}\left(n\min(1,d(x,E_{n}))+\frac{\epsilon}{8^{n}(\mu(B(x_{0},n+1))+1)}\right)\psi_{n}(x). \tag{2.32}\]

Note that \(\tilde{g}_{i}\) is Lipschitz continuous and bounded as well. Also, define

\[\tilde{g}(x):=g(x)+\sum_{n=1}^{\infty}\left(n\min(1,d(x,E_{n}))+\frac{\epsilon}{8^{n}(\mu(B(x_{0},n+1))+1)}\right)\psi_{n}(x).\]

Then, it holds that \(\lim_{i\to\infty}\tilde{g}_{i}(x)=\tilde{g}(x)\) and \(g(x)\leq\tilde{g}(x)\) for every \(x\in X\). If the set \(K\) was provided in the 'Moreover' part of the proposition, then for every \(x\in K\), we have \(d(x,E_{n})=0\) and \(\psi_{n}(x)=1\), so \(\tilde{g}\leq g+\epsilon\) is bounded on \(K\).

We begin by verifying Inequality (2.31).
By Minkowski's Inequality,

\[\|\tilde{g}\|_{L^{p}(X)}\leq\|g\|_{L^{p}(X)}+\sum_{n=1}^{\infty}\left(\|n\min(1,d(\cdot,E_{n}))\psi_{n}\|_{L^{p}(X)}+\left\|\frac{\epsilon}{8^{n}(\mu(B(x_{0},n+1))+1)}\psi_{n}\right\|_{L^{p}(X)}\right).\]

Note that

\[\min(1,d(\cdot,E_{n}))\psi_{n}\leq 1_{B(x_{0},n+1)\setminus E_{n}}\qquad\text{and}\qquad\psi_{n}\leq 1_{B(x_{0},n+1)}.\]

Therefore,

\[\|\tilde{g}\|_{L^{p}(X)}\leq\|g\|_{L^{p}(X)}+\epsilon\sum_{n=1}^{\infty}(n2^{-4n}+8^{-n})\leq\|g\|_{L^{p}(X)}+\epsilon.\]

Raising both sides to the power \(p\), applying the mean value theorem to the function \(x\mapsto x^{p}\), and using \(0<\epsilon<1\), we get that

\[\|\tilde{g}\|_{L^{p}(X)}^{p}\leq\|g\|_{L^{p}(X)}^{p}+\epsilon p\left(\|g\|_{L^{p}(X)}+1\right)^{p-1}.\]

Finally, replacing \(\epsilon\) with \(\epsilon p^{-1}(\|g\|_{L^{p}(X)}+1)^{-(p-1)}\) yields the desired estimate (2.31).

Also, let \(\eta_{i}:=\epsilon 8^{-i}(\mu(B(x_{0},i+1))+1)^{-1}\); then \(\tilde{g}_{i}|_{B(x_{0},i)}\geq g_{i}|_{B(x_{0},i)}+\eta_{i}\) for every \(i\in\mathbb{N}\), and more generally \(\tilde{g}_{j}|_{B(x_{0},i)}\geq g_{j}|_{B(x_{0},i)}+\eta_{i}\) for every \(j\geq i\), since the sum in (2.32) then contains the term with \(n=i\). Further, we get that for any bounded set \(A\subset X\) there is some \(i\) so that \(A\subset B(x_{0},i)\) and so that \(\tilde{g}|_{A}\geq g|_{A}+\eta_{i}\).

We now show goodness for the sequence \(\tilde{g}_{i}\). Let \(L,\delta>0\), let \(A\) be a bounded set, and consider any sequence \((P_{i})_{i\in\mathbb{N}}\) of discrete paths \(P_{i}\subset A\) with \(\lim_{i\to\infty}\operatorname{Mesh}(P_{i})=0\), such that

1. \(\int_{P_{i}}\tilde{g}_{i}\leq L\),
2. \(\operatorname{diam}(P_{i})\geq\delta\).

By passing to a subsequence, we can assume \(\operatorname{Mesh}(P_{i})\leq\frac{1}{i}\). Since \(P_{i}\subset A\) and \(A\subset B(x_{0},i_{0})\) for some \(i_{0}\in\mathbb{N}\), the previous paragraph gives \(\tilde{g}_{i}|_{P_{i}}\geq\eta_{i_{0}}=:\eta_{A}>0\) for all \(i\geq i_{0}\); after discarding finitely many indices, we may assume this holds for every \(i\). Let \(L^{\prime}=\frac{L}{\eta_{A}}\). Then

\[\operatorname{Len}(P_{i})\frac{1}{L^{\prime}}=\int_{P_{i}}\frac{\eta_{A}}{L}\leq\int_{P_{i}}\frac{1}{L}\tilde{g}_{i}\leq 1.\]

Thus, \(\operatorname{Len}(P_{i})\leq L^{\prime}\). By Lemma 2.24 it suffices to prove that for every \(\tau\in(0,1)\) there is a compact set \(K_{\tau}\) for which \(\sup_{p\in P_{i}}d(p,K_{\tau})\leq\tau\) for all \(i\in\mathbb{N}\). Without loss of generality, assume \(\tau\in(0,\delta/2)\).

Since \(P_{i}\subset A\) for all \(i\in\mathbb{N}\) and since \(A\) is bounded, there is some \(T\) so that \(P_{i}\subset B(x_{0},T)\) for all \(i\in\mathbb{N}\). Choose \(N=\lfloor 2^{4}\max(L,1)/\tau^{2}\rfloor+T+1\). Let \(K_{\tau}=E_{N}\cup\bigcup_{i=1}^{N}P_{i}\). Then \(K_{\tau}\) is compact, and it suffices to show that \(\sup_{p\in P_{i}}d(p,K_{\tau})\leq\tau\). This is clear for \(i=1,\ldots,N\), thus consider \(i>N\). Suppose for the sake of contradiction that there is a point \(p_{k}^{i}\in P_{i}\) for some \(k=0,\ldots,n(i)\) with \(d(p_{k}^{i},K_{\tau})>\tau\). Note that \(\operatorname{diam}(P_{i})\geq\delta>2\tau\) and \(\operatorname{Mesh}(P_{i})\leq 1/i\leq\tau/8\). Consider the maximal interval \([k_{0},k_{1}]\) containing \(k\) and so that the corresponding subpath \((p_{l}^{i})_{l=k_{0}}^{k_{1}}\) stays in \(B(p_{k}^{i},\tau/2)\). In particular, for \(0\leq k_{0}\leq l\leq k_{1}\leq n(i)\), we have \(d(p_{l}^{i},K_{\tau})\geq\tau/2\). Furthermore, by maximality of the interval \([k_{0},k_{1}]\), the path must exit the ball. Hence, \(p_{l}^{i}\not\in B(p_{k}^{i},\tau/2)\), for either \(l=k_{0}-1\) or \(l=k_{1}+1\), and

\[\sum_{l=k_{0}}^{k_{1}-1}d(p_{l}^{i},p_{l+1}^{i})\geq\tau/2-1/i\geq\tau/4. \tag{2.33}\]

Now, take any index \(l\in\{k_{0},\ldots,k_{1}\}\) (recall that \(i>N\)).
Since \(d(p_{l}^{i},K_{\tau})\geq\tau/2\) and \(E_{N}\subset K_{\tau}\), we have

\[d(p_{l}^{i},E_{N})\geq d(p_{l}^{i},K_{\tau})\geq\tau/2.\]

Further, since \(p_{l}^{i}\in B(x_{0},T)\subset B(x_{0},N)\) we have

\[N\min(1,d(p_{l}^{i},E_{N}))\psi_{N}(p_{l}^{i})\geq N\tau/2.\]

Therefore, we get \(\tilde{g}_{i}(p_{l}^{i})\geq N\tau/2\), and thus by Inequality (2.33)

\[L\geq\sum_{l=0}^{n(i)-1}\tilde{g}_{i}(p_{l}^{i})d(p_{l}^{i},p_{l+1}^{i})\geq\sum_{l=k_{0}}^{k_{1}-1}\tilde{g}_{i}(p_{l}^{i})d(p_{l}^{i},p_{l+1}^{i})\geq(\tau/4)(N\tau/2)>L.\]

This is a contradiction, and thus, \(d(p,K_{\tau})\leq\tau\) for each \(p\in P_{i}\) and all \(i\in\mathbb{N}\).

Finally, we formulate a result analogous to Lemma 2.22 and Lemma 2.14, in the case of good sequences of functions.

**Lemma 2.34**.: _Suppose that \(\{g_{i}\}_{i\in\mathbb{N}}\) is a good sequence of functions converging to \(g\), and that \(P_{i}\) is a sequence of discrete paths converging to a curve \(\gamma\). Then_

\[\int_{\gamma}g\,ds\leq\liminf_{i\to\infty}\int_{P_{i}}g_{i}.\]

Proof.: If the right-hand side is infinite, there is nothing to prove, so assume it is finite. Since the interpolating curves converge uniformly, the paths \(P_{i}\) eventually lie in a fixed bounded set \(V\), and the positivity property of a good sequence then gives \(\eta_{V}\operatorname{Len}(P_{i})\leq\int_{P_{i}}g_{i}\); in particular, \(\liminf_{i\to\infty}\operatorname{Len}(P_{i})<\infty\), so Lemma 2.22 applies. By Lemma 2.22, for any fixed \(l\), we have

\[\int_{\gamma}g_{l}\,ds\leq\liminf_{i\to\infty}\int_{P_{i}}g_{l}.\]

Since \(\{g_{i}\}_{i\in\mathbb{N}}\) is an increasing sequence of functions, we get

\[\int_{\gamma}g_{l}\,ds\leq\liminf_{i\to\infty}\int_{P_{i}}g_{i}.\]

Sending \(l\to\infty\) yields the claim.

### Continuous "almost" upper gradients

We will approximate an upper gradient by continuous functions. Recall that a minimal \(p\)-weak upper gradient \(g_{f}\) of a function \(f\in N^{1,p}(X)\) is, _a priori_, only in \(L^{p}(X)\). Lemma 2.4 shows that, by introducing a small error \(\epsilon>0\), we can find a lower semicontinuous function \(g_{\epsilon}\in L^{p}(X)\) which is an actual upper gradient. We would like to replace \(g_{\epsilon}\) with a continuous function. However, the upper gradient inequality (1.1) is only preserved if we approximate \(g_{\epsilon}\) from _above_ by a function \(h\). Further, it is impossible to approximate every \(L^{p}(X)\)-function from above by a continuous, let alone bounded, function. Fortunately, lower semicontinuous functions can be approximated _from below_ by a sequence of continuous bounded functions. This does not preserve (1.1). However, it will preserve being an "almost upper gradient" in the following sense.

**Definition 2.35**.: Let \(V\) be a closed set with \(\mu(V)<\infty\) and let \(C\subset V\) be a closed subset of \(X\). A function \(h\) is a \((\delta,\Delta)\)_-discrete upper gradient_ for \(f\) on \((C,V)\) if for every discrete path \(P=(p_{0},\ldots,p_{n})\) with \(\operatorname{Mesh}(P)\leq\delta\), \(P\subset V\), \(p_{0},p_{n}\in C\) and \(\operatorname{diam}(\{p_{0},\ldots,p_{n}\})>\Delta\) we have

\[|f(p_{n})-f(p_{0})|\leq\int_{P}h\,.\]

Here, it is necessary to localize the condition to apply only to paths with large enough diameter, which lie within a bounded set \(V\), and which connect points in a closed set \(C\). The first two of these are used to ensure compactness of the relevant families of curves. The final one is a bit more subtle, and is related to the fact that a Sobolev function may not be continuous, and \(C\) should be thought of as a closed set such that \(f|_{C}\) is continuous. In fact, the following lemma illustrates well the role of each of these assumptions.

**Lemma 2.36**.: _Assume that \(C,V\subset X\) are closed bounded sets with \(C\subset V\). Let \(M>0\) and \(f:X\to[0,M]\) be a measurable function which is continuous on \(C\). Let \(g:X\to[0,\infty]\) be a lower semicontinuous upper gradient for \(f\)._
_Suppose that \(\eta>0\) and \((g_{i})_{i\in\mathbb{N}}\) is a good sequence of functions, which converges pointwise to a lower semicontinuous function \(\tilde{g}\) with \(\tilde{g}|_{V}>g|_{V}+\eta\), as constructed in Proposition 2.29. Then, for every \(\Delta>0\) there exists an \(N\in\mathbb{N}\) so that \(g_{i}\) is a \((1/i,\Delta)\)-discrete upper gradient for \(f\) on \((C,V)\) for every \(i\geq N\)._

Proof.: Arguing by contradiction, there exist \(\Delta>0\) and an infinite subset \(\mathbb{I}\subset\mathbb{N}\) so that for every \(i\in\mathbb{I}\) there exists a path \(P_{i}=(p_{0}^{i},\ldots,p_{n(i)}^{i})\) with \(\text{Mesh}(P_{i})\leq\frac{1}{i}\), \(\text{diam}(P_{i})\geq\Delta\), \(P_{i}\subset V\), \(p_{0}^{i},p_{n(i)}^{i}\in C\) and

\[|f(p_{n(i)}^{i})-f(p_{0}^{i})|>\int_{P_{i}}g_{i}\,. \tag{2.37}\]

Since \(|f|\leq M\), we get \(\int_{P_{i}}g_{i}\leq 2M\) for each \(i\in\mathbb{I}\). By Definition 2.28 (3), there exists an infinite subset \(\mathbb{J}\subset\mathbb{I}\), so that \((P_{i})_{i\in\mathbb{J}}\) converges to a curve \(\gamma\). In particular, \(\gamma(1)\) is a limit of the sequence \((p_{n(i)}^{i})_{i\in\mathbb{J}}\), and thus \(\gamma(1)\in C\). Similarly, \(\gamma(0)\) is a limit of the sequence \((p_{0}^{i})_{i\in\mathbb{J}}\), hence \(\gamma(0)\in C\). Further, \(\gamma\subset V\), since \(V\) is closed, and \(\text{diam}(\gamma)\geq\Delta\) since \(\text{diam}(P_{i})\geq\Delta\) for all \(i\in\mathbb{N}\). By sending \(i\in\mathbb{J}\) to infinity in Inequality (2.37), using Lemma 2.34 and the fact that \(f|_{C}\) is continuous, we get

\[|f(\gamma(1))-f(\gamma(0))|\geq\int_{\gamma}\tilde{g}\,ds. \tag{2.38}\]

If \(\int_{\gamma}g\,ds=\infty\), this is already impossible, since \(f\) is bounded. Otherwise, since \(\gamma\subset V\) is non-constant (recall \(\operatorname{diam}(\gamma)\geq\Delta>0\)) and \(\tilde{g}|_{V}>g|_{V}+\eta\), we get

\[\int_{\gamma}\tilde{g}\,ds\geq\int_{\gamma}g\,ds+\eta\operatorname{Len}(\gamma)>\int_{\gamma}g\,ds\geq|f(\gamma(1))-f(\gamma(0))|,\]

where the last step is the upper gradient inequality (1.1) for \(g\). This contradicts (2.38), and therefore the claim has been proved.

## 3. The case when \(X\) is complete

In this section we will prove versions of our main theorems when \(X\) is a metric measure space that is complete and separable. We show that:

* capacity is outer regular for sets \(E\) with \(\text{Cap}_{p}(E)=0\);
* the different versions of the capacity are equal, namely \(\text{Cap}_{p}=\text{Cap}_{p}^{c}=\text{Cap}_{p}^{\text{lip}}=\text{Cap}_{p}^{(\text{lip},\text{lip})}\), under some weak hypotheses;
* \(C(X)\cap N^{1,p}(X)\) is dense in \(N^{1,p}(X)\);
* every function \(f\in N^{1,p}(X)\) is quasicontinuous;
* \(\text{Cap}_{p}\) is outer regular, and thus a Choquet capacity.

In the subsections that follow, we address each one of these claims in turn.

### Null capacity sets

We will employ the following lemma for capacity. A set \(E\) is said to be \(p\)_-exceptional_, if \(\text{Mod}_{p}(\Gamma_{E})=0\), where \(\Gamma_{E}\) is the collection of all rectifiable curves \(\gamma\) for which \(\gamma\cap E\neq\emptyset\).

**Lemma 3.1**.: _([14, Proposition 7.2.8.]) Suppose that \((X,d,\mu)\) is a separable metric measure space. Then a set \(E\subset X\) satisfies \(\text{Cap}_{p}(E)=0\) if and only if \(E\) is \(p\)-exceptional and \(\mu(E)=0\)._

Lemma 3.1 is crucial when one wants to show that capacity is outer regular. The first step is to analyze sets with zero capacity. The proof of Theorem 3.2 below follows closely that of [14, Proposition 7.2.12], except for the novel use of a good function.

**Theorem 3.2**.: _Suppose that \((X,d,\mu)\) is a complete separable metric measure space and let \(E\subset X\) satisfy \(\text{Cap}_{p}(E)=0\). For any \(\epsilon>0\) there is an open set \(O\) such that_
\(E\subset O\) _and_ \(\text{Cap}_{p}(O)<\epsilon\)_._

Proof.: Capacity is easily seen to be sub-additive, and so it suffices to consider the case when \(E\) is bounded. Thus, assume that \(E\subset B(x_{0},R)\) for some ball \(B(x_{0},R)\) with \(x_{0}\in X,R>0\). Choose an open set \(V\subset B(x_{0},R)\) so that \(E\subset V\) and \(\mu(V\setminus E)\leq\epsilon 2^{-p-1}\). By Lemma 3.1, the set \(E\) is \(p\)-exceptional. Thus, \(\mathrm{Mod}_{p}(\Gamma_{E})=0\) and since \(\Gamma(E,X\setminus V)\subset\Gamma_{E}\) we have \(\mathrm{Mod}_{p}(\Gamma(E,X\setminus V))=0\). Let \(g\) be an admissible function for \(\Gamma(E,X\setminus V)\) with \(\int_{X}g^{p}d\mu\leq\epsilon 2^{-p-3}\). Lemma 2.13 provides a good function \(g_{\epsilon}\) that is lower semicontinuous and admissible for \(\Gamma(E,X\setminus V)\), with \(g_{\epsilon}\geq g\) and \(\int_{X}g_{\epsilon}^{p}d\mu\leq\epsilon 2^{-p-2}\).

Define \(u(x):=\min(1,\inf_{\gamma:X\setminus V\rightsquigarrow x}\int_{\gamma}g_{\epsilon}ds)\). By Proposition 2.15, \(u\) is lower semicontinuous, \(u|_{E}=1\), \(u|_{X\setminus V}=0\), and \(u\) has upper gradient \(g_{\epsilon}\). Thus, \(U=\{u>\frac{1}{2}\}\) is an open set containing \(E\) with \(U\subset V\). Take \(O=U\). Then, \(\tilde{u}=2u\in N^{1,p}(X)\) and \(\tilde{u}|_{O}\geq 1\). Therefore, since \(\tilde{u}\leq 2\cdot\mathbb{1}_{V}\) and \(\mu(V)=\mu(V\setminus E)\leq\epsilon 2^{-p-1}\) (recall that \(\mu(E)=0\) by Lemma 3.1), we get

\[\mathrm{Cap}_{p}(O)\leq\int_{X}|2u|^{p}d\mu+\int_{X}(2g_{\epsilon})^{p}d\mu\leq\epsilon,\]

and the claim follows.

### Different versions of capacity

**Theorem 3.3**.: _Let \((X,d,\mu)\) be a complete, bounded and separable metric measure space equipped with a Radon measure which is positive and finite on all balls. Let \(E,F\subset X\) be two non-empty closed disjoint sets with \(d(E,F)>0\), and let \(p\in[1,\infty)\). Then_

\[\mathrm{Cap}_{p}(E,F)=\mathrm{Cap}_{p}^{c}(E,F)=\mathrm{Cap}_{p}^{\mathrm{lip}}(E,F)=\mathrm{Cap}_{p}^{(\mathrm{lip},\mathrm{lip})}(E,F).\]

We will use this result in Section 5 to prove Theorem 1.3, which is a more general version.

**Corollary 3.4**.: _Let \(E,F\subset X\) be two non-empty closed disjoint sets with \(d(E,F)>0\), and let \(p\in[1,\infty)\). If \(u\in N^{1,p}(X)\) is non-negative with \(u|_{E}=0,u|_{F}=1\) and \(g\) is an upper gradient for \(u\) in \(L^{p}(X)\), then there exists a sequence of functions \(u_{i}\in N^{1,p}(X)\), which are locally Lipschitz, and which have locally Lipschitz upper gradients \(h_{i}\in L^{p}(X)\), with \(h_{i}\to g\) in \(L^{p}(X)\)._

Note that \(h_{i}\) need not be the minimal \(p\)-weak upper gradient of \(u_{i}\).

Proof of Corollary 3.4.: The proof is the same as the one for Theorem 3.3, and is obtained by setting \(h_{i}=(a_{i})^{-1}g_{i}\) at the end of the proof.

Proof of Theorem 3.3.: An infimum over a smaller set yields a value at least as large as an infimum over a larger set, and thus

\[\mathrm{Cap}_{p}(E,F)\leq\mathrm{Cap}_{p}^{c}(E,F)\leq\mathrm{Cap}_{p}^{\mathrm{lip}}(E,F)\leq\mathrm{Cap}_{p}^{(\mathrm{lip},\mathrm{lip})}(E,F).\]

Therefore, it suffices to prove \(\mathrm{Cap}_{p}^{(\mathrm{lip},\mathrm{lip})}(E,F)\leq\mathrm{Cap}_{p}(E,F)\). If \(\mathrm{Cap}_{p}(E,F)=\infty\), this is immediate. Thus, let us assume that \(\mathrm{Cap}_{p}(E,F)<\infty\) and let \(\epsilon>0\) be arbitrary.
By definition of capacity, there exists \(u:X\to[-\infty,\infty]\) with \(u|_{E}=0\) and \(u|_{F}=1\) and an \(L^{p}\)-upper gradient \(g:X\to[0,\infty]\) for \(u\) with \(\int g^{p}d\mu\leq\mathrm{Cap}_{p}(E,F)+\epsilon.\) By replacing \(u\) with \(\max(\min(u,1),0)\) we can assume that \(u:X\to[0,1]\). Further, since \(\mu(X)<\infty\), we get \(u\in L^{p}(X)\) and, moreover, that \(u\in N^{1,p}(X)\). We have \(\int_{\gamma}g\,ds\geq 1\) for each rectifiable \(\gamma\) connecting \(E\) to \(F\).

By Proposition 2.29 there exists a \(g_{\epsilon}\in L^{p}(X)\) which is lower semicontinuous with \(g_{\epsilon}>g\) and \(\int_{X}g_{\epsilon}^{p}d\mu\leq\int_{X}g^{p}d\mu+\epsilon\), and a good sequence of bounded, Lipschitz continuous, non-negative functions \(\{g_{i}\}_{i\in\mathbb{N}}\) that satisfy \(g_{i}\nearrow g_{\epsilon}\). Let

\[u_{i}(x):=\min\left(\inf\left\{\int_{P}g_{i}:P=(p_{0},\ldots,p_{n}),p_{0}\in E,p_{n}=x,\mathrm{Mesh}(P)\leq i^{-1}\right\},1\right). \tag{3.5}\]

Note that \(u_{i}:X\to[0,1]\), since we are taking a minimum with \(1\). Further, \(u_{i}|_{E}=0\) since for \(x\in E\) we can use the constant path \(P=(x)\). By Proposition 2.25, the function \(g_{i}\) is an upper gradient for \(u_{i}\).

We show first that the function \(u_{i}\) is \(M_{i}\)-Lipschitz with \(M_{i}=\max\{i,\sup_{x\in X}g_{i}(x)\}\). Note that \(M_{i}<\infty\) since \(g_{i}\) is bounded. To see the Lipschitz property, observe that if \(x,y\in X\) and \(d(x,y)\geq\frac{1}{i}\), then since \(0\leq u_{i}\leq 1\),

\[|u_{i}(x)-u_{i}(y)|\leq 1\leq M_{i}d(x,y).\]

On the other hand, if \(x,y\in X\) and \(d(x,y)<\frac{1}{i}\), then any discrete path \(P=(p_{0},\ldots,p_{n})\) with \(p_{0}\in E,p_{n}=x,\mathrm{Mesh}(P)\leq i^{-1}\), can be expanded to \(P^{\prime}=(p_{0},\ldots,p_{n},y)\) with \(\mathrm{Mesh}(P^{\prime})\leq i^{-1}\). It may be that \(P^{\prime}\) is not simple, as we require for paths. This occurs only if \(p_{k}=y\) for some \(k\in\{0,\ldots,n\}\), and then we truncate \(P^{\prime}\) at such an index. This gives \(u_{i}(y)\leq\int_{P^{\prime}}g_{i}\leq\int_{P}g_{i}+d(x,y)g_{i}(x)\). Infimizing over \(P\) yields \(u_{i}(y)\leq u_{i}(x)+M_{i}d(x,y)\). By symmetry, we get \(|u_{i}(x)-u_{i}(y)|\leq M_{i}d(x,y)\), which completes the proof of the Lipschitz bound.

Let \(a_{i}=\inf_{x\in F}u_{i}(x)\). We show next that \(\lim_{i\to\infty}a_{i}=1\). Since the functions \(g_{i}\) increase in \(i\) and the mesh constraint in (3.5) tightens as \(i\) grows, the functions \(u_{i}\), and hence the numbers \(a_{i}\leq 1\), are non-decreasing in \(i\), so the limit \(\lim_{i\to\infty}a_{i}\) exists. We obtain our claim via contradiction: Suppose that \(\lim_{i\to\infty}a_{i}<1\). Then there would exist some \(\delta>0\) so that \(a_{i}<1-\delta\) for every \(i\in\mathbb{N}\). By definition, for every \(i\), there exists a discrete path \(P_{i}=(p_{0}^{i},\ldots,p_{n(i)}^{i})\) with \(\int_{P_{i}}g_{i}<1-\delta\), \(p_{0}^{i}\in E\), \(p_{n(i)}^{i}\in F\) and \(\operatorname{Mesh}(P_{i})<i^{-1}\). Since the endpoints lie in \(E\) and \(F\), we have \(\operatorname{diam}(P_{i})\geq d(E,F)>0\) for each \(i\). Since \(g_{i}\) is a good sequence of functions, and since \(X\) is bounded, there exists a subsequence \(i_{k}\) so that \(P_{i_{k}}\to\gamma\) for some curve \(\gamma:[0,1]\to X\). Since \(E\) and \(F\) are closed, we conclude that \(\gamma(0)\in E,\gamma(1)\in F\).
By Lemma 2.22 (which applies, since the positivity of the good sequence bounds \(\operatorname{Len}(P_{i_{k}})\leq(1-\delta)/\eta_{X}\)), we have, for each \(i\in\mathbb{N}\),

\[\int_{\gamma}g_{i}\,ds\leq\liminf_{k\to\infty}\int_{P_{i_{k}}}g_{i}\leq\liminf_{k\to\infty}\int_{P_{i_{k}}}g_{i_{k}}\leq 1-\delta.\]

Sending \(i\to\infty\), and with monotone convergence, we get \(\int_{\gamma}g_{\epsilon}\,ds\leq 1-\delta\), which is a contradiction to the fact that \(\int_{\gamma}g_{\epsilon}\,ds\geq\int_{\gamma}g\,ds\geq 1.\) Thus, our initial assumption was false, and \(\lim_{i\to\infty}a_{i}=1\).

Choose now \(i\) so large that

\[\frac{\int_{X}g_{\epsilon}^{p}d\mu}{a_{i}^{p}}\leq\int_{X}g^{p}d\mu+2\epsilon.\]

Then \(\tilde{u}_{i}=\min(\frac{u_{i}}{a_{i}},1)\) is a Lipschitz function with the upper gradient \((a_{i})^{-1}g_{i}\). Further, \(\tilde{u}_{i}|_{E}=0\), \(\tilde{u}_{i}|_{F}=1\), and

\[\int_{X}((a_{i})^{-1}g_{i})^{p}d\mu\leq\int_{X}g^{p}d\mu+2\epsilon.\]

Since both \(\tilde{u}_{i}\) and \((a_{i})^{-1}g_{i}\) are Lipschitz, this pair is admissible for \(\mathrm{Cap}_{p}^{(\mathrm{lip},\mathrm{lip})}\). Thus, \(\mathrm{Cap}_{p}^{(\mathrm{lip},\mathrm{lip})}(E,F)\leq\int_{X}g^{p}d\mu+2\epsilon\leq\mathrm{Cap}_{p}(E,F)+3\epsilon\), and the claim follows since \(\epsilon>0\) is arbitrary.

The proof of the statement shows in fact slightly more. For future reference, we state this as a theorem.

**Theorem 3.6**.: _Let \((X,d,\mu)\) be a complete separable metric measure space with \(\mu(X)<\infty\). Let \(E,F\subset X\) be two non-empty closed disjoint sets with \(d(E,F)>0\), and let \(p\in[1,\infty)\). Then, for any \(\epsilon>0\), and for any "admissible" function \(g\in L^{p}(X)\), i.e., so that \(\int_{\gamma}g\,ds\geq 1\) for every \(\gamma\in\Gamma(E,F)\), there exists a locally Lipschitz function \(g_{\epsilon}\) that is also admissible, meaning that \(\int_{\gamma}g_{\epsilon}\,ds\geq 1\) for every \(\gamma\in\Gamma(E,F)\), and such that \(\|g-g_{\epsilon}\|_{L^{p}(X)}\leq\epsilon\)._

### Continuous functions are dense in Sobolev spaces

Next, we prove the density of continuous functions in Newton-Sobolev spaces.

**Theorem 3.7**.: _Let \((X,d,\mu)\) be a complete and separable metric measure space. Then \(C(X)\cap N^{1,p}(X)\) is dense (in norm) in \(N^{1,p}(X)\) for \(p\in[1,\infty)\)._

Given a function \(f\in N^{1,p}(X)\) and \(\epsilon>0\), we want to find a continuous Newton-Sobolev function \(\tilde{f}\) on \(X\) such that \(\|f-\tilde{f}\|_{N^{1,p}(X)}\leq\epsilon\). The idea of the proof is to consider an appropriately large compact set \(K\subset X\) (see Equation (3.9)), where \(f|_{K}\) is continuous, and then to find an extension \(\tilde{f}\) which is continuous everywhere, and which has controlled minimal \(p\)-weak upper gradient \(g_{\tilde{f}}\). That is, our proof will be based on the following extension result of Whitney type.

**Theorem 3.8**.: _Let \((X,d,\mu)\) be a complete and separable metric measure space. Let \(f\in N^{1,p}(X)\) and let \(g_{*}\in L^{p}(X)\) be an upper gradient of \(f\). Suppose that \(f|_{X\setminus B(x_{0},R)}=0\) for some \(x_{0}\in X\), and \(R>0\). Suppose there is a compact set \(K\subset B(x_{0},R)\) with \(f|_{K}\) continuous and \(g_{*}|_{K}\) bounded._

_Then, for every \(\epsilon>0\), there exists a function \(\tilde{f}\) with:_

1. \(\sup_{x\in X}|\tilde{f}(x)|\leq\sup_{x\in K}|f(x)|\)_._
2. \(\tilde{f}|_{K}=f|_{K}\) _and_ \(\tilde{f}|_{X\setminus B(x_{0},R)}=f|_{X\setminus B(x_{0},R)}=0\)_._
3. \(\tilde{f}\in N^{1,p}(X)\cap C(X)\)_._
4. \(\int_{X\setminus K}g_{\tilde{f}}^{p}d\mu\leq\int_{X\setminus K}g_{*}^{p}d\mu+\epsilon\)_._

We briefly delay the proof of this extension result in order to show how the density result follows from it.
Proof of Theorem 3.7.: First, recall that by Lemma 2.8 the space of bounded Newton-Sobolev functions with bounded support \(N^{1,p}_{b}(X)\) is dense in \(N^{1,p}(X)\). Next, we show that \(C(X)\cap N^{1,p}_{b}(X)\) is dense in \(N^{1,p}_{b}(X)\).

If \(f\in N^{1,p}_{b}(X)\), then there is a constant \(M<\infty\) such that \(|f|\leq M\) everywhere in \(X\) and there is a ball \(B(x_{0},R)\) so that \(f|_{X\setminus B(x_{0},R)}=0\). Let \(g_{*}\in L^{p}(X)\) be any upper gradient of \(f\). Since \(f=0\) in \(X\setminus B(x_{0},R)\), we can assume, by modifying \(g_{*}\), that \(g_{*}|_{X\setminus B(x_{0},R)}=0\). Indeed, this modification leaves (1.1) invariant. Let \(\epsilon>0\) be fixed; by using Lusin's theorem and the absolute continuity of integrals, choose a compact set \(K\subset B(x_{0},R)\) so that \(f|_{K}\) is continuous, \(g_{*}|_{K}\) is bounded and so that

\[\int_{X\setminus K}2^{p+3}g_{*}^{p}\,d\mu+\mu(B(x_{0},R)\setminus K)2^{p+1}M^{p}\leq\epsilon. \tag{3.9}\]

This is possible, since \(\mu\) is Radon, \(g_{*}\in L^{p}(X)\), and \(g_{*}=0\) in \(X\setminus B(x_{0},R)\). By Theorem 3.8 there exists a function \(\tilde{f}\in N^{1,p}(X)\cap C(X)\) with \(\tilde{f}|_{K}=f|_{K}\) and

\[\int_{X\setminus K}g^{p}_{\tilde{f}}\,d\mu\leq\int_{X\setminus K}g^{p}_{*}\,d\mu+\epsilon 2^{-p-3}\leq\epsilon 2^{-p-2},\]

where the last inequality follows by (3.9). Furthermore, \(\tilde{f}|_{X\setminus B(x_{0},R)}=0\) and \(|\tilde{f}|\leq M\) everywhere. Since \((f-\tilde{f})|_{K\cup(X\setminus B(x_{0},R))}=0\), by Lemma 2.5 and subadditivity of minimal \(p\)-weak upper gradients, we have that \(g_{f-\tilde{f}}\leq(g_{*}+g_{\tilde{f}})\mathbb{1}_{B(x_{0},R)\setminus K}.\) Similarly, \(|f-\tilde{f}|\leq 2M\mathbb{1}_{B(x_{0},R)\setminus K}\). Thus, \(\int_{X}g^{p}_{f-\tilde{f}}d\mu\leq 2^{p-1}\left(\int_{X\setminus K}g_{*}^{p}\,d\mu+\int_{X\setminus K}g_{\tilde{f}}^{p}\,d\mu\right)\leq\epsilon/2\). Also, we have \(\int_{X}|f-\tilde{f}|^{p}\,d\mu\leq(2M)^{p}\mu(B(x_{0},R)\setminus K).\) Therefore,

\[\|f-\tilde{f}\|_{N^{1,p}(X)}^{p}\leq\int_{X}|f-\tilde{f}|^{p}\,d\mu+\int_{X}g^{p}_{f-\tilde{f}}\,d\mu\leq\epsilon.\]

Proof of Theorem 3.8.: This proof will take some detours and require some auxiliary lemmas. The function \(\tilde{f}\) is defined in Formula (3.19) below. However, this definition depends on some technical choices, which are explained first. The crucial properties ensured by these choices are codified as lemmas. Once the function \(\tilde{f}\) has been properly defined, we verify, one-by-one, the properties of the theorem. The proof ends at the end of this subsection by verifying the fourth property in the statement.

Fix a function \(f\in N^{1,p}(X)\) and an arbitrary \(\epsilon>0\). By assumption, there are \(x_{0}\in X\) and \(R>0\), so that \(f|_{X\setminus B(x_{0},R)}=0\); by scaling the metric we may assume \(R\geq 1\). Since \(K\) is compact and \(f|_{K}\) is continuous, \(M:=\sup_{x\in K}|f(x)|<\infty\). If \(X\setminus B(x_{0},R)\neq\emptyset\), assume, by scaling the metric \(d\) further, that

\[d(K,X\setminus B(x_{0},R))\geq 1. \tag{3.10}\]

Let \(g_{*}\in L^{p}(X)\) be an upper gradient for \(f\), and assume it is bounded on \(K\), say \(S:=\sup_{x\in K}g_{*}(x)<\infty\). The result will be proven by constructing a function \(\tilde{f}\in N^{1,p}(X)\cap C(X)\) so that the properties in Theorem 3.8 hold. We construct \(\tilde{f}\) together with an upper gradient \(g\in L^{p}(X)\) for it. Our choice of upper gradient \(g\) is given in the following lemma.
With \(\epsilon>0\) as fixed above, let \(g_{\epsilon}\) be a lower semicontinuous upper gradient for \(f\) with \(\int_{X}g^{p}_{\epsilon}d\mu\leq\int_{X}g^{p}_{*}d\mu+\epsilon 2^{-5}\), as guaranteed by Lemma 2.4. By replacing \(g_{\epsilon}\) with \(\min(g_{\epsilon},S\mathbb{1}_{K}+\infty\mathbb{1}_{X\setminus K})\), we can assume that \(g_{\epsilon}|_{K}\) is bounded, with \(\sup_{x\in K}g_{\epsilon}(x)\leq S\); since \(g_{\epsilon}\geq g_{*}\) (as provided by Lemma 2.4) and \(g_{*}\leq S\) on \(K\), this modification preserves the upper gradient property. Then, Proposition 2.29 applied to the function \(g_{\epsilon}\) gives the following lemma.

**Lemma 3.11**.: _There is a lower semicontinuous good function \(g:X\to[0,\infty]\) so that \(g|_{K}\) is bounded, and a good sequence \(\{g_{i}\}_{i\in\mathbb{N}}\) of bounded Lipschitz continuous functions so that \(g_{i}\nearrow g\) pointwise on \(X\), and_

\[\int_{X}g^{p}d\mu\leq\int_{X}g^{p}_{\epsilon}d\mu+\epsilon 2^{-5}\leq\int_{X}g^{p}_{*}d\mu+\epsilon 2^{-4}.\]

_Moreover, for every bounded set \(V\subset X\) there exists an \(\eta>0\) such that \(g|_{V}>g_{\epsilon}|_{V}+\eta\)._

We now introduce several auxiliary functions that require some motivation. By passing from \(g_{*}\) to the functions \(g_{i}\), we have gained continuity but at the price of losing the property that \(g_{i}\) is an upper gradient for \(f\). This loss forces us to make further choices, whose role we now briefly describe. The construction of a good sequence of functions guarantees that \(g_{i}\), for large \(i\), is a discrete upper gradient in the sense of Definition 2.35. The discrete upper gradient property only holds for the closed sets \(C\) and \(V\), which we choose as follows. First,

\[V:=\overline{B(x_{0},2R)}. \tag{3.12}\]

Note that \(V\) is the closure of the ball, and not the closed ball, although this makes little difference for the proof. Second,

\[C:=K\cup(\overline{B(x_{0},2R)}\setminus B(x_{0},R)). \tag{3.13}\]

Note that \(f|_{C}\) is continuous, and \(V\) is bounded, so the set \(V\) localizes the argument. In the definition of \(C\) we adjoin the annulus \(\overline{B(x_{0},2R)}\setminus B(x_{0},R)\) to ensure later that our approximation will vanish outside of \(B(x_{0},R)\), see Figure 1.

The discrete upper gradient property includes two parameters \((\delta,\Delta)\), where the first controls the mesh-size and the second the diameter of discrete paths. As we decrease \(\Delta\), we need to pass further into the sequence \(g_{i}\) and decrease the mesh-size \(\delta\). Due to this, we cannot construct an approximation using a single function \(g_{i}\) or a single mesh-size \(\delta\). Consequently, as we approach the set \(K\), we will force the mesh-size to decrease, and the index \(i\) to increase. This leads to a definition of auxiliary functions \(\mathrm{D}(x)\) and \(G(x)\), where \(\mathrm{D}\) stands for the size of gaps, and \(G\) will be a candidate gradient for the constructed function \(\tilde{f}\). The idea of using different functions and meshes, which are fixed at dyadic length scales, comes from a Whitney-type extension argument.

Finally, to force the property that \(f|_{C}=\tilde{f}|_{C}\), we need to control the behaviour off of \(K\), and this involves the modulus of continuity \(\omega\) of \(f|_{K}\). The modulus of continuity is used to define a "penalty" term \(\mathbf{P}\), which is applied to the length of the first jump of an admissible path. The use of the penalty term is a bit similar to how one extends a uniformly continuous function off a subset to a uniformly continuous function on the entire space; see for example [18].
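For comparison, and as a classical model the reader may keep in mind (this remark is ours, not part of the original construction): if \(f|_{K}\) were \(L\)-Lipschitz, the McShane-Whitney extension

\[\tilde{f}(x)=\inf_{y\in K}\big(f(y)+L\,d(x,y)\big)\]

would be an \(L\)-Lipschitz extension of \(f|_{K}\) to all of \(X\). Formula (3.19) below can be read as an elaborate variant of this infimal convolution, in which the single penalty \(L\,d(x,y)\) is replaced by the discrete integral of the candidate gradient \(G\) along admissible paths, plus the penalty \(\mathbf{P}\) applied to the first jump.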
The value of the approximation at a point \(x\in X\) will be ultimately obtained by infimizing over discrete paths connecting \(x\) to the closed set \(C\), whose mesh-size is controlled by the gap function \(\mathrm{D}\) - these will be called \((x,\mathrm{D})\)-admissible paths. The infimized quantity sums \(G\) over such a path, together with a penalty term and a boundary term from \(f\). The reader may now wish to glance at Equation (3.19) to see how the three functions \(\mathbf{P},\mathrm{D},G\) are used. It may also be helpful to compare this to (3.5), or to the approximation and discussion found in [9], where a more detailed historical comparison is also contained.

Denote by \(\omega:(0,\infty)\to\mathbb{R}\) the modulus of continuity for \(f|_{K}\), that is

\[\omega(\delta)=\sup\{|f(x)-f(y)|:d(x,y)\leq\delta,x,y\in K\}. \tag{3.14}\]

**Lemma 3.15**.: _With \(C\), \(V\), and \(K\) as defined above in (3.12) and (3.13), if \(g_{i}\) is a sequence of good functions converging to \(g\), as constructed in Lemma 3.11, and \(\omega\) is the modulus of continuity of \(f|_{K}\), as defined in (3.14), then there exists an increasing sequence \(i_{n}\in\mathbb{N}\), a gap function \(\mathrm{D}:X\to[0,\infty)\), a candidate upper gradient \(G:X\to[0,\infty]\), and a penalty function \(\mathbf{P}:[0,\infty)\to[0,\infty)\), with the following properties given \(n\in\mathbb{N}\):_

1. \(g_{i_{n}}\) _is an_ \((i_{n}^{-1},2^{-n})\)_-discrete upper gradient for_ \(f\) _on_ \((C,V)\)_;_
2. _the gap function_ \(\mathrm{D}\) _satisfies:_
   1. \(\mathrm{D}(x)\leq d(x,K)/4\) _for every_ \(x\in X\)_;_
   2. \(0<\mathrm{D}(x)\leq i_{3}^{-1}\) _for each_ \(x\in X\setminus K\)_;_
   3. \(\mathrm{D}(x)=\min(1/i_{n+2},2^{-n-3})\)_, if_ \(2^{-n-1}\leq d(x,K)<2^{-n}\)_, for_ \(n\geq 1\)_;_
3. _the candidate upper gradient satisfies:_
   1. \(G(x)\geq g_{i_{3}}(x)\) _for all_ \(x\)_;_
   2. _for_ \(n>3\)_,_ \(G(x)\geq g_{i_{n}}\) _if_ \(d(x,K)\leq 2^{-n}\)_;_
   3. _for_ \(n\geq 3\)_,_ \(G(x)\leq g_{i_{n}}\) _if_ \(d(x,K)\geq 2^{-n-1}\)_;_
   4. \(G|_{K}=g|_{K}\)_._
4. _the penalty function satisfies:_
   1. \(\mathbf{P}(r)\geq 2M\) _if_ \(r\geq i_{3}^{-1}\)_;_
   2. \(\lim_{r\to 0}\mathbf{P}(r)=0\)_, but_ \(\mathbf{P}(r)\geq\omega(r)+\omega(2^{1-n})\) _for_ \(i_{n+1}^{-1}\leq r<i_{n}^{-1}\) _and_ \(n\geq 3\)_;_
   3. \(\mathbf{P}(r)\geq\omega(r)\)_, for all_ \(r>0\)_._

Proof.: By Lemma 2.36, there is an increasing sequence \(i_{n}\), for \(n\in\mathbb{N}\), so that \(g_{i_{n}}\) is an \((i_{n}^{-1},2^{-n})\)-discrete upper gradient for \(f\) with respect to \((C,V)\). Define \(\mathrm{D}\) as a step function depending on dyadic length scales determined by the distance to \(K\):

\[\mathrm{D}(x):=\begin{cases}\min(1/i_{3},1/8)&d(x,K)\geq 2^{-1}\\ \min(1/i_{n+2},2^{-n-3})&2^{-n-1}\leq d(x,K)<2^{-n},n\geq 1\\ 0&x\in K.\end{cases}\]

Similarly, define the candidate upper gradient piecewise using the good functions \(g_{i}\) and their limit \(g\).

\[G(x):=\begin{cases}g_{i_{3}}(x)&d(x,K)\geq 2^{-3}\\ g_{i_{n}}(x)&2^{-n-1}\leq d(x,K)<2^{-n},n\geq 3\\ g(x)&x\in K.\end{cases}\]

Finally, we define a penalty function depending on the modulus of continuity of \(f\) on \(K\).

\[\mathbf{P}(r):=\begin{cases}2M&\frac{1}{i_{3}}\leq r\\ \omega(2^{1-n})+\omega(r)&\frac{1}{i_{1+n}}\leq r<\frac{1}{i_{n}},n\geq 3\;.\end{cases}\]

Notice that we have \(\lim_{r\to 0}\mathbf{P}(r)=0\), since \(f|_{K}\) is uniformly continuous. The properties of \(\mathbf{P}\) are direct to verify, once one notices that \(2M\geq\omega(r)\) for all \(r>0\). We let the reader verify that these definitions imply the properties stated above.
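Since the verification is left to the reader above, we record one sample check (ours) for Property 2(a). If \(2^{-n-1}\leq d(x,K)<2^{-n}\) with \(n\geq 1\), then

\[\mathrm{D}(x)\leq 2^{-n-3}=\tfrac{1}{4}\cdot 2^{-n-1}\leq\tfrac{1}{4}\,d(x,K),\]

while if \(d(x,K)\geq 2^{-1}\), then \(\mathrm{D}(x)\leq 1/8\leq d(x,K)/4\), and on \(K\) both sides vanish. The remaining properties follow in the same mechanical way from the three case-by-case definitions, using that the sequences \(i_{n}\) and \(g_{i_{n}}\) are increasing.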
Next, we describe the admissible discrete paths used in the extension.

**Definition 3.16**.: With \(V\) and \(C\) defined as in (3.12) and (3.13), a discrete path \(P=(p_{0},\ldots,p_{n})\) is called \((x,\mathrm{D})\)-_admissible_ if \(p_{0}\in C\), \(p_{n}=x\), and \(d(p_{k},p_{k+1})\leq\mathrm{D}(p_{k})\) for each \(k=1,\ldots,n-1\) and \(p_{1},\ldots,p_{n}\in V\).

In other words, the length of the first step \(d(p_{0},p_{1})\) can be arbitrary, but after this "first jump", the following steps are controlled by the function \(\mathrm{D}\). This ensures that there is always at least one \((x,\mathrm{D})\)-admissible path for every \(x\in V\), namely \(P=(p_{0},x)\) for any \(p_{0}\in C\) (or \(P=(p_{0})\) if \(x=p_{0}\in C\)). This fact will be used to guarantee an upper bound for the extension.

Note that Lemma 3.15 guarantees \(\mathrm{D}(x)\leq d(x,K)/4\) for all \(x\in X\). This implies the following useful lemma.

**Lemma 3.17**.: _With \(K\) as above, let \(\mathrm{D}(x)\geq 0\) be a function such that \(\mathrm{D}(x)\leq d(x,K)/4\) for all \(x\in X\). If \(P=(p_{0},p_{1},\ldots,p_{n})\) is \((x,\mathrm{D})\)-admissible and \(p_{1}\not\in K\), then \(p_{l}\not\in K\) for any \(l\geq 1\). Alternatively, if \(n\geq 1\) and \(p_{1}\in K\), then \(P=(p_{0},p_{1})\)._

Proof.: If \(p_{1}\in K\), then, since \(\mathrm{D}\equiv 0\) on \(K\), admissibility would force \(d(p_{1},p_{2})=0\); as a path does not repeat points, there can be no point \(p_{2}\), so \(n=1\) and \(P=(p_{0},p_{1})\). Else, if \(p_{1}\not\in K\), then \(d(p_{l},p_{l+1})\leq\mathrm{D}(p_{l})\leq d(p_{l},K)/4\) for \(l\geq 1\), so \(d(p_{l+1},K)\geq\frac{3}{4}d(p_{l},K)>0\); hence, by induction, \(p_{l}\not\in K\) for all \(l\geq 1\).

**Definition 3.18**.: Take the functions \(\mathbf{P},G,\mathrm{D}\) from Lemma 3.15, and \(K,C,V\subset X\), \(M,R\in(0,\infty)\), \(x_{0}\in X\) as defined above, with \(d(K,X\setminus B(x_{0},R))\geq 1\). We set the extension of \(f\) as follows: for \(x\in V\),

\[\tilde{f}(x):=\min\left(M,\inf_{P=(p_{0},\ldots,p_{n}=x)}\Phi(P)\right), \tag{3.19}\]

where the infimum is taken over all \((x,\mathrm{D})\)-admissible paths \(P\) and \(\Phi(P)\) is defined as follows:

\[\Phi(P):=\mathbf{P}(d(p_{0},p_{1}))+f(p_{0})+\sum_{k=0}^{n-1}G(p_{k})d(p_{k},p_{k+1}), \tag{3.20}\]

when \(n\geq 1\). When \(n=0\), we define \(\Phi(P):=f(p_{0})\). Finally, for \(x\not\in V\), set \(\tilde{f}(x)=0\).

Examples of admissible paths are depicted in Figure 1. We now show that \(\tilde{f}\) satisfies properties 1-4 in the statement of Theorem 3.8.

(3.21) **Property 1:**\(\sup_{x\in X}|\tilde{f}(x)|\leq\sup_{x\in K}|f(x)|\).

Recall that \(M=\sup_{x\in K}|f(x)|\). We want to show that \(\tilde{f}:X\to[-M,M]\). First, from Definition 3.18, it follows that \(\tilde{f}(x)\leq M\) for each \(x\in X\). On the other hand, since \(G\geq 0\) and \(\mathbf{P}\geq 0\), \(\Phi(P)\geq f(p_{0})\) for every path \(P=(p_{0},\ldots,p_{n})\). Thus, \(\inf_{P}\Phi(P)\geq\inf_{y\in C}f(y)\geq-M\), since \(f|_{C\setminus K}=0\) and \(f|_{K}\geq-M\); hence \(\tilde{f}(x)\geq-M\) for \(x\in V\). For \(x\not\in V\) we have \(\tilde{f}(x)=0\), and the claim follows.

(3.22) **Property 2:**\(\tilde{f}|_{K}=f|_{K}\quad\text{and}\quad\tilde{f}|_{X\setminus B(x_{0},R)}=f|_{X\setminus B(x_{0},R)}=0\).

Take an arbitrary \(x\in K\cup(X\setminus B(x_{0},R))=C\cup(X\setminus V)\), where \(C\) and \(V\) are defined above in (3.12) and (3.13). We will show that \(\tilde{f}(x)=f(x)\). If \(x\in X\setminus V\), then, by definition, \(\tilde{f}(x)=0=f(x)\). Thus, we can assume that \(x\in C\). The path \(P=(x)\) is \((x,\mathrm{D})\)-admissible and so we have \(\tilde{f}(x)\leq\Phi(P)=f(x)\).
We want to prove the opposite inequality, and for that we will separate the two cases when \(x\in K\) and when \(x\in C\setminus K\). \(\bullet\) First, consider the case \(x\in K\). It suffices to prove \(\Phi(P)\geq f(x)\) for every \((x,D)\)-admissible path \(P\). This is clear if \(P=(x)\), and thus we can assume that \(P=(p_{0},\ldots,p_{n}=x)\) with \(n\geq 1\). If \(p_{1}\not\in K\), then \(p_{1}\not\in x\) and Lemma 3.17 shows that the path will never reach \(x\). Therefore, \(p_{1}\) must be in \(K\). Then, Lemma 3.17 again shows that \(p_{1}=x\) and \(n=1\), so \(P=(p_{0},p_{1}=x)\). In other words, we are reduced to considering a path with only one jump. There are two possibilities: Figure 1. The figure shows five different admissible paths starting at various points in \(C\) represented by white circles and ending at the points \(P,Q,R,S,T\) and \(U\). The set \(C\) is the lighter gray annulus together with the darker gray compact set \(K\) in the center. Squares indicate the points along the paths and dashed segments the jumps that we imagine occurring in between. The points \(P\) and \(Q\) show how a path can either start in \(K\) or in the annular region. The point \(R\) shows that a path contained in \(C\) can have zero length. The path ending at \(S\) shows that one can jump between points in \(K\) - but must stop there. Similarly, the path ending at \(U\) makes one jump from \(C\setminus K\) to \(K\). Finally, the path ending at \(T\) depicts how a path that at some point leaves \(K\) can never return - but can get very close. The first unrestricted jump of each path is bolded, and note that all the paths must be contained in the bounded set \(V\) which is a ball containing the full figure. * If \(p_{0}\) is in the annulus \(C\setminus K\), the path \(P\) is illustrated by the path ending at the point \(U\) in Figure 1. Then, since \(p_{0}\not\in B(x_{0},R)\), the normalization (3.10) implies that \(d(p_{0},x)\geq 1\). By Property 4(a) of Lemma 3.15, the penalty on the first jump satisfies \(\mathbf{P}(d(p_{0},p_{1}))\geq 2M\). Therefore, from the definition of \(\Phi(P)\) in (3.20), we get that \(\Phi(P)\geq\mathbf{P}(d(p_{0},p_{1}))\geq 2M\geq f(x)\). * If, on the other hand, \(p_{0}\in K\), then the path \(P\) is illustrated by the path ending at the point \(S\) in Figure 1. In this case, we estimate using the modulus of continuity: \[\mathbf{P}(d(p_{0},p_{1})) \geq\omega(d(p_{0},p_{1})) \text{(Property 4(c) of Lemma \ref{lem:P1})}\] \[\geq|f(p_{1})-f(p_{0})| \text{(by \eqref{eq:P1})}\] Thus, from (3.20), we get \(\Phi(P)\geq f(p_{0})+\mathbf{P}(d(p_{0},p_{1}))\geq f(x)\). \(\bullet\) Now, consider the case that \(x\in C\setminus K\), and \(P=(p_{0},\ldots,p_{n})\) is any \((x,\mathrm{D})\)-admissible path. We need to show that \(\Phi(P)\geq f(x)=0\) for every such path. Recall that \((x,D)\)-admissible paths start in \(C\) and end at \(x\). If \(p_{0}\not\in K\), then \(f(p_{0})=0\), and, from (3.20), we get \(\Phi(P)\geq f(p_{0})=0\). Thus, we are left to consider the case when \(p_{0}\in K\) and \(x\in C\setminus K\). If the first jump satisfies \(d(p_{0},p_{1})\geq i_{1}^{-1}>i_{3}^{-1}\), then by Lemma 3.15 (4)(a), \(\mathbf{P}(d(p_{0},p_{1}))\geq 2M\) and \(f(p_{0})+\mathbf{P}(d(p_{0},p_{1}))\geq 0=f(x)\). Therefore, we can assume that \(d(p_{0},p_{1})\leq i_{1}^{-1}\). By Property 2(b) of Lemma 3.15, we have \(\mathrm{D}(p_{k})\leq i_{1}^{-1}\), for \(k\geq 1\). Thus, \(d(p_{k},p_{k+1})\leq\mathrm{D}(p_{k})\leq i_{1}^{-1}\) for all \(k\geq 1\). 
In particular, \(\mathrm{Mesh}(P)\leq i_{1}^{-1}\). Since \(x\in C\setminus K\), the normalization (3.10) implies that \(d(x,K)\geq 1\) and therefore \(\mathrm{diam}(P)\geq 1\). Finally, by Property 3(a) of Lemma 3.15, we have that \(G\geq g_{i_{1}}\). Recall that \(g_{i_{1}}\) is a discrete \((i_{1}^{-1},2^{-1})\)-upper gradient for \(f\) with respect to \((C,V)\). By Definition 2.35 and Definition 3.16, since \(\mathrm{Mesh}(P)\leq i_{1}^{-1}\), \(p_{0},p_{n}\in C\), \(P\subset V\), and \(\mathrm{diam}(P)\geq 1\), we get \[\sum_{k=0}^{n-1}g_{i_{1}}(p_{k})d(p_{k},p_{k+1})=\int_{P}g_{i_{1}}\geq|f(p_{n} )-f(p_{0})|.\] In particular, \(\Phi(P)\geq f(p_{0})+\sum_{k=1}^{n-1}G(p_{k})d(p_{k},p_{k+1})\geq f(p_{n})=0\). Finally, the inequality \(\tilde{f}(x)\geq f(x)\) follows by infimizing over discrete paths \(P\). (3.23) **Property 3:**\(\tilde{f}\in N^{1,p}(X)\cap C(X)\). To prove \(\tilde{f}\in N^{1,p}(X)\cap C(X)\), we proceed in a few stages. First, we show local Lipschitz continuity and an upper gradient property in the complement of \(K\). **Lemma 3.24**.: _The function \(\tilde{f}\) is locally Lipschitz in \(X\setminus K\) with upper gradient the function \(g\) restricted to \(X\setminus K\), where \(g\) is defined as in Lemma 3.11._ Proof.: Recall that \(D(x)>0\) for \(x\in X\setminus K\), by Lemma 3.15 property 2(b). If \(x,y\in X\setminus K\) and if \(d(x,y)\leq\min\{D(x),D(y)\}\), then we claim that \[|\tilde{f}(x)-\tilde{f}(y)|\leq\max\{G(x),G(y)\}d(x,y). \tag{3.25}\] Suppose for the moment, that this is true. Then, we can show that \(g\) is an upper gradient for \(\tilde{f}\) restricted to \(X\setminus K\). Indeed, let \(x\in X\setminus K\) and fix \(n\geq 1\) so that \(2^{-n}\leq d(x,K)\). Take any \(r<\min(2^{-n-3},1/i_{n+2})/2\). Then, for every \(y\in B(x,r)\) we have \(d(y,K)>2^{-n-1}\) and, by Lemma 3.15 properties 2(c) and 3(c), we have and \(D(y)\geq 2r\) and \(G(y)\leq g_{i_{n}}(y)\leq C(i_{n})\), where \(C(i_{i})\) is the supremum of the bounded function \(g_{i_{l}}\) for \(l\in\mathbb{N}\). Thus Inequality (3.25) implies that \(\tilde{f}|_{B(x,r)}\) is \(C(i_{n})\)-Lipschitz. Using this local Lipschitz property and compactness, if \(\gamma:[0,1]\to X\setminus K\) is any rectifiable curve then \(\tilde{f}\circ\gamma\) is Lipschitz, and \(d(\gamma,K)>2^{-n-1}\) for some \(n\). Therefore, by Lemma 3.15 property 3(c) again, we have \(G\circ\gamma\leq g_{i_{n}}\circ\gamma\) and that \(|\tilde{f}(\gamma(t))-\tilde{f}(\gamma(s))|\leq\max\{g_{i_{n}}(\gamma(t)),g_{i_{ n}}(\gamma(t))\}d(\gamma(s),\gamma(t))\) for any \(s,t\in[0,1]\) with \(d(\gamma(s),\gamma(t))\leq\min\{D(\gamma(s)),D(\gamma(t))\}\). Following similar arguments as in [21, Lemma 4.7], this together with the continuity of \(g_{i_{n}}\) and Lemma 3.15 property 2(c) yields \[|\tilde{f}(\gamma(0))-\tilde{f}(\gamma(1))|\leq\int_{0}^{1}|(\tilde{f}\circ\gamma )^{\prime}|dt\leq\int_{\gamma}g_{i_{n}}\,ds.\] Since \(g_{i_{n}}\leq g\), then \(g\) is an upper gradient for \(\tilde{f}\) restricted to \(X\setminus K\). Next, we prove Inequality (3.25). Fix \(x,y\in X\setminus K\) with \(d(x,y)\leq\min\{D(x),D(y)\}\). By Property 2(b) of Lemma 3.15 we have \(\min\{D(x),D(y)\}\leq 1\). Recall that, by (3.10), \(R>1\). If \(x\) or \(y\) is in \(X\setminus B(x_{0},2R)\) then \(d(x,y)\leq 1\) and both \(x,y\not\in B(x_{0},R)\). Thus, by Property 2 in (3.22), \(\tilde{f}(x)=\tilde{f}(y)=0\) and inequality (3.25) is immediate. 
We are left to consider the case when \(x,y\in B(x_{0},2R)\) and Formula (3.19) gives the values of the function \(\tilde{f}\) at \(x\) and \(y\). By symmetry, it suffices to show that \(\tilde{f}(x)\leq\tilde{f}(y)+G(y)d(x,y)\). If \(\tilde{f}(y)=M\), the claim follows by the definition of \(\tilde{f}\). Otherwise, we have \(\tilde{f}(y)=\inf_{P}\Phi(P)\), where \(P=(p_{0},\ldots,p_{n})\) runs through all \((y,\mathrm{D})\)-admissible paths. Let \(P\) be any \((y,\mathrm{D})\) admissible path. If \(x\in P\), then by truncating \(P\), we obtain an \((x,\mathrm{D})\) admissible sub-path, and \(\tilde{f}(x)\leq\Phi(P)\) by definition. If \(x\not\in P\), the augmented path \(P^{\prime}=(p_{0},\ldots,p_{n},x)\) is \((x,\mathrm{D})\) admissible, since \(d(p_{n},x)=d(y,x)\leq\min\{D(x),D(y)\}\leq\mathrm{D}(p_{n})\). Thus, \[\tilde{f}(x)\leq\Phi(P^{\prime})=\mathbf{P}(d(p_{0},p_{1}))+f(p_{0})+\sum_{k= 0}^{n-1}G(p_{k})d(p_{k},p_{k+1})+G(y)d(x,y)=\Phi(P)+G(y)d(x,y).\] Infimizing over all discrete \((y,\mathrm{D})\)-admissible paths \(P\) now yields \(\tilde{f}(x)\leq\tilde{f}(y)+G(y)d(x,y)\), and thus the claim. Next, we prove continuity of \(\tilde{f}\) on all of \(X\). **Lemma 3.26**.: _The function \(\tilde{f}:X\longrightarrow\mathbb{R}\) is continuous._ Proof.: By Lemma 3.24, the function \(\tilde{f}|_{X\setminus K}\) is continuous. Also, \(\tilde{f}|_{K}=f|_{K}\) is continuous by assumption. Recall that \(K\) is closed. Thus to prove the lemma, it suffices to prove sequential continuity at points of \(\partial K\), with the sequence approaching from \(X\setminus K\). Fix \(x\in\partial K\) and a sequence \(x_{i}\to x\) with \(x_{i}\not\in K\) for each \(i\in\mathbb{N}\). Since \(\partial K\subset B(x_{0},R)\), we may assume that \(x_{i}\in B(x_{0},R)\) for all \(i\) by passing to the tail of the sequence. Since the first jump is free, for every \(i\in\mathbb{N}\), the path \(P=(x,x_{i})\) is an \((x_{i},\mathrm{D})\)-admissible path. Thus, by Property 4(d) of Lemma 3.15, we have \[\tilde{f}(x_{i})\leq f(x)+\mathbf{P}(d(x,x_{i}))+g(x)d(x,x_{i}).\] By Property 4(b) of Lemma 3.15 and the fact that \(g|_{K}\) is bounded by Lemma 3.11, we have \(\limsup_{i\to\infty}\tilde{f}(x_{i})\leq f(x)\). So, it suffices to show that \(\liminf_{i\to\infty}\tilde{f}(x_{i})\geq f(x)\). Indeed, by passing to a subsequence, it suffices to assume that the limit \(\lim_{i\to\infty}\tilde{f}(x_{i})\) exists and then to show that \[\lim_{i\to\infty}\tilde{f}(x_{i})\geq f(x). \tag{3.27}\] In the following, we will analyze several sub-cases depending on the values of \(\tilde{f}\) and the constructed paths. Eliminating each sub-case will reduce the problem to a simpler situation. In the following, WLOG is short for "Without loss of generality". **Reduction 1: WLOG \(\tilde{f}(x_{i})<M\) for infinitely many \(i\).** If for all but finitely many \(i\) we have \(\tilde{f}(x_{i})=M\), the claim (3.27) follows from the definition of \(M\). Thus, we may pass to a subsequence, where \(\tilde{f}(x_{i})<M\) for every \(i\in\mathbb{N}\). By definition of \(\tilde{f}\) in (3.19), we may find discrete paths \(P_{i}=(p_{0}^{i},\ldots,p_{n(i)}^{i})\) which are \((x_{i},\mathrm{D})\)-admissible and for which \[\lim_{i\to\infty}\tilde{f}(x_{i})=\lim_{i\to\infty}\Phi(P_{i})=\lim_{i\to\infty }\left\{\mathbf{P}(d(p_{0}^{i},p_{1}^{i}))+f(p_{0}^{i})+\sum_{k=0}^{n(i)-1}G(p_ {k}^{i})d(p_{k}^{i},p_{k+1}^{i})\right\}, \tag{3.28}\] and \(\Phi(P_{i})<M\). Note that \(n(i)>0\), because \(x_{i}\in B(x_{0},R)\setminus K\), and hence is not in \(C\). 
**Reduction 2: WLOG the points \(p_{0}^{i}\) do not converge to \(x\).** If \(\lim_{i\to\infty}p_{0}^{i}=x\), then we get that \(\lim_{i\to\infty}\Phi(P_{i})\geq\lim_{i\to\infty}f(p_{0}^{i})=f(x)\), as desired, because \(p_{0}^{i}\) and \(x\) are in \(C\) and \(f|_{C}\) is continuous. Thus, by passing to some subsequence we are left to consider the case that \(\lim_{i\to\infty}d(p_{0}^{i},x)=\Delta\) for some \(\Delta>0\). By further passing to a subsequence we can ensure \(\Delta/2\leq d(p_{0}^{i},x)\leq 2\Delta\), so that \(\operatorname{diam}(P_{i})\geq\Delta/2\) for each \(i\in\mathbb{N}\). By passing to another subsequence, since \(\lim_{i\to\infty}p_{n(i)}^{i}=\lim_{i\to\infty}x_{i}=x\), we can assume that \[d(p_{n(i)}^{i},x)\leq\min\{\Delta/2,i_{3}^{-1}\}\leq\operatorname{diam}(P_{i}) \qquad\text{for all }i\in\mathbb{N}. \tag{3.29}\] This allows us to compare the values of \(f(x)\) and \(f(p_{0}^{i})\) by considering the augmented discrete path \(P_{i}^{\prime}=(p_{0}^{i},\ldots,p_{n(i)}^{i},x)\). At this point we may picture the path \(P_{i}\) as the path corresponding to the point \(T\) in Figure 1. The path \(P^{\prime}_{i}\) is obtained by augmenting \(P_{i}\) with a jump to \(x\). Hence, \(P^{\prime}_{i}\) is no longer admissible. **Reduction 3: WLOG the first jump \(d(p^{i}_{0},p^{i}_{1})\) is less than \(i_{3}^{-1}\).** If not, by Lemma 3.15 4(a), \(\mathbf{P}(d(p^{i}_{0},p^{i}_{1}))\geq 2M\). However, this contradicts the fact that \(\Phi(P_{i})<M\). Therefore, we must thus have \(d(p^{i}_{0},p^{i}_{1})\leq\frac{1}{i_{3}}\) for all \(i\in\mathbb{N}\). Recall that Lemma 3.15 2(a-b) gives \(\mathrm{D}(x)\leq i_{3}^{-1}\), for all \(x\in X\). Thus, since \(P_{i}\) is \((x_{i},\mathrm{D})\)-admissible, and \(d(p^{i}_{n(i)},x)\leq i_{3}^{-1}\), we get \(\mathrm{Mesh}(P^{\prime}_{i})\leq i_{3}^{-1}\). **Reduction 4: WLOG eventually the diameter \(\mathrm{diam}(P^{\prime}_{i})\) is less than \(2^{-3}\).** If \(\mathrm{diam}(P^{\prime}_{i})\geq 2^{-3}\), for infinitely many \(i\in\mathbb{N}\), we may pass to a subsequence where this property holds. Then, since \(g_{i_{3}}\) is \((1/i_{3},2^{-3})\)-disretely admissible for \(f\) with respect to \((C,V)\), we can apply the admissibility condition to the path \(P^{\prime}\) and we get \[|f(x)-f(p^{i}_{0})| \leq g_{i_{3}}(p^{i}_{n(i)})d(p^{i}_{n(i)},x)+\sum_{k=0}^{n(i)-1}g _{i_{3}}(p^{i}_{k})d(p^{i}_{k},p^{i}_{k+1}) \tag{3.30}\] \[\leq g_{i_{3}}(p^{i}_{n(i)})d(p^{i}_{n(i)},x)+\sum_{k=0}^{n(i)-1}G (p^{i}_{k})d(p^{i}_{k},p^{i}_{k+1}).\qquad\qquad\text{(by Lemma~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\ref{lem:wLOG}~{}\] Thus, \[\Phi(P_{i})=\mathbf{P}(d(p^{i}_{0},p^{i}_{1}))+f(p^{i}_{0})+\sum_{k=0}^{n(i)-1} G(p^{i}_{k})d(p^{i}_{k},p^{i}_{k+1})\geq f(x)-g_{i_{3}}(p^{i}_{n(i)})d(p^{i}_{n(i)},x),\] and by sending \(i\to\infty\) along such a subsequence and noting that \(g_{i_{3}}\) is continuous and bounded, we get that \(\lim_{i\to\infty}\Phi(P_{i})\geq f(x)\). Therefore, in this case, \(\lim_{i\to\infty}\tilde{f}(x_{i})\geq f(x)\) using Equation (3.28). 
Thus, by the last reduction and by (3.29), we can assume that \[\frac{\Delta}{2}\leq\mathrm{diam}(P^{\prime}_{i})\leq 2^{-3}.\] In particular, if we let \(L\in\mathbb{Z}\) be such that \(2^{-L}\leq\Delta<2^{1-L}\), then \(L\geq 3\). Since \(p^{i}_{n(i)}\) is converging to \(x\), by passing to the tail, we can assume \(d(p^{i}_{n(i)},x)\leq\min\{\Delta/2,i_{L+1}^{-1}\}\). Let \(l_{i}\in\mathbb{N}\) be such that \(3\leq l_{i}\leq L\) and \(2^{-l_{i}-1}\leq\mathrm{diam}(P^{\prime}_{i})\leq 2^{-l_{i}}\). By the pigeonhole principle, we can pass to a sub-sequence with \(l_{i}=l\) for all \(i\in\mathbb{N}\). Given \(l\in\mathbb{N}\), which controls the size of the diameter, we now want to control the mesh-size of the path \(P^{\prime}_{i}\). **Reduction 5: WLOG for all but finitely many indices \(i\), we have \(d(p^{i}_{0},p^{i}_{1})\leq i_{l+1}^{-1}\),** where \(i_{n}\) is the bound for the mesh-size defined in Lemma 3.15. If not, then \(d(p^{i}_{0},p^{i}_{1})\geq i_{l+1}^{-1}\) for infinitely many \(i\in\mathbb{N}\). For such indices \(i\), we have \(\mathbf{P}(d(p^{i}_{0},p^{i}_{1}))\geq\omega(2^{1-l})\) by Property 4(b) of Lemma 3.15. Thus, \[\Phi(P_{i}) \geq\mathbf{P}(d(p^{i}_{0},p^{i}_{1}))+f(p^{i}_{0})\] \[\geq\omega(2^{1-l})+f(p^{i}_{0})\] \[\geq\omega(\mathrm{diam}(P^{\prime}_{i}))+f(p^{i}_{0}) \text{(since $2^{1-l}\geq\mathrm{diam}(P^{\prime}_{i})$)}\] \[\geq\omega(d(x,p^{i}_{0}))+f(p^{i}_{0}) \text{(since $x,p^{i}_{0}\in P^{\prime}_{i}$)}\] \[\geq f(x)\] Letting \(i\to\infty\) along the given subsequence gives the claim. Therefore, we can assume by passing to the tail that \(d(p^{i}_{0},p^{i}_{1})\leq i_{l+1}^{-1}\) for all \(i\in\mathbb{N}\). **End of proof of Lemma 3.26:** By the reductions described above, after passing to a subsequence, \[d(p^{i}_{0},p^{i}_{1})\leq i_{l+1}^{-1}\qquad\text{and}\qquad d(p^{i}_{n(i)},x) \leq i_{L+1}^{-1}\leq i_{l+1}^{-1} \tag{3.31}\] For \(k=1,\ldots,n(i)-1\), we have \(d(p_{k}^{i},K)\leq\operatorname{diam}(P_{i}^{\prime})\leq 2^{-l}\). Since \(P_{i}\) is \((x,D)\)-admissible, by Property 2(c) of Lemma 3.15, \[d(p_{k}^{i},p_{k+1}^{i})\leq\operatorname{D}(p_{k}^{i})\leq\frac{1}{i_{l+1}}. \tag{3.32}\] Combining (3.31) and (3.32), we get \(\operatorname{Mesh}(P_{i}^{\prime})\leq 1/i_{l+1}\) and \(\operatorname{diam}(P_{i}^{\prime})\geq 2^{-l-1}\). Finally, since \(g_{i_{l+1}}\) is \((i_{l+1}^{-1},2^{-l-1})\)-discretely admissible for \(f\) for \((C,V)\) we get Inequality (3.30) with \(i_{l+1}\) replacing \(i_{3}\). Therefore, \[\Phi(P_{i})=\mathbf{P}(d(p_{0}^{i},p_{1}^{i}))+f(p_{0}^{i})+\sum_{k=0}^{n(i)-1 }G(p_{k}^{i})d(p_{k}^{i},p_{k+1}^{i})\geq f(x)-g_{i_{l+1}}(p_{n(i)}^{i})d(p_{n( i)}^{i},x),\] and the claim follows from Equation (3.28) by sending \(i\to\infty\) and noting that \(g_{i_{l+1}}\) is continuous. Next, we quickly get the Sobolev property. **Lemma 3.33**.: _The function \(\tilde{f}\in N^{1,p}(X)\) and \(g\) is its upper gradient._ Proof.: Recall that \(f\in N^{1,p}(X)\), \(g\) is an upper gradient for \(f\), \(f|_{K}=\tilde{f}|_{K}\) by Property (3.22), \(\tilde{f}\in C(X)\) by Lemma 3.26, and \(g\) is an upper gradient for \(\tilde{f}\) in \(X\setminus K\) by Lemma 3.24. Thus, Proposition 2.6 applied to \(f,g,\tilde{f}\) and \(K\) shows that \(g\) is an upper gradient for \(\tilde{f}\) and that consequently \(\tilde{f}\in N^{1,p}(X)\). This establishes Property (3.23). 
(3.34) **Property 4:**\(\int_{X\setminus K}g_{\tilde{f}}^{p}\,d\mu\leq\int_{X\setminus K}g_{*}^{p}\,d\mu+\epsilon\) By Lemma 3.33, \(\tilde{f}\in N^{1,p}(X)\) with upper gradient \(g\). Recall that \(g_{\tilde{f}}\) is the minimal \(p\)-weak upper gradient and is smaller than any other upper gradient, i.e. \(g_{\tilde{f}}\leq g\) (a.e.). By construction, \(g_{*}\leq g\). Thus, the fourth property follows from Lemma 3.11 and the fact that \(g\in L^{p}(X)\): \[\int_{X\setminus K}g_{\tilde{f}}^{p}d\mu\leq\int_{X\setminus K}g^{p}d\mu= \int_{X}g^{p}d\mu-\int_{K}g^{p}d\mu\leq\int_{X}g_{*}^{p}+\epsilon 2^{-4}-\int_{K}g_{*}^{p }d\mu=\int_{X\setminus K}g_{*}^{p}+\epsilon 2^{-4}.\] ### Newton-Sobolev functions are quasicontinuous **Theorem 3.35**.: _If \(X\) is complete and separable and \(f\in N^{1,p}(X)\), then \(f\) is quasicontinuous._ We follow the arguments in [3] and [21], but without relying on the hypothesis of properness and density of continuous functions in \(N^{1,p}(X)\). Hence, we only provide a sketch. Sketch of the proof of Theorem 3.35.: By Theorem 3.7, there is a sequence \(f_{i}\in N^{1,p}(X)\cap C(X)\), for \(i\in\mathbb{N}\), with \(\|f_{i}-f\|_{N^{1,p}(X)}\leq 2^{-i}\). Next, we apply the argument from the proof of [21, Theorem 3.7] to show that \(f_{i}\) converges capacity almost everywhere to \(f\in N^{1,p}(X)\). Fix \(\epsilon_{0}>0\), and let \(E_{\epsilon_{0},n}=\{x\in X:|f_{n}-f|\geq\epsilon_{0}/n\}\). We have \[\int_{X}|f_{n}-f|^{p}d\mu\leq 2^{-np},\] and thus \(\mu(E_{\epsilon_{0},n})\leq n^{p}2^{-np}\epsilon_{0}^{-p}\). Let \(E_{N}=\bigcup_{n\geq N}E_{\epsilon,N}\). By a union bound, we get that for any \(\epsilon>0\), there exists an \(N\), so that \(\mu(E_{N})\leq\epsilon.\) The sequence of functions \(f_{n}(x)\) converges uniformly to \(f\) for any \(x\in X\setminus E_{N}\), and thus for a.e. \(x\in X\), since \(\epsilon>0\) is arbitrary. By considering \(u_{n}=|f_{n}-f|n\epsilon_{0}^{-1}\) as a test function, we get \(\operatorname{Cap}_{p}(E_{\epsilon_{0},n})\leq n^{p}2^{-np}\epsilon_{0}^{-p}\). At the expense of possibly increasing \(N\), we get \(\operatorname{Cap}_{p}(E_{N})\leq\epsilon.\) Since \(\epsilon>0\) is arbitrary, \(\lim_{N\to\infty}\operatorname{Cap}_{p}(E_{N})=0\). Further, \(f_{i}\) converges pointwise to \(f\) outside the set \(E=\cap_{n=1}^{\infty}E_{n}\), which has capacity zero. Since the convergence is uniform, \(f\) is continuous in \(X\setminus E_{N}\), for every \(N\). Therefore, \(f\) is quasicontinuous, since \(\lim_{N\to\infty}\operatorname{Cap}_{p}(E_{N})=0\) ## 4. Localization and when \(X\) is locally complete The previous section was focused entirely on complete spaces. In the final sections we improve these statements to a locally complete setting. Specifically, we prove the following theorems, where in each we assume that \(X\) is locally complete and separable. * Theorem 1.10: Concluding that \(f\in N^{1,p}(X)\) is quasicontinuous. * Theorem 1.6: Concluding that \(N^{1,p}(X)\cap C(X)\) is dense in \(N^{1,p}(X)\). These theorems will all be reduced to the complete setting by taking completions. This makes \(X\) into an open set in its completion, and we are left to consider domains \(\Omega\) in complete spaces. Then, in each case, we consider the set of points \(X_{\delta}\subset X\), whose distance do the boundary in the completion is at least \(\delta\), and construct partitions of unity subordinate to such sets. 
Each \(X_{\delta}\) is complete, and the proofs mainly involve checking that we can "patch" together the information from each \(X_{\delta}\) to their union, which is \(X\). For technical reasons, we prove these theorems in a slightly different order from those in the complete setting. ### Preliminaries on taking a completion First, we address some measure theoretic issues in taking a completion. Let \(X\) be locally complete, and let \(\hat{X}\) be its completion. The completion is separable, if \(X\) is separable. Further \(X\) is an open subset of \(\hat{X}\). If \(\mu\) is a Radon measure on \(X\), then we can define a Radon measure \(\hat{\mu}\) on \(\hat{X}\) as follows. If \(E\subset X\) is Borel, then \(E\cap X\) is also Borel and we can define \(\hat{\mu}(E)=\mu(E\cap X)\). (In fact, by a different argument \(E\cap X\) is Borel in \(X\) whenever \(E\) is Borel even when \(X\) is not Borel measurable in \(\hat{X}\), see [20, Proof of Lemma 1]). Since \(\mu\) is finite on balls, so is \(\hat{\mu}\) and therefore \(\hat{\mu}\) is a Radon measure. (See discussion at the beginning of Section 2). Since we will be dealing with concepts relative to \(X\) and \(\hat{X}\) we need some care in our notation. For capacity, we will indicate the space \(Y\) with respect to which it is computed in the superscript, as in \(\operatorname{Cap}_{p}^{Y}(E)\), for \(E\subset Y\). We remark, that if \(E\subset X\subset Y\) and the measures on the spaces relate by restriction \(\mu_{X}=\mu_{Y}|_{X}\) (where \(X\) is measurable in \(Y\)), then \(\operatorname{Cap}_{p}^{X}(E)\leq\operatorname{Cap}_{p}^{Y}(E)\). Here, we use the fact that in this same setting if \(u\in N^{1,p}(Y)\), then \(u|_{X}\in N^{1,p}(X)\), as readily follows from the definition. ### Quasicontinuity Proof of Theorem 1.10.: Fix \(f\in N^{1,p}(X)\). Let \(\hat{X}\) be the completion of \(X\) and \(\hat{\mu}\) be the extension of \(\mu\) to \(\hat{X}\). We have that \(X\) is also an open set in \(\hat{X}\) since \(X\) is locally complete. Let \(\delta>0\) be arbitrary. Define \(X_{\delta}=\{x:d(x,\hat{X}\setminus X)\geq\delta\}\). Then, \(X_{\delta}\) is a closed subset of \(\hat{X}\). Choose \(\psi_{\delta}(x)=\min\{1,\frac{2}{\delta}d(x,\hat{X}\setminus X_{\delta/2})\}\). Then \(\psi_{\delta}|_{X_{\delta}}\geq 1\), \(\psi_{\delta}\) is \(2/\delta\)-Lipschitz and \(\psi_{\delta}|_{\hat{X}\setminus X_{\delta/2}}=0\). Let \(f_{\delta}=f\psi_{\delta}\). We have \(f_{\delta}|_{X}\in N^{1,p}(X)\) and \(f_{\delta}|_{\hat{X}\setminus X_{\delta/2}}=0\in N^{1,p}(\hat{X}\setminus X_{ \delta/2})\). Then, \(f_{\delta}\in N^{1,p}(\hat{X})\) since \(N^{1,p}(X)\) has the sheaf property: If \(A,B\subset\hat{X}\) are open sets and \(f|_{A}\in N^{1,p}(A),f|_{B}\in N^{1,p}(B)\), then \(f|_{A\cup B}\in N^{1,p}(A\cup B)\).1 Footnote 1: This can be seen by the following argument: if \(g_{A},g_{B}\in L^{p}(A)\) are upper gradients for \(f|_{A}\) and \(f|_{B}\), then \(g=g_{A}1_{A}+g_{B}1_{B}\in L^{p}(A\cup B)\) is an upper gradient of \(f|_{A\cup B}\). Indeed, the upper gradient inequality (1.1) can be verified for any rectifiable curve \(\gamma\) in \(A\cup B\) by dividing it into finitely many parts contained in either \(A\) or \(B\). Then, by Theorem 3.35 we have that \(f_{\delta}\) is quasicontinuous in \(\hat{X}\). 
Therefore, for any \(\delta>0\) there is an open subset \(E_{\delta}\) so that \(f_{\delta}|_{\hat{X}\setminus E_{\delta}}\) is continuous and \(\operatorname{Cap}_{p}^{\hat{X}}(E_{\delta})<\delta.\) Fix \(\epsilon>0\) and let \(E=\cup_{i=1}^{\infty}E_{\epsilon 2^{-i}}\cap X\). We have \(\operatorname{Cap}_{p}^{X}(E)\leq\operatorname{Cap}_{p}^{\hat{X}}(E)\leq\epsilon\). Now, \(f_{2^{-i}\epsilon}|_{X\setminus E}\) is continuous for every \(i\in\mathbb{N}\). Therefore \(f|_{X_{2-i\epsilon}\setminus E}=f_{2^{-i}\epsilon}|_{X_{2-i\epsilon}\setminus E}\) is continuous on for any \(i\in\mathbb{N}\). From this we get \(f|_{X\setminus E}\) is continuous. ### Density of continuous functions Here and in what follows, the support \(\operatorname{supp}(f)\) of a function \(f:X\to\mathbb{R}\) is the smallest closed set \(C\) so that \(f|_{X\setminus C}\) vanishes identically. In the proof of the density of continuous functions we will apply a partition of unity argument We will need a standard construction for a partition of unity subordinate to a cover. Let \(X_{i}=\{x:d(\hat{X}\setminus X,x)\geq 2^{-i}\}\) and \(\Omega_{i}=\{x:d(\hat{X}\setminus X,x)>2^{-i}\}\). In the following, the distance of a point to an empty set is defined as \(\infty\). Also, we say that a sum of functions \(\sum_{i=1}^{\infty}f_{i}(x)\) is _locally finite_ if for every \(x\in X\) there exists a neighborhood, where only finitely many terms are non-zero. **Lemma 4.1**.: _Let \(\hat{X}\) be the completion of \(X\) and let \(X_{i}\) be defined as above. For each \(n\in\mathbb{N}\), There exist \(4^{n}\)-Lipschitz functions \(\psi_{n}:\hat{X}\to[0,1]\), so that_ * \(\operatorname{supp}(\psi_{0})\subset X_{1}\) _and_ \(\operatorname{supp}(\psi_{n})\subset X_{n+1}\setminus X_{n-1}\) _for_ \(n\geq 1\)_;_ * _the functions are a partition of unity:_ \(\sum_{n=0}^{\infty}\psi_{n}(x)=1\) _for_ \(x\in X\)_; and_ * _the previous sum is locally finite in_ \(X\)_: for every_ \(x\in X\) _there exists a_ \(\delta>0\) _so that are at most three_ \(n\in\mathbb{N}\) _so that_ \(\psi_{n}(y)\neq 0\) _for_ \(y\in B(x,\delta)\)_._ Proof.: Let \(\psi_{0}(x)=\min\{1,2d(x,\hat{X}\setminus X_{1})\}\). Recursively, for \(n\geq 1\), define \[\psi_{n}(x)=\left(1-\sum_{k=0}^{n-1}\psi_{k}\right)\min\{1,2^{n+1}d(x,\hat{X} \setminus X_{n+1})\}. \tag{4.2}\] First, \(\psi_{0}\) is \(2\)-Lipschitz, and by induction one can show that \(\psi_{n}\) is Lipschitz with constant \((1+\cdots+4^{n-1})+2^{n+1}\leq 4^{n}\). We have \(\psi_{0}|_{X_{0}}=1\). By induction, we get that \(\sum_{k=0}^{n-1}\psi_{k}|_{X_{n-1}}=1\). Therefore, (b) holds. Moreover, this gives (a), since the first factor in (4.2) vanishes on \(X_{n-1}\) and the second factor vanishes outside \(X_{n+1}\). Finally, we prove (c). If \(x\in X\), then \(x\in X_{n}\setminus X_{n-1}\) for some \(n\geq 0\), where \(X_{-1}=\emptyset\) to simplify the argument. We have for \(\delta=2^{-(n-1)}\) that \(B(x,\delta)\subset X_{n+1}\). Thus, for \(y\in B(x,\delta)\), due to (a), \(\psi_{k}(y)\neq 0\) can only occur for \(k=n-1,n,n+1\). We also need a fairly simple version of the sheaf property for Sobolev functions. **Lemma 4.3**.: _Let \(p\in[1,\infty)\) and let \(X\) be any metric measure space equipped with a Radon measure \(\mu\), finite on balls. 
If \(A_{i}\subset X\) is any increasing sequence of open sets, and \(H:A:=\bigcup_{i=1}^{\infty}A_{i}\to[-\infty,\infty]\) is a function so that \(H|_{A_{i}}\in N^{1,p}(A_{i})\) with \(\sup_{i\in\mathbb{N}}\|H\|_{N^{1,p}(A_{i})}<\infty\), for \(i\geq 1\), then \(H\in N^{1,p}(A)\). Further, \(\|H\|_{N^{1,p}(A)}=\lim_{i\to\infty}\|H\|_{N^{1,p}(A_{i})}\)._ Proof.: The \(L^{p}\)-version of the claim follows from monotone convergence, and we get \(H\in L^{p}(A)\) and \(\|H\|_{L^{p}(A)}=\lim_{i\to\infty}\|H\|_{L^{p}(A_{i})}\) Let \(g_{i}:X\to[0,\infty]\) be the zero-extension of the minimal \(p\)-weak upper gradient of \(H|_{A_{i}}\). By locality of \(p\)-weak upper gradients, see [14, Proposition 6.3.22], \(g_{i}|_{A_{j}}=g_{j}\) almost everywhere on \(A_{j}\) for all \(j<i\). Thus, there exist a function \(g:A\to[0,\infty]\) with \(g\in L^{p}(A)\) and \(g|_{A_{i}}=g_{i}\) almost everywhere on \(A_{i}\). Thus, \(g|_{A_{i}}\) is a \(p\)-weak upper gradient for \(H|_{A_{i}}\) for all \(i\in\mathbb{N}\). Fix \(\epsilon>0\). By Lemma 2.4, for every \(i\), we can find a lower semi-continuous \(g_{i,\epsilon}:A_{i}\to[0,\infty]\) which is an upper gradient for \(H|_{A_{i}}\) with \(g_{i,\epsilon}\geq g|_{A_{i}}\) and \(\|g_{i,\epsilon}-g|_{A_{i}}\|_{L^{p}(A_{i})}\leq\epsilon 2^{-i}\). Extend \(g_{i,\epsilon}\) by zero, and define \(\tilde{g}=\sup_{i}g_{i,\epsilon}.\) We have, on the set \(A\), \[|\tilde{g}-g|\leq\sum_{i=1}^{\infty}|g_{i,\epsilon}-g|_{A_{i}}.\] Then \(\tilde{g}\in L^{p}(A)\), and, by monotone convergence, \[\|\tilde{g}\|_{L^{p}(A)}\leq\epsilon+\lim_{i\to\infty}\|g_{i}\|_{L^{p}(A_{i})}.\] By construction, \(\tilde{g}|_{A_{i}}\geq g_{i,\epsilon}\) and thus \(\tilde{g}|_{A_{i}}\) is an upper gradient for \(H|_{A_{i}}\). Every rectifiable curve in \(\bigcup_{i\in\mathbb{N}}A_{i}\) is contained in \(A_{i}\) for some \(i\in\mathbb{N}\). This argument verifies (1.1) and \(\tilde{g}\) is an upper gradient for \(H\). Thus, \(H\in N^{1,p}(A)\). Further, \[\|H\|_{N^{1,p}(A)}\leq(\|H\|_{L^{p}(A)}^{p}+\|\tilde{g}\|_{L^{p}(A)}^{p})^{ \frac{1}{p}}\leq\lim_{i\to\infty}((\epsilon+\|g_{i}\|_{L^{p}(A_{i})})^{p}+\|H\|_ {L^{p}(A_{i})}^{p})^{\frac{1}{p}}.\] Since \(\epsilon>0\) is arbitrary the claim follows. Proof of Theorem 1.6.: Let \(f\in N^{1,p}(X)\) be any function. Fix \(\epsilon>0\). Let \(\psi_{n}\) be the partition of unity functions from Lemma 4.1. We also define \(\hat{\psi}_{n}=\psi_{n}+\psi_{n+1}+\psi_{n-1}\) for \(n\geq 1\) and \(\hat{\psi}_{0}=\psi_{0}+\psi_{1}\). For every \(x\in\Omega\) we have finitely many \(n\) so that \(\hat{\psi}_{n}(x)\neq 0\). Further, whenever \(\psi_{n}(x)\neq 0\), we have \(\hat{\psi_{n}}(x)=1\). There are also constants \(L_{n}\) so that \(\hat{\psi}_{n}\) are \(L_{n}\)-Lipschitz. Indeed, with some care, we could show that \(L_{n}\lesssim 4^{n}\), but we will not need this. As in the proof of Theorem 1.10 we set \(f_{n}=f\hat{\psi}_{n}\in N^{1,p}(\hat{X})\). By Theorem 3.7, there is a continuous \(u^{\prime}_{n}\in N^{1,p}(\hat{X})\cap C(\hat{X})\) so that \(\|f_{n}-u^{\prime}_{n}\|_{N^{1,p}(\hat{X})}\leq\epsilon 2^{-4-n}(1+L_{n})^{-1}\). Also, let \(u_{n}=\hat{\psi}_{n}g^{\prime}_{n}\). Then, by using the Leibniz rule (see [14, Proposition 6.3.28]), we get \[\left\|u_{n}-f_{n}\right\|_{N^{1,p}(\hat{X})}=\left\|\hat{\psi}_{n}(u_{n}-f_{n}) \right\|_{N^{1,p}(\hat{X})}\leq 2(1+L_{n})\|u_{n}-f_{n}\|_{N^{1,p}(\hat{X})}\leq 2^{-2-n}\epsilon.\] Let \(u=\sum_{n=1}^{\infty}u_{n}\). 
Since the sum is a locally finite sum of continuous functions by Property (c) of Lemma 4.1, then \(u\in C(X)\) and the sum is well defined. We show that \(u\in N^{1,p}(X)\). Fix an \(i\in\mathbb{N}\). We have that \(f|_{\Omega_{i}}=\sum_{i=0}^{n+1}f_{i}|_{\Omega_{i}}\) and that \(u|_{\Omega_{i}}=\sum_{i=0}^{n+1}u_{i}|_{\Omega_{i}}\). Thus, \[\|f-u\|_{N^{1,p}(\Omega_{i})}\leq\sum_{n=1}^{\infty}\|f_{n}-u_{n}\|_{N^{1,p}( \Omega_{i})}\leq\epsilon/2\] and \(u|_{\Omega_{i}}\in N^{1,p}(\Omega_{i})\) with a uniformly bounded norm independent of \(i\). By Lemma 4.3, \(u\in N^{1,p}(X)\). Further, \(\|g\|_{N^{1,p}(X)}=\lim_{i\to\infty}\|g\|_{N^{1,p}(\Omega_{i})}\). Finally, by applying this argument to the difference \(f-g\), we obtain \(\|f-g\|_{N^{1,p}(X)}\leq\epsilon\). Since \(g\) is continuous, the claim follows. ## 5. Choquet capacities and equivalence of definitions In the final section we study the capacity \(E\to\operatorname{Cap}_{p}(E)\) and condenser capacity \(\operatorname{Cap}_{p}(E,F)\), and prove that they satisfy certain regularity properties. Specifically, we prove the following three theorems. 1. Theorem 1.7: Concluding that \(E\to\operatorname{Cap}_{p}(E)\) is outer regular. 2. Theorem 1.8: Concluding that \(E\to\operatorname{Cap}_{p}(E)\) is a Choquet capacity for \(p>1\). 3. Theorem 1.3: Concluding that different definitions of \(\operatorname{Cap}_{p}(E,F)\) coincide. In particular, capacity can be computed with locally lipschitz funtions with locally lipschitz upper gradients. ### Choquet capacity and outer regularity We start by defining a Choquet capacity. Denote by \(\mathcal{P}(X)\) the collection of all subsets of \(X\), i.e. its power set. **Definition 5.1**.: A functional \(I:\mathcal{P}(X)\to[0,\infty]\) is called a Choquet capacity, if it satisfies the following three properties. 1. Increasing: If \(A\subset B\subset X\), then \(I(A)\leq I(B)\). 2. Continuity from below: If \((A_{n})_{n\in\mathbb{N}}\) is an increasing sequence of subsets of \(X\), then \[\lim_{n\to\infty}I(A_{n})=I\left(\bigcup_{n\in\mathbb{N}}A_{n}\right).\] 3. Continuity from above: If \((K_{n})_{n\in\mathbb{N}}\) is a decreasing sequence of compact subsets of \(X\), then \[\lim_{n\to\infty}I(K_{n})=I\left(\bigcap_{n\in\mathbb{N}}K_{n}\right).\] A reader interested in Choquet capacities may consult any of the following [7, 8]. A condensed treatise is available in [5]. An earlier result showing that a variant of \(\operatorname{Cap}_{p}\), see Remark 1.9, is Choquet is presented in [16]. One of the main motivations for introducing Choquet capacities is the "Capacitability theorem" of Choquet, which states that any analytic subset \(A\subset X\) satisfies: \(I(A)=\sup_{K\subset A}I(K)\), where the supremum is taken over compact subsets of \(A\). For \(I=\operatorname{Cap}_{p}\), the increasing property is immediate from the definition. The continuity from below holds without further assumptions, when \(p>1\). The continuity from above is reduced to the functional being _outer-regular_. Recall that the functional \(I\) is outer regular, if for every compact set \(K\subset X\), and any \(\epsilon>0\), there exists an open set \(O\) such that \(K\subset O\) and \(I(O)\leq I(K)+\epsilon\). In other words, the main object is to establish outer regularity, and then collect all the pieces together to prove that \(\operatorname{Cap}_{p}\) is a Choquet capacity. We first prove the outer regularity of the capacity, which was stated in Theorem 1.7. 
This is a repetition of the argument in [3, Proof of Corollary 1.3], with the only change being that Theorem 3.35 is used instead of [3, Theorem 1.1]. For the reader's convenience, we sketch the idea here. Sketch of proof of Theorem 1.7.: Let \(u\in N^{1,p}(X)\) be any non-negative function with \(u|_{E}\geq 1\). Fix \(\epsilon>0\). Then, \(u\) is quasicontinuous by Theorem 1.10, and there is an open set \(V\) with \(\operatorname{Cap}_{p}(V)<\epsilon^{p}\) so that \(u|_{X\setminus V}\) is continuous. Choose a non-negative function \(v\) so that \(\|v\|_{N^{1,p}(X)}\leq\epsilon\) and \(v|_{V}\geq 1\). By continuity in \(X\setminus V\), there is an open set \(O_{E}\) with \(E\setminus V\subset O_{E}\) so that \(u|_{O_{E}\cap X\setminus V}\geq 1-\epsilon\). Consider the function \(u_{\epsilon}=\frac{u}{1-\epsilon}+v\). Then \(u_{\epsilon}|_{O_{E}\cup V}\geq 1\). The set \(O_{E}\cup V\) is open, and thus, \[\inf_{E\subset O}\operatorname{Cap}_{p}(O)\leq\|u_{\epsilon}\|_{N^{1,p}(X)}^{p} \leq\left(\frac{1}{(1-\epsilon)}\|u\|_{N^{1,p}(X)}+\epsilon\right)^{p}.\] Taking an infimum over \(u\in N^{1,p}(X)\) with \(u|_{E}\geq 1\) and letting \(\epsilon\to 0\) yields the claim. Next, we prove that \(\operatorname{Cap}_{p}\) is a Choquet capacity. This was stated in the introduction as Theorem 1.8. Proof of Theorem 1.8.: We verify the three properties of a Choquet capacity from Definition 5.1. 1. **Increasing:** If \(A\subset B\subset X\), then \(\operatorname{Cap}_{p}(A)\leq\operatorname{Cap}_{p}(B)\), since every function \(u\in N^{1,p}(X)\) with \(u|_{B}=1\) also satisfies \(u|_{A}=1\). 2. **Continuity from below:** We follow the proof of [16], which is presented with a slightly different definition of capacity. Let \(A_{n}\) be any increasing sequence of sets. By the increasing property, the property of continuity from below is automatic if \(\lim_{n\to\infty}\operatorname{Cap}_{p}(A_{n})=\infty\). Thus, we may assume that \(\lim_{n\to\infty}\operatorname{Cap}_{p}(A_{n})<\infty\). Choose any sequence \(u_{n}\in N^{1,p}(X)\) so that \(u_{n}|_{A_{n}}=1\) and \[\lim_{n\to\infty}\|u_{n}\|_{N^{1,p}(X)}^{p}=\lim_{n\to\infty}\operatorname{ Cap}_{p}(A_{n}).\] The functions \(u_{n}\) and their minimal \(p\)-weak upper gradients \(g_{u_{n}}\) are uniformly bounded in \(L^{p}(X)\). Therefore, by Mazur's Lemma, we may choose convex combinations \(\tilde{u}_{n}\) of \(\{u_{k}\}_{k=n}^{\infty}\) and corresponding convex combinations \(\tilde{g}_{n}\) of \(\{g_{k}\}_{k=n}^{\infty}\) so that \(\tilde{u}_{n}\) and \(\tilde{g}_{n}\) are Cauchy in \(L^{p}(X)\) and so that \(\tilde{g}_{n}\) is an upper gradient for \(\tilde{u}_{n}\). Choose a subsequence \((n_{k})_{k=1}^{\infty}\), with \(n_{k}\geq k\), so that (5.2) \[\sum_{k=1}^{\infty}|\tilde{u}_{n_{k+1}}-\tilde{u}_{n_{k}}|+|\tilde{g}_{n_{k+1} }-\tilde{g}_{n_{k}}|\in L^{p}(X).\] Next, define \(\tilde{u}_{l}=\sup_{k\geq l}\tilde{u}_{n_{l}}\) and \(\tilde{g}_{l}=\sup_{k\geq l}\tilde{u}_{n_{l}}\). It follows from (5.2) that \(\tilde{u}_{l},\tilde{g}_{l}\in L^{p}(X)\), and that as \(l\to\infty\) they converge in \(L^{p}(X)\). A fairly direct calculation using the definition (1.1) shows that \(\tilde{g}_{l}\) is a \(p\)-weak upper gradient for \(\tilde{u}_{l}\), and \(\tilde{u}_{l}\in N^{1,p}(X)\). Note that \(\|\tilde{u}_{l}\|_{N^{1,p}(X)}^{p}\leq\|\tilde{u}_{l}\|_{L^{p}(X)}^{p}+\| \tilde{g}_{l}\|_{L^{p}(X)}^{p}\). 
Then, by construction and the \(L^{p}(X)\) convergence, we get \[\lim_{l\to\infty}\|\tilde{u}_{l}\|_{L^{p}(X)}^{p}+\|\tilde{g}_{l}\|_{L^{p}(X)} ^{p}\leq\lim_{n\to\infty}\|u_{n}\|_{N^{1,p}(X)}^{p}=\lim_{n\to\infty} \operatorname{Cap}_{p}(A_{n}).\] Now, \(\tilde{u}_{l}|_{A_{k}}\geq 1\) for every \(k\geq l\). Thus \(\tilde{u}_{l}|_{\bigcup_{k}A_{k}}\geq 1\). In particular, \[\operatorname{Cap}_{p}\left(\bigcup_{k}A_{k}\right)\leq\|\tilde{u}_{l}\|_{L^{p} (X)}^{p}+\|\tilde{g}_{l}\|_{L^{p}(X)}^{p}.\] Sending \(l\to\infty\), gives \[\operatorname{Cap}_{p}\left(\bigcup_{k}A_{k}\right)\leq\lim_{n\to\infty} \operatorname{Cap}_{p}(A_{n}).\] The opposite inequality follows from the increasing property. This completes the proof of continuity from below. 3. **Continuity from above:** Let \((K_{n})_{n\in\mathbb{N}}\) be any decreasing sequence of compact sets and let \(K=\bigcap_{n}K_{n}\). From the capacity being increasing, we get \(\operatorname{Cap}_{p}(K)\leq\lim_{n\to\infty}\operatorname{Cap}_{p}(K_{n})\). We next establish this inequality in the opposite direction. If \(\operatorname{Cap}_{p}(K)=\infty\), then \(\operatorname{Cap}_{p}(K)=\lim_{n\to\infty}\operatorname{Cap}_{p}(K_{n})\). Thus, consider the case of \(\operatorname{Cap}_{p}(K)<\infty\). By Theorem 1.7, for every \(\epsilon>0\), there exists an open set \(O\) with \(K\subset O\) and \(\operatorname{Cap}_{p}(O)\leq\operatorname{Cap}_{p}(K)+\epsilon\). For \(n\) sufficiently large \(K_{n}\subset O\), and thus by the increasing property, we get \[\operatorname{Cap}_{p}(K)\leq\lim_{n\to\infty}\operatorname{Cap}_{p}(K_{n}) \leq\operatorname{Cap}_{p}(K)+\epsilon.\] Since \(\epsilon>0\) is arbitrary, the claim follows. ### Different definitions of capacity agree Proof of Theorem 1.3.: Let \(E,F\neq\emptyset\) be two closed, disjoint non-empty subsets in \(X\) with \(d(E,F)>0\). It is straightforward to show that \[\operatorname{Cap}_{p}(E,F)\leq\operatorname{Cap}_{p}^{c}(E,F)\leq \operatorname{Cap}_{p}^{\operatorname{lip}}(E,F)\leq\operatorname{Cap}_{p}^{( \operatorname{lip},\operatorname{lip})}(E,F).\] Thus, it suffices to prove \(\operatorname{Cap}_{p}^{(\operatorname{lip},\operatorname{lip})}(E,F)\leq \operatorname{Cap}_{p}(E,F)\). If \(\operatorname{Cap}_{p}(E,F)=\infty\), this is obvious. Thus, assume \(\operatorname{Cap}_{p}(E,F)<\infty\). Let \(\epsilon>0\) be arbitrary. We can choose a function \(u\in N^{1,p}(X)\) which is non-negative, with \(u|_{E}=0\) and \(u|_{F}=1\), and with an upper gradient \(g_{\epsilon}\) such that \[\int g_{\epsilon}^{p}d\mu\leq\operatorname{Cap}_{p}(E,F)+\epsilon.\] Let \(\hat{X}\) be the completion of \(X\). Fix \(x_{0}\in X\). Extend \(u\) and \(g_{\epsilon}\) by zero to functions in \(L^{p}(\hat{X})\). Let \(X_{j}=\{x\in X\cap\overline{B(x_{0},j)}:d(x,\hat{X}\setminus X)\geq 2^{-j}\}\). Let \(\psi_{j}\) be the partition of unity constructed in Lemma 4.1. Recall that \(\sum_{n=0}^{\infty}\psi_{n}(x)=1\), each \(\psi_{n}\) is \(L_{n}\)-Lipschitz for some \(L_{n}<\infty\), and that \(\operatorname{supp}(\psi_{n})\subset X_{n+1}\setminus X_{n-1}\) for \(n\geq 1\) and \(\operatorname{supp}(\psi_{0})\subset X_{0}\). Let \(E_{j}=E\cap X_{j},F_{j}=F\cap X_{j}\). Let \(j_{0}\in\mathbb{N}\) be so that \(E_{j},F_{j}\neq\emptyset\) for all \(j\geq j_{0}\). The space \(X_{j}\) is complete and bounded, and so we can apply Theorem 3.3 and Corollary 3.4 applied to the functions \(u|_{X_{j}}\) and \((g_{\epsilon})|_{X_{j}}\). 
We obtain that for every \(j\geq j_{0}\) there exist Lispchitz functions \(u_{j}\in N^{1,p}(X_{j})\) with locally Lipschitz upper gradients \(g_{j}\in L^{p}(X_{j})\), for \(j\in\mathbb{N}\), so that \(u_{j}|_{E_{j}}=1,u_{j}|_{F_{j}}=0\) and \(\|g_{j}-g_{\epsilon}\|_{L^{p}(X_{j})}\leq 2^{-j}\) in \(L^{p}(X_{j})\). Extend \(u_{j}\) and \(g_{j}\) by zero to an \(L^{p}(X)\) function defined on all of \(X\). We have \(0\leq u_{j}\leq 1\). We will briefly consider the space \(L^{2}(X_{k})\) in order to avail ourselves of weak compactness in this space. The sets \(X_{k}\) are bounded and have bounded measure for \(k\in\mathbb{N}\). Thus, \(u_{j}|_{X_{k}}\in L^{2}(X_{k})\), for every \(k\) and every \(j\in\mathbb{N}\), and \(\sup_{j\in\mathbb{N}}\|u_{j}\|_{L^{2}(X_{k})}<\infty\). Weak compactness allows us to take a subsequence converging weakly in \(L^{2}(X_{k})\). Further, by Mazur's Lemma and a diagonal argument, we can take finite convex combinations \(v_{j}\) of \(\{u_{j},u_{j+1},\dots\}\) which converge in \(L^{2}(X_{k})\) for every \(k\in\mathbb{N}\). It is direct to show that \(v_{j}\) converges in \(L^{p}(X_{k})\) for every \(k\in\mathbb{N}\). Consider the corresponding convex combinations \(h_{j}\) of the upper gradients in \(\{g_{j},g_{j+1},\dots\}\). By using the definition of \(g_{j}\), these converge, for every \(k\in\mathbb{N}\), in \(L^{p}(X_{k})\) to \(g_{\epsilon}|_{X_{k}}\), since \(g_{j}|_{X_{k}}\) converge to \(g_{\epsilon}|_{X_{k}}\) as \(j\to\infty\). By construction, \(h_{j}\) is an upper gradient for \(v_{j}\) on \(X_{j}\), \(v_{j}|_{E_{k}}=0,v_{j}|_{F_{k}}=1\) and \(v_{j}\) is Lipschitz on \(X_{k}\) for every \(j\geq k\). Further, each \(h_{j}\) is locally Lipschitz. Choose a subsequence \((n_{k})_{k\in\mathbb{N}}\) so that \(\|v_{n_{k}}-v_{n_{k+1}}\|_{L^{p}(X_{k+2})}\leq\epsilon L_{k}^{-1}2^{-k}\), \(\|h_{n_{k}}-g_{\epsilon}\|_{L^{p}(X_{k+1})}\leq\epsilon 2^{-k}\) and so that \(n_{k}\geq k+1\). Finally, define \[U=\sum_{i=0}^{\infty}v_{n_{i}}\psi_{i}.\] By Property (c) of Lemma 4.1, the sum in \(U\) is locally finite and \(U\) is locally Lipschitz. It is not hard to see that \(U|_{E}=0\) and \(U|_{F}=1\). Let \(G:=\sum_{i=0}^{\infty}h_{n_{i}}\psi_{i}+L_{i}(\psi_{i-1}+\psi_{i}+\psi_{i+1})|v _{n_{i}}-U|\). We have the following estimates, since \(\psi_{i}\) is a partition of unity: \[\|G\|_{L^{p}(X)} \leq\|g_{\epsilon}\|_{L^{p}(X)}+\|G-g_{\epsilon}\|_{L^{p}(X)}\] \[\leq\|g_{\epsilon}\|_{L^{p}(X)}+\sum_{i=0}^{\infty}\|(h_{n_{i}}-g_ {\epsilon})\psi_{i}+L_{i}\mathbbm{1}_{X_{i+2}}|v_{n_{i}}-U|\|_{L^{p}(X)}\] \[\leq\|g_{\epsilon}\|_{L^{p}(X)}+\sum_{i=1}^{\infty}\|(h_{n_{i}}-g_ {\epsilon})\|_{L^{p}(X_{i+1})}+L_{i}\|v_{n_{i}}-U\|_{L^{p}(X_{i+2})}\] \[\leq\operatorname{Cap}_{p}(E,F)^{1/p}+4\epsilon.\] Assume for the moment that \(G\) is a \(p\)-weak upper gradient of \(U\). Then, by Lemma 2.4 for every \(\epsilon>0\), there exists an upper gradient \(g\) of \(U\) with \(\|g\|_{L^{p}(X)}\leq\|G\|_{L^{p}(X)}+\epsilon\leq\operatorname{Cap}_{p}(E,F)^{1/p}+5\epsilon.\) Since \(\epsilon>0\) is arbitrary the claim follows. Thus, we only need to show that \(G\) is a \(p\)-weak upper gradient of \(U\). This is a matter of a final Lemma. **Lemma 5.3**.: _Suppose that \(\psi_{i}\) are \(L_{i}\) Lipschitz functions, so that \(\operatorname{supp}\{\psi_{i}\}\subset X_{i}\), and so that \(\sum_{i=1}^{\infty}\psi_{i}=1\), where the sum is locally finite. 
Then, if \(v_{n_{i}}\in N^{1,p}(X_{i+1})\) are functions with continuous upper gradients \(h_{n_{i}}\in L^{p}(X_{i+1})\), then_ \[G:=\sum_{i=0}^{\infty}h_{n_{i}}\psi_{i}+L_{i}(\psi_{i-1}+\psi_{i}+\psi_{i+1})|v _{n_{i}}-U|\] _is an upper gradient of_ \[U=\sum_{i=0}^{\infty}v_{n_{i}}\psi_{i}.\] Proof.: Extend each \(h_{n_{i}},v_{n_{i}}\) by zero outside of \(X_{i+1}\). This does not alter definitions of the functions \(G\) and \(U\) since \(\operatorname{supp}(\psi_{i})\subset X_{i+1}\). We have that \(h_{n_{i}}\) is an upper gradient for \(v_{n_{i}}\) on \(X_{i+1}\). Note that \(L_{i}(\psi_{i-1}+\psi_{i}+\psi_{i+1})\) is an upper gradient of \(\psi_{i}\), since \(\psi_{i}\) is \(L_{i}\) Lipschitz, \(\operatorname{supp}(\psi_{i})\subset X_{i+1}\setminus X_{i-1}\) and \(1_{X_{i+1}\setminus X_{i}}\leq(\psi_{i-1}+\psi_{i}+\psi_{i+1})\). Thus, by the Leibnitz rule (see the proof of [14, Proposition 6.3.28]) we have that \(h_{n_{i}}\psi_{i}+L_{i}(\psi_{i-1}+\psi_{i}+\psi_{i+1})|v_{n_{i}}-U|\) is an upper gradient for \(v_{n_{i}}\psi_{i}\) on \(X_{i+1}\). Since \(v_{n_{i}}\psi_{i}\) vanishes outside of \(X_{i+1}\), \(h_{n_{i}}\psi_{i}+L_{i}(\psi_{i-1}+\psi_{i}+\psi_{i+1})|v_{n_{i}}-U|\) is an upper gradient on all of \(X\). Summing over \(i\in\mathbb{N}\), we get that \(G\) is a \(p\)-weak upper gradient for \(U\). (It is direct to see that (1.1) is stable under countable sums.)
2305.15874
Normal distribution of bad reduction
We prove normal distribution laws for primes of bad semistable reduction in families of curves. As a consequence, we deduce that when ordered by height, $100\%$ of curves in these families have, in a precise sense, many such primes.
Robert J. Lemke Oliver, Daniel Loughran, Ari Shnidman
2023-05-25T09:10:26Z
http://arxiv.org/abs/2305.15874v1
# Normal distribution of bad reduction ###### Abstract. We prove normal distribution laws for primes of bad semistable reduction in families of curves. As a consequence, we deduce that when ordered by height, \(100\%\) of curves in these families have, in a precise sense, many such primes. 2010 Mathematics Subject Classification: 11G30; 60F05, ###### Contents * 1 Introduction * 2 An Erdos-Kac type result * 3 Application to polynomials * 4 Semistable reduction * 5 Families of hyperelliptic curves * 6 Families of plane curves ## 1. Introduction A famous theorem of Erdos and Kac [6] states that the function \(\omega(n)=\#\{\text{primes }p\colon p\mid n\}\) behaves like a normal distribution with mean and variance \(\log\log n\); more precisely the random variables \[\{n\in\mathbb{N}:n\leqslant B\}\to\mathbb{R},\quad n\mapsto\frac{\omega(n)- \log\log B}{\sqrt{\log\log B}}\] converge in distribution to the standard normal distribution (throughout the paper all finite sets are equipped with the uniform probability measure). In this paper we prove versions of this result for bad reduction types in families of curves. For applications, one often wants to detect finer arithmetic information than just bad reduction, such as whether the reduction is semistable. We show that the primes of bad semistable reduction obey an Erdos-Kac type theorem. **Theorem**.: 1. _Over the set of hyperelliptic curves of fixed genus, the (renormalised) number of primes of bad semistable reduction is normally distributed._ 2. _Over the set of plane curves of fixed degree, the (renormalised) number of primes of bad semistable reduction is normally distributed._ Our methods, which come from the paper [5], are robust enough to allow for more general families of curves under suitable assumptions. See Sections 5 and 6 for precise statements and further details. An immediate application of our results is the following: for any given \(N>0\), one hundred percent of degree \(d\) hyperelliptic (resp. plane) curves \(C\) have at least \(N\) primes \(p\) of bad semistable reduction. The case \(N=1\) for hyperelliptic curves is due to Van ## 1. Introduction Let \(X\) be a quasi-projective variety over \(\mathbb{Q}\). Let \(\mathcal{X}\) be a projective space and \(\mathcal{X}\) be a projective variety over \(\mathbb{Q}\). Let \(\mathcal{X}\) be a projective space and \(\mathcal{X}\) be a projective variety over \(\mathbb{Q}\). Let \(\mathcal{X}\) be a projective variety over \(\mathbb{Q}\). We will prove this by applying the result [5, Thm. 1.9]. This concerns the function \[\omega_{\mathcal{D}_{1}}(x)=\#\{p:x\bmod p\in\mathcal{D}_{1}(\mathbb{F}_{p})\},\] and shows that \[\frac{\omega_{\mathcal{D}_{1}}(x)-c_{D_{1}}\log\log B}{\sqrt{c_{D_{1}}\log\log B }}\] converges in distribution to a standard normal, where \(c_{D_{1}}\) denotes the number of irreducible components of \(D_{1}\). Thus, the new pieces in Theorem 2.1 amount simply to showing that the imposition of the further conditions defining \(\omega_{\mathcal{D}_{1}\setminus\mathcal{D}_{2}}\) and \(\omega^{1}_{\mathcal{D}_{1}\setminus\mathcal{D}_{2}}\) do not affect the limiting distribution compared to the divisor \(D_{1}\). We do so by establishing in the following pair of lemmas that the number of primes at which these definitions possibly differ have bounded moments. Consequently, after accounting for the normalizing factor \(\sqrt{\log\log B}\) that tends to infinity, these primes will have no impact on the limiting distribution. 
**Lemma 2.2**.: _Let \(\mathcal{Z}\subset\mathcal{X}\) be closed of codimension \(2\). Then for each integer \(k\geqslant 1\),_ \[\limsup_{B\to\infty}\frac{1}{\#\{x\in\mathcal{X}(\mathbb{Z}):H(x)\leqslant B \}}\sum_{\begin{subarray}{c}x\in\mathcal{X}(\mathbb{Z})\setminus\mathcal{Z}( \mathbb{Z})\\ H(x)\leqslant B\end{subarray}}\omega_{\mathcal{Z}}(x)^{k}\] _exists._ Proof.: We begin by considering for any \(x\in\mathcal{X}(\mathbb{Z})\setminus\mathcal{Z}(\mathbb{Z})\) and any \(y\geqslant 1\) the moments of the related function \[\omega_{\mathcal{Z},y}(x):=\#\{p\leqslant y:x\bmod p\in\mathcal{Z}(\mathbb{F}_ {p})\}.\] Changing the order of summation, we find \[\sum_{\begin{subarray}{c}x\in\mathcal{X}(\mathbb{Z})\setminus\mathcal{Z}( \mathbb{Z})\\ H(x)\leqslant B\end{subarray}}\omega_{\mathcal{Z},y}(x)^{k}=\sum_{p_{1},\dots, p_{k}\leqslant y}\#\{x\in\mathcal{X}(\mathbb{Z})\setminus\mathcal{Z}(\mathbb{Z}):H(x) \leqslant B,x\bmod p_{i}\in\mathcal{Z}(\mathbb{F}_{p_{i}})\,\forall i\leqslant k\}.\] By the Lang-Weil estimates [8] we have \(\#\mathcal{X}(\mathbb{F}_{p})\sim p^{n}\) and \(\#\mathcal{Z}(\mathbb{F}_{p})\ll p^{n-2}\) where \(n=\dim X\). We apply our equidistribution assumption (2.1) to thus obtain \[\frac{1}{\#\{x\in\mathcal{X}(\mathbb{Z}):H(x)\leqslant B\}}\sum_{ \begin{subarray}{c}x\in\mathcal{X}(\mathbb{Z})\setminus\mathcal{Z}(\mathbb{Z}) \\ H(x)\leqslant B\end{subarray}}\omega_{\mathcal{Z},y}(x)^{k}\ll\sum_{p_{1},\dots, p_{k}\leqslant y}\frac{1}{\operatorname{lcm}(p_{1},\dots,p_{k})^{2}}+O(B^{-\eta}y^{kM}).\] The summation above converges as \(y\to\infty\), and choosing \(y=B^{\eta/kM}\), the error term remains bounded. Thus, \[\limsup_{B\to\infty}\frac{1}{\#\{x\in\mathcal{X}(\mathbb{Z}):H(x)\leqslant B \}}\sum_{\begin{subarray}{c}x\in\mathcal{X}(\mathbb{Z})\setminus\mathcal{Z}( \mathbb{Z})\\ H(x)\leqslant B\end{subarray}}\omega_{\mathcal{Z},y}(x)^{k} \tag{2.4}\] exists. To compare \(\omega_{\mathcal{Z}}\) with \(\omega_{\mathcal{Z},y}\), we note that if \(x\bmod p\in\mathcal{Z}(\mathbb{F}_{p})\) then \(p\ll H(x)^{d}\), where \(d=\deg\mathcal{Z}\). This gives \[\omega_{\mathcal{Z}}(x)-\omega_{\mathcal{Z},y}(x)\leqslant 1+\frac{\log B^{d}}{ \log y}\ll\frac{dkM}{\eta}\] by our choice \(y=B^{\eta/kM}\). This implies that \[\omega_{\mathcal{Z}}(x)^{k}=\omega_{\mathcal{Z},y}(x)^{k}+O\left(\frac{dkM}{ \eta}\omega_{\mathcal{Z},y}(x)^{k-1}\right),\] and so \[\limsup_{B\to\infty}\frac{1}{\#\Omega_{B}}\sum_{\begin{subarray}{c}x\in\mathcal{X}( \mathbb{Z})\setminus\mathcal{Z}(\mathbb{Z})\\ H(x)\leqslant B\end{subarray}}\omega_{\mathcal{Z}}(x)^{k}\] must exist by comparison to the analogous quantity (2.4). **Lemma 2.3**.: _For each integer \(k\geqslant 1\),_ \[\limsup_{B\to\infty}\frac{1}{\#\{x\in\mathcal{X}(\mathbb{Z}):H(x)\leqslant B \}}\sum_{\begin{subarray}{c}x\in\mathcal{X}(\mathbb{Z})\setminus\mathcal{D}_{ 1}(\mathbb{Z})\\ H(x)\leqslant B\end{subarray}}(\omega_{\mathcal{D}_{1}\setminus\mathcal{D}_{2} }(x)-\omega_{\mathcal{D}_{1}\setminus\mathcal{D}_{2}}^{1}(x))^{k}\] _exists._ Proof.: Let \(\mathcal{D}:=\mathcal{D}_{1}\) and let \(\mathcal{Z}\subset\mathcal{X}\) denote the union of the non-smooth locus of \(\mathcal{D}\) and the restriction of the non-smooth locus of \(\mathcal{X}\) to \(\mathcal{D}\). We have \(\mathcal{Z}\neq\mathcal{D}\) as \(X\) is normal and \(D_{1}\) is reduced. 
Then \[\omega_{\mathcal{D}_{1}\setminus\mathcal{D}_{2}}(x)-\omega_{\mathcal{D}_{1} \setminus\mathcal{D}_{2}}^{1}(x)\ll|\omega_{\mathcal{D}}(x)-\omega_{\mathcal{ D}}^{1}(x)|\ll|\omega_{\mathcal{D}\setminus\mathcal{Z}}(x)-\omega_{\mathcal{D} \setminus\mathcal{Z}}^{1}(x)|+\omega_{\mathcal{Z}}(x).\] By Lemma 2.2 the moments of \(\omega_{\mathcal{Z}}\) exist, so it suffices to consider \(\omega_{\mathcal{D}\setminus\mathcal{Z}}(x)-\omega_{\mathcal{D}\setminus \mathcal{Z}}^{1}(x)\). On the one hand, by [1, Cor. 2.4] we have \[\#\{x\in\mathcal{X}(\mathbb{Z}/p^{2}\mathbb{Z}):x\text{ meets }\mathcal{D} \bmod p\text{ non-transversely in a smooth point}\}\ll p^{2n-2},\] where \(n=\dim X\). On the other hand, by the Lang-Weil estimates [8] and Hensel's lemma [1, Lem. 2.1] applied to the smooth locus of \(\mathcal{X}\), we have \(\#\mathcal{X}(\mathbb{Z}/p^{2}\mathbb{Z})\gg p^{2n}\). It follows that the proportion of \(x\in\mathcal{X}(\mathbb{Z}/p^{2}\mathbb{Z})\) which meet \(\mathcal{D}\) non-transversely in a smooth point is \(O(1/p^{2})\). We now proceed exactly as in the proof of Lemma 2.2. We now complete the proof of Theorem 2.1. Proof of Theorem 2.1.: Applying [5, Thm. 1.9] to the function \[\omega_{\mathcal{D}_{1}}(x)=\#\{p:x\bmod p\in\mathcal{D}_{1}(\mathbb{F}_{p})\}\] shows that \[\frac{\omega_{\mathcal{D}_{1}}(x)-c_{D_{1}}\log\log B}{\sqrt{c_{D_{1}}\log \log B}}\] converges in distribution to a standard normal, where \(c_{D_{1}}\) denotes the number of irreducible components of \(D_{1}\). Write \(D_{1}\cap D_{2}=E\sqcup Z\) where \(E\) is a divisor and \(Z\) has codimension \(2\) in \(X\). Let \(\mathcal{E}\) and \(\mathcal{Z}\) be their respective closures in \(\mathcal{X}\). As \(E\) and \(Z\) are disjoint, we have \[\omega_{\mathcal{D}_{1}\setminus\mathcal{D}_{2}}(x)=\omega_{\mathcal{D}_{1}}(x )-\omega_{\mathcal{E}}(x)-\omega_{\mathcal{Z}}(x)+O(1).\] Using \(c_{D_{1}}=c_{D_{1}\setminus D_{2}}+c_{E}\), to prove the first part, it thus suffices to show that \[\frac{\omega_{\mathcal{Z}}(x)}{\sqrt{\log\log B}}\] converges in distribution to \(0\). However by Lemma 2.2 we have \[\lim_{B\to\infty}\frac{\omega_{\mathcal{Z}}(x)^{k}}{(\log\log B)^{k/2}}=0\] for every integer \(k\geqslant 1\), which shows the desired claim. For the second part, it suffices to show that \[\frac{\omega_{\mathcal{D}_{1}\setminus\mathcal{D}_{2}}(x)-\omega_{\mathcal{D}_{1 }\setminus\mathcal{D}_{2}}^{1}(x)}{\sqrt{\log\log B}}\] converges in distribution to \(0\). This similarly follows from Lemma 2.3. **Remark 2.4**.: A version of Theorem 2.1 will hold for general variants \(\omega_{\mathcal{D}_{1}}^{*}(x)\) of \(\omega_{\mathcal{D}_{1}}(x)\), like \(\omega_{\mathcal{D}_{1}\setminus\mathcal{D}_{2}}\) and \(\omega_{\mathcal{D}_{1}\setminus\mathcal{D}_{2}}^{1}\), provided the following holds: whether a prime \(p\) counted by \(\omega_{\mathcal{D}_{1}}(x)\) is not counted by \(\omega_{\mathcal{D}_{1}}^{*}(x)\) is determined by congruence conditions \(A\subset\mathcal{X}(\mathbb{Z}/p^{k}\mathbb{Z})\) such that \((\#\mathcal{D}(\mathbb{Z}/p^{k}\mathbb{Z})-\#A)/\#\mathcal{X}(\mathbb{Z}/p^{k }\mathbb{Z})=O(p^{-1-\delta})\) for some divisor \(\mathcal{D}\subset\mathcal{X}\) and some \(\delta>0\). ## 3. Application to polynomials The rest of our results are based on the following simple application of Theorem 2.1. **Theorem 3.1**.: _Let \(h_{1},h_{2}\in\mathbb{Z}[x_{1},\ldots,x_{n}]\) be non-zero polynomials with \(h_{1}\) non-constant and squarefree in \(\mathbb{Q}[x_{1},\ldots,x_{n}]\). 
Let \(c\) be the number of non-associated irreducible factors of \(h_{1}\) not dividing \(h_{2}\), and suppose that \(c>0\). Then the random variables_ \[\{\mathbf{x}\in\mathbb{Z}^{n}:h_{1}(\mathbf{x})\neq 0,\|\mathbf{x}\| \leqslant B\}\to\mathbb{R}, \mathbf{x}\mapsto\frac{\#\{p\mid h_{1}(\mathbf{x}):p\nmid h_{2}( \mathbf{x})\}-c\log\log B}{\sqrt{c\log\log B}}\] \[\{\mathbf{x}\in\mathbb{Z}^{n}:h_{1}(\mathbf{x})\neq 0,\|\mathbf{x}\| \leqslant B\}\to\mathbb{R}, \mathbf{x}\mapsto\frac{\#\{p:v_{p}(h_{1}(\mathbf{x}))=1,p\nmid h _{2}(\mathbf{x})\}-c\log\log B}{\sqrt{c\log\log B}}\] _converge in distribution to a standard normal._ Proof.: Apply Theorem 2.1 with \(\mathcal{X}=\mathbb{A}_{\mathbb{Z}}^{n}\) and \(D_{1}:h_{1}(\mathbf{x})=0\) and \(D_{2}:h_{2}(\mathbf{x})=0\). **Remark 3.2**.: Theorem 2.1 also gives versions of Theorem 3.1 for projective space instead of affine space. The corresponding effective equidistribution property is proven in [11, Prop. 2.1]. **Corollary 3.3**.: _Let \(h_{1}\) and \(h_{2}\) be as in Theorem 3.1. For \(\mathbf{t}\in\mathbb{Z}^{n}\), let \(\omega_{h_{1},h_{2}}(\mathbf{t})\) be the number of primes \(p\) dividing \(h_{1}(\mathbf{t})\) but not \(h_{2}(\mathbf{t})\). Then_ \[\lim_{B\to\infty}\frac{\#\left\{\mathbf{t}\in\mathbb{Z}^{n}:\ h_{1}(\mathbf{t })\neq 0,\|\mathbf{t}\|\leqslant B,\omega_{h_{1},h_{2}}(\mathbf{t})\geqslant( \log\log B)/(\log\log\log B)\ \right\}}{\#\{\mathbf{t}\in\mathbb{Z}^{n}:\|\mathbf{t}\| \leqslant B\}}=1.\] Corollary 3.3 was used in [2] to show that \(100\%\) of specializations in a certain family of genus two Jacobians have at least \(N\) primes of semistable bad reduction (for any fixed \(N\)). In the rest of this paper, we show how to deduce similar results about rather general families of curves and abelian varieties. ## 4. Semistable reduction We recall some basic properties of semistable curves and Jacobians [13, Tag 0E6X]. Another good reference is [10, SS8-10], but the definitions there are slightly different. Let \(C\) be a geometrically connected projective curve over a field \(F\), and assume the genus \(g=\dim_{F}H^{1}(C,\mathcal{O}_{C})\) is at least \(1\). Let \(\overline{F}\) be an algebraic closure of \(F\) and \(C_{\overline{F}}\) the base change of \(C\) to \(\overline{F}\). Then \(C\) is _smooth_ if \(C_{\overline{F}}\) is smooth over \(\overline{F}\) (and in particular, irreducible). The curve \(C\) is _semistable_ if \(C_{\overline{F}}\) is smooth over \(\overline{F}\) apart from finitely many nodes, and has no irreducible components isomorphic to \(\mathbb{P}^{1}_{\overline{F}}\) that meet the rest of \(C_{\overline{F}}\) in only one point. This last condition excludes curves like \(\mathbb{P}^{1}\) or \(\{xy=0\}\subset\mathbb{P}^{2}\), consistent with the condition \(g\geqslant 1\) **Definition 4.1**.: A smooth proper geometrically integral curve \(C\) of genus \(g\geqslant 1\) over \(\mathbb{Q}\) has _good_ (_resp. semistable_) _reduction_ at \(p\) if there exists a proper model \(\mathcal{C}\) of \(C_{\mathbb{Q}_{p}}\) over \(\operatorname{Spec}\mathbb{Z}_{p}\) such that the special fibre \(\mathcal{C}_{\mathbb{F}_{p}}\) is a smooth (resp. semistable) curve over \(\mathbb{F}_{p}\). We say \(C\) has _bad reduction_ at \(p\) if it does not have good reduction at \(p\). **Remark 4.2**.: \(C/\mathbb{Q}_{p}\) is semistable if and only if its minimal proper regular model \(\mathcal{C}/\operatorname{Spec}\mathbb{Z}_{p}\) has semistable special fiber [10, 10.3.34]. 
Moreover, if \(F/\mathbb{Q}_{p}\) is a finite extension with ring of integers \(R\subset F\), then the base change \(\mathcal{C}_{\operatorname{Spec}R}\) is the minimal proper regular model for \(C_{F}\) [10, 10.3.36]. Thus if \(C/\mathbb{Q}_{p}\) admits at least one bad but semistable model, then all other models are bad as well, and \(C\) has bad reduction over every finite extension of \(\mathbb{Q}_{p}\). In other words, the good/bad reduction type of a semistable curve is 'stable'.

Now let \(A/\mathbb{Q}_{p}\) be an abelian variety, and let \(\mathcal{A}\) be its Néron model over \(\operatorname{Spec}\mathbb{Z}_{p}\) with special fiber \(\mathcal{A}_{\mathbb{F}_{p}}\). The connected component of the identity \(\mathcal{A}_{\mathbb{F}_{p}}^{0}\) is a geometrically connected commutative algebraic group over \(\mathbb{F}_{p}\), hence sits in a short exact sequence \[0\to U\times T\to\mathcal{A}_{\mathbb{F}_{p}}^{0}\to B\to 0,\] where \(B\) is an abelian variety, \(T\) is a torus, and \(U\) is a unipotent group. The numbers \(a=\dim B\), \(t=\dim T\), and \(u=\dim U\) are the _abelian_, _toric_, and _unipotent ranks_ of \(A\), respectively. We have \(u+t+a=\dim A\).

**Definition 4.3**.: An abelian variety \(A\) over \(\mathbb{Q}\) has _good_ (resp. _semistable_) _reduction at_ a prime \(p\) if the connected component of the identity of the special fibre \(\mathcal{A}_{\mathbb{F}_{p}}\) of its Néron model \(\mathcal{A}\) over \(\mathbb{Z}_{p}\) is an abelian variety (resp. semi-abelian variety).

Thus \(A/\mathbb{Q}_{p}\) has good (resp. semistable) reduction at \(p\) if and only if \(u+t=0\) (resp. \(u=0\)). In the definition above, we can equivalently ask for the existence of _some_ proper model of \(A_{\mathbb{Q}_{p}}\) over \(\mathbb{Z}_{p}\) with the corresponding property in the special fibre. If \(C/F\) is a smooth curve, we write \(\operatorname{Jac}(C)=\operatorname{Pic}^{0}(C)\) for its Jacobian, the abelian variety over \(F\) of dimension \(g\) parameterizing degree zero line bundles on \(C\).

**Lemma 4.4**.: _Let \(C/\mathbb{Q}\) be a smooth proper geometrically integral curve of genus \(g\geqslant 1\)._

1. \(C\) _has semistable reduction at_ \(p\) _if and only if_ \(\operatorname{Jac}(C)\) _has semistable reduction at_ \(p\)_._
2. _If_ \(\mathcal{C}\) _is a semistable model for_ \(C\) _over_ \(\mathbb{Z}_{p}\)_, then the toric rank of_ \(\operatorname{Jac}(C)\) _is equal to_ \(m-c+1\)_, where_ \(m\) _is the number of nodes in_ \(\mathcal{C}_{\mathbb{F}_{p}}\) _and_ \(c\) _is the number of irreducible components._

Proof.: (1) is a special case of [4, Thm. 2.4] and (2) is [10, 7.5.18].

If a curve or abelian variety over \(\mathbb{Q}\) has semistable but not good reduction at \(p\), then we say it has _bad semistable reduction at \(p\)_.

**Remark 4.5**.: If \(C\) has good reduction, then so does its Jacobian \(J=\operatorname{Jac}(C)\), by Lemma 4.4(2). The converse may fail, however. For example, say \(g=2\) and \(C\) reduces to a union \(E\cup E^{\prime}\) of two elliptic curves over \(\mathbb{F}_{p}\) intersecting at a node. Then \(C\) has bad semistable reduction, but \(J\) has toric rank \(0\) by Lemma 4.4, and hence good reduction. In fact, \(J\) reduces to \(E\times E^{\prime}\) in this case.

If \(A/\mathbb{Q}_{p}\) is an abelian variety with Néron model \(\mathcal{A}\), the Tamagawa number \(c_{p}(A)\) is by definition the number of \(\mathbb{F}_{p}\)-rational components of the group \(\mathcal{A}_{\mathbb{F}_{p}}\). This is a crude measure of _how bad_ the reduction of \(A\) is at \(p\).
For instance, if \(A\) has good reduction then \(c_{p}(A)=1\). The converse, however, is not true, as the following example shows.

**Example 4.6**.: If \(E/\mathbb{Q}\) is an elliptic curve with squarefree discriminant \(\Delta\), then by Tate's algorithm [14], we have \(c_{p}(E)=1\) for all primes \(p\), including those dividing \(\Delta\).

More generally, the Tamagawa number \(c_{p}(J)\) of a semistable Jacobian \(J=\operatorname{Jac}(C)\) can be computed from the intersection matrix of the irreducible components of the special fiber of a minimal proper regular model \(\mathcal{C}\) over \(\mathbb{Z}_{p}\). This uses Raynaud's theorem, that the Néron model of \(J\) is represented by \(\operatorname{Pic}^{0}_{\mathcal{C}/\mathbb{Z}_{p}}\). See [3, §9.6] for more details.

In the next two sections, we consider families of curves \(\{C_{\mathbf{t}}\}_{\mathbf{t}\in\mathbb{Z}^{n}}\) and prove Erdős-Kac type laws for the number of primes of bad semistable reduction for specializations \(\mathbf{t}\) of bounded height. For the sake of applications, we will prove a more precise result, which shows that it is primes of _minimally_ bad reduction that play the role of prime numbers under this analogy. The notion of 'minimally bad reduction' will generalize Example 4.6: a certain discriminant polynomial will have \(p\)-adic valuation \(1\), which will imply that \(C_{\mathbf{t}}\) and \(\operatorname{Jac}(C_{\mathbf{t}})\) have bad semistable reduction and moreover \(c_{p}(\operatorname{Jac}(C_{\mathbf{t}}))=1\).

## 5. Families of hyperelliptic curves

Let \(g\geqslant 1\), \(n\geqslant 1\) and let \(a_{0},\ldots,a_{2g+2}\in\mathbb{Q}[t_{1},\ldots,t_{n}]\) be polynomials with integer coefficients. We consider the corresponding family \[y^{2}=f_{\mathbf{t}}(x):=a_{2g+2}(\mathbf{t})x^{2g+2}+\cdots+a_{1}(\mathbf{t})x+a_{0}(\mathbf{t})\] of hyperelliptic curves over \(\mathbb{A}^{n}\). Denote by \(\Delta(\mathbf{t})\in\mathbb{Q}[t_{1},\ldots,t_{n}]\) the discriminant of \(f_{\mathbf{t}}(x)\). We say \(C_{\mathbf{t}}\) has _minimally bad reduction_ at \(p\) if \(v_{p}(\Delta(\mathbf{t}))=1\).

**Lemma 5.1**.: _If \(v_{p}(\Delta(\mathbf{t}))=1\), then_

1. _both_ \(C_{\mathbf{t}}\) _and_ \(\operatorname{Jac}(C_{\mathbf{t}})\) _have bad semistable reduction;_
2. _the curve_ \(y^{2}=f_{\mathbf{t}}(x,z)\) _over_ \(\mathbb{Z}_{p}\) _is a minimal proper regular model for_ \(C_{\mathbf{t}}\)_;_
3. \(c_{p}(\operatorname{Jac}(C_{\mathbf{t}}))=1\)_._

Proof.: If \(f_{\mathbf{t}}\) has a root of multiplicity three or higher over \(\mathbb{F}_{p}\), then by Dedekind's theorem, we have \(v_{p}(\Delta(\mathbf{t}))\geqslant 2\). It follows that \(f_{\mathbf{t}}\) has at most double roots over \(\mathbb{F}_{p}\), and since the valuation of \(\Delta(\mathbf{t})\) is \(1\), it must have exactly one double root. For such hyperelliptic curves, the homogenization \(\mathcal{C}_{\mathbf{t}}\colon y^{2}=f_{\mathbf{t}}(x,z)\) is a minimal regular model for \(C_{\mathbf{t}}\) over \(\mathbb{Z}_{p}\), and the special fiber is an irreducible genus \(g-1\) curve with a simple node [10, 8.3.53]. Hence \(C_{\mathbf{t}}\) has bad semistable reduction, and moreover the component group of \(\operatorname{Jac}(C_{\mathbf{t}})\) is trivial ([3, §9.6]). By Lemma 4.4, the toric rank of \(\operatorname{Jac}(C_{\mathbf{t}})\) is \(1\), hence \(\operatorname{Jac}(C_{\mathbf{t}})\) has bad reduction as well.

We say \(h\in\mathbb{Q}[x_{1},\ldots,x_{n}]\) is _squarefull_ if each irreducible factor \(g\mid h\) satisfies \(g^{2}\mid h\).
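The condition \(v_{p}(\Delta(\mathbf{t}))=1\) defining minimally bad reduction is easy to probe numerically. The following Python/SymPy sketch is our own illustration only; the one-parameter genus one family \(y^{2}=x^{4}+tx+1\) and the bound \(B\) are ad hoc choices, not taken from the text. It factors \(\Delta(t)\) for \(t\leqslant B\) and counts the primes appearing with exponent exactly one; the theorem below predicts that these counts are asymptotically normal with mean \(c\log\log B\), where \(c\) is the number of irreducible factors of \(\Delta(t)\) (which the `factor_list` call prints, so \(c\) can be read off).

```python
import math
from sympy import Symbol, discriminant, factor_list, factorint

x, t = Symbol("x"), Symbol("t")

# Discriminant of f_t(x) = x^4 + t*x + 1 for the toy family y^2 = f_t(x).
Delta = discriminant(x**4 + t*x + 1, x)
print(Delta, factor_list(Delta))   # 256 - 27*t**4 and its factorization over Q

B = 1000
counts = []
for t0 in range(1, B + 1):
    d = int(Delta.subs(t, t0))
    if d == 0:
        continue
    # Primes with v_p(Delta(t0)) = 1, i.e. minimally bad reduction (Lemma 5.1).
    counts.append(sum(1 for e in factorint(abs(d)).values() if e == 1))

print(sum(counts) / len(counts), "vs log log B =", math.log(math.log(B)))
# At such small B the O(1) terms still dominate, so the two printed values
# differ noticeably; the agreement with c*loglog(B) is only asymptotic.
```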
**Theorem 5.2**.: _Assume \(\Delta(\mathbf{t})\) is non-constant and not squarefull. Let \(c\) be the number of non-associated irreducible polynomials \(h(\mathbf{t})\) exactly dividing \(\Delta(\mathbf{t})\). Then the random variables_ \[\{\mathbf{t}\in\mathbb{Z}^{n}:\Delta(\mathbf{t})\neq 0,\|\mathbf{t}\|\leqslant B\}\to\mathbb{R},\qquad\mathbf{t}\mapsto\frac{\#\{p:C_{\mathbf{t}}\text{ has minimally bad reduction at }p\}-c\log\log B}{\sqrt{c\log\log B}}\] _converge in distribution to a standard normal as \(B\to\infty\)._

Proof.: Write \(\Delta(\mathbf{t})=\prod_{i=1}^{c}h_{i}(\mathbf{t})\prod_{i=c+1}^{k}h_{i}(\mathbf{t})^{a_{i}}\), where the \(h_{i}\) are irreducible and pairwise coprime, and \(a_{i}\geqslant 2\) for \(i>c\). We apply Theorem 3.1 with \(f_{1}=\prod_{i=1}^{c}h_{i}(\mathbf{t})\) and \(f_{2}=\Delta(\mathbf{t})/f_{1}\) in the roles of \(h_{1}\) and \(h_{2}\). By construction, the second counting function in Theorem 3.1 exactly counts the number of primes \(p\) of minimally bad (and by Lemma 5.1, semistable) reduction for \(C_{\mathbf{t}}\).

As an example, we apply this to the family of all hyperelliptic curves.

**Corollary 5.3**.: _Consider the family_ \[y^{2}=a_{2g+2}x^{2g+2}+\cdots+a_{1}x+a_{0}\] _of all hyperelliptic curves over \(\mathbb{A}^{2g+3}\). As \(B\to\infty\), the random variables_ \[\{\mathbf{a}\in\mathbb{Z}^{2g+3}:\Delta(\mathbf{a})\neq 0,\|\mathbf{a}\|\leqslant B\}\to\mathbb{R},\qquad\mathbf{a}\mapsto\frac{\#\{p:C_{\mathbf{a}}\text{ has minimally bad reduction at }p\}-\log\log B}{\sqrt{\log\log B}}\] \[\{\mathbf{a}\in\mathbb{Z}^{2g+3}:\Delta(\mathbf{a})\neq 0,\|\mathbf{a}\|\leqslant B\}\to\mathbb{R},\qquad\mathbf{a}\mapsto\frac{\#\{p:C_{\mathbf{a}}\text{ has bad reduction at }p\}-\log\log B}{\sqrt{\log\log B}}\] _converge in distribution to a standard normal._

Proof.: The polynomial \(\Delta\) is irreducible as an element of \(\mathbb{C}[a_{0},\ldots,a_{2g+2}]\) (see [7, Ex. 1.4]). Applying Theorem 3.1 to \(\Delta\), we see that only primes of minimally bad reduction contribute to the distributions.

Corollary 5.3 implies, as in Corollary 3.3, that for \(100\%\) of hyperelliptic curves, both \(C_{\mathbf{a}}\) and \(\operatorname{Jac}(C_{\mathbf{a}})\) have bad semistable reduction for at least \(N\) primes, for any \(N>0\). The case \(N=1\) of this result for a related family of hyperelliptic curves is due to Van Bommel [15].

For families which may not satisfy the hypotheses of Theorem 5.2, we prove the following variant whose conclusion is a bit weaker. Denote by \(\Delta^{\prime}(\mathbf{t})\) the discriminant of \(f^{\prime}_{\mathbf{t}}(x):=\frac{\mathrm{d}f_{\mathbf{t}}(x)}{\mathrm{d}x}\).

**Theorem 5.4**.: _Assume that \(\Delta(\mathbf{t})\) and \(\Delta^{\prime}(\mathbf{t})\) are non-constant and let \(c\) be the number of non-associated irreducible factors of \(\Delta(\mathbf{t})\) not dividing \(\Delta^{\prime}(\mathbf{t})\)._

1. _If_ \((\Delta^{\prime}(\mathbf{t}))\not\subseteq\operatorname{rad}(\Delta(\mathbf{t}))\)_, then the random variables_ \[\{\mathbf{t}\in\mathbb{Z}^{n}:\Delta(\mathbf{t})\neq 0,\|\mathbf{t}\|\leqslant B\}\to\mathbb{R},\qquad\mathbf{t}\mapsto\frac{\#\{p:\operatorname{Jac}(C_{\mathbf{t}})\text{ has bad semistable reduction at }p,\ p\nmid\Delta^{\prime}(\mathbf{t})\}-c\log\log B}{\sqrt{c\log\log B}}\] _converge in distribution to a standard normal as_ \(B\to\infty\)_._
2.
_If_ \(\Delta(\mathbf{t})\) _and_ \(\Delta^{\prime}(\mathbf{t})\) _are coprime, then the random variables_ \[\{\mathbf{t}\in\mathbb{Z}^{n}:\Delta(\mathbf{t})\neq 0,\|\mathbf{t}\|\leqslant B\}\to\mathbb{R},\qquad\mathbf{t}\mapsto\frac{\#\{p:\operatorname{Jac}(C_{\mathbf{t}})\text{ has bad semistable reduction at }p\}-c\log\log B}{\sqrt{c\log\log B}}\] _converge in distribution to a standard normal as_ \(B\to\infty\)_._

Proof.: By Lemma 5.5, this follows from Theorem 3.1 with \(h_{2}=\Delta^{\prime}\) and \(h_{1}\) a generator of the ideal \(\operatorname{rad}(\Delta)\).

**Lemma 5.5**.: _Let \(\mathbf{t}\in\mathbb{Z}^{n}\) and suppose \(p>2g+2\). If \(p\mid\Delta(\mathbf{t})\) but \(p\nmid\Delta^{\prime}(\mathbf{t})\), then both \(C_{\mathbf{t}}\) and \(\operatorname{Jac}(C_{\mathbf{t}})\) have bad semistable reduction._

Proof.: This is well-known, but we sketch a proof for completeness. We have \(p\mid\Delta(\mathbf{t})\) if and only if \(f_{\mathbf{t}}(x)\) has a root of multiplicity at least two over \(\mathbb{F}_{p}\). Since \(p\nmid\Delta^{\prime}(\mathbf{t})=\operatorname{disc}(f_{\mathbf{t}}^{\prime})\), all such roots must have multiplicity equal to two. Since \(\Delta^{\prime}(\mathbf{t})=\operatorname{Res}_{x}(f_{\mathbf{t}}^{\prime},f_{\mathbf{t}}^{\prime\prime})\), the condition \(p\nmid\Delta^{\prime}(\mathbf{t})\) also implies that \(p\) does not divide all the coefficients of \(f_{\mathbf{t}}\). We may assume the curve \(C_{\mathbf{t}}\colon y^{2}=f_{\mathbf{t}}(x)\) over \(\mathbb{F}_{p}\) has no singular points at infinity. (If \(p\) happens to divide the leading coefficient, we change coordinates so that \(\infty\in\mathbb{P}^{1}(\mathbb{F}_{p})\) is not a root of the homogenization of \(f_{\mathbf{t}}\) over \(\mathbb{F}_{p}\); the condition \(p>2g+2\) guarantees that this is always possible.) This curve is then smooth aside from singularities étale locally of the form \(y^{2}=x^{2}g(x)\), where \(g\) is non-vanishing at \(x=0\). The singularities are therefore nodes, say \(m>0\) of them. If \(m<g+1=\frac{1}{2}\deg(f_{\mathbf{t}})\), then there is one (singular) irreducible component of genus \(g-m\), so \(C_{\mathbf{t}}\) has bad semistable reduction. By Lemma 4.4, the toric rank is \(m>0\), so \(J_{\mathbf{t}}=\operatorname{Jac}(C_{\mathbf{t}})\) also has bad semistable reduction. If \(m=g+1\), then there are two (non-singular) irreducible components crossing at \(m\geqslant 2\) points, so again \(C_{\mathbf{t}}\) is semistable, and the toric rank is \(g>0\), so \(J_{\mathbf{t}}\) has bad semistable reduction as well.

**Remark 5.6**.: See [2, eq. (10)] for examples where Theorem 5.4 applies but Theorem 5.2 does not.

For families of Jacobians with everywhere potentially good reduction, the hypotheses of Theorems 5.2 and 5.4 are evidently not satisfied: if \(J\) has potentially good reduction at \(p\), then it cannot have bad semistable reduction at \(p\). A simple example is the family \(y^{2}=x^{\ell}+t\). For every \(t\), the Jacobian has potentially good reduction since it has complex multiplication over \(\mathbb{Q}(\zeta_{\ell})\), but \(\Delta(t)=-\ell^{\ell}t^{\ell-1}\) and \(\Delta^{\prime}(t)=0\), so neither theorem applies. Similarly, Theorems 5.2 and 5.4 do not apply in twist families such as \(C_{t}\colon ty^{2}=f(x)\), which necessarily have only finitely many primes of bad semistable reduction in the entire family.
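The vanishing of \(\Delta^{\prime}\) in the isotrivial family \(y^{2}=x^{\ell}+t\) can be confirmed mechanically; the following two-line SymPy check is our own illustration, with \(\ell=7\) chosen for concreteness (any odd prime behaves the same way):

```python
from sympy import Symbol, discriminant, diff, factor

x, t = Symbol("x"), Symbol("t")
f = x**7 + t   # the family y^2 = x^l + t with l = 7

print(factor(discriminant(f, x)))    # -> -823543*t**6, i.e. -l**l * t**(l-1)
print(discriminant(diff(f, x), x))   # disc of 7*x**6 -> 0, so Delta'(t) = 0
```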
**Example 5.7**.: For a non-isotrivial example, consider the curves \(C_{t}\colon y^{2}=f_{t}(x)\), where \[f_{t}=(x^{2}+2x-2)(x^{4}+4x^{3}+(2t^{2}-8)x-t^{2}+4).\] We have \(\Delta(t)=-2^{6}3^{6}(t^{2}-4)^{2}t^{12}\), whereas \(\Delta^{\prime}(t)=-2^{8}3^{8}(t^{2}-4)t^{4}g(t)\) for some irreducible sextic polynomial \(g(t)\), so Theorem 5.4 does not apply. The Jacobian of any curve in this family, which is taken from [9], has quaternionic multiplication by the quaternion algebra of discriminant \(6\), and hence has no primes of bad semistable reduction.

Is there a geometric characterization of the families of hyperelliptic curves not satisfying the condition \((\Delta^{\prime}(\mathbf{t}))\not\subseteq\operatorname{rad}(\Delta(\mathbf{t}))\) of Theorem 5.4(1)? All examples that we have encountered so far are either isotrivial or have Jacobians with large endomorphism algebras. In particular, they have everywhere potentially good reduction aside from finitely many primes which depend only on the family.

## 6. Families of plane curves

Let \(V_{d}\) be the space of homogeneous polynomials \(f(x,y,z)\) of degree \(d\). There is a polynomial \(\Delta=\Delta_{d}\) on \(V_{d}\), called the _discriminant_, with the property that \(\Delta(f)=0\) if and only if the curve \(C_{f}\colon f(x,y,z)=0\) is singular [7, §13.1.D]. We say \(C_{f}\) has _minimally bad reduction_ at a prime \(p\) if \(v_{p}(\Delta(f))=1\). The equation \(f(x,y,z)=0\) then gives a minimal regular model for \(C_{f}\) over \(\mathbb{Z}_{p}\), and the singular locus in the special fiber is a single node [12, Thm. 1.1]. By Bézout's theorem, the special fiber is irreducible, and it follows that \(C_{f}\) has bad semistable reduction at \(p\). The toric rank of \(\operatorname{Jac}(C_{f})\) is \(1\) by Lemma 4.4, so \(J=\operatorname{Jac}(C_{f})\) also has bad semistable reduction at \(p\). By [3, §9.6] and the irreducibility of the special fiber, we have \(c_{p}(J)=1\) as well.

**Theorem 6.1**.: _Let \(d\geqslant 3\). Consider the family_ \[\sum a_{ijk}x^{i}y^{j}z^{k}=0\] _of all degree \(d\) plane curves over affine \(\binom{2+d}{d}\)-space. Then as \(B\to\infty\), the random variables_ \[\{\mathbf{a}\in\mathbb{Z}^{\binom{2+d}{d}}:\Delta(\mathbf{a})\neq 0,\|\mathbf{a}\|\leqslant B\}\to\mathbb{R},\qquad\mathbf{a}\mapsto\frac{\#\{p:C_{\mathbf{a}}\text{ has minimally bad reduction at }p\}-\log\log B}{\sqrt{\log\log B}}\] _converge in distribution to a standard normal._

Proof.: This follows from Theorem 3.1 and [12, Thm. 1.1], and the fact that \(\Delta\) is an irreducible polynomial in the \(a_{ijk}\) [7, §13.1.D].

Just as in Theorem 5.2, Theorem 6.1 generalizes immediately to parameterized families of plane curves \(C_{\mathbf{t}}\) over \(\mathbb{A}^{n}\) such that \(\Delta(\mathbf{t})\) is not squarefull. For more general families, we prove an analogue of Theorem 5.4. For \(f=\sum a_{ijk}x^{i}y^{j}z^{k}\), let \(H_{xy}=f_{xx}f_{yy}-f_{xy}^{2}\) be the upper left \(2\)-by-\(2\) minor of its Hessian matrix. The resultant \(R(f)=\operatorname{Res}(H_{xy},f_{x},f_{y})\) is a polynomial in the \(a_{ijk}\) which vanishes precisely when \(H_{xy}\), \(f_{x}\), and \(f_{y}\) have a common zero [7, §13].

**Lemma 6.2**.: \(H_{xy}\) _vanishes whenever \(C_{f}\) has a non-nodal singularity. Hence so does \(R(f)\)._

Proof.: Let \(H=H(f)\) be the \(3\)-by-\(3\) Hessian matrix of second partial derivatives. Suppose \(C_{f}\) is singular at a point \(P\). Then \(P\) is a triple point (or worse) if and only if the matrix \(H(P)\) vanishes identically, i.e. has rank \(0\).
If \(P\) is a double point, then the rank of \(H(P)\) is either one or two, and in the latter case \(P\) is an ordinary double point (i.e. a node), since the two tangent lines are distinct. Thus \(H_{xy}\) vanishes at all singular points which are not nodes.

**Proposition 6.3**.: _Let \(f\in V_{d}(\mathbb{Z}_{p})\), \(\Delta(f)\neq 0\). Suppose \(p\mid\Delta(f)\) but \(p\nmid R(f)\). Then both \(C_{f}\) and \(\operatorname{Jac}(C_{f})\) have bad semistable reduction at \(p\)._

Proof.: By assumption \(C_{f}\) is proper over \(\mathbb{Z}_{p}\) with only nodal singularities in the special fibre. It therefore has a semistable model over \(\mathbb{Z}_{p}\) [13, Lemma 0CDG]. In fact, we claim that \(C_{f}\) is itself semistable over \(\mathbb{Z}_{p}\). Assume otherwise. Write \(\overline{f}=\prod f_{i}\), with \(f_{i}\in\mathbb{F}_{p}[x,y,z]\) irreducible. The condition \(p\nmid R(f)\) implies that \(f_{i}\neq f_{j}\) for \(i\neq j\); in other words, the reduction \(C_{f,p}/\mathbb{F}_{p}\) is reduced, with irreducible components \(C_{i}=\{f_{i}=0\}\), for \(i=1,\dots,r\). We may assume \(r>1\). (If \(r=1\), then \(C_{f,p}=C_{1}\) has only nodal singularities and hence is semistable.) As \(C_{f}\) is not semistable, at least one of the \(C_{i}\) has genus \(0\) and intersects the rest of the special fibre in a single reduced point. Since \(d\geqslant 3\), this cannot happen by Bézout's theorem. By Remark 4.2, \(C_{f}\) has bad semistable reduction.

To prove that \(J\) also has _bad_ semistable reduction, we need to show that the toric rank of \(J\) is non-zero, or equivalently, that the abelian rank is strictly less than \(g=(d-1)(d-2)/2\). However, from the above semistable model we see that the abelian rank is at most \(g^{\prime}=\frac{1}{2}\sum_{i=1}^{r}(d_{i}-1)(d_{i}-2)\), where \(d_{i}=\deg(f_{i})\) and \(\sum d_{i}=d\). Since \(g^{\prime}<g\) if \(r\neq 1\), we may assume that \(C_{f,p}\) is irreducible of degree \(d\) and with \(m\geqslant 1\) nodes. But then the abelian rank of \(J\) is \(\frac{1}{2}(d-1)(d-2)-m<g\), as claimed.

For simplicity, we state only the analogue of Theorem 5.4(2) in this setting.

**Theorem 6.4**.: _Let \(a_{ijk}(\mathbf{t})\in\mathbb{Z}[t_{1},\dots,t_{n}]\), and consider the family \(C_{\mathbf{t}}\colon f_{\mathbf{t}}(x,y,z)=0\) of degree \(d\) plane curves over \(\mathbb{Q}\), where \(f_{\mathbf{t}}=\sum_{ijk}a_{ijk}(\mathbf{t})x^{i}y^{j}z^{k}\). Assume that \(\Delta(f_{\mathbf{t}})\) and \(R(f_{\mathbf{t}})\) are non-constant and coprime. Let \(c\) be the number of non-associated irreducible factors of \(\Delta(f_{\mathbf{t}})\). Then as \(B\to\infty\), the random variables_ \[\{\mathbf{t}\in\mathbb{Z}^{n}:\Delta(f_{\mathbf{t}})\neq 0,\|\mathbf{t}\|\leqslant B\}\to\mathbb{R},\qquad\mathbf{t}\mapsto\frac{\#\{p:C_{\mathbf{t}}\text{ has bad semistable reduction at }p\}-c\log\log B}{\sqrt{c\log\log B}}\] _converge in distribution to a standard normal._

Proof.: By Proposition 6.3 it is enough to apply Theorem 3.1, using \(h_{2}=R(f_{\mathbf{t}})\) and \(h_{1}\) a generator of the ideal \(\operatorname{rad}(\Delta(f_{\mathbf{t}}))\).

**Remark 6.5**.: Over \(V_{d}\), the polynomials \(\Delta\) and \(R\) have no common factors: since \(\Delta\) is irreducible, it is enough to exhibit a single \(f\in V_{d}\) with \(\Delta(f)=0\) and \(R(f)\neq 0\). Consider the curve \(C_{f}\colon x^{d}+y^{d}=xyz^{d-2}\), which has a node at the origin.
The resultant \[R(f)=\operatorname{Res}(d^{2}(d-1)^{2}x^{d-2}y^{d-2}-z^{2(d-2)},\,dx^{d-1}-yz^{d-2},\,dy^{d-1}-xz^{d-2})\] is non-vanishing, since the scheme cut out by these three polynomials is empty.

The results of this section generalize immediately to Erdős-Kac type results for reduction types of degree \(d\) hypersurfaces \(H\colon h(\mathbf{x})=0\) in \(\mathbb{P}^{n}\). Indeed, there is a discriminant polynomial \(\Delta\) for such hypersurfaces [7, §13.1.D]. Moreover, \(v_{p}(\Delta(h))=1\) implies that \(H\otimes\mathbb{Z}_{p}\) is regular with a unique singular point (a node) in its special fiber [12, Thm. 1.1], hence \(H\) has semistable reduction over \(\mathbb{Q}_{p}\) in that case.
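Returning to Remark 6.5, the emptiness of the scheme cut out by \(H_{xy},f_{x},f_{y}\) can be verified by a Gröbner basis computation. The SymPy sketch below is our own illustration for \(d=3\): it checks that the curve is singular at \([0:0:1]\) (so \(\Delta(f)=0\)), and that the homogeneous ideal \((H_{xy},f_{x},f_{y})\) is zero-dimensional; for homogeneous generators this means the only common zero is the irrelevant origin, i.e. there is no common projective zero and \(R(f)\neq 0\).

```python
from sympy import symbols, diff, groebner

x, y, z = symbols("x y z")
f = x**3 + y**3 - x*y*z          # the nodal cubic of Remark 6.5 with d = 3

fx, fy, fz = diff(f, x), diff(f, y), diff(f, z)
Hxy = diff(f, x, 2)*diff(f, y, 2) - diff(f, x, y)**2   # = 36*x*y - z**2

# f is singular at [0:0:1]: f and all its partials vanish there.
print([g.subs({x: 0, y: 0, z: 1}) for g in (f, fx, fy, fz)])   # [0, 0, 0, 0]

# H_xy, f_x, f_y are homogeneous; zero-dimensionality of the ideal they
# generate means the only common affine zero is the origin, hence no
# common projective zero, hence R(f) != 0.
G = groebner([Hxy, fx, fy], x, y, z, order="grevlex")
print(G.is_zero_dimensional)                                    # True
```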
2301.06441
Production of $X_{cs\bar{c}\bar{s}}$ in heavy ion collisions
The yields of $X_{cs\bar{c}\bar{s}}$ with its two possible configurations, i.e., the hadronic molecular state and tetraquark state, for Pb-Pb collisions at $\sqrt{s_{NN}}=5.02~\rm{TeV}$ is studied. A volume effect is found from the centrality distribution of $X_{cs\bar{c}\bar{s}}$, which could help to distinguish the inner structure of $X_{cs\bar{c}\bar{s}}$. We also show the rapidity and the transverse momentum distributions of $X_{cs\bar{c}\bar{s}}$ production as well as its elliptic flow coefficient as a function of the transverse momentum.
Yuanyuan Hu, Hui Zhang
2023-01-16T14:11:51Z
http://arxiv.org/abs/2301.06441v2
# Production of \(X_{cs\bar{c}\bar{s}}\) in heavy ion collisions

###### Abstract

The yields of \(X_{cs\bar{c}\bar{s}}\) with its two possible configurations, i.e., the hadronic molecular state and the tetraquark state, for Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV are studied. A volume effect is found in the centrality distribution of \(X_{cs\bar{c}\bar{s}}\), which could help to distinguish the inner structure of \(X_{cs\bar{c}\bar{s}}\). We also show the rapidity and the transverse momentum distributions of \(X_{cs\bar{c}\bar{s}}\) production as well as its elliptic flow coefficient as a function of the transverse momentum.

## I Introduction

Quarks and gluons are the fundamental degrees of freedom of quantum chromodynamics (QCD). Because of the nonperturbative nature of QCD, we can only observe confined colorless hadrons. An ordinary hadron comes in one of two configurations: a meson is made up of one quark and one antiquark, and a baryon is made up of three (anti)quarks. Multiquark hadrons made up of more than three quarks were proposed at the beginning of the construction of the quark model by Gell-Mann and Zweig [1; 2; 3]. However, the existence of tetraquarks and pentaquarks was not confirmed until the observation of the \(XYZ\) states [4], hidden-charm \(P_{c}\) states [5; 6; 7; 8], doubly-charm \(T_{cc}^{+}\) [9; 10] and fully-charm tetraquark states [11], etc. [12; 13; 14; 15; 16; 17; 18]

Five \(J/\psi\phi\) structures, \(X(4140)\), \(X(4274)\), \(X(4500)\), \(X(4685)\) and \(X(4700)\), in the \(B^{+}\to J/\psi\phi K^{+}\) decay process were observed by the LHCb Collaboration [19; 20; 21], CDF Collaboration [22; 23], CMS Collaboration [24], D0 Collaboration [25] and BaBar Collaboration [26]. \(X(4140)\) and \(X(4274)\) are considered as the \(cs\bar{c}\bar{s}\) tetraquark ground states, whereas \(X(4500)\) and \(X(4700)\) are considered as the \(cs\bar{c}\bar{s}\) tetraquark excited states, in various theoretical methods [27; 28; 29; 30; 31; 32; 33; 34; 35]. In Ref. [36], \(X(4685)\) was also considered as the axial-vector 2S radially excited \(cs\bar{c}\bar{s}\) tetraquark state. In Refs. [37; 38; 39; 40], the mass spectra of the S-wave and D-wave \(cs\bar{c}\bar{s}\) tetraquarks in different excitation structures are calculated using the QCD sum rules method. There have been many theoretical studies on the inner structure of these \(X\)'s, such as the molecular states [41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59], compact or diquark-antidiquark states [60; 61; 62; 63; 64; 65; 66; 67; 68], cusp effects [69; 70; 71], dynamically generated resonances [72; 73], conventional charmonium [74], and hybrid charmonium states [42; 43]. However, overall, the inner structure of \(X(4140)\), \(X(4274)\), \(X(4500)\), \(X(4685)\) and \(X(4700)\) remains an open question.

In the molecular picture, an \(X_{cs\bar{c}\bar{s}}\) is formed by a strange-charmed meson \(D_{s}^{+}\) (\(D_{s}^{-}\)) and a \(D_{s}^{*-}\) (\(D_{s}^{*+}\)), while an \(X(3872)\) is formed by a charmed meson \(D^{0}\) (\(D^{*0}\), \(D^{+}\), \(D^{-}\)) and a \(\bar{D}^{*0}\) (\(\bar{D}^{0}\), \(D^{*-}\), \(D^{*+}\)). In the tetraquark picture, an \(X_{cs\bar{c}\bar{s}}\) is formed by a spin-triplet diquark \([cs]_{1}\) (spin-singlet diquark \([cs]_{0}\)) and a spin-singlet antidiquark \([\bar{c}\bar{s}]_{0}\) (spin-triplet antidiquark \([\bar{c}\bar{s}]_{1}\)), while an \(X(3872)\) is formed by a diquark \([cq]_{1}\) (\([cq]_{0}\)) and a \([\bar{c}\bar{q}]_{0}\) (\([\bar{c}\bar{q}]_{1}\)), with \(q\) standing for \(u\) and \(d\) quarks.
Although the light quarks \(u\) and \(d\) in \(X(3872)\) are replaced with \(s\) quarks in \(X_{cs\bar{c}\bar{s}}\), their inner structures may or may not be the same. This motivates the present study, in which we examine whether the approach we proposed in Ref. [75] can also be applied to the \(X_{cs\bar{c}\bar{s}}\) case and thus find a way to distinguish the two internal structures with heavy ion measurements. In this work, we try to distinguish the two aforementioned possible inner structures of \(X_{cs\bar{c}\bar{s}}\), i.e., a loose hadronic molecule or a compact tetraquark, by studying its production in heavy ion collisions.

The remainder of this paper is organized as follows. In Section II, we introduce the generation mechanism of \(X_{cs\bar{c}\bar{s}}\) into the AMPT model corresponding to its two possible inner structures, following the production of \(X(3872)\) described in Ref. [75]. In Section III, we examine the production of \(X_{cs\bar{c}\bar{s}}\) as a function of centrality, transverse momentum, and rapidity. A volume effect is found, which can serve as a probe of the inner structure of \(X_{cs\bar{c}\bar{s}}\). A summary and outlook are presented in Section IV.

## II Framework

In this study, we generate a total of one million minimum bias events for Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV by using the framework developed in Ref. [75]. We introduce the production mechanisms of \(X_{cs\bar{c}\bar{s}}\) for its two possible configurations, i.e., the hadronic molecular configuration and the tetraquark configuration, into the default version (v1.26t9b) of the AMPT transport model [76]. Given that \(X_{cs\bar{c}\bar{s}}\) contains (anti-)charm quarks and (anti-)strange quarks, we need to generate a reasonable number of individual charm and strange quarks in the partonic phase. On top of the default version of AMPT, we modify the \(K\) factor [77] to enhance the initial \(c\) and \(\bar{c}\) spectra, because some channels related to initial heavy-quark production are missing. The AMPT calculation gives a reasonable (order-of-magnitude) description of the experimental data [78] for the total yield of \(D^{+}+D^{*+}\) in the low \(p_{T}\) region (see upper panel of Fig. 1). For the strange quarks, an upper limit on the relative production of strange to non-strange quarks in AMPT is set to 0.6 because of the strangeness enhancement effect (see [79]), and our calculations also give a reasonable (order-of-magnitude) description of the experimental data [80] for the yield of the \(D_{s}^{+}\) meson (see lower panel of Fig. 1). The main purpose of this work is to distinguish two inner structures of \(X_{cs\bar{c}\bar{s}}\) through their significantly different production rates. The difference in \(D\) and \(D_{s}^{+}\) meson production between our calculation and the experimental data should not influence the relative yield between the two inner structures and thus cannot change the qualitative results.

We use the same production mechanism developed in Ref. [75] for the hadronic molecule and tetraquark configurations of the \(X_{cs\bar{c}\bar{s}}\). For the molecular picture, the charmed-strange mesons are collected after the hadronization process.
Then, \(D_{s}^{+}\) (\(D_{s}^{-}\)) and \(D_{s}^{*-}\) (\(D_{s}^{*+}\)) are coalesced (similar to the hadronization process mentioned in [76]) to form the "molecule" \(X_{cs\bar{c}\bar{s}}\) according to the following conditions: the relative distance lies within the region [5 fm, 7 fm] and the invariant mass lies within the region [\(2M_{D_{s}^{+}}\), \(2M_{D_{s}^{*+}}\)]. For the tetraquark picture, the "tetra" \(X_{cs\bar{c}\bar{s}}\) is formed via two steps. (i) First, diquarks (\(cs\)) and antidiquarks (\(\bar{c}\bar{s}\)) are formed by matching a (anti-)charm quark with the nearest (in both position space and momentum space) (anti-)strange quark in the partonic phase. (ii) Then, these (anti)diquarks are coalesced to form the \(X_{cs\bar{c}\bar{s}}\) according to the following conditions: the relative distance is \(<1\) fm and the invariant mass lies within the region [\(2M_{[cs]_{1}}\), \(2M_{[cs]_{0}}\)] (the spin-triplet and spin-singlet diquark masses are defined in Refs. [30; 31]).

Owing to the lack of spin information in the AMPT model for the formation of the charmed-strange mesons and (anti)diquarks, the relative yield ratios are estimated using the thermal model: \[R\Big(\frac{A}{B}\Big)\equiv\frac{\mathrm{Yield}(A)}{\mathrm{Yield}(B)}=e^{-(m_{A}-m_{B})/T_{\mathrm{freezeout}}}, \tag{1}\] where \(m_{A}\) and \(m_{B}\) represent the masses of hadrons A and B, respectively. Here, \(T_{\mathrm{freezeout}}\simeq 160\) MeV is the freeze-out temperature. For the hadronic picture, A and B are the \(D_{s}^{*+}\) and \(D_{s}^{+}\) mesons, respectively. For the tetraquark picture, A and B are the spin-triplet and spin-singlet diquarks, respectively. This estimate indicates a composition of 30%(70%) for \(D_{s}^{*+}\)(\(D_{s}^{+}\)) and a composition of 35%(65%) for spin-triplet(singlet) diquarks. We also vary the composition between 20%(80%) and 40%(60%) to show the uncertainty bands.
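As a quick numerical cross-check of Eq. (1), the 30%/70% composition quoted above follows directly from the \(D_{s}\) mass splitting. The short Python script below is our own illustration; the PDG mass values it uses are not given in the text:

```python
import math

# D_s^+ and D_s^{*+} masses in MeV (PDG, rounded) and T_freezeout from Eq. (1).
m_Ds, m_Ds_star, T = 1968.3, 2112.2, 160.0

R = math.exp(-(m_Ds_star - m_Ds) / T)   # Yield(D_s^{*+}) / Yield(D_s^{+})
frac = R / (1.0 + R)                    # fraction of D_s^{*+} among the two
print(f"R = {R:.2f}, composition: {frac:.0%} vs {1 - frac:.0%}")
# -> R = 0.41, composition: 29% vs 71%, matching the quoted 30%/70% split
```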
## III Results and Discussions

Within this simulation framework, we use the Monte Carlo method to generate a total of one million minimum bias events for Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV. The inclusive yield of \(X_{cs\bar{c}\bar{s}}\) is found to be approximately 42000 in the molecular picture and approximately 200 in the tetraquark picture. As a benchmark for comparison, we also estimate the yield of \(X(3872)\) within the same framework (see the production mechanism in Ref. [75]; the yield should be multiplied by a factor of \(\frac{1}{4}\) owing to wavefunction normalization for both the molecular and tetraquark pictures). The inclusive yield of \(X(3872)\) is found to be approximately 171000 in the molecular picture and approximately 600 in the tetraquark picture. The yield of \(X_{cs\bar{c}\bar{s}}\) is thus approximately \(\frac{1}{4}\) of that of \(X(3872)\). Compared with the experimental data of \(X(3872)\) measured by the CMS Collaboration for Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV [81], our finding suggests that an observable signal of \(X_{cs\bar{c}\bar{s}}\) could be measured in heavy ion collisions at the LHC energy. One can also see that the production in the molecular picture significantly exceeds that in the tetraquark picture, by a factor of roughly 200, i.e., by over two orders of magnitude.

This result may be understood as follows: the \(c\bar{c}\) and \(s\bar{s}\) quark pairs must be produced in the initial conditions of heavy ion collisions and then expand and cool with the bulk flow; the molecular \(X_{cs\bar{c}\bar{s}}\) needs a large volume to be formed, while the tetraquark \(X_{cs\bar{c}\bar{s}}\) needs a compact volume to be formed; thus, the probability of the formation of hadronic molecules is far higher than that of the tetraquark state.

We plot the \(X_{cs\bar{c}\bar{s}}\) production as a function of centrality in Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV for the hadronic molecular state and tetraquark state in Fig. 2.

Figure 1: (color online) Upper panel: total production of \(D^{+}+D^{*+}\) from the ALICE Collaboration [78]; lower panel: the production of \(D_{s}^{+}\) from the ALICE Collaboration [80]. The bands reflect the uncertainty due to constituent composition as discussed around Eq. (1) and are obtained by varying the composition fraction by \(\pm 10\%\).

One can see that the yield of the \(X_{cs\bar{c}\bar{s}}\) in the molecular picture is two orders of magnitude larger than that in the tetraquark picture. From the central collision region to the peripheral collision region, the production first increases and then decreases for both the molecular state and the tetraquark state, and the slope of the decrease is far larger for the molecular state than for the tetraquark state. This results from a competing effect between the volume of the bulk system and the size of \(X_{cs\bar{c}\bar{s}}\). For central collisions, the number of (anti-)charm and (anti-)strange quarks is large, the bulk volume is large, and its evolution time is long; thus, the (anti-)charm and (anti-)strange quarks separate sufficiently, which benefits the production of a large-sized molecular state. For peripheral collisions, both the number of (anti-)charm and (anti-)strange quarks and the size of the fireball are small; as such, the evolution time of the fireball is short, which benefits the production of small-sized tetraquark states. This size effect could help to explore the internal structure of \(X_{cs\bar{c}\bar{s}}\) through different collision systems, e.g., Pb-Pb, Au-Au, Xe-Xe, Cu-Cu, O-O, and \(d\)-\(A\)/\(p\)-\(A\).

In Fig. 3, we present the rapidity and the transverse momentum distributions of \(X_{cs\bar{c}\bar{s}}\). One can see that the distribution for both the hadronic molecular state and the tetraquark state is similar to that of the usual hadrons [82; 83]. We also show the elliptic flow coefficient \(v_{2}\) of \(X_{cs\bar{c}\bar{s}}\) as a function of the transverse momentum \(p_{T}\) in Fig. 4. The elliptic flow is sensitive to the geometry of the initial fireball and to the generation mechanism of \(X_{cs\bar{c}\bar{s}}\).

## IV Summary and outlook

In this work, we studied the yields of \(X_{cs\bar{c}\bar{s}}\) for Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV by introducing the production mechanisms of its two possible configurations, i.e., the hadronic molecular state and the tetraquark state, into the AMPT model. We found that the production in the molecular picture exceeds that in the tetraquark picture by two orders of magnitude.
The centrality distribution of the yields of \(X_{cs\bar{c}\bar{s}}\) shows a strongly decreasing trend for the hadronic molecular state and a mild change for the tetraquark state. This system size dependence could be a good probe of the inner structure of \(X_{cs\bar{c}\bar{s}}\). We also showed the rapidity and the transverse momentum distributions of \(X_{cs\bar{c}\bar{s}}\) production, as well as its elliptic flow coefficient as a function of the transverse momentum, which can be tested in future experimental measurements.

Figure 2: (color online) Centrality dependence of the \(X_{cs\bar{c}\bar{s}}\) yield in Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV for the hadronic molecular configuration (red solid boxes) and the tetraquark configuration (blue shaded boxes). The bands reflect both the statistical uncertainty from our simulations and the uncertainty due to constituent composition as discussed around Eq. (1), and are obtained by varying the composition fraction by \(\pm 10\%\).

Figure 3: (color online) Rapidity \(y\) and transverse momentum \(p_{T}\) distributions of the \(X_{cs\bar{c}\bar{s}}\) yield in Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV for the hadronic molecular configuration (red solid boxes) and the tetraquark configuration (blue shaded boxes). The bands are determined as described in Fig. 2.

Figure 4: (color online) Elliptic flow coefficient \(v_{2}\) versus transverse momentum \(p_{T}\) for produced \(X_{cs\bar{c}\bar{s}}\) in minimum bias Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV, predicted from our computation for the hadronic molecule picture. The bands are similarly determined as described in Fig. 2.

In Ref. [80], a strangeness enhancement effect in heavy ion collisions was found by the ALICE Collaboration, which could be evidence for the quark-gluon plasma. We expect a similar effect to be found in the ratio of \(X_{cs\bar{c}\bar{s}}\) to \(X(3872)\), which will be studied in our future work.

###### Acknowledgements.

The authors would like to thank Dr. J. Liao, E. Wang, Q. Wang, and H. Xing for helpful discussions. This work is partly supported by the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, the National Natural Science Foundation of China with Grant No. 12105107, and the Science and Technology Program of Guangzhou No. 2019050001.
2303.07126
Mirror U-Net: Marrying Multimodal Fission with Multi-task Learning for Semantic Segmentation in Medical Imaging
Positron Emission Tomography (PET) and Computer Tomography (CT) are routinely used together to detect tumors. PET/CT segmentation models can automate tumor delineation, however, current multimodal models do not fully exploit the complementary information in each modality, as they either concatenate PET and CT data or fuse them at the decision level. To combat this, we propose Mirror U-Net, which replaces traditional fusion methods with multimodal fission by factorizing the multimodal representation into modality-specific branches and an auxiliary multimodal decoder. At these branches, Mirror U-Net assigns a task tailored to each modality to reinforce unimodal features while preserving multimodal features in the shared representation. In contrast to previous methods that use either fission or multi-task learning, Mirror U-Net combines both paradigms in a unified framework. We explore various task combinations and examine which parameters to share in the model. We evaluate Mirror U-Net on the AutoPET PET/CT and on the multimodal MSD BrainTumor datasets, demonstrating its effectiveness in multimodal segmentation and achieving state-of-the-art performance on both datasets. Our code will be made publicly available.
Zdravko Marinov, Simon Reiß, David Kersting, Jens Kleesiek, Rainer Stiefelhagen
2023-03-13T13:57:29Z
http://arxiv.org/abs/2303.07126v1
# Mirror U-Net: Marrying Multimodal Fission with Multi-task Learning for Semantic Segmentation in Medical Imaging

###### Abstract

Positron Emission Tomography (PET) and Computed Tomography (CT) are routinely used together to detect tumors. PET/CT segmentation models can automate tumor delineation; however, current multimodal models do not fully exploit the complementary information in each modality, as they either concatenate PET and CT data or fuse them at the decision level. To combat this, we propose Mirror U-Net, which replaces traditional fusion methods with multimodal fission by factorizing the multimodal representation into modality-specific decoder branches and an auxiliary multimodal decoder. At these branches, Mirror U-Net assigns a task tailored to each modality to reinforce unimodal features while preserving multimodal features in the shared representation. In contrast to previous methods that use either fission or multi-task learning, Mirror U-Net combines both paradigms in a unified framework. We explore various task combinations and examine which parameters to share in the model. We evaluate Mirror U-Net on the AutoPET PET/CT and on the multimodal MSD BrainTumor datasets, demonstrating its effectiveness in multimodal segmentation and achieving state-of-the-art performance on both datasets. Our code will be made publicly available.

## 1 Introduction

PET/CT scans are commonly used for cancer diagnosis and therapy to estimate tumor characteristics, such as size, location, and changes over time [46]. PET data can highlight areas with a high metabolic activity, which is typical for tumors [3], by administering a radioactive tracer like Fluorodeoxyglucose (FDG). To provide detailed anatomical information and aid in accurate tumor localization, CT scans are typically used in conjunction with PET scans [40]. Deep learning models can automatically segment lesions in PET/CT scans, providing radiologists with metabolic tumor volume (MTV) and shape information as biomarkers to monitor disease progression [10, 18]. However, high metabolic activity in PET is not specific to tumors and can be found in organs and regions with inflammation or infection [5]. Additionally, while CT scans provide anatomical information, they are not sufficient for visualizing lesions on their own [40]. These factors make tumor segmentation from PET/CT data challenging, especially due to the limited availability of voxel-wise labeled PET/CT datasets. As a result, current multimodal PET/CT segmentation models have yet to demonstrate reliability for clinical use.

Figure 1: Mirror U-Net combines multimodal fission [26] and multi-task learning. We obtain a shared representation for both modalities and feed it into modality-specific decoders, each optimized for a tailored task to learn useful features from its modality. Tasks 1 and 2 use modality-specific features via the skip connections but are also conditioned by the other modality via the shared representation, hence focusing on the conditional entropy between the modalities. Task 3, on the other hand, only processes the shared representation, focusing on the mutual information.

Recently, the AutoPET MICCAI 2022 Challenge [10] released a large-scale labeled PET/CT database of 1014 studies involving 900 patients. However, the top-performing methods in the final leaderboard rely on either early fusion [36, 47, 4, 51, 13] and/or late fusion ensembles [49, 39, 13], which do not fully leverage the complementary information in the PET and CT modalities.
We propose Mirror U-Net, a unified framework that combines multimodal fission [26] and multi-task learning. Rather than fusing modality features, Mirror U-Net factorizes multimodal features into modality-specific decoder branches and an auxiliary multimodal decoder, allowing us to disentangle modality-specific features, such as metabolic and anatomical cues in PET/CT, from multimodal features. Using an encoder-decoder U-Net model [37] for each modality, we obtain modality-specific features and share layers between them to produce the multimodal representation, as shown in Figure 1. To emphasize the dichotomy between modalities and their shared features, we extend multimodal fission with multi-task learning, investigating four combinations of tasks tailored to unimodal or multimodal features. In our qualitative experiments, we demonstrate how Mirror U-Net utilizes the complementary information from each modality. Our approach surpasses traditional fusion schemes, as well as fission-only or multi-task-only methods, achieving state-of-the-art performance on two benchmarks, AutoPET [10] and MSD BrainTumor [2]. Our contributions are summarized as follows:

1. A novel unification of multimodal fission and multi-task learning for multimodal medical segmentation.
2. Mirror U-Net - a simple yet powerful multimodal fission architecture that achieves state-of-the-art performance on AutoPET [10] and MSD BrainTumor [2], demonstrating great potential for deploying PET/CT segmentation models in clinical practice.
3. We conduct extensive experiments to determine which tasks to assign to the decoder branches and which layers to share to obtain the multimodal representation.

## 2 Related Work

### Multimodal Fusion

Combining multiple imaging modalities for automatic segmentation has advanced significantly, particularly in PET/CT [45, 6, 9, 10, 12], CT/MRI [50, 19, 20], and multi-contrast MRI [32, 18, 21]. Multimodal approaches often use early fusion, where modalities are concatenated into a single input [18, 21, 6, 36, 47, 4, 51, 28, 13], or late fusion, where predictions from unimodal models are combined [49, 39, 43]. However, late fusion may not exploit the mutual information in cross-modal representations, and early fusion may not highlight the contribution of each modality to the task [30]. Fu _et al_. [9] use a PET U-Net model [37] to guide a CT model in a cascade framework, while Xue _et al_. [45] segment liver lesions by combining predictions from low- and high-level feature maps in separate PET/CT decoders. Early fusion is shown to outperform late and middle fusion for brain tumor segmentation from MRI/PET/CT data [11]. Other approaches translate CT into MRI images via GANs with cycle- and shape-consistency [50], organ attention [19], or using a cross-modality prior [20] to generate more data. In contrast to fusion approaches, Mirror U-Net disentangles unimodal and multimodal features using modality-specific decoders and a multimodal decoder, allowing us to define tasks for each decoder to explicitly focus on the modality's strengths, such as anatomical knowledge from CT and metabolic activity in PET.

### Multimodal Fission

Multimodal fission decomposes data into multimodal and unimodal information that captures the unique structure and semantics of each modality [26, 41, 16]. This separation can be achieved through disentangled representation learning [41, 16] or explicit separation of unimodal and multimodal pathways in the model [23, 15, 29]. In our work, we adopt the latter using our Mirror U-Net architecture.
Joze _et al_. [22] introduce a Multi-Modal Transfer Module as an independent multimodal pathway, while in [38], multimodal and unimodal paths are coordinated by squeeze and excitation operations. Valindria _et al_. [42] use a similar architecture to Mirror U-Net but alternate training iterations between CT and MRI volumes for segmentation and do not employ multi-task learning. Mirror U-Net differs from existing fission approaches in that we combine it for the first time with multi-task learning to control the type of features learned in the modality-specific layers. Hickson _et al_. [14] provide a summary of existing fission methods, noting that all previous methods either use a joint encoder before the fission into individual decoders [27, 44, 1, 31] or share skip connections [25]. In contrast, Mirror U-Net disentangles modalities by using modality-specific encoders and skip connections, both of which are essential to separate unimodal and multimodal features.

### Multi-Task Learning

Multi-task learning has been widely used in medical image segmentation to exploit the correlation between different tasks [1, 7, 31] or to regularize the segmentation [34, 44, 27]. For instance, Meng _et al_. [31] train a survival prediction network on feature maps extracted from a PET/CT U-Net [37] and outperform the single-task model. Other approaches utilize image reconstruction as an auxiliary task to guide and regularize the segmentation [44, 27]. Some methods employ the classification of tumor presence as an additional task to reduce false positives [13] or prevent the network from learning irrelevant features [34]. However, these multi-task methods are all limited to traditional multimodal fusion. In contrast, Mirror U-Net combines multimodal fission with multi-task learning, which has not been explored before. We show in our experiments that this combination outperforms multi-task fusion methods on two challenging datasets, despite using a simple architecture.

to predict empty masks in healthy cases and identify low-uptake tumors in positive cases. We extend the **(v2)** model with a classification task, as described in Equation 3, where \(\lambda_{\text{class}}\in[0,1]\) and \(c\in\{0,1\}\) indicates tumor presence. \[\mathcal{L}_{\text{V3}}(x,y)=\mathcal{L}_{\text{V2}}(x,y)+\lambda_{\text{class}}\cdot\mathcal{L}_{\text{BCE}}(c,T(E_{C}(x_{C})\oplus E_{P}(x_{P}))) \tag{3}\]

**Ablation (v4).** In our final version, we jointly train the modality-specific branches on segmentation by combining their logits through a weighted sum, as shown in Equation 4. This is a challenging task for the CT branch, as lesions are often not visually prominent in CT. To address this, we introduce a parameter \(\theta\in[0.1,0.5]\) to balance the CT- and PET-branch logits. A lower \(\theta\) indicates a weaker reliance on CT data. We also explore the possibility of learning \(\theta\) by the model, rather than manually tuning it. We refer to **(v4)** as an ablation since it is a single-task multimodal fission. \[\mathcal{L}_{\text{V4}}(x,y)=\mathcal{L}_{\text{DiceCE}}(y,\hat{y}),\qquad\hat{y}=(1-\theta)\cdot D_{P}(E_{P}(x_{P}))+\theta\cdot D_{C}(E_{C}(x_{C})) \tag{4}\]
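To make Equation 4 concrete, a minimal PyTorch-style sketch of the **(v4)** objective is given below. This is our own illustration rather than the released implementation: the two-class output, the soft-Dice formulation, and the smoothing constant are assumptions.

```python
import torch
import torch.nn.functional as F

def loss_v4(logits_pet, logits_ct, target, theta=0.2):
    """Eq. (4): Dice+CE on the theta-weighted sum of PET and CT logits.

    logits_pet, logits_ct: (B, 2, D, H, W) outputs D_P(E_P(x_P)), D_C(E_C(x_C));
    target: (B, D, H, W) voxel labels; theta in [0.1, 0.5] balances the branches.
    """
    fused = (1.0 - theta) * logits_pet + theta * logits_ct
    ce = F.cross_entropy(fused, target)
    prob_fg = fused.softmax(dim=1)[:, 1]          # foreground probability
    fg = (target == 1).float()
    inter = (prob_fg * fg).sum()
    dice = 1.0 - (2.0 * inter + 1e-5) / (prob_fg.sum() + fg.sum() + 1e-5)
    return ce + dice
```

The classification term of Equation 3 would analogously add a \(\lambda_{\text{class}}\)-weighted binary cross-entropy on a scalar head over the concatenated bottleneck features.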
### Weight Sharing

While shared weights between the two U-Net models [37] facilitate learning a multimodal representation, the literature on the optimal location to share parameters within multimodal fission models is mostly confined to small ablation studies [26, 41]. To address this gap, we investigate different locations to share parameters between the branches (see Figure 3) and identify the optimal location based on empirical results. We index the layers of Mirror U-Net from 1 to 8 and denote the shared layers as \(L\). We examine sharing layers before the bottleneck in the encoder branches, where features are closer to the input modality, as well as sharing after the bottleneck in the decoders, where features are more task-specific. We investigate how different shared layers impact the performance across the four versions **(v1) - (v4)**, to determine if there is a consistent weight-sharing scheme that is optimal for all versions, and to evaluate the sensitivity of each version to changes in hyperparameters.

### Generalization to Brain Tumor Segmentation

To demonstrate the generalizability of Mirror U-Net to other tasks and imaging modalities, we evaluate it on the MSD BrainTumor dataset [2]. It consists of 750 volumes of multimodal Magnetic Resonance Imaging (MRI) data, including T1-weighted (**T1**), post-contrast T1-weighted (**T1Gd**), T2-weighted (**T2**), and Fluid Attenuated Inversion Recovery (**FLAIR**) modalities. Similar to the complementary physiological and anatomical information in PET/CT data, we use the complementary FLAIR and T1Gd modalities, which are representative of the tumor edema (the accumulation of fluid around a tumor) and the tumor core (its central part), respectively. Together, the edema and core form the entire tumor. We use Mirror U-Net **(v2)** instead of **(v3)**, as each volume of the dataset contains a tumor and classification is not feasible. We replace the PET/CT branches with T1Gd/FLAIR and set the tasks for T1Gd and FLAIR to tumor core and edema segmentation, respectively. We also set the shared task to whole-tumor segmentation. However, to obtain the final whole-tumor segmentation, we unite the predictions for core and edema from the T1Gd and FLAIR branches, since the whole-tumor output from the bottleneck is used only as a regularization. We opt for segmentation tasks on all branches since, unlike CT, FLAIR has a strong signal for edema segmentation. However, for consistency, we train a model **(v2)-rec** to reconstruct FLAIR and T1Gd in the FLAIR and shared branch, respectively, and segment all three tumor classes in the T1Gd branch, to demonstrate that Mirror U-Net can generalize well with the default tasks in Figure 2.

Figure 2: We explore 4 combinations of tasks **(v1) - (v4)** by assigning different tasks to each decoder branch in Mirror U-Net. In **(v1)** we train the CT branch for CT reconstruction and assign the primary segmentation task to the PET branch. In **(v2)**, we extend **(v1)** by adding a bottleneck decoder that reconstructs PET data, and in **(v3)** we also add a classifier, which predicts whether the patient is healthy or not. In **(v4)**, we train both the CT and PET branches for segmentation and fuse their predictions via a weighted sum of logits as an ablation study.
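Before the implementation details, the following heavily condensed PyTorch sketch (ours; the real model is a pair of full 3D U-Nets with skip connections, eight indexed layers, and task-specific heads) illustrates the fission idea with the bottleneck-only sharing scheme \(L=\{5\}\):

```python
import torch.nn as nn

class MirrorUNetSketch(nn.Module):
    """Two modality-specific branches sharing only the bottleneck (L = {5})."""

    def __init__(self, ch=32):
        super().__init__()
        self.enc_pet = nn.Conv3d(1, ch, 3, stride=2, padding=1)   # PET encoder
        self.enc_ct = nn.Conv3d(1, ch, 3, stride=2, padding=1)    # CT encoder
        self.shared = nn.Conv3d(ch, ch, 3, padding=1)             # shared layer 5
        self.dec_pet = nn.ConvTranspose3d(ch, 2, 4, 2, 1)  # segmentation logits
        self.dec_ct = nn.ConvTranspose3d(ch, 1, 4, 2, 1)   # CT reconstruction

    def forward(self, pet, ct):
        z_pet = self.shared(self.enc_pet(pet))   # the same shared weights
        z_ct = self.shared(self.enc_ct(ct))      # process both modality streams
        return self.dec_pet(z_pet), self.dec_ct(z_ct)
```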
We employ a sliding window inference model by randomly sampling patches of size [96, 96, 96] with a \(p=\frac{2}{3}\) probability of containing a tumor. We train using Adam [24] with a learning rate of \(10^{-3}\), weight decay of \(10^{-6}\), and batch size of 4 for 400 epochs. For Equations 1-3, we set \(\lambda_{\text{rec}}=10^{-4},\lambda_{\text{seg}}=0.5,\lambda_{\text{class}}=10 ^{-3}\). Encoder and decoder layers have a kernel size of 3 and stride of 2. Unless otherwise stated, the same data pre-processing is used for comparison to other methods. **MSD BrainTumor.** We use the same (484 training + 266 testing) samples all models. We train Mirror U-Net using whole MRI volumes with a patch size of [224, 224, 144], with a voxel size of 1.0mm\(\times\)1.0mm\(\times\)1.0mm and normalize the intensity for each modality using z-score normalization. We use the same optimization and model hyperparameters as in our AutoPET [10] experiments. ## 4 Experiments and Results ### Task Combinations and Weight Sharing **Quantitative Results.** Table 2 presents the results on AutoPET [10] for all Mirror U-Net versions **(v1)-(v4)** and all weight-sharing variants \(L\). We observe three tendencies: **(1) Self-Supervision.** Firstly, regularizing the reconstruction by adding noise or voxel shuffling in **(v1)-(v3)** leads to a consistent improvement regardless of the shared layers \(L\), where voxel shuffling achieves the best results. Figure 4 also confirms this tendency and shuffling shows the lowest sensitivity to changes in shared layers \(L\), indicated by the lower variance in the box plot, making it not only the best performing but also the most robust method. **(2) Weight Sharing.** Secondly, sharing only the bottleneck layer (\(L=\{5\}\)) results in the best results for all multi-task settings. Figure 5 shows that the performance behaves similarly for all **(v1) - (v4)** when varying shared layers \(L\). Sharing shallower, deeper, or multiple layers decreases the performance significantly. The reason for this may be that shallow layers are more modality-specific and deep layers are more task-specific. Sharing such layers does not allow the network to specialize on either the modality or task. **(3) Adding Tasks.** Thirdly, Figure 5 and Table 2 show that each Mirror U-Net version consistently outperforms the last **(v4) \(<\) (v1) \(<\) (v2) \(<\) (v3)**, with the exception of one **(v4)** outlier. The multi-task settings **(v1) - (v3)** share a similar design and build on top of each other by adding new tasks. Hence, incrementally adding meaningful tasks to Mirror U-Net leads to a consistent improvement, regardless of the shared layers \(L\). **Ablation (v4) Results.** The results in Table 2 show that the optimal fusion parameter \(\theta\) strongly varies for different shared layers. We also train models by setting \(\theta\) as a learnable parameter to avoid manual tuning. However, the best Figure 4: Comparison between the self-supervision methods used in Mirror U-Net **(v1)–(v3)**. Each box aggregates results from all weight-sharing combinations \(L\). Figure 5: Comparison between all Mirror U-Net versions. The x-axis represents the different weight-sharing schemes. The dots represent the best-performing models for each pair of multi-task settings and weight-sharing scheme, i.e., bold numbers in Table 2. Figure 3: Different locations in the model to share the parameters between the modality-specific branches. Shared layers are colored orange. 
Figure 6 shows that sharing layers near the bottleneck (\(L=\{5\}\)) leads to both a higher average performance and a lower sensitivity to changes in \(\theta\). Sharing more than one layer leads to a sharp performance loss. This observation is consistent with the findings in Figure 5, which further confirms that sharing only the bottleneck improves not only the performance but also the robustness to parameter changes.

**Qualitative Results.** Figure 7 presents the qualitative results for each branch of Mirror U-Net **(v1) - (v4)** and gives insight into what each branch has learned.

**Versions (v1) - (v3).** As voxel shuffling is the best self-supervision strategy, we only include one example for Gaussian noise supervision in **(v1)-noise**. In the **(v1)-noise** row, we observe that the CT branch successfully reduces a significant portion of the noise, but struggles with black voxels within the body. On the other hand, in the **(v1)-shuffle** row, Mirror U-Net restores some edges from the shuffled voxels, which requires the model to remember the structures present within the CT. This leads to a better representation and segmentation results. Moreover, in the **(v2)-shuffle** model, we reconstruct the PET data from the bottleneck, which is much coarser due to the lack of skip connections. Despite this, the reconstruction includes the regions with the highest metabolic activity, resulting in significantly better segmentation. Finally, in **(v3)-shuffle**, we add a classification head, and the PET reconstruction contains only three regions: the brain, the bladder, and the spatial location of the lesions, resulting in much finer boundaries compared to **(v2)-shuffle**. This refinement leads to a more accurate spatial attention encoded in the bottleneck, as evidenced by the well-delineated lesions in the final segmentation.

**Ablation (v4).** Since lesion boundaries can be hard to discern in CT scans, the CT branch in **(v4)** produces a segmentation mask that covers a large portion of the body. However, this mask excludes organs with high metabolic activity, such as the brain, liver, heart, and urinary bladder. On the other hand, the PET branch captures regions with high metabolic activity, including the brain and bladder. Combining the logits from the final layers of each branch results in a fused prediction that better matches the ground-truth mask.

| Version | Parameters | \(L=\{3\}\) | \(L=\{4\}\) | \(L=\{5\}\) | \(L=\{6\}\) | \(L=\{7\}\) | \(L=\{4,5,6\}\) | \(L=\{3,4,5,6,7\}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **(v1)** | L2 | 57.39 | 62.88 | 61.92 | 61.91 | 62.06 | 62.08 | 58.06 |
| **(v1)** | L2 + noise | 59.76 | 62.83 | 62.34 | 62.21 | 62.65 | 62.36 | 60.86 |
| **(v1)** | L2 + shuffling | **62.75** | **63.97** | **64.57** | **63.80** | **63.09** | **63.01** | **62.72** |
| **(v2)** | L2 | 58.42 | 63.34 | 62.15 | 61.74 | 62.24 | 62.49 | 59.00 |
| **(v2)** | L2 + noise | 59.96 | 64.00 | 62.62 | 62.21 | 62.85 | 63.01 | 61.44 |
| **(v2)** | L2 + shuffling | **63.15** | **64.28** | **65.50** | **64.08** | **63.69** | **63.29** | **63.33** |
| **(v3)** | L2 | 59.12 | 63.36 | 62.78 | 62.25 | 63.01 | 63.22 | 60.01 |
| **(v3)** | L2 + noise | 60.23 | 64.33 | 63.01 | 62.89 | 63.22 | 63.30 | 60.51 |
| **(v3)** | L2 + shuffling | **63.33** | **64.55** | **65.91** | **64.44** | **64.00** | **63.43** | **63.34** |
| **(v4)** | \(\theta=0.1\) | 59.61 | **63.78** | 63.65 | 63.23 | 60.58 | 61.79 | **60.81** |
| **(v4)** | \(\theta=0.2\) | 58.65 | 62.66 | **64.24** | 63.35 | **62.30** | **63.99** | 56.79 |
| **(v4)** | \(\theta=0.3\) | **61.67** | 63.76 | 64.22 | 62.26 | 61.89 | 61.33 | 58.21 |
| **(v4)** | \(\theta=0.4\) | 58.95 | 62.93 | 58.43 | **63.64** | 56.15 | 60.69 | 59.40 |
| **(v4)** | \(\theta=0.5\) | 59.86 | 61.84 | 61.00 | 62.48 | 60.38 | 60.51 | 57.79 |
| **(v4)** | Learnable \(\theta\) | 60.89 | 60.94 | 60.81 | 58.48 | 56.52 | 59.63 | 47.42 |

Table 2: Comparison between all Mirror U-Net versions **(v1) – (v4)** for all weight-sharing variants \(L\). The best Dice score in each box is in **bold**. The best Dice score for each Mirror U-Net version **(v1) – (v4)** is underlined.

| Metric | CT | PET | EF | MF | LF-Logit | LF-\(\cup\) | LF-\(\cap\) | **(v1)** | **(v2)** | **(v3)** | Ablation **(v4)** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Dice \(\uparrow\) | 26.00 | **60.99** | 54.89 | 55.53 | 57.41 | 59.89 | 21.60 | 64.57 | 65.50 | **65.91** | 64.24 |
| FPV \(\downarrow\) | 15.64 | 5.38 | 4.98 | 4.77 | 4.88 | 3.95 | **1.67** | 2.93 | 2.83 | **1.55** | 2.93 |
| FNV \(\downarrow\) | 44.15 | **2.15** | 3.13 | 3.02 | 2.88 | 3.01 | 99.74 | 1.66 | 0.94 | **0.76** | 1.99 |

Table 3: Comparison to the baselines with the same U-Net backbone as Mirror U-Net. EF: Early, MF: Middle, LF: Late Fusion. The columns CT through LF-\(\cap\) are baselines; **(v1)**-**(v4)** are Mirror U-Net (ours).

Figure 6: Influence of the fusion parameter \(\theta\) on the different weight-sharing schemes for the Decision Fusion setting. Each box aggregates results for \(\theta\in[0.1,0.5]\).

These results suggest that the CT branch is
These results suggest that the CT branch is not well-suited for segmentation on its own; however, Mirror U-Net's simple architecture allows the CT branch to provide spatial guidance for the PET branch by highlighting regions that are likely to contain lesions and filtering out unlikely regions, such as the brain and bladder. Overall, our qualitative results demonstrate that Mirror U-Net utilizes the complementary nature of CT and PET images and successfully transfers knowledge via the shared features to improve the primary segmentation task.

\begin{table}
\begin{tabular}{c||c|c|c|c|c|c|c|c}
\hline \hline
\multirow{2}{*}{Mirror U-Net Version} & \multirow{2}{*}{Parameters} & \multicolumn{7}{c}{Shared Layers \(L\)} \\
\cline{3-9}
 & & \(L=\{3\}\) & \(L=\{4\}\) & \(L=\{5\}\) & \(L=\{6\}\) & \(L=\{7\}\) & \(L=\{4,5,6\}\) & \(L=\{3,4,5,6,7\}\) \\
\hline
\multirow{3}{*}{**(v1)**} & L2 & 57.39 & 62.88 & 61.92 & 61.91 & 62.06 & 62.08 & 58.06 \\
 & L2 + noise & 59.76 & 62.83 & 62.34 & 62.21 & 62.65 & 62.36 & 60.86 \\
 & L2 + shuffling & **62.75** & **63.97** & **64.57** & **63.80** & **63.09** & **63.01** & **62.72** \\
\hline
\multirow{3}{*}{**(v2)**} & L2 & 58.42 & 63.34 & 62.15 & 61.74 & 62.24 & 62.49 & 59.00 \\
 & L2 + noise & 59.96 & 64.00 & 62.62 & 62.21 & 62.85 & 63.01 & 61.44 \\
 & L2 + shuffling & **63.15** & **64.28** & **65.50** & **64.08** & **63.69** & **63.29** & **63.33** \\
\hline
\multirow{3}{*}{**(v3)**} & L2 & 59.12 & 63.36 & 62.78 & 62.25 & 63.01 & 63.22 & 60.01 \\
 & L2 + noise & 60.23 & 64.33 & 63.01 & 62.89 & 63.22 & 63.30 & 60.51 \\
 & L2 + shuffling & **63.33** & **64.55** & **65.91** & **64.44** & **64.00** & **63.43** & **63.34** \\
\hline
\multirow{6}{*}{**(v4)**} & \(\theta=0.1\) & 59.61 & **63.78** & 63.65 & 63.23 & 60.58 & 61.79 & **60.81** \\
 & \(\theta=0.2\) & 58.65 & 62.66 & **64.24** & 63.35 & **62.30** & **63.99** & 56.79 \\
 & \(\theta=0.3\) & **61.67** & 63.76 & 64.22 & 62.26 & 61.89 & 61.33 & 58.21 \\
 & \(\theta=0.4\) & 58.95 & 62.93 & 58.43 & **63.64** & 56.15 & 60.69 & 59.40 \\
 & \(\theta=0.5\) & 59.86 & 61.84 & 61.00 & 62.48 & 60.38 & 60.51 & 57.79 \\
 & Learnable \(\theta\) & 60.89 & 60.94 & 60.81 & 58.48 & 56.52 & 59.63 & 47.42 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparison between all Mirror U-Net versions **(v1) – (v4)** for all weight-sharing variants \(L\). The best Dice score in each box is in **bold**. The best Dice score for each Mirror U-Net version **(v1) – (v4)** is underlined.

\begin{table}
\begin{tabular}{l|c c c c c c c|c c c c}
\hline \hline
Metric & \multicolumn{7}{c|}{Baselines} & \multicolumn{4}{c}{Mirror U-Net (Ours)} \\
\hline
 & CT & PET & EF & MF & LF-Logit & LF-\(\cup\) & LF-\(\cap\) & **(v1)** & **(v2)** & **(v3)** & Ablation **(v4)** \\
\hline
Dice \(\uparrow\) & 26.00 & **60.99** & 54.89 & 55.53 & 57.41 & 59.89 & 21.60 & 64.57 & 65.50 & **65.91** & 64.24 \\
FPV \(\downarrow\) & 15.64 & 5.38 & 4.98 & 4.77 & 4.88 & 3.95 & **1.67** & 2.93 & 2.83 & **1.55** & 2.93 \\
FNV \(\downarrow\) & 44.15 & **2.15** & 3.13 & 3.02 & 2.88 & 3.01 & 99.74 & 1.66 & 0.94 & **0.76** & 1.99 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Comparison to the baselines with the same U-Net backbone as Mirror U-Net. EF: Early, MF: Middle, LF: Late Fusion.

Figure 6: Influence of the fusion parameter \(\theta\) on the different weight-sharing schemes for the Decision Fusion setting. Each box aggregates results for \(\theta\in[0.1,0.5]\).
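The (v4) decision fusion discussed above can be sketched in a few lines; the convex combination of branch logits is our reading of the "\(\theta\)-combination" described in the text, not code from the paper:

```python
import torch

def decision_fusion(ct_logits: torch.Tensor, pet_logits: torch.Tensor,
                    theta: float = 0.2) -> torch.Tensor:
    """Fuse the two branches' segmentation logits; theta weights the CT branch.

    theta = 0.2 is the best fixed value for L = {5} in Table 2; the exact
    fusion rule is an assumption based on the description in the text.
    """
    return theta * ct_logits + (1.0 - theta) * pet_logits

# Example usage: fused = decision_fusion(ct_out, pet_out, theta=0.2)
#                mask = fused.sigmoid() > 0.5
```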
### Comparisons to Other Approaches

**Comparison to Baselines.** We compare Mirror U-Net to several baselines - traditional early (EF), middle (MF), and late fusion (LF), as well as a unimodal U-Net [37] trained solely on either CT or PET data. We compare to three late fusion variants: the sum of logits (LF-Logits), and the predictions' union (LF-\(\cup\)) or intersection (LF-\(\cap\)). We use the Dice score and the average false positive (FPV) and false negative volumes (FNV) as evaluation metrics, which measure the average volume of over- (FPV) and under-segmented (FNV) regions in mm\({}^{3}\). The results, presented in Table 3, show that the unimodal CT model performs poorly, as lesions are hardly visible in CT scans, which limits its potential as a standalone modality. The unimodal PET model, on the other hand, outperforms all traditional fusion strategies, highlighting the limitations of traditional fusion in effectively combining PET and CT features without extensive fine-tuning. However, the PET model has a higher FPV, since it is challenging to distinguish lesions from highly active organs based solely on PET data. The union late fusion achieves the highest Dice score among the fusion baselines, whereas the intersection late fusion has the lowest FPV due to the limited agreement between PET and CT predictions. All variants of Mirror U-Net consistently outperform the baseline methods on all metrics.

**Comparison to Related Work.** Our main contribution is the combination of multimodal fission and multi-task learning. Therefore, we compare Mirror U-Net to related methods that utilize only fission, only multi-task learning, or neither approach. For the **neither** category, we compare Mirror U-Net to nnUNet [17], a standard benchmark for medical segmentation. Additionally, we include Blackbean [47], the winner of the AutoPET 2022 challenge, to demonstrate state-of-the-art performance on AutoPET [10]. In the **multi-task-only** category, we compare to related approaches using multi-task learning and multimodal **fusion**. SF-Net [27] utilizes a decoder branch to reconstruct an image fusion of T1c and T2 MRI while preserving the structures of both modalities via an L2 and SSIM loss [48]. Andrearczyk _et al_. [1] and DeepMTS [31] utilize feature maps from a U-Net model [37] to train a classifier for survival prediction. Weninger _et al_. [44] utilize reconstruction and tumor classification (enhancing or non-enhancing) to regularize an early fusion U-Net model [37]. Lastly, we conduct an ablation study where we use only CT or only PET data in all branches of **(v1)-(v3)** to show that the presence of both modalities is necessary for Mirror U-Net. For the **fission-only** category, we compare to the single-task model of Valindria _et al_. [42], which uses a similar architecture to Mirror U-Net, but alternates between modalities during each iteration and uses both branches for segmentation. We also consider Mirror U-Net **(v4)** as a fission-only method, as it only has one segmentation task.

Figure 7: Qualitative results from our multi-task experiments. The last column refers to either the \(\theta\)-combination of the CT and PET branches from **(v4)** or to the output of the bottleneck decoder \(D_{BTL}\) in **(v2), (v3)**. The final segmentation for each version is in a red box.
We train and evaluate all models on the same 80/20 training/validation split and use the same data preprocessing as Mirror U-Net, except for nnUNet [17] and Blackbean [47], which require specific preprocessing steps. The results in Table 4 show that Mirror U-Net consistently outperforms all other models, demonstrating state-of-the-art performance on AutoPET [10]. This underscores the power of combining multimodal fission with multi-task learning, highlighting the efficacy of our proposed method.

### Generalizing to Brain Tumor Segmentation

We compare Mirror U-Net **(v2)** to nnUNet [17], SegResNet [35], which won the Brain Tumor Segmentation (BraTS) 2018 challenge, and to Mirror U-Net **(v1)** and **(v2)-rec** as ablations. Our findings, shown in Table 5, demonstrate that Mirror U-Net outperforms all other methods on all 3 tumor classes. Notably, we observe a significant performance drop when the bottleneck task is omitted in Mirror U-Net **(v1)**. Using the default tasks in **(v2)-rec** achieves the second-best results for tumor core and edema segmentation. However, unlike CT, both FLAIR and T1Gd modalities have a strong signal, making segmentation tasks more suitable than the default reconstruction. In Figure 8, we show that the bottleneck layer has learned a coarse segmentation mask of the whole tumor, with some over-segmentation. However, this coarse segmentation mask provides valuable spatial guidance for the edema and core segmentation tasks, resulting in a much finer final segmentation of the whole tumor. These qualitative results, combined with our quantitative findings in Table 5, suggest that Mirror U-Net has the potential to generalize well to other imaging modalities and tasks beyond the specific PET/CT segmentation task studied in this work.

## 5 Conclusion and Discussion

In summary, we propose Mirror U-Net, which combines multimodal fission and multi-task learning for the first time. Our model outperforms traditional fusion methods as well as fission-only and multi-task-only approaches on the AutoPET 2022 Challenge and shows state-of-the-art performance, which demonstrates the power of combining fission with multi-task learning.

\begin{table}
\begin{tabular}{l||c|c|c|c|c|c}
\hline
Method & Dice \(\uparrow\) & FPV \(\downarrow\) & FNV \(\downarrow\) & Tasks & Multimodal Fission & Multi-task \\
\hline
nnUNet [17] & 62.75 & 2.83 & 1.59 & Seg & & \\
Blackbean [47] & 63.15 & 2.55 & 1.76 & Seg & & \\
\hline
SF-Net [27] & 61.21 & 3.44 & 2.95 & Seg + Rec & & ✓ \\
Andrearczyk et al. [1] & 61.45 & 2.98 & 1.89 & Seg + Class & & ✓ \\
DeepMTS [31] & 61.91 & 3.22 & 2.76 & Seg + Class & & ✓ \\
Weninger et al. [44] & 61.22 & 3.98 & 2.82 & Seg + Rec + Class & & ✓ \\
CT-only Mirror U-Net **(v3)** & 12.37 & 28.24 & 50.02 & Seg + Rec + Class & & ✓ \\
PET-only Mirror U-Net **(v3)** & 56.14 & 4.81 & 3.02 & Seg + Rec + Class & & ✓ \\
\hline
Mirror U-Net **(v4)** & 64.24 & 2.93 & 1.99 & Seg & ✓ & \\
Valindria _et al_. [42] & 39.84 & 7.89 & 17.00 & Seg & ✓ & \\
\hline
(Ours) Mirror U-Net **(v3)** & **65.91** & **1.55** & **0.76** & Seg + Rec + Class & ✓ & ✓ \\
\hline
\end{tabular}
\end{table}
Table 4: Comparison to related fission-only and multi-task-only methods and to the current state-of-the-art on the AutoPET dataset [10, 47].

Figure 8: Qualitative results for Mirror U-Net **(v2)** on brain tumor segmentation. The final whole tumor prediction is in a red box and is obtained by the union of the edema and tumor core predictions.
\begin{table}
\begin{tabular}{l||c|c|c}
\hline \hline
 & \multicolumn{3}{c}{Dice \(\uparrow\)} \\
\hline
Method & Whole Tumor & Edema & Tumor Core \\
\hline
Mirror U-Net **(v1)** & 88.37 & 71.91 & 82.22 \\
Mirror U-Net **(v2)-rec** & 91.12 & 77.12 & 84.56 \\
nnUNet [17] & 89.10 & 76.35 & 84.05 \\
SegResNet [35] & 91.29 & 77.01 & 84.22 \\
\hline
(Ours) Mirror U-Net **(v2)** & **92.52** & **78.12** & **85.84** \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Comparison to other methods on MSD BrainTumor [2].

Our results indicate that sharing only the bottleneck layer is optimal, while sharing shallower or deeper layers leads to a performance drop. We also demonstrate the generalizability of Mirror U-Net to brain tumor segmentation from multimodal MRI scans. Our qualitative experiments reveal that selecting appropriate tasks improves performance, as the shared representation learns a spatial guidance that boosts the primary segmentation task. Our model's robustness to hyperparameter changes and its high performance are a promising step toward deploying PET/CT segmentation models in clinical practice.

## 6 Acknowledgements

The present contribution is supported by the Helmholtz Association under the joint research school "HIDSS4Health - Helmholtz Information and Data Science School for Health". This work was performed on the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the Federal Ministry of Education and Research.
2306.02680
BeAts: Bengali Speech Acts Recognition using Multimodal Attention Fusion
Spoken languages often utilise intonation, rhythm, intensity, and structure, to communicate intention, which can be interpreted differently depending on the rhythm of speech of their utterance. These speech acts provide the foundation of communication and are unique in expression to the language. Recent advancements in attention-based models, demonstrating their ability to learn powerful representations from multilingual datasets, have performed well in speech tasks and are ideal to model specific tasks in low resource languages. Here, we develop a novel multimodal approach combining two models, wav2vec2.0 for audio and MarianMT for text translation, by using multimodal attention fusion to predict speech acts in our prepared Bengali speech corpus. We also show that our model BeAts ($\underline{\textbf{Be}}$ngali speech acts recognition using Multimodal $\underline{\textbf{At}}$tention Fu$\underline{\textbf{s}}$ion) significantly outperforms both the unimodal baseline using only speech data and a simpler bimodal fusion using both speech and text data. Project page: https://soumitri2001.github.io/BeAts
Ahana Deb, Sayan Nag, Ayan Mahapatra, Soumitri Chattopadhyay, Aritra Marik, Pijush Kanti Gayen, Shankha Sanyal, Archi Banerjee, Samir Karmakar
2023-06-05T08:12:17Z
http://arxiv.org/abs/2306.02680v1
# BeAts: Bengali Speech Acts Recognition using Multimodal Attention Fusion

###### Abstract

Spoken languages often utilise intonation, rhythm, intensity, and structure, to communicate intention, which can be interpreted differently depending on the rhythm of speech of their utterance. These speech acts provide the foundation of communication and are unique in expression to the language. Recent advancements in attention-based models, demonstrating their ability to learn powerful representations from multilingual datasets, have performed well in speech tasks and are ideal to model specific tasks in low resource languages. Here, we develop a novel multimodal approach combining two models, wav2vec2.0 for audio and MarianMT for text translation, by using multimodal attention fusion to predict speech acts in our prepared Bengali speech corpus. We also show that our model BeAts (**B**engali speech acts recognition using Multimodal **A**ttention Fusion) significantly outperforms both the unimodal baseline using only speech data and a simpler bimodal fusion using both speech and text data. Project page: [https://soumitri2001.github.io/BeAts](https://soumitri2001.github.io/BeAts)

Ahana Deb\({}^{*1}\), Sayan Nag\({}^{*2}\), Ayan Mahapatra\({}^{*1}\), Soumitri Chattopadhyay\({}^{1}\), Aritra Marik\({}^{*1}\), Pijush Kanti Gayen\({}^{1}\), Shankha Sanyal\({}^{1}\), Archi Banerjee\({}^{3}\), Samir Karmakar\({}^{1}\) \({}^{1}\)Jadavpur University, India \({}^{2}\)University of Toronto, Canada \({}^{3}\)IIT Kharagpur, India [email protected], [email protected], [email protected]

**Index Terms:** speech act, multimodal fusion, transformer, low-resource language

## 1 Introduction

According to the Speech Act theory [1], issuance and utterance of words, which happens during the articulation of speech, provide the foundation of communication between the speaker and the listener. In any communication through spoken language, the listener is dependent on their own ability to decode the explicit or implicit intention encoded in the speaker's delivery for the communication to be successful. Every spoken language has its unique set of morphemes, intonations, and sentence structures that adds crucial meaning to the message being conveyed. Unique arrangements of these building blocks of speech can completely change the proper meaning intended by the speaker. Thus, the same utterance of a sentence can be interpreted differently, depending completely on the rhythm of speech of utterance. Prior studies in speech act recognition relied on both linguistic and non-linguistic aspects, like sequential context, physical context, adjacency pairs, etc. [2, 3]. However, one of the critical aspects of speech recognition in multilayered speech act conditions is prosody and intonation, and the modulation of the parameters of this prosody changes the meaning of the intended communication. For instance, a simple sentence like "The sun rises in the east" can be interpreted as a statement, but with a slight change, raising the tone on the last phrase uttered, it can imply that the speaker intends it as a question. Therefore, with the same morpho-syntactic strings, a difference in intonation allows the same sentence to be interpreted as a question or an assertion. In this study, we make an attempt to explore this domain for Bengali (Bangla), a low-resource Indo-Aryan language.
Our contributions are: (i) preparing a corpus of speech acts in Bengali 1, (ii) proposing a novel multimodal architecture (baseline) and investigating its performance in classifying these speech acts in a low-resource setting. Footnote 1: Dataset can be found in Project Page

## 2 Related works

Initial works in the processing and classification of speech data rely mostly on classifying low-level acoustic speech parameters and spectral features using SVMs and shallow NNs [4, 5, 6, 7, 8, 9]. Further, CNNs were used for the emotion recognition task in speech [10, 11]. For the speech act classification task, Kipp, 1998 [12] trained an Elman/Jordan network on German speech. Furthermore, another study [13] used a neural network, a parsing model, and a linguistic rule-based classifier to classify the speech acts of Assertion, WH-Question, Directive, and Yes/No Questions. To utilise the various aspects of human communication to decode the underlying meaning, multimodal structures were introduced. Earlier works include [14], which proposed the use of SVMs for classifying the acoustic features forming a feature subspace, and manually defining emotional keywords for text analysis. Subsequent works on multimodal analysis followed partial reliance on manufactured features, using ConvNet structures to process both textual and visual features [15]. Yue Gu et al., 2017 [16] presented a multimodal structure consisting of two independent CNNs, processing speech and text data. The recent success of transformers in sequence modeling tasks has seen wide application in automatic speech recognition, which completely eliminates the need to create pre-defined features. In this regard, BERT-like [17] models achieved a significant improvement on previous architectures. Following BERT, subsequent architectures emerged for audio data modeling, such as the wav2vec2.0 model [18]. During pre-training, the model learns general representations of speech phonemes and complex features, providing a good starting point to be fine-tuned on other low-resource languages, such as Bengali, with labeled data, for the speech acts classification task. In our approach, we use pretrained wav2vec2.0, and fine-tune the model to classify speech acts in Bengali speech audio. We further fine-tune Marian-NMT [19], a neural machine translation framework pre-trained on multilingual datasets [20], on Bengali text to English text transcription of the speech data. We use a multimodal attention fusion with two separate schemes, firstly with optimal transport kernels [21] and secondly with a multimodality transformer. This facilitates a greater level of interconnectivity and fusion between the two networks processing data from different modalities. Finally, we feed the output to a series of fully connected layers and a decision softmax layer for prediction. In contrast to several recent works on multimodal speech and text processing, we use self-attention heads to learn both speech and text latent representations, and feed them to a separate downstream model. We introduce a novel architecture, BeAts, extend the work of classifying perceptually understood speech acts to our prepared Bengali speech corpus, and compare our architectures with the individual unimodal and bimodal models evaluated on the same dataset.

Figure 1: **BeAts** _performs better than unimodal_ **wav2vec2.0** _across all 3 speech acts classes in both Precision (P) and Recall (R) scores, by learning rich features utilizing multimodal attention fusion module._
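As a concrete illustration of this setup, the two pretrained backbones can be loaded with the Hugging Face `transformers` library. This is our own sketch, not the authors' code, and the checkpoint names (e.g., `Helsinki-NLP/opus-mt-bn-en` for Bengali-to-English) are assumptions, as the paper does not name its exact checkpoints:

```python
import torch
from transformers import (Wav2Vec2FeatureExtractor, Wav2Vec2Model,
                          MarianMTModel, MarianTokenizer)

# wav2vec2.0 speech encoder (pre-trained on LibriSpeech; checkpoint assumed)
speech_encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

# Marian-NMT / Opus-MT Bengali -> English translation model (checkpoint assumed)
text_model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-bn-en")
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-bn-en")

# Latent representations that would later be passed to the fusion block
audio = torch.randn(16000)  # 1 s of audio at 16 kHz
inputs = feature_extractor(audio.numpy(), sampling_rate=16000, return_tensors="pt")
speech_repr = speech_encoder(**inputs).last_hidden_state          # (1, T, 768)

tokens = tokenizer(["<Bengali sentence here>"], return_tensors="pt")
text_repr = text_model.get_encoder()(**tokens).last_hidden_state  # (1, S, 512)
```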
## 3 Experimental Setup

We prepared a dataset of 85 utterances, consisting of requests, questions, and orders, with 25, 35, and 25 utterances in the respective class groups. The duration of the chosen utterances was approximately 1300 ms, with 5 to 7 words each, recorded at a 44.1 kHz sampling rate in a soundproof recording room. For our experiment, 2 male and 2 female (average age = 23 years, SD = 2.3 years) L1 Bengali native speakers were chosen. Prior to the experiment, the participants were instructed to carefully read each of the sentences and comprehend their meanings. The sentences for each individual category were recorded one after another without providing any prior context. A standard amplitude normalization protocol was used after the recording. Since the number of samples in the dataset is small, we resorted to using augmentations (to increase the effective sample size) as shown in Fig 2. Here we refrained from any spectral augmentations of the audio dataset because they might have an adverse effect on the data. Further, we did not augment the Bengali text because the parity between the audio and the text would be lost. However, we carefully augmented the English samples in a way that preserves the semantic meaning upon translation.

## 4 Methodology

### wav2vec2.0

Raw wave input, which is positionally encoded by a convolutional neural network (CNN), GELU activation function [22] and layer normalised, is provided as an input to the wav2vec2.0 [18] transformer architecture as shown in Fig 3. We use wav2vec2.0 which has been pre-trained on the Librispeech and LibriVox datasets using contrastive loss (CL). CL is obtained by computing the cosine similarity between the context representation (obtained at the transformer output) and the quantized representation (produced through choosing discrete entries from codebooks). During fine-tuning of this model on our audio dataset, the final layer is modified to output probability scores of the classes, and a binary cross-entropy loss is calculated for updating gradients.

### Opus-MT

Here, we used the contextual text representation obtained in Bengali to English translation as an additional input for the classification task. This is motivated by the clear structural distinction in how requests are expressed from orders in English, which is not observed in Bengali. For example, the sentence "Ekta nodir naam bolo to" has the same utterance for both question and request in Bengali; however, in English, it is expressed as "Can you tell me the name of a river" when it is intended as a question, and as "Please tell me the name of a river" when intended as a request. Our goal is to utilise this difference in interjections in these two speech act classes in English, to improve our performance on classifying speech acts in Bengali, by learning a latent representation of translation from Bengali to English of this transcribed text. The Opus-MT model [20] utilises Marian-NMT (Neural Machine Translation) [19] as its framework, based on the standard transformer architecture.

Figure 2: Augmentations used in this study.
Here, the latent representation is concatenated with the latent speech representation obtained from the wav2vec2.0 model, and fed to fully connected layers to output a softmax prediction over the individual classes. ### Multimodal Attention Fusion To further better our model's performance, instead of directly concatenating the latent speech representation and text translation representation from the two models, we feed them separately into a multimodality fusion block consisting of two schemes, namely, an Optimal Transport Kernel (OTK) scheme, and a multimodal fusion transformer scheme. In both these schemes, we use the output of the multimodality fusion block and pass it through fully connected layer. Furthermore, for training we introduce a joint loss combining three weighted loss terms (Fig 3): \[L_{total}=\alpha L_{speech}+\beta L_{fused}+\gamma L_{text} \tag{1}\] Ablation studies on these weights are discussed in the following subsection. **Multimodal Fusion Transformer:** Here, we propose a MultiModal Fusion Transformer where we adapt transformers for fusion among the speech and text modalities. We concatenate the features from respective modalities together along with a special [CLS] token as the first feature vector of the aggregated sequence to be used as input to the multimodal transformer. **OTK:** The intuition of choosing OTKs is their robustness which have been clearly demonstrated over usual aggregation methods (mean/max pooling or attention) in recent studies, in long sequences of varying sizes, with long range dependencies. Furthermore a single layer of OTKs have also outperformed multi-layer feed forward NNs and in some cases multi-layer CNNs too [21]. Learning representations from a large set of feature vectors with long range interactions actively benefits from pooling to reduce the number of feature vectors, and here the pooling rule is similar to that of self-attention as it follows inductive bias, aggregating features based on similarity scores. Given a set and a learned reference, alignment of these elements are done using optimal transport protocol, and then using this similarity they are weighted accordingly to be pooled, which produces the output embedding. Cross attention is an intuitive multimodal fusion method in attentive modules where attention masks from Figure 4: The same utterance can be interpreted as Request or Question in Bengali, whereas the expressions are structurally different in English. Figure 3: BeAts takes in as input raw waveform, annotated with the speech act class labels, and transcribed Bengali to English translation text input. The data undergoes positional embedding and is fed as input to the transformer architectures for sequence modeling task. The respective outputs are fed to a multimodality fusion block comprising of two separate schemes (i) an optimal transport kernel (OTK) based attention, and (ii) a multimodal fusion transformer. The output of this fusion block is passed through fully connected layers for classification task. one modality (audio representations) are used to highlight the extracted features in other modalities (text representations). This is different from self-attention where attention masks from one modality (text) are used to highlight it's own features. 
We use a combination of both cross attention (multimodal attentive fusion) and self-attention to facilitate a greater level of interconnectivity and fusion between the two networks processing different data from different modalities, while also preserving the individual representations of the models via the respective self-attention layers. ## 5 Results As a baseline, we initially performed unimodal classification with wav2vec2.0. The Precision (P) and Recall (R) scores are reported in Table 1. Based on the results, it is quite evident that the speech modality alone is not enough to capture all the information. Therefore, given the assumption that Question-Request speech act expression is structurally distinct in English and Bengali, we further fine-tune Opus-MT transformer on our transcribed dataset, and concatenate the two latent representations obtained from the models, and use fully-connected layers to produce a softmax score. This bimodal approach increased the overall performance as compared to the unimodal approach, as can be ascertained from Table 1. The experimental results for BeAts are listed in Fig 1. We extend our bimodal approach by using multimodal attention fusion via OTKs and Multimodal Transformers respectively. BeAts achieved a significant boost in performance across all classes for both the schemes 1 indicating the impact of multimodal fusion in the Bengali speech acts classification task. Furthermore, we did ablation studies on the joint loss function (Eq. 1) used for training the BeAts model. For simplicity, we have considered \(\alpha=\gamma\) and therefore, \(\beta=1-2\alpha=1-2\gamma\). We have considered \(\alpha=\gamma=[0.1,0.15,0.20,0.25,0.30]\) and reported the F1 scores for both OTK and Multimodal Transformer (Xformer) schemes for all the 3 classes (Fig 5). From our experiments, we have observed that the best performance is obtained for a value of 0.15. ## 6 Conclusion In this work we presented a novel multimodal approach for speech act classification in Bengali utterances. We combine evidence from the intonations of the utterances as well as the structural difference in speech act classes in English, to obtain significantly better performance on our Bengal speech corpus, compared to the individual unimodal and bimodal approaches. Here a wav2vec2.0 transformer models multilingual speech data and a Marian-NMT transformer models neural machine translation, and the combination of these two significantly increases the accuracy of speech act classification from just using a single wav2vec2.0 model using only audio data to classify. Furthermore, the performance of multimodal attention fusion is proved to be much better than solely using a fully-connected layer in combining latent space representations. Both wav2vec2.0 based audio transformers and Marian-NMT like text transformers are demonstrated as multilingual models, which after being pre-trained in other widely used (or with availability of rich datasets) languages, can be applied to more diverse and data/resource constrained languages directly, without the loss of nuance. Our novel method of using both learning from audio data and text data to classify speech acts on our prepared Bengali speech corpus demonstrates the usefulness and robustness of human-like multimodal approach to learning and navigating specifically prosodic nuances and broadly any other language specific spoken communication. 
Our future work will include further improvements on this study by updating the dataset and applying novel architectures. We are excited about the future of multimodal language processing, especially in low-resource settings, where a limited amount of labeled data can improve the performance of a classification task, be used to produce more accurate translations, and eliminate specific inherent biases taken in from the data.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
**Model** & \multicolumn{2}{c}{**Request**} & \multicolumn{2}{c}{**Question**} & \multicolumn{2}{c}{**Order**} \\
\cline{2-7}
 & **P** & **R** & **P** & **R** & **P** & **R** \\
\hline
wav2vec2.0 & 0.87 & 0.72 & 0.83 & 0.79 & 0.83 & 0.81 \\
wav2vec2.0 + Opus-MT & 0.91 & 0.73 & 0.86 & 0.80 & 0.88 & 0.82 \\
BeAts (Xformer) & **0.94** & **0.89** & 0.91 & **0.90** & **0.93** & 0.90 \\
BeAts (OTK) & **0.94** & 0.86 & **0.92** & 0.89 & **0.93** & **0.91** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: _Precision (P) and Recall (R) scores of the respective speech act classes._

Figure 5: _Ablation with F1 scores._
2301.09772
SONIA: an immersive customizable virtual reality system for the education and exploration of brain networks
While mastery of neuroanatomy is important for the investigation of the brain, there is an increasing interest in exploring the neural pathways to better understand the roles of neural circuitry in brain functions. To tackle the limitations of traditional 2D-display-based neuronavigation software in intuitively visualizing complex 3D anatomies, several virtual reality (VR) and augmented reality (AR) solutions have been proposed to facilitate neuroanatomical education. However, with the increasing knowledge on brain connectivity and the functioning of the sub-systems, there is still a lack of similar software solutions for the education and exploration of these topics, which demand more elaborate visualization and interaction strategies. To address this gap, we designed the immerSive custOmizable Neuro learnIng plAform (SONIA), a novel user-friendly VR software system with a multi-scale interaction paradigm that allows flexible customization of learning materials. With both quantitative and qualitative evaluations through user studies, the proposed system is shown to have high usability, attractive visual design, and good educational value. As the first immersive system that integrates customizable design and detailed narratives of the brain sub-systems for the education of neuroanatomy and brain connectivity, SONIA showcases new potential directions and provides valuable insights regarding medical learning and exploration in VR.
Owen Hellum, Christopher Steele, Yiming Xiao
2023-01-24T01:04:15Z
http://arxiv.org/abs/2301.09772v2
SONIA: an immersive customizable virtual reality system for the education and exploration of brain networks ###### Abstract While mastery of neuroanatomy is important for the investigation of the brain, there is an increasing interest in exploring the neural pathways to better understand the roles of neural circuitry in brain functions. To tackle the limitations of traditional 2D-display-based neuronavigation software in intuitively visualizing complex 3D anatomies, several virtual reality (VR) and augmented reality (AR) solutions have been proposed to facilitate neuroanatomical education. However, with the increasing knowledge on brain connectivity and the functioning of the sub-systems, there is still a lack of similar software solutions for the education and exploration of these topics, which demand more elaborate visualization and interaction strategies. To address this gap, we designed the immerSive custOmizable Neuro learnIng plAform (SONIA), a novel user-friendly VR software system with a multi-scale interaction paradigm that allows flexible customization of learning materials. With both quantitative and qualitative evaluations through user studies, the proposed system is shown to have high usability, attractive visual design, and good educational value. As the first immersive system that integrates customizable design and detailed narratives of the brain sub-systems for the education of neuroanatomy and brain connectivity, SONIA showcases new potential directions and provides valuable insights regarding medical learning and exploration in VR. **Keywords:** virtual reality, human-computer interaction, neuroanatomy, education, brain connectivity ## 1 Introduction The human brain is a highly complex organ that consists of small anatomical structures that are tightly packed and interconnected through different pathways. To aid spatial understanding and exploration of the brain's 3D anatomy, volumetric data is often sliced into 2D representation due to the limitations of traditional media (e.g., paper and 2D screens). However, this often fails to effectively reflect the complex geometry and spatial arrangement of the anatomical structures [1, 2]. With the advancement of modern bioimaging techniques, the exploration of functional and structural brain connectivity is gaining increasing interest. Intuitive demonstration of brain connectivity along the associated neuroanatomy and the insights gained through various studies will be instrumental to the education and further exploration of neuroscience [3]. So far, a number of augmented reality (AR) and virtual reality (VR) solutions [4, 5] have been proposed to provide more intuitive visualization and understanding of neuroanatomy for educational and surgical planning purposes, with positive responses from user studies. These solutions have employed a range of display devices, including mobile devices (e.g., tablet and smartphone), VR headset, and Hololens. In comparison to the primary focus on the anatomy, only a few AR/VR systems[3; 6; 7; 8; 9] have been proposed to visualize and demonstrate the neural pathways and brain networks. Keiriz _et al_.[6] proposed NeuroCave, a web-based immersive platform for exploring connectomic data. Later, workflows that leverage existing software solutions to visualize brain tractograms and functional connectivities have been demonstrated[3; 7]. More recently, Schloss _et al_.[8] built a VR application to visualize the information pathways of visual and auditory systems for educational purposes. 
While existing solutions tackle the challenges in spatial understanding of the 3D anatomy through visualization, very few experimented with new user interaction paradigms, which can potentially enhance the usability and learning experience[9]. In addition, among the limited efforts[3; 6; 7; 8; 9] in visualizing brain networks, no reports attempted to incorporate descriptive insights along the pathway exploration or learning module design. To meet the emerging need for the education, demonstration, and investigation of brain connectivity and to promote related neuroscientific insights, we proposed the immerSive custOmizable Neuro learnIng plAform (SONIA), which provides interactive visualization and learning modules for both neuroanatomy and the associated structural and functional networks. The new VR system has several novel features. First, inspired by VR-based geological data navigation[10; 11], we experimented with a multi-scale interaction paradigm that places the user at the centre of a large, expanded brain while also manipulating a small brain model to facilitate spatial understanding of brain anatomy. Second, we designed a progression-based strategy with completion metrics and multimedia interactions, including visual guidance and audio voice-over, to provide a stimulating and enriching user experience. Finally, the system's customizable design to incorporate detailed narratives of brain sub-systems opens the door for future projects, allowing many different types of content to be visualized and explored with the proposed software framework. To demonstrate the proposed system, we created an interactive visualization of the research work of Xie _et al_.[12] on the functional system and brain network of anxiety. We conducted quantitative and qualitative user assessments, which indicated that the system exhibits excellent usability, visual design, and educational value. Thus, in conjunction with conventional learning materials composed of 2D graphic representations, our proposed novel, customizable, and intuitive VR system has significant promise and value for the education and exploration of neuroanatomy and neural pathways.

## 2 Methods and Materials

### Virtual brain model

To demonstrate the proposed VR system, we used the anxiety-relevant functional brain network summarized in a recent review by Xie et al.[12]. A summary of the network, which involves six key anatomical structures (amygdala, hippocampus, striatum, medial prefrontal cortex (mPFC), hypothalamus, and the bed nucleus of the stria terminalis (BNST)), is illustrated in Fig 1 of Xie _et al_.[12]. Briefly, Xie _et al_. summarized five subsystems that regulate anxiety, including cognitive control, fear conditioning, uncertainty anticipation, motivation processing, and stress regulation, with each subsystem made up of pathways between two to three anatomical structures. For the system, we constructed the virtual brain model based on the AAL116 brain atlas[13], which is widely used in neuroimaging research. Five of the six key structures involved in the anxiety-related functional systems were extracted from the atlas. As the AAL116 atlas does not contain the BNST, it was segmented manually according to Theiss _et al_.[14] on the MNI152 brain template[15] in the same space as the AAL116 atlas. All atlas structures were converted to .fbx mesh models from the discrete labels in the NIFTI images for use in the 3D VR environment.
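The label-to-mesh conversion can be illustrated with a short sketch. The tooling (nibabel + scikit-image) and file names are our assumptions, since the paper only states that the NIfTI labels were converted to .fbx meshes; the final .fbx export would be done with a separate tool such as Blender:

```python
import nibabel as nib
import numpy as np
from skimage import measure

atlas_img = nib.load("AAL116_atlas.nii.gz")   # hypothetical file name
label_data = atlas_img.get_fdata()

def label_to_mesh(label_id):
    """Extract a surface mesh for one discrete atlas label."""
    mask = (label_data == label_id).astype(np.uint8)
    # Marching cubes over the binary mask at the 0.5 iso-surface
    verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5)
    # Map vertices from voxel indices into the template's world (MNI) space
    verts_world = nib.affines.apply_affine(atlas_img.affine, verts)
    return verts_world, faces

# e.g., one of the six key structures (illustrative label id, not from the paper)
verts, faces = label_to_mesh(41)
```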
For the virtual brain models, we only highlight these six structures while keeping the rest semi-transparent to provide additional visual references that further enhance the spatial understanding of the anatomy and the richness of the final rendering. Finally, both opaque and semi-transparent lines were added between the six key structures and between the rest of the structures from the AAL116 atlas, respectively, to indicate functional and anatomical connectivities between them.

### Virtual reality environment construction

For the virtual environment, we explored a multi-scale visualization paradigm with two brain models of different sizes to facilitate interaction, visualization, and spatial understanding of the neuroanatomy. Multi-scale VR interaction was recently suggested for the exploration and navigation of geographic data to facilitate the understanding of spatial arrangement[10, 11]. For anatomical navigation, we expect that this approach will also benefit the spatial understanding of the neuroanatomy, as well as provide an enriching, fun, and immersive experience for the user. In the VR environment, the user is positioned on a "mission control" platform suspended at the center of a magnified brain model at the scale of a large house, which is out of reach for the user, but still allows clear recognition of the spatial arrangement of different anatomical structures (hereafter referred to as the large brain). At the same time, a smaller forward-facing brain model that mirrors the large brain is placed in front of the user to allow interaction with the learning modules and the large brain model (the small brain). Along with the small brain, three floating information panels are also presented to the user to display the schematic diagram of anxiety-related functional subsystems, descriptions for all brain connectivities, and the percentage of completion of the learning content for each functional sub-system. The schematic of the designed VR environment is demonstrated in **Fig. 1** and the details of each visual element of the "mission control" platform are illustrated in **Fig. 2**. Different from the magnified brain, the small brain displays the geometries of the six key anatomical structures with different shades of red on the left hemisphere, while the right hemisphere depicts a graph representation such that each region is a color-coded sphere (located at the regional centroid) and their connectivities are denoted by connecting lines. The nodes in this graph representation offer clear visualization of the connectivity relationships between the anatomies and make it easier to select each anatomical structure, allowing the small brain model to serve as the main medium for interacting with the rest of the visual elements in the virtual environment. Corresponding to the right hemisphere of the small brain model, lines that connect the centroids of the key structures are also shown on the same side of the magnified brain.

Figure 1: Overview of the virtual reality environment set-up. **a**. The spatial relationship between the "large brain" environment and the "mission control" platform; **b**. The layout of the user-interface, which consists of three information panels and a small brain model that allows the user to interact with the "large brain" environment and complete the learning modules.

Our VR system was created using the Unity game engine (version 2021.3.2f1) with the SteamVR plugin.
We employed the HTC VIVE Pro Eye VR HMD headset and a Razer Blade 15 laptop (Intel Core i7 CPU, NVIDIA GeForce RTX 2070 GPU, and 16 GB RAM) to run the system. No lagging or frame freezing was observed for our system, and it ran consistently at an average of 45-50 frames per second. Only one VR controller is required to perform target selection and confirmation for the VR system.

### Overview of the system workflow

Before understanding the subsystems and brain networks that regulate a neural process, it is important to first elucidate the spatial arrangement of each neuroanatomy that is involved. Therefore, we designed the workflow for the user in the SONIA system in two general phases (anatomical learning and connectivity learning), both utilizing a single VR controller in the dominant hand for pointing and selection. In the first phase, the user is guided to learn about the key brain structures involved in the target neural network. Upon completion, the user is guided to the second phase to explore the connections between the structures and the roles of different subsystems in a neural process, until all subsystems have been visited. At each step of the workflow, we have designed appropriate user interaction strategies that fully utilize the visual elements in the environment to provide a stimulating experience. In both stages of the system, the user does not need to select the structures and connections in any predefined order, thus giving them the opportunity to select items and knowledge points that most interest them, or were perhaps closely related to the structures that they had just visited. By granting participants this freedom, the users are given a chance to exercise limited agency in their own educational experiences and learn at their own pace. We will further elaborate on the user interaction strategies and system workflows for the two phases in the following sections.

### Anatomical learning phase

During anatomical learning, the user is tasked to navigate all key brain structures to learn their spatial arrangements and roles in the neurological system. To accomplish this, a short virtual stick extends from the controller with a small sphere at the tip, which is used as the default pointing and selection tool. The workflow of this phase is illustrated in **Fig. 3**. When the user touches the target object with the virtual stick, the hit object becomes highlighted with a white halo, which, when confirmed by pressing the controller's trigger button, will remain to highlight the structure. This user interaction strategy is only applicable on the right hemisphere of the small brain model, where the key anatomical structures are represented as interconnected nodes for selection to reduce visual clutter. Once a node selection is confirmed, the corresponding anatomical structure in its full geometric representation in the left hemisphere of the small brain becomes highlighted with a white halo. Syncing with the interaction upon the small brain, the same white halo that indicates selection and hover-over is also shown on the corresponding structures on the right side of the large brain. This signals the user of the link between the two brain models of different scales. Two information panels are employed as the key UI elements in anatomical learning. First, the learning material panel is positioned above the small brain to display the name and the key knowledge points for the selected brain anatomy.
Second, the connectivity diagram **(Fig. 2c)**, demonstrating the relationship between the anatomical structures, is placed to the left of the small brain. Although the panel is empty at the beginning, once a structure is visited, the corresponding item will become visible in the diagram, until all structures have been selected at least once. The connections between structures will also be revealed as the item list populates.

### Connectivity learning phase

Upon completing the anatomical learning module, the user will proceed to the connectivity learning phase, where all three information panels illustrated in **Fig. 2c-2e** are employed together with the small and large brain models to fulfill an interactive learning experience. Note that among the three panels, only the one that displays the learning materials has selectable menus for direct user interaction, and its setup is different from that of the anatomical learning. While we maintain the user interaction strategy in Section 2.4 for picking anatomical structures using the small brain, a 'laser pointer' now extends from the controller, which is used to select menu items in the learning material panel, as it is not within arm's reach. To start investigating a connection, the user needs to first pick a brain structure from the small brain model. Then, the name of this structure and those in the network that it passes information towards will be listed in the learning material panel. For each menu item that represents a unidirectional connection, small dots with the color-coding that signifies the membership of a subsystem are marked on it as well. Then, a further selection of an item in the list will trigger the display of the key knowledge points regarding the description of the connectivity between the two structures within the brain network under study. At the same time, this connection under investigation and its directionality will be annotated in both the large and small brain models using colour-coding strategies corresponding to the subsystem(s) that it belongs to, as well as in the connectivity diagram (**Fig. 2c**) using a white color. As the user gradually explores all connections in the connectivity diagram, the progress panel will track the completion of the learning materials for each subsystem with bar graphs showing the percentages of the connections that have been viewed. To better demonstrate the workflow, an example of exploring the connection from the amygdala to the mPFC using the SONIA system is shown in **Fig. 4**.

Figure 3: Workflow for anatomical learning phase. **a**. the user points the controller line into a structure and presses the trigger to select it, **b**. the structure becomes highlighted in the small brain, **c**. the structure becomes highlighted in the large brain (background), **d**. the display panel showing the name and description of the selected structure.

Figure 2: Detailed demonstration for the user-interface of SONIA. **a**. Composition of the small brain model that allows interaction and visualization of brain anatomy and connectivity; **b**. Inside view of the "mission control" platform; **c**. Schematic diagram of anxiety-related functional systems and brain structures; **d**. Information panel for displaying learning materials regarding the key brain regions shown in **c**; **e**. Information panel that notifies learning material progress. Note that across **c**, **d**, and **e**, the same color-coding strategies are used consistently to code for different processes in the response to anxiety.
As mentioned previously, colour coding is used extensively throughout the experience, such as on all the information panels, both to denote belonging to a particular structural subsystem and to show which structures and/or connections are selected. As a part of the customizable design, visually distinct colours are automatically generated by SONIA for each of the system's subsystems. The use of colour coding helps the user establish an immediate association between the connections and their subsystems. Note that the white color is reserved for our software system to indicate that a structure and/or connection has been selected.

### Customizable system design

To enable flexibility and adaptability for new learning materials, our proposed SONIA system was designed in such a way as to allow alternative datasets that define the anatomical models and the functional relationships between them to be loaded. Specifically, the following data are necessary for the system to function: a collection of 3D model files (e.g., .fbx, .obj, etc.) for the anatomical structures, a .csv spreadsheet containing the names and descriptions of the structures, and a .csv spreadsheet with the connectivity matrix between structures. Additional files are optional but can further enhance the learning experience. They include .csv files that list the subsystem names and descriptions, the membership of structures and connections to the subsystems, as well as extra 3D model files for peripheral anatomical structures and their connectivity matrix to help enrich the visual content if needed. Besides these customizable data for alternative learning modules, users are also welcome to tweak the visualization styles (e.g., colors and mesh textures) in the Unity editor. As the Unity scenes, user interaction strategies, and UI displays are programmed, they will remain unchanged in customization. Furthermore, to achieve the optimal visualization of the information panel for displaying connectivity diagrams, the user will be encouraged to design the layout that best suits their target applications and population. By placing these required files in a specific folder and updating the editor script variables to point to the correct locations and change any additional settings, different learning contents can be generated for either subject-specific brain models or existing brain atlases (e.g., AAL116). With even a simple set of meshes, a connectivity map, and structure descriptions, an interactable experience can be produced. By leveraging the AAL116 atlas, our demonstrated case study took full advantage of such a setup. As no frame rate loss or system errors were observed with full rendering of all brain parcellations of the atlas at different levels of transparency, we believe that the system is highly scalable for complex neuroanatomical models.

### User study design and system validation

The usability of the SONIA system was assessed with both quantitative and qualitative evaluations in user studies. Upon informed consent, we recruited 11 subjects (age=31.1\(\pm\)6.0, 4 female, 7 male) to participate in the study. All participants were either somewhat or very familiar with neuroanatomy and/or the concept of brain connectivity, and represent the main target users of the system. Among them, only one did not have VR experience before the study. Subjects spent 20\(\sim\)30 minutes following the directions of a tutorial while they interacted with the environment.
The tutorial consisted of a text-to-speech (TTS) voice-over and accompanying texts that described the interactions and responses of the system. This tutorial guided participants through each interaction scheme and visual change, and explained all the subsystems within the loaded data. Subjects with glasses were allowed to wear them while participating, as the HTC VIVE Pro Eye headset is compatible with them. No participants experienced motion sickness.

Figure 4: Workflow of the connectivity learning phase using the connection from the amygdala to the mPFC as an example (yellow circles indicate important events). **a**. Information panel showing all available brain structures that the amygdala is connected to when it is selected by the user. **b**. When the mPFC is selected, the description of the connection in anxiety processing is shown. **c**. the connection becomes highlighted in the small (and the large) brain. **d**. connectivity diagram, with the currently selected connection highlighted in white. **e**. subsystem completion diagram with percentages of completion for each subsystem for anxiety processing. Please note that all the arrows indicating the directions of the connectivity are color-coded by the corresponding subsystems.

Upon completing the VR experience, we asked each participant to complete a three-part questionnaire, consisting of both quantitative and qualitative assessments. The first part of the questionnaire contained the System Usability Scale (SUS) evaluation[16], which is widely used to validate the usability of software systems. It comprises 10 questions on a scale of 1-5, evaluating complexity, user-friendliness, and confidence when using a software system. Among the 10 questions, each odd-numbered question is scored as x-1, and each even-numbered question is scored as 5-x, where x is the question's resulting value. The scores for each participant are then summed and multiplied by 2.5, resulting in a maximum SUS score of 100. A software system that receives a SUS score above 68 indicates good usability. The second part of the questionnaire contained an additional feedback form pertaining to the visual design, interaction design, and learning experience of the system (rated on scales from 1-5 for each sub-question, with higher values indicating more positive responses). These complement the SUS results to further identify the effectiveness of our system's more specific design decisions. First, for visual design, three questions were asked to establish the perceived complexity of the visual elements, the pleasantness of the graphic styles, and the usefulness of the colour-coding schemes for sub-system representation. They assess whether we have appropriately balanced the accuracy of data representation in the UI and the artistic colours and placements of visual elements. The averaged score of these questions was also computed to provide an overall assessment of the visual design. Second, an interaction design question asked participants how well the interface communicated the process of navigating through the anatomical structures and neural connectivity. Third, the learning experience section had two questions to determine whether the multi-scale navigation strategy with the small and large brains enhances spatial understanding of the anatomy, and the overall learning yield of the system.
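The SUS scoring rule above, together with the one-sample t-test against the usability benchmark of 68 used in the analysis, can be sketched in a few lines; the response values below are made-up illustrative data, not the study's:

```python
import numpy as np
from scipy import stats

def sus_score(responses):
    """SUS: odd items contribute (x - 1), even items (5 - x); sum * 2.5 -> 0..100."""
    r = np.asarray(responses, dtype=float)
    odd_mask = (np.arange(1, 11) % 2) == 1           # items 1, 3, 5, 7, 9
    contributions = np.where(odd_mask, r - 1.0, 5.0 - r)
    return contributions.sum() * 2.5

# Made-up answers (1-5 per item) for two participants, for illustration only.
participants = [
    [5, 2, 4, 1, 5, 2, 4, 2, 5, 1],
    [4, 3, 4, 2, 4, 1, 5, 2, 4, 2],
]
scores = np.array([sus_score(p) for p in participants])
t_stat, p_value = stats.ttest_1samp(scores, popmean=68.0)
print(scores, t_stat, p_value)
```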
For the total SUS score, a one-sample t-test was used to assess whether the results were significantly different from 68, and for each sub-score in Parts 1 and 2, we compared the results to a neutral response (score=3), also with one-sample t-tests. Here, a p-value\(<\)0.05 indicates a statistically significant difference. Finally, in Part 3 of the questionnaire, we included open-ended questions that allowed the participants to offer additional suggestions on how to improve the system further and to justify some of the ratings given in the previous sections as they saw fit. The responses were reviewed carefully to help interpret the quantitative assessments and identify potential directions for future improvements. ## 3 Results ### Quantitative evaluation The overall SUS score from the user study was 79.8\(\pm\)11.6, significantly greater than the minimum of 68 for a usable system (p=0.007). Every sub-score, except system complexity, indicated positive user interaction (p\(<\)0.05) with the SONIA system in terms of overall ease of use, intuitiveness, and consistency. Interestingly, the opinions of the participants were divided in terms of the complexity of the system, resulting in a rating of 2.3\(\pm\)1.2 (a lower score indicates lower perceived system complexity), though this value was not statistically different from a neutral response (p=0.07). In Part 2 of the questionnaire, the feedback for the visual, interaction, and learning experience was generally positive. The overall visual score was 3.9\(\pm\)0.5 (p=0.0001); for the three associated sub-scores, the results for the complexity of the visual elements, the pleasantness of the graphic styles, and the usefulness of the colour-coding schemes for sub-system representation were 3.5\(\pm\)1.1 (p=0.14), 4.2\(\pm\)0.8 (p=0.0004), and 4.0\(\pm\)0.9 (p=0.004), respectively. With a score of 3.6\(\pm\)0.9, the interaction design was rated as effective for navigating the anatomies and connections (p=0.046). Finally, though participants were mostly neutral (3.1\(\pm\)1.1, p=0.80) regarding the multi-scale strategy for enhancing anatomical understanding, they felt strongly that they had learnt a lot (3.9\(\pm\)0.7, p=0.002) while using the system. ### Qualitative evaluation Besides the quantitative responses, the feedback form also contained a qualitative section allowing participants to remark freely upon general impressions, opinions, and improvements regarding the system. Participants commented on both positive and negative aspects of their experience. In particular, users found the utility and function of the system to be helpful and novel, standing out as a good way to represent the data. In terms of the user interface, participants often found it to contain too much visual information. Due to the large volume of text and the UI placements, they felt mildly overwhelmed and found it difficult to absorb some of the informative material. ## 4 Discussion Using the functional network of anxiety regulation[12] as a case study, the proposed SONIA VR system is the first to integrate descriptive insights along with neural pathway exploration and learning module design. Besides visualization of the 3D anatomy, we explored novel interaction and user-interface designs intended to benefit usability and user experience. As knowledge of neuroanatomy is a prerequisite to understanding neural pathways, we designed the workflow of the system to encompass the phases of anatomical and connectivity learning. 
In each phase, following the popular practice of player agency (a game-design practice that shifts control of the environment to the player), the user is free to select the anatomies and the associated connections at their own will and pace to trigger changes in the UI elements of the virtual environment. Together with the information panel that displays the progress of completion, these components are designed to enhance the motivation for and ease of using the system. Both strategies have shown positive impacts in the design of games and educational content[17, 18]. The system relies on this built-in reward (the visual demonstration of progress) rather than point- or trophy-based rewards, as these less tangible markers of completion have been shown to have less impact on feelings of reward and on the amount learned[19]. The positive feedback from our user evaluation regarding the willingness to use the system frequently partly reflects the benefit of these user-interaction designs. Besides leveraging virtual reality's superior 3D visualization[1], the ease of use of our system is key to facilitating the understanding of complex neural pathways in the brain. To achieve this, we implemented a number of visualization and interaction strategies. First, the small brain model is used as the central device to interact with the rest of the environment and UI elements; second, both node-based and full anatomical representations are used to reduce clutter, facilitate spatial understanding, and simplify object selection; finally, systematic colour coding is employed to signify the association with the subsystems of the brain network. Their positive impacts are confirmed by the SUS assessments, particularly in the sub-scores for intuitiveness and ease of use, as well as in the visualization and interaction experience evaluations. In terms of the complexity of both the software design and the visual elements, the participants indicated slightly favourable (but not statistically significant) opinions. The divided opinions may be attributable to the participants' varied levels of experience with VR systems and neuroscience. This leaves room to improve the system further in future iterations. Potential solutions could involve further simplification of the UI and options to expand descriptions on demand rather than having them constantly present; these changes would reduce the amount of textual information presented to the user at any given time. Another interesting observation is that the participants reported being neutral on the beneficial role of the multi-scale representation of the brain for anatomical understanding. This may be due to the choice of scales and the limited freedom of active movement in the large brain model, which is in contrast to previous works in geographic data exploration, where multiscale approaches have been shown to be beneficial[10; 11]. However, as this representation forms the overall visual style of the system, creating a visually appealing and enriching environment, the overall visual style was highly appreciated by the participants. In addition, the user study confirmed a highly positive learning experience for the participants. In the freeform feedback, most participants (9/11) listed a small number of remarks on both the usefulness of and difficulties that they encountered with the system, as well as suggestions for improvements. 
Among them, the complexity of the user interface and the large volume of knowledge points were mentioned by several participants (5/11), but none reported being unable to learn or feeling too overwhelmed. Although the multi-scale strategy for spatial learning was rated neutrally, a subset of participants (4/11) reported liking it and praised the system as a learning tool. The more detailed responses show that although improvements are still needed to better present the complex information, which is indeed challenging as suggested by previous works[6; 20], the overall system was generally well received. Due to time constraints, we showcased the proposed system with only a single example of brain pathways as a proof of concept. However, based on the results of the user studies, it has demonstrated good usability, a positive user experience, and educational value. With the customizable design, which supports easy adaptation of alternative learning content, we will continue to evaluate the performance and impact of the software platform with new materials and network models. As brain connectivity is a more advanced topic than neuroanatomy and the main focus of SONIA, our target user group (and the recruited participants in the user study) comprises those who are at least somewhat familiar with neuroanatomy. Along with other suggestions for improvement from the user study, further strategies for the anatomical learning phase will be developed and tested for lay users in the future. ## 5 Conclusion We have built a novel virtual-reality system, SONIA, with a customizable design to help create an immersive learning experience for understanding and demonstrating functional brain systems and networks. Unlike prior art that primarily focuses on simple anatomical visualization, our proposed system integrates a more immersive, user-friendly, and enriching environment with detailed narratives of the brain sub-systems and effective user-interaction strategies, which is validated through user studies. With this prototype, the first system of its kind, we demonstrate new potential directions for medical learning and exploration in VR. ### Data availability The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request. The code of the SONIA system is made publicly available at [https://github.com/HealthX-Lab/SONIA](https://github.com/HealthX-Lab/SONIA). ### Author contributions O.H. was responsible for the study concept, image processing, software implementation, data analysis and interpretation, and writing of the manuscript. C.S. was responsible for the study concept, consultation on the content design of the learning materials, and editing of the manuscript. Y.X. was responsible for the study concept, image processing, and data analysis, provided the overall study oversight, and edited the manuscript. All authors reviewed the manuscript and approved the submission. ### Competing interests The authors declare no competing interests. ### Acknowledgements The authors thank Dr. Najmeh Khalili-Mahani for insightful discussions on brain networks and VR user interaction design.
2308.05443
Occupancy Grid Map to Pose Graph-based Map: Robust BIM-based 2D-LiDAR Localization for Lifelong Indoor Navigation in Changing and Dynamic Environments
Several studies rely on the de facto standard Adaptive Monte Carlo Localization (AMCL) method to localize a robot in an Occupancy Grid Map (OGM) extracted from a building information model (BIM model). However, most of these studies assume that the BIM model precisely represents the real world, which is rarely true. Discrepancies between the reference BIM model and the real world (Scan-BIM deviations) are not only due to furniture or clutter but also the usual as-planned and as-built deviations that exist with any model created in the design phase. These deviations affect the accuracy of AMCL drastically. This paper proposes an open-source method to generate appropriate Pose Graph-based maps from BIM models for robust 2D-LiDAR localization in changing and dynamic environments. First, 2D OGMs are automatically generated from complex BIM models. These OGMs only represent structural elements allowing indoor autonomous robot navigation. Then, an efficient technique converts these 2D OGMs into Pose Graph-based maps enabling more accurate robot pose tracking. Finally, we leverage the different map representations for accurate, robust localization with a combination of state-of-the-art algorithms. Moreover, we provide a quantitative comparison of various state-of-the-art localization algorithms in three simulated scenarios with varying levels of Scan-BIM deviations and dynamic agents. More precisely, we compare two Particle Filter (PF) algorithms: AMCL and General Monte Carlo Localization (GMCL); and two Graph-based Localization (GBL) methods: Google's Cartographer and SLAM Toolbox, solving the global localization and pose tracking problems. The numerous experiments demonstrate that the proposed method contributes to a robust localization with an as-designed BIM model or a sparse OGM in changing and dynamic environments, outperforming the conventional AMCL in accuracy and robustness.
Miguel Arturo Vega Torres, Alexander Braun, André Borrmann
2023-08-10T08:59:47Z
http://arxiv.org/abs/2308.05443v1
Occupancy Grid Map to Pose Graph-based Map: Robust BIM-based 2D-LiDAR Localization for Lifelong Indoor Navigation in Changing and Dynamic Environments ###### Abstract Several studies rely on the de facto standard Adaptive Monte Carlo Localization (AMCL) method to localize a robot in an Occupancy Grid Map (OGM) extracted from a building information model (BIM model). However, most of these studies assume that the BIM model precisely represents the real world, which is rarely true. Discrepancies between the reference BIM model and the real world (Scan-BIM deviations) are not only due to the presence of furniture or clutter but also due to the usual as-planned and as-built deviations that exist with any model created in the design phase. These Scan-BIM deviations may affect the accuracy of AMCL drastically. This paper proposes an open-source method to generate appropriate Pose Graph-based maps from BIM models for robust 2D-LiDAR localization in changing and dynamic environments. First, 2D OGMs are automatically generated from complex BIM models. These OGMs only represent structural elements, allowing indoor autonomous robot navigation. Then, an efficient technique converts these 2D OGMs into Pose Graph-based maps, enabling more accurate robot pose tracking. Finally, we leverage the different map representations for accurate, robust localization with a combination of state-of-the-art algorithms. Moreover, we provide a quantitative comparison of various state-of-the-art localization algorithms in three simulated scenarios with varying levels of Scan-BIM deviations and dynamic agents. More precisely, we compare two Particle Filter (PF) algorithms: AMCL and General Monte Carlo Localization (GMCL); and two Graph-based Localization (GBL) methods: Google's Cartographer and SLAM Toolbox, solving the global localization and pose tracking problems. We found that in a real office environment (under a medium level of Scan-BIM deviations) the translational RMSE of AMCL increases by a factor of four (from \(8.5\,\mathrm{cm}\) in the empty environment to \(33.7\,\mathrm{cm}\) in the real one). On the contrary, pose graph-based algorithms demonstrate their superiority over particle filter (PF) algorithms, achieving an RMSE of \(7.2\,\mathrm{cm}\) even in the real environment. The numerous experiments demonstrate that the proposed method contributes to robust localization with an as-designed BIM model or a sparse OGM in changing and dynamic environments, outperforming the conventional AMCL in accuracy and robustness. ## 1 Introduction An accurate localization system is crucial for successful autonomous mobile robot deployment in indoor GPS-denied environments. The indoor localization problem has been approached with several techniques. While some of them rely on the known positions of landmarks, such as AprilTags or textual cues, others depend on sensors that have to be installed strategically in the building, such as beacons or WiFi access points. However, in most cases, the exact location of specific landmarks is not known in advance. On top of that, having additional sensors increases the cost of the navigation stack. A BIM model, which is available for many modern buildings, can be used as a reference map for LiDAR localization. 
Moreover, the additional semantic information of the model can be exploited to create advanced automated robotic tasks, like object inspection [17] or painting [16], which at the same time depend on an accurate localization system. The main issue of using a BIM model or a floor plan as a reference map for 2D-LiDAR localization is the presence of Scan-BIM deviations. These deviations can be caused by furniture or clutter not present in the model, as-planned and as-built deviations, and dynamic or "quasi-static" changes in the environment. In an effort to address this challenge, we contribute a system that creates OGMs from BIM models and allows their automatic transformation into pose graph-based maps. These maps are leveraged for quick, memory-efficient, and accurate localization in indoor GPS-denied environments, enabling safer autonomous navigation. More specifically, the following are our contributions: * A method to extract OGMs out of complex multi-story BIM models to allow path planning and autonomous navigation of robots in indoor GPS-denied environments. * An efficient open-source method 1 to convert these 2D OGMs into pose graph-based maps for accurate 2D-LiDAR localization and navigation. Footnote 1: Available at: [https://github.com/MigVega/Ogm2Pgbm](https://github.com/MigVega/Ogm2Pgbm) * An extensive quantitative comparison of various state-of-the-art 2D-LiDAR localization algorithms in three carefully designed simulated scenarios with different levels of Scan-BIM deviations and with and without dynamic agents. The remainder of this paper is organized as follows. Section 2 introduces the problem formulation of LiDAR localization as well as the main principles behind the particle filter-based and the graph-based localization strategies. Section 3 describes previous work done on BIM-based LiDAR localization. Section 4 introduces our method to generate OGMs from BIM models and pose graph-based maps from OGMs, as well as the proposed employment of these maps for robust localization. Section 5 presents the experimental settings, followed by the results and analysis in Section 6. Finally, Section 7 concludes our work. ## 2 Theoretical background Before presenting current state-of-the-art methods, a brief introduction to the theoretical basis behind the two main types of localization algorithms used in this research is provided. ### _Localization problem_ In this paper, we address the robot pose tracking and global localization problems, i.e., with and without an approximate initial pose, respectively, given a 3D BIM model as a prior map that omits considerable information about the real environment, and assuming that the robot employs a 2D-LiDAR sensor. In the 2D problem, the pose of the robot at time \(t\) is defined as position and orientation \(\mathbf{x}_{t}=[x,y,\theta]^{\top}\) in the coordinate system of the map. We aim to estimate the most likely robot pose \(\mathbf{x}_{t}^{*}\) given the measurements \(\mathbf{z}_{t}\) and the map \(\mathbf{m}\). Formally, the goal is to compute: \[\mathbf{x}_{t}^{*}=\operatorname*{arg\,max}_{\mathbf{x}}p\;(\mathbf{x}_{t}\mid\mathbf{z}_{t}, \mathbf{m}) \tag{1}\] Two widely used methods that aim to calculate this estimate are the Particle Filter (PF) and the Graph-based Localization (GBL) algorithms. ### _Particle Filter algorithms_ PF algorithms, also called Monte Carlo Localization (MCL) methods, are probabilistic approaches that represent the pose estimate with a set of normalized weighted particles. 
Each particle \(\mathbf{s}_{t}^{i}=\left\langle\mathbf{x}_{t}^{i},\omega_{t}^{i}\right\rangle\) consists of a pose \(\mathbf{x}_{t}^{i}\) and a weight \(\omega_{t}^{i}\). Initially, a set \(\mathcal{M}\) of particles is sampled from a Gaussian distribution around the possible locations of the robot. Subsequently, three steps are repeated iteratively in the algorithm: motion update, importance weighting, and particle resampling. For a more detailed explanation of every step, the reader is referred to Thrun et al. (2005). In our comparison, we implemented Adaptive Monte Carlo Localization (AMCL) (Pfaff et al., 2006) and General Monte Carlo Localization (GMCL) (Alshikh Khalil and Hatem, 2021) to be tested under different levels of Scan-BIM deviations. ### _Graph-based algorithms_ Graph-based, also called optimization-based, localization methods use previously acquired pose-graph data for pose estimation. This pose graph contains landmarks of the environment (which can be represented as submaps) associated with nodes (which are the poses from which the landmarks were observed). Additionally, the nodes are bound to each other with spatial constraints. In a sliding-window manner, the method considers not only the most recent measurement but a set of them to compute the current pose. Under the assumption that the measurements are normally distributed and i.i.d., it is possible to represent Eq. 1 as a weighted least squares problem. This problem is commonly solved iteratively using the Levenberg-Marquardt algorithm. In this paper, we compare Cartographer (Hess et al., 2016) and SLAM Toolbox (Macenski and Jambrecic, 2021) as GBL algorithms. While particle filter algorithms are easier to implement and can represent non-Gaussian distributions, graph-based localization algorithms, besides being deterministic, can handle delayed measurements and maintain a recent history of poses. A more exhaustive qualitative comparison is given by Wilbers et al. (2019). ## 3 Related research A BIM model with 3D geometric information can be used as a prior map to accurately localize robots in indoor GPS-denied environments and allow autonomous navigation. This section overviews state-of-the-art methods that use prior building information, i.e., BIM models or floor plans, to find the correct robot position and orientation. Follini et al. (2020) show that the transformation matrix between the reference system of the robot and the map extracted from the BIM model can be retrieved by applying the standard AMCL algorithm. The same algorithm was used by Prieto et al. (2020), Karimi et al. (2021), Kim et al. (2021), and Kim and Peavy (2022) to localize a wheeled robot in an OGM generated from the BIM model. The main difference between these methods lies in how they create the OGM from the BIM model. While Follini et al. took the vertices of elements that intersect a horizontal plane and used the Open CASCADE viewer to generate an OGM in \(pgm\) format with the corresponding resolution and map origin information, Prieto et al. use the geometry of the spaces in the Industry Foundation Classes (IFC) file and the location and size of each one of the openings. Karimi et al. (2020) developed a Building Information Robotic System (BIRS), enabling the generation and semantic transfer of topological and metric maps from a BIM model to the Robot Operating System (ROS). The tool was further developed in (Karimi et al., 2021) with an optimal path planner, integrating critical components for construction assessment. Kim et al. 
(2021) implemented a method to convert an IFC file into a ROS-compliant Simulation Definition Format (SDF) world file suitable for robot task planning. They evaluated their approach for the purpose of indoor wall painting. Later, to incorporate dynamic objects and with the aim of door inspection, Kim and Peavy (2022) proposed a technique to convert an IFC model into a Unified Robot Description Format (URDF) building world. Once they have the URDF model, they use the PgmMap creator (Yang, 2018) to create an OGM out of it. Hendrikx et al. (2021) proposed an approach that, instead of using an OGM, uses a robot-specific world model representation extracted from an IFC file for 2D-LiDAR localization. In their factor graph-based localization approach, the system queries semantic objects in its surroundings and creates data associations between them and the laser measurements. While they demonstrated that the method can track the pose of the robot, it was not evaluated quantitatively. Instead of using a BIM model, Boniardi et al. (2017) use a CAD-based architectural floor plan for 2D-LiDAR localization. In their localization system, they implement Generalized ICP (GICP) for scan matching together with a pose graph Simultaneous Localization and Mapping (SLAM) system. Later, they proposed an improved pipeline for long-term localization in dynamic environments (Boniardi et al., 2019). Zimmerman et al. (2022) use an OGM obtained from a sliced Terrestrial Laser Scanner (TLS) point cloud together with human-readable localized cues to assist localization. With their text detection-based localization technique, they can detect known room numbers and thus robustly handle symmetric environments with structural changes. While several approaches have emerged aiming to create OGMs from BIM models, none of them deal with complex non-convex models with multiple stories and slanted floors. Moreover, most of the proposed techniques are based on the strong assumption that the BIM model represents the actual current state of the building very precisely, ignoring the presence of possible Scan-BIM deviations due to clutter, furniture, as-planned vs. as-built differences, changes due to long-term operation, or the presence of dynamic agents. ## 4 Methodology Our method can be divided into three main steps: **Step 1:** Creation of an OGM from an IFC file employing IfcConvert and OpenCV. **Step 2:** Automatic generation of a Pose Graph-based map out of an OGM with a combination of image processing, a coverage path planner, and ray casting. **Step 3:** Robust localization using a particle filter algorithm and a graph-based localization system. ### OGM generation from an IFC file To create suitable 2D OGMs for robot localization and navigation from complex multi-story IFC models, the IfcConvert tool of IfcOpenShell (Krijnen, 2015) and image processing techniques are used. IfcConvert allows the creation of a 2D map in Scalable Vector Graphics (SVG) format with the desired elements of the IFC model that cross a plane at the desired height. In our case, non-permanent entities such as spaces, windows, and doors are excluded from the resulting 2D OGM by ignoring the corresponding entity names. This exclusion is essential to filter only structural information about the building, enabling autonomous navigation between the rooms to be explored. 
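As a rough illustration of this extraction step, the following minimal sketch drives IfcConvert from Python. The `--exclude=entities` usage follows the IfcOpenShell documentation, but the exact flag spelling and the set of excluded entity names are assumptions that should be checked against the installed IfcConvert version; this is not the authors' exact pipeline.

```python
import subprocess

# Non-permanent IFC entities to drop, as described above (assumed list).
NON_PERMANENT = ["IfcSpace", "IfcDoor", "IfcWindow", "IfcOpeningElement"]

def ifc_to_structural_svg(ifc_path: str, svg_path: str) -> None:
    """Export a 2D SVG section of the model, keeping only permanent structure."""
    cmd = ["IfcConvert", ifc_path, svg_path, "--exclude=entities", *NON_PERMANENT]
    subprocess.run(cmd, check=True)

ifc_to_structural_svg("building.ifc", "building_structural.svg")
```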
Besides having the permanent structures in the OGM, and with the aim of global localization and subsequent correct pose graph map generation, it is crucial to differentiate between outdoor (unknown) and indoor (navigable) spaces in the OGM. This distinction can be automated by creating a second OGM with all the entities in the IFC file (i.e., with doors, windows, and spaces). The final separation into outdoor (gray), indoor (white), and obstacle (black) regions is based on the contours in the SVG image. OpenCV allows the processing of the contours depending on their hierarchy, i.e., depending on whether they are inside (child contours) or outside (parent contours) another contour. The resulting file is finally converted to _pgm_ format, which, together with its properties (the resolution and origin) in a _.yaml_ file, can then be loaded into the robotic system as prior environment information, allowing robot localization, path planning, and autonomous navigation. A similar procedure can be followed for multi-story buildings. In the particular case of non-overlapping stories, the different OGMs can be merged into a single one if the relative position between them is known. To maintain this spatial relationship while obtaining the OGMs, auxiliary reference elements with a height equal to the building's maximum height can be included in its surroundings. With these additional elements, all the OGMs will have the same dimensions, allowing them to be merged. Creating 2D OGMs with IfcConvert is relatively straightforward when the desired section is horizontal (parallel to the XY plane). However, if the model has a ramp or a slightly slanted floor, the model must be rotated before the occupancy map is generated. Conveniently, IfcConvert also allows the rotation of the model by the desired angle given a quaternion calculated from the rotation vector. ### _OGM2PGBM: OGM to Pose Graph-based map conversion_ The automatic generation of data suitable for GBL methods from BIM models implies the simulation of sequential laser data in the entire navigable space of the model with the corresponding odometry data. For this aim, the previously generated 2D OGMs are used. Applying the skeleton method proposed by Lee et al. (1994) enables the interconnection of all the rooms in a smooth trajectory. Subsequently, a Wavefront Coverage Path Planner (Zelinsky et al., 1993) is applied over the navigable area inside a dilated version of the skeleton to find the waypoints over which the laser will be simulated. Then, using a ray casting algorithm and without a real-time simulation engine (such as Gazebo), laser sensor data and odometry are simulated following the waypoints found in the previous step (a minimal ray-casting sketch appears later, in the experiments section). Finally, a trajectory builder merges these sensor data, creating an accurate pose graph-based map, serialized as a _.pbstream_ file for Cartographer or as a _.posegraph_ file for SLAM Toolbox. Graph optimization is not required since every scan's position is known accurately from the simulation. This pipeline allows the automatic, efficient generation of pose graph-based maps (with submaps, nodes, and constraints) from a 2D OGM. Figure 1: Proposed IFC to Pose Graph-based map pipeline for robust 2D-LiDAR localization. In the first step, an OGM is created from multi-story non-convex BIM models, which can have slanted floors; this map is suitable for path planning and autonomous robot navigation. In the second step, a Pose Graph-based map is generated from the OGM. Finally, in the third step, these maps allow fast global localization and robust pose tracking in changing and dynamic environments. As our OGM2PGBM workflow does not require Gazebo for 
data simulation, it is faster and more portable than a Gazebo-based pipeline, allowing its execution in an isolated manner. Moreover, since the technique does not consider the complete 3D model but only a 2D OGM, it is very efficient. In addition, it can be used with any given OGM which, besides being generated from a BIM model (with the method presented in the previous section), can be generated from a floor plan or a previously scanned map. ### Robust Localization Once the different needed map representations (OGM and pose graph-based maps) are generated from a BIM model, they can be used for robust localization in changing environments. We propose to take advantage of the Self-Adaptive PF of GMCL to spread particles only in the Similar Energy Regions (SERs) and solve the global localization problem efficiently. As shown later (in Section 6), PF algorithms, being able to represent non-Gaussian distributions, can solve global localization faster than graph-based algorithms. Once an estimated pose is found with a covariance smaller than 0.05, the nodes of GMCL are stopped, and a GBL algorithm can be started. For example, to track the pose of the robot accurately, Cartographer can be activated with the _start_traj_ service at the time when GMCL converges, using the _.pbstream_ map generated with the method proposed in Section 4.2. Similarly, SLAM Toolbox can be started with an initial pose, however with a prior _.posegraph_ map. ## 5 Experiments This section presents the scenarios designed to evaluate the various techniques, along with details of the implementation and evaluation. ### Evaluation Scenarios As illustrated in Figure 2, three different scenarios were conceived to evaluate the different methods. Each scenario increases the level of clutter present in the environment and, therefore, decreases the level of overlap that a perception sensor would have with permanent building objects (such as walls, columns, floors, and ceilings). The latter are the elements that are usually present in a BIM model. Additionally, to increase the simulation's level of realism, we added animated walking human models (also called dynamic agents) moving in the environment. In scenarios 1 and 2, five humans walk from each room to the closest exit of that room. In scenario 3 ("Disaster"), a total of six people move faster, trying to escape through the main door. Once the agents reach their goal, they start again, moving from their initial planned position in an infinite loop. ### Gazebo Simulation To simulate the experimental data, we use Gazebo. Once the IFC model is converted to Collada format using IfcConvert, it can be imported into Gazebo. When importing complex IFC models into Gazebo, it is essential to ensure that every element has its own geometric representation. One way to avoid instantiating multiple objects from the same data is using the export capabilities of Blender. For trustworthy data simulation, we distinguish between collision and visual models. Since LiDAR sensors cannot perceive glass materials, windows and glass doors were removed from the collision models. ### Robot Simulation The robot used for the simulated experiments was the holonomic Robotnik SUMMIT XL equipped with a 2D-LiDAR Hokuyo UST-10LX. It was commanded with stable linear and angular velocities of approximately 1 m/s and 1 deg/s, respectively. 
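As noted in Section 4.2, OGM2PGBM simulates the laser scans by ray casting directly in the occupancy grid, with no simulation engine involved. The following is a minimal, unoptimized sketch of that idea, not the actual implementation from the repository; the cell encoding, the 360° beam layout (the Hokuyo UST-10LX actually covers 270°), and the map origin at cell (0, 0) are simplifying assumptions.

```python
import numpy as np

def simulate_scan(ogm, pose, n_beams=360, max_range=10.0, resolution=0.05):
    """Ray-cast a 2D laser scan in an occupancy grid (0 = free, 1 = occupied)."""
    x, y, theta = pose                     # metres / radians, map frame
    angles = theta + np.linspace(-np.pi, np.pi, n_beams, endpoint=False)
    step = resolution / 2.0                # sub-cell stepping along each ray
    ranges = np.full(n_beams, max_range)
    for i, a in enumerate(angles):
        dx, dy = np.cos(a), np.sin(a)
        r = step
        while r < max_range:
            col = int((x + r * dx) / resolution)
            row = int((y + r * dy) / resolution)
            if not (0 <= row < ogm.shape[0] and 0 <= col < ogm.shape[1]):
                break                      # ray left the map: no return
            if ogm[row, col] == 1:         # hit an obstacle cell
                ranges[i] = r
                break
            r += step
    return ranges

# Toy usage: a 4 m x 4 m room whose walls are one cell thick.
grid = np.zeros((80, 80), dtype=np.uint8)
grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 1
print(simulate_scan(grid, pose=(2.0, 2.0, 0.0))[:4])
```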
Using the URDF model of this robot, it is possible to leverage the different packages of the ROS Navigation Stack and ROS visualization (RViz). One of these packages is NAVFN, which assumes a circular robot and allows planning a path from a start point to an endpoint in a grid based on a Costmap. A Costmap is an inflated version of the given 2D OGM with a specified inflation radius, created to prevent the robot from colliding with obstacles while navigating through the environment. To speed up the usage of the OGM for robot simulation, the Gazebo plug-in PgmMap creator (Yang, 2018) was also employed, allowing the creation of maps with a known origin position. In practice, this step is not required since the alignment between the real world and the map can be retrieved as a result of the localization system. It is worth mentioning that using navigation goals instead of single movement commands is very convenient for data simulation since it significantly reduces the probability of collisions, which would make the entire sequence useless. Figure 2: Evaluation Scenarios. (a) Empty Room: represents a typical BIM model, without furniture; (b) Reality: represents a standard office environment and is based on real-world TLS data; (c) Disaster: an environment after a simulated disaster with large Scan-BIM deviations. Following this approach, 2D-LiDAR, Inertial Measurement Unit (IMU), wheel odometry, and ground-truth odometry data were simulated in the six scenarios (three models, each with and without dynamic agents). The resulting trajectories of the simulation are presented in Figure 3. ### Implementation details Due to the stochastic nature of PF algorithms (AMCL and GMCL), and similarly to (Alshikh Khalil and Hatem, 2021), these methods were executed 30 times on each sequence, and the average values were calculated. Similarly to (Zimmerman et al., 2022), we consider that a method converges when its pose estimate is within a distance of 0.5 m from the ground-truth pose. If convergence does not happen within the first 95 % of the sequence, it is considered a failure. Unfortunately, SLAM Toolbox could not be evaluated for global localization since it does not provide this service. The lifelong mapping mode of SLAM Toolbox was also tested for completeness; however, it yielded unwanted results, with poor performance. ## 6 Results and Analysis The libraries provided by Grupp (2017) and Zhang and Scaramuzza (2018) were used to calculate the error metrics of the various methods on the different sequences. ### Pose tracking In Table 1 we present the translational and rotational Root Mean Square Error (RMSE) of each method on each sequence, evaluated on the pose tracking problem against the ground truth from the simulation. Figure 4 presents a summary of the statistics of the translational errors for all the methods on all sequences. Overall, it can be seen that GBL methods always perform better than PF algorithms in the pose tracking problem. Among the tested PF algorithms, GMCL performs better than AMCL most of the time. Only in scenarios 2-2, 3-1, and 3-2 does AMCL achieve a lower RMSE. In scenarios 2-2 and 3-2, GMCL has a very high translational RMSE. This shows that the additional filters of GMCL make the method more sensitive to dynamic agents in changing environments. Regarding the GBL algorithms, SLAM Toolbox achieves the best performance in scenarios 1 and 3. 
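For reference, the translational RMSE reported in Table 1 is the root mean square of the per-pose Euclidean position errors between time-aligned estimated and ground-truth trajectories. The evaluation itself was performed with the libraries cited above (Grupp, 2017; Zhang and Scaramuzza, 2018); the sketch below only restates the metric's definition on made-up data.

```python
import numpy as np

def translational_rmse(est_xy, gt_xy):
    """RMSE of the 2D position error between time-aligned trajectories."""
    err = np.linalg.norm(np.asarray(est_xy) - np.asarray(gt_xy), axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# Made-up example: the estimate deviates a few centimetres from ground truth.
gt = np.array([[0.00, 0.0], [1.00, 0.00], [2.0, 0.1]])
est = np.array([[0.05, 0.0], [1.00, 0.08], [2.1, 0.1]])
print(translational_rmse(est, gt))  # about 0.08 m
```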
As expected, scenario 3 (with the most significant Scan-BIM deviations) was the most challenging scenario for all the methods. On top of that, in this scenario, the pure localization mode of Cartographer always found wrong data associations, resulting in wrong relative constraints that caused localization failure. Therefore, Cartographer could not be quantitatively evaluated in this environment, even when an approximate initial pose was provided. Nonetheless, Cartographer achieved an impressive performance in scenario 2 (the real-world scenario), accomplishing a translational RMSE four times lower than SLAM Toolbox in the environment without dynamic agents (7.19 cm and 28.69 cm, respectively) and almost six times lower in the scenario with dynamic agents (4.11 cm and 23.57 cm, respectively). ### Global localization The performance of the different methods regarding convergence time is presented in Figure 5. GMCL, thanks to its Self-Adaptive PF, performs the best on the global localization problem. Only in scenario 1-2 does Cartographer show a slight superiority. Meanwhile, AMCL always takes at least twice as long as the other methods to converge to a good pose. In addition, it does not converge in scenario 2-1. Due to the high level of Scan-BIM deviations, none of the implemented methods converges while trying to solve the global localization problem in scenario 3. Figure 3: Sequences of data with the respective OGMs. (a) and (b) correspond to an empty environment (i.e., without furniture) with and without dynamic agents, respectively; (c) and (d) are similar but in a scenario with furniture as in the real world; (e) and (f) are in the disaster environment. To better visualize the different levels of Scan-BIM deviations, the OGM of the empty environment is overlaid on the other OGMs in blue. The change in colour along the trajectory represents the initial and end positions of the robot, with dark blue being the start and red the endpoint. ## 7 Conclusions In this paper, besides contributing methods to create OGMs from BIM models and to transform them into pose graph-based maps for robust localization, we provide an extensive comparison of diverse state-of-the-art 2D-LiDAR localization algorithms under three different levels of Scan-BIM deviations, with and without dynamic agents. We found that GBL algorithms outperform PF algorithms in the pose tracking problem. In the case of a map with very low (or negligible) Scan-BIM deviations, SLAM Toolbox achieves the best performance. On the contrary, if the map has a medium level of Scan-BIM deviations (for example, due to large pieces of furniture or as-planned and as-built differences), as in a real-world office building, Cartographer is the best-performing method. However, in a case where the level of changes in the environment is too high (such as in scenario 3), SLAM Toolbox, albeit with a relatively high error, would be the best option among the tested localization algorithms. The fact that PF algorithms only consider the most recent observation to update the belief of the current pose gives them a certain robustness in dealing with high-ambiguity scenarios (such as scenario 3). However, it also causes high inaccuracies when the level of Scan-BIM deviations is medium (such as in scenario 2). On the other hand, GBL algorithms, taking advantage of a recent history of observations, can handle this real-world scenario better and can track the robot's pose more accurately. 
Nonetheless, GMCL performs better than the GBL algorithms on the global localization problem. In general, we recommend using a GBL algorithm for accurate BIM-based (or floor plan-based) 2D-LiDAR pose tracking in real-world environments and GMCL for global localization. To facilitate the correct implementation of Graph-based Localization algorithms, we contribute an open-source method to efficiently create accurate pose graph-based maps from any OGM. In addition, we provide a method to create OGMs from complex multi-story BIM models, which can additionally be leveraged for path planning and autonomous navigation. State-of-the-art SLAM techniques have switched from using particle filters to graph-based optimization approaches; based on our experiments, we conclude that this shift will be analogously advantageous for most localization systems. ## 8 Future Work In light of the experimental results and motivated by related research, we believe that the following are promising future research directions: Considering not only 2D-LiDAR information but also 3D-LiDAR sensor data is a promising direction to reliably handle large levels of Scan-BIM deviations, as partially shown by Blum et al. (2021) and Moura et al. (2021). Fusing multiple sensor modalities, such as IMUs and RGB-D cameras, together with LiDAR sensors would increase the robustness of a localization method in dealing with fast angular movements and degraded scenarios, as demonstrated by Lin and Zhang (2021) and Xu et al. (2022). \begin{table} \begin{tabular}{l||c c||c c||c c||c c||c c||c c} \hline \hline & \multicolumn{2}{c||}{1-1} & \multicolumn{2}{c||}{1-2} & \multicolumn{2}{c||}{2-1} & \multicolumn{2}{c||}{2-2} & \multicolumn{2}{c||}{3-1} & \multicolumn{2}{c}{3-2} \\ Method & trans. & rot. & trans. & rot. & trans. & rot. & trans. & rot. & trans. & rot. & trans. & rot. \\ \hline AMCL & 8.49 & 0.44 & 8.47 & 0.50 & 33.68 & 2.71 & 37.44 & 3.26 & 63.04 & 3.29 & 65.12 & 3.37 \\ GMCL & 8.27 & 0.24 & 7.86 & 0.24 & 24.27 & 2.57 & 52.38 & 4.37 & 66.60 & 3.70 & 126.91 & 4.46 \\ SLAM Toolbox & **3.69** & **0.17** & **3.95** & **0.17** & 28.69 & 1.50 & 23.57 & 1.50 & **37.84** & **1.34** & **37.96** & **1.70** \\ Cartographer & 4.01 & 0.24 & 3.96 & 0.25 & **7.19** & **0.15** & **4.11** & **0.21** & - & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the quantitative evaluation results for each sequence: translational RMSE in centimeters and rotational RMSE in degrees, respectively. Figure 4: Statistics of the pose error estimates in translation for each method on the six evaluation scenarios. Figure 5: Convergence time in seconds for the various methods in the different scenarios. To achieve greater robustness, the extraction of detailed information from a BIM model, such as the positions of room number labels, doors, and windows, can help solve the global localization problem even in symmetric environments, as done by Zimmerman et al. (2022) and Haque et al. (2020). ## 9 Acknowledgements The presented research was conducted in the frame of the project "Intelligent Toolkit for Reconnaissance and assessmEnt in Perilous Incidents" (INTREPID) funded by the EU's research and innovation funding programme Horizon 2020 under Grant agreement ID: 883345.
2308.13221
A Steady Loop Current Does Not Radiate
According to mainstream electrodynamics, a steady loop current does not radiate. Edward, Kenyon, and Lemon took the approximated electric field formula (the magnetic one is trivial) for a moving point charge and showed that the electric field of a steady loop current could be partitioned into a static field falling with the square of distance and an integral of an exact differential (also called total differential, full differential, or perfect differential), thus no radiation is emitted. Inspired by their work, we do the same to the electric and magnetic fields without approximation.
Shengchao Alfred Li
2023-08-25T07:38:12Z
http://arxiv.org/abs/2308.13221v1
# A Steady Loop Current Does Not Radiate ###### Abstract According to mainstream electrodynamics, a steady loop current does not radiate. Edward, Kenyon, and Lemon took the approximated electric field formula (the magnetic one is trivial) for a moving point charge and showed that the electric field of a steady loop current could be partitioned into a static field falling with the square of distance and an integral of an exact differential (also called total differential, full differential, or perfect differential), thus no radiation is emitted. Inspired by their work, we do the same to the electric and magnetic fields without approximation. ## 1 Introduction It can be derived from mainstream electrodynamics that a steady loop current does not radiate [1][2][3][4]. From the classical continuous charge's point of view, this is easily inferred from the fact that both the scalar and the vector potentials are constant [3](p444). It is also reached from the moving point charge's point of view [1][2][4]. The authors started from assumptions about the shape of the loop and the speed of the charges [1] (p370, assuming a circular loop and constant charge speed) [2] (problem 14.24, assuming constant charge speed) [4] (assuming a circular loop and constant charge speed) and concluded that when the charge density became larger and larger, the radiation became smaller and smaller and approached zero. The moving point charge's point of view of mainstream electrodynamics includes the Lienard-Wiechert potentials and the fields1, with the remarkable property that they all involve electromagnetic retardation [1][10][2][3]. Footnote 1: Except for a few [5][6][7], most textbook authors refrained from naming the fields the Lienard-Wiechert fields, or naming them at all. According to O’Rahilly [8][9](a republication of [8], p218 and p223), several pioneers worked on the fields and the force, including Lienard, Heaviside, and Schwarzschild. O’Rahilly suggested calling the force formula the Lienard-Schwarzschild force formula, and the approximation up to \(1/c^{2}\) the Schwarzschild-Ritz approximation. To avoid the difficulties in dealing with retarded values, authors use the Taylor expansion to come up with approximated fields depending only on the current-time status of the moving point charge, up to \(1/c^{2}\)[8][9](a republication of [8], presenting Ritz's work) or even up to \(1/c^{5}\)[11][12]. The former is obtainable from Darwin's Lagrangian [13](section 65) [2](section 16.2). In [14]2, Edward, Kenyon, and Lemon used the current-time approximation up to \(1/c^{2}\) and partitioned the electric field of a steady loop current into a static field falling with the square of distance and a loop integral of an exact differential (the exact differential was obtained earlier by Darwin [7]). Footnote 2: Their challenged [15] experimental results about an electric field proportional to the square of the current of the steady loop current are irrelevant here. Their work combined the two points of view. On the one hand, continuous charge is assumed. On the other hand, the electric field formula for a moving point charge is used. The advantage is that there is no need for the argument about increasing charge density and diminishing radiation. 
Inspired by their work, we start from the same combined point of view and set out to make a similar partitioning of the fields without using the Taylor expansion or approximation, and discover the expected exact differential. This task can be judged to be promising because the appearance of Eqs. (5) and (6) in section 2 suggests an exchange of differentiation and integration. ## 2 Previous Results: Potentials, Fields, Approximations and an Exact Differential The Lienard-Wiechert scalar potential \(V\) and vector potential \(\mathbf{A}\) of a moving point charge are [10]3(Eqs. (6-2-3) and (6-2-5)), Footnote 3: Note that we make the necessary conversions from the Gaussian units used in [10](p122). \[V= \frac{q}{4\pi\epsilon_{0}r^{\prime}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)}, \tag{1}\] \[\mathbf{A}= \frac{q\mathbf{v}^{\prime}}{4\pi\epsilon_{0}c^{2}r^{\prime}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)}=\frac{\mathbf{v}^{\prime}}{c^{2}}V, \tag{2}\] where \(\epsilon_{0}\) is the permittivity of the vacuum, \(q\) is the amount of charge, \(\mathbf{v}^{\prime}\) is the retarded velocity of the charge, \(\mathbf{n}^{\prime}\) is the unit direction of the vector pointing from the retarded charge position to the observer position, \(r^{\prime}\) is the length of this vector, and \(c\) is the speed of light. In this paper we place the observer position at the origin \(O\), thus we have \(r^{\prime}=|\mathbf{0}-\mathbf{r}^{\prime}|\) and [10](Eq. (6-2-4)) \(\mathbf{n}^{\prime}=(\mathbf{0}-\mathbf{r}^{\prime})/|\mathbf{0}-\mathbf{r}^{\prime}|=(\mathbf{0}-\mathbf{r}^{\prime})/r^{\prime}\), where \(\mathbf{r}^{\prime}\) is the vector pointing to the retarded charge position. We notice that there is nothing special about \(O\) and our analyses hold for any observer position. If the charge in a steady loop current is treated as being continuous, and the fields of an infinitesimal volume of charge are treated as those of a point charge, the total retarded potentials of the steady loop current are \[V_{L}= \oint\frac{dq}{4\pi\epsilon_{0}r^{\prime}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)}, \tag{3}\] \[\mathbf{A}_{L}= \oint\frac{\mathbf{v}^{\prime}dq}{4\pi\epsilon_{0}c^{2}r^{\prime}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)}, \tag{4}\] where \(V_{L}\) and \(\mathbf{A}_{L}\) denote the scalar potential and the vector potential of the loop current as seen at \(O\), and \(dq\) is an infinitesimal volume of charge to be integrated over. The electric field \(\mathbf{E}\) and the magnetic field \(\mathbf{B}\) at the observer position \(O\) can be found by differentiating the Lienard-Wiechert potentials as follows [1] (Eqs. (14-1) and (7-40)) [10](Eqs. (3-5-4) and (3-5-5)), \[{\bf E}= -\nabla V-\frac{\partial{\bf A}}{\partial t}, \tag{5}\] \[{\bf B}= \nabla\times{\bf A}. \tag{6}\] Alternatively, the fields can be found in a few other ways [12][1][6][16]. The fields are [1](Eqs. (20-13) and (20-14)) [10](Eqs. 
(6-3-9) and (6-3-11)), \[{\bf E}= \frac{1}{4\pi\epsilon_{0}}\frac{q}{c^{2}r^{\prime}(1-{\bf n}^{\prime}\cdot\frac{{\bf v}^{\prime}}{c})^{3}}\left(-{\bf a}^{\prime}+{\bf n}^{\prime}({\bf n}^{\prime}\cdot{\bf a}^{\prime})+{\bf n}^{\prime}\times\left({\bf a}^{\prime}\times\frac{{\bf v}^{\prime}}{c}\right)\right)+\] \[\frac{1}{4\pi\epsilon_{0}}\frac{q}{r^{\prime 2}(1-{\bf n}^{\prime}\cdot\frac{{\bf v}^{\prime}}{c})^{3}}\left(1-\frac{v^{\prime 2}}{c^{2}}\right)\left({\bf n}^{\prime}-\frac{{\bf v}^{\prime}}{c}\right), \tag{7}\] \[{\bf B}= \frac{1}{4\pi\epsilon_{0}}\frac{q}{c^{3}r^{\prime}(1-{\bf n}^{\prime}\cdot\frac{{\bf v}^{\prime}}{c})^{3}}{\bf n}^{\prime}\times\left(-{\bf a}^{\prime}+{\bf n}^{\prime}\times\left({\bf a}^{\prime}\times\frac{{\bf v}^{\prime}}{c}\right)\right)+\] \[\frac{1}{4\pi\epsilon_{0}}\frac{q}{cr^{\prime 2}(1-{\bf n}^{\prime}\cdot\frac{{\bf v}^{\prime}}{c})^{3}}\left(1-\frac{v^{\prime 2}}{c^{2}}\right)\left(\frac{{\bf v}^{\prime}}{c}\times{\bf n}^{\prime}\right), \tag{8}\] which satisfy [1](Eq. (20-15)) \({\bf B}={\bf n}^{\prime}/c\times{\bf E}\). The current-time approximations up to \(1/c^{2}\) are [9](Eq. (7.15a)) [14](Eq. (4)) \[{\bf E}\approx \frac{q}{4\pi\epsilon_{0}}\left(\frac{{\bf n}}{r^{2}}+\frac{1}{2}\frac{v^{2}{\bf n}}{r^{2}c^{2}}-\frac{3}{2}\frac{({\bf n}\cdot{\bf v})^{2}{\bf n}}{r^{2}c^{2}}-\frac{{\bf a}}{2rc^{2}}-\frac{({\bf n}\cdot{\bf a}){\bf n}}{2rc^{2}}\right), \tag{9}\] \[{\bf B}\approx \frac{q}{4\pi\epsilon_{0}}\frac{{\bf v}\times{\bf n}}{r^{2}c^{2}}, \tag{10}\] where \({\bf n}\), \({\bf v}\) and \(r\) are current-time values, in contrast to their retarded-value counterparts \({\bf n}^{\prime}\), \({\bf v}^{\prime}\) and \(r^{\prime}\). The approximate expression for \({\bf B}\) is simple because \({\bf B}\) carries \(1/c^{2}\) by itself, so higher-order terms do not survive the approximation. Edward, Kenyon, and Lemon [14] partitioned the terms inside the parentheses of Eq. (9) into a static term falling with \(1/r^{2}\) and an exact-differential term. The latter is as follows [14](Eq. (7)4) [7](presenting Darwin's work, Eqs. (15) and (16)), Footnote 4: Eq. (7) in [14] is missing a square symbol. \[-\frac{1}{2c^{2}}\frac{d}{dt}\left(\frac{({\bf n}\cdot{\bf v}){\bf n}}{r}+\frac{{\bf v}}{r}\right)=\left(\frac{1}{2}\frac{v^{2}{\bf n}}{r^{2}c^{2}}-\frac{3}{2}\frac{({\bf n}\cdot{\bf v})^{2}{\bf n}}{r^{2}c^{2}}-\frac{{\bf a}}{2rc^{2}}-\frac{({\bf n}\cdot{\bf a}){\bf n}}{2rc^{2}}\right), \tag{11}\] and the total electric field incurred by the steady loop current is \(\mathbf{E}_{L}\approx\oint(\cdots)dq\), where \((\cdots)\) represents the right side of Eq. (9) with \(q\) replaced by \(1\), and \(dq\) is the charge in an infinitesimal segment of the loop current. When the current is steady, we know that \(dq=Idt\), where \(dt\) is the time interval during which the charge travels the infinitesimal segment. Note that here it is assumed that all current-bearing charge in the infinitesimal segment travels with the same velocity5\(\mathbf{v}\). Thus the total electric field \(\mathbf{E}_{L}\) is, Footnote 5: Microscopically, for a current in a conductive wire, it corresponds to the average velocity of the large number of point charges passing by that position at the same time. \[\mathbf{E}_{L} \approx\oint(\cdots)dq=I\oint(\cdots)dt\] \[= \frac{I}{4\pi\epsilon_{0}}\oint\frac{\mathbf{n}}{r^{2}}dt+\frac{I}{4\pi\epsilon_{0}}\oint d\left(-\frac{1}{2c^{2}}\frac{(\mathbf{n}\cdot\mathbf{v})\mathbf{n}}{r}-\frac{1}{2c^{2}}\frac{\mathbf{v}}{r}\right)\] (use Eqs. 
(9) and (11)) \[= \frac{I}{4\pi\epsilon_{0}}\oint\frac{\mathbf{n}}{r^{2}}dt=\frac{1}{4\pi\epsilon_{0}}\oint\frac{\mathbf{n}}{r^{2}}dq. \tag{12}\] This is a loop integral of the electric field by the current-time Coulomb Law. We also have \[\mathbf{B}_{L}\approx\oint\frac{1}{4\pi\epsilon_{0}c^{2}}\frac{\mathbf{v}\times\mathbf{n}}{r^{2}}dq=\oint\frac{I}{4\pi\epsilon_{0}c^{2}}\frac{\mathbf{v}\times\mathbf{n}}{r^{2}}dt. \tag{13}\] This is a loop integral of the magnetic field by the current-time Biot-Savart Law. ## 3 The Infinitesimals in a Steady Loop Current Following Edward, Kenyon and Lemon[14], we take the continuous charge's point of view. To limit the scope of this paper, we shall focus on the moving charge constituting the current and leave out static charge such as that in a neutral wire. We make the standard assumption that charge passing by a given position on the loop has a given velocity that does not change over time. In order to trace the charge in the steady loop current, imagine that we stamp the charge in the loop current for the purpose of identification. Imagine that a stamping machine sits at a checkpoint \(C\), as depicted6 on the left panel of Figure 1. Footnote 6: Note that the loop is not necessarily in a plane. Assume the charge takes time \(T\) to make one round trip along the loop. Thus \(IT\) is the total amount of charge that circulates in the loop and makes up the current. Suppose that, as seen at time \(\tau-T\) at \(O\), the stamping machine at \(C\) stamps the charge just crossing \(C\) at the retarded time (past time) \(\tau-T-r^{\prime}/c\) with a time stamp \(s=T\). After that, the machine stamps charge crossing \(C\) with decreasing time stamps at a steady pace. For example, if the charge crosses \(C\) a time interval \(x\) later than the earliest stamping, it is stamped with a time stamp \(s=T-x\). This procedure continues (depicted on the middle panel in Figure 1) until the earliest stamped charge crosses \(C\) again at time \(\tau-r^{\prime}/c\) at \(C\) (seen at time \(\tau\) at \(O\)). At that time the procedure stops, and we know that all charge constituting the current is stamped, with time stamps between \(0\) and \(T\), as depicted on the right panel in Figure 1, which we shall focus on. Figure 1: An imagined procedure to stamp charge in a steady loop current. We reproduce the right panel of Figure 1 in Figure 2. We pay special attention to the following set of stamps: \(s_{N}=T\), \(s_{N-1}=((N-1)/N)T\), \(\cdots\), \(s_{1}=(1/N)T\), and \(s_{0}=0\), where \(N\) is an arbitrarily chosen large number (we notice that the first stamped charge has the double stamps \(s=T\) and \(s=0\)). At time \(\tau\), as seen by \(O\), charge with this set of stamps is at positions7\(\mathbf{r}^{\prime}_{N}\), \(\mathbf{r}^{\prime}_{N-1}\), \(\cdots\), \(\mathbf{r}^{\prime}_{1}\), and \(\mathbf{r}^{\prime}_{0}\) (note that \(\mathbf{r}^{\prime}_{0}=\mathbf{r}^{\prime}_{N}\)), respectively. We denote the corresponding distances from \(O\) by \(r^{\prime}_{N}\), \(r^{\prime}_{N-1}\), \(\cdots\), \(r^{\prime}_{1}\), and \(r^{\prime}_{0}\), respectively. 
Footnote 7: For convenience, if there is no confusion, we use subscript \({}_{k}\) for \({}_{s_{k}}\), such as \(\mathbf{r}_{k}=\mathbf{r}_{s_{k}}\). When \(N\) is large and the loop is smooth, we approximate \(r^{\prime}\), \(\mathbf{v}^{\prime}\) and \(\mathbf{n}^{\prime}\) of charge between stamp \(s_{k}\) at \(\tau-r^{\prime}_{k}/c\) and stamp \(s_{k+1}\) at \(\tau-r^{\prime}_{k+1}/c\), which contributes to the fields at \(O\) at time \(\tau\), with the \(r^{\prime}\), \(\mathbf{v}^{\prime}\) and \(\mathbf{n}^{\prime}\) of stamp \(s_{k+1}\). An important fact is that, according to our stamping procedure, at any given time as seen at position \(O\), the amount of charge between two such succeeding specific stamps is exactly \(IT/N\). _Remarkably_, the amount of charge contributing to the fields at \(O\) at time \(\tau\) between stamp \(s_{k}\) at retarded time \(\tau-r^{\prime}_{k}/c\) and stamp \(s_{k+1}\) at retarded time \(\tau-r^{\prime}_{k+1}/c\) is \(IT/N\). This is also guaranteed by the fact that no charge moves faster than the speed of light \(c\), so no charge contributes less than or more than once. Let \(\Delta q=IT/N=I\Delta s\), where \(\Delta s=T/N\). If we let \(N\rightarrow\infty\), thus \(\Delta s\to 0\), we write \(\Delta s\) as \(ds\) and write down the integrals \[V_{L}=I\int_{s=0}^{s=T}\frac{ds}{4\pi\epsilon_{0}r^{\prime}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)} \tag{14}\] \[\mathbf{A}_{L}=I\int_{s=0}^{s=T}\frac{\mathbf{v}^{\prime}ds}{4\pi\epsilon_{0}r^{\prime}c^{2}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)}, \tag{15}\] where \(r^{\prime}\), \(\mathbf{n}^{\prime}\) and \(\mathbf{v}^{\prime}\) are retarded values at time \(\tau-r^{\prime}_{s}/c\). Let us define a new variable8, \(s^{\prime}=s-r^{\prime}_{s}/c\). The physical meaning of \(s^{\prime}\) is: if the observer sees a charge with stamp \(s\) at a position a distance \(r^{\prime}_{s}\) away from \(O\), the charge currently passing by that position carries stamp \(s^{\prime}\). Footnote 8: Note that the prime in \(s^{\prime}\) does not indicate retardation, because \(s\) and \(s^{\prime}\) are not times but stamps, although they have units of time. We have \[ds^{\prime}= ds-dr^{\prime}_{s}/c\quad\text{(from }s^{\prime}=s-r^{\prime}_{s}/c\text{, or from Figure 2)}\] \[= ds+d\mathbf{l}^{\prime}\cdot\mathbf{n}^{\prime}/c\quad\text{(the sign change is due to the direction of }\mathbf{n}^{\prime}\text{)}\] \[= ds+ds^{\prime}\mathbf{v}^{\prime}\cdot\mathbf{n}^{\prime}/c\quad(ds^{\prime}\text{ equals the time used to travel }d\mathbf{l}^{\prime}),\] or, \[ds^{\prime}=\frac{ds}{1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}}. \tag{16}\] This result is remarkably similar to the relationship between \(dt^{\prime}\) and \(dt\) in the discussion of the Lienard-Wiechert potentials [10](Eq. (6-3-2)), \[dt^{\prime}=\frac{dt}{1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}}, \tag{17}\] where \(dt\) is the current-time value and \(dt^{\prime}\) is the retarded value. This shall not be surprising, because there is a relationship between the two equations, which we shall make explicit here. As depicted in Figure 3, we trace a single point charge \(q\) for its whole round trip along the loop. Suppose that, as observed at position \(O\) at time \(\tau-T\), at retarded time \(\tau-T-r_{0}^{\prime}/c\) (we notice that \(r_{0}^{\prime}=r_{N}^{\prime}\)), the charge passes by checkpoint \(C\) (\(C\) is also labeled \(t_{0}^{\prime}\) in Figure 3). Choose the same large number \(N\) as in Figure 2. 
At retarded time \(\tau-((N-1)/N)T-r_{1}^{\prime}/c\) Figure 2: The infinitesimals of the steady loop current. the charge's position, labeled \(t_{1}^{\prime}\) in Figure 3, is exactly \(s_{1}\) in Figure 2. This correspondence is true for all other positions along the loop, and the last one is at retarded time \(\tau-r_{N}^{\prime}/c\), when the charge passes by \(C\) again, completing the round trip. We define a pair of new physical values, \(I/q\) times the scalar potential \(V\) and \(I/q\) times the vector potential \(A\) integrated over time of the whole round trip, as seen by the observer at position \(O\), \[I\int_{\tau-T}^{\tau}\frac{dt}{4\pi\epsilon_{0}r^{\prime}\left(1 -\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)} \tag{18}\] \[I\int_{\tau-T}^{\tau}\frac{\mathbf{v}^{\prime}dt}{4\pi\epsilon_{ 0}r^{\prime}c^{2}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c }\right)}, \tag{19}\] where \(r^{\prime}\), \(\mathbf{n}^{\prime}\) and \(\mathbf{v}^{\prime}\) are retarded values at time \(t^{\prime}=t-r^{\prime}/c\). They are _remarkably_ similar to Eqs. (14) and (15). A comparison of them as well as of Figure 2 and Figure 3 reveals that (we use \(\oint\) for \(\int\) because the value of Figure 3: The infinitesimals of a single charge traveling the loop. \(\tau\) makes no difference), \[V_{L}= I\oint\frac{dt}{4\pi\epsilon_{0}r^{\prime}\left(1-\mathbf{n}^{\prime} \cdot\frac{\mathbf{v}^{\prime}}{c}\right)} \tag{20}\] \[\mathbf{A}_{L}= I\oint\frac{\mathbf{v}^{\prime}dt}{4\pi\epsilon_{0}r^{\prime}c^ {2}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)}. \tag{21}\] We finally reach these results through the rather involved steps. As a comparison, the current-time counterparts in Section 2 are easy because, there, the current-time values were used throughout in the approximation. ## 4 An Exact Differential in a Steady Loop Current We set out to partition each of Eq. (7) and Eq. (8) into a static field falling with the square of distance and an integral of an exact differential. Though it can be done from Eq. (7) and Eq. (8) directly, it is formidable even from the first glance. It shall be much easier if we start from Eqs. (20), (21), (5) and (6) and interchange the differentiation and the integration. We denote the Doppler factor9\(D^{\prime}\) Footnote 9: As far as the author knows, except for Podolsky and Kunz [17], most textbook authors refrained from naming the factor \(1/(1-\mathbf{n}^{\prime}\cdot\mathbf{v}^{\prime}/c)\) the Doppler factor or naming it at all, including Davidson [18] who invoked acoustics when deriving the Lienard-Wiechert Potentials and Griffiths [3] who said it was “reminiscent of the Doppler effect”. Its other names include “volumetric correction” [18], “direction factor” [19], and its inverse, Jacobian (of a transformation) [20] or “shrinkage factor” [20]. \[D^{\prime}= \frac{1}{1-\mathbf{n}^{\prime}\cdot\mathbf{v}^{\prime}/c} \tag{22}\] and get \[\mathbf{E}_{L}= -\nabla\oint\frac{D^{\prime}}{4\pi\epsilon_{0}r^{\prime}}Idt- \frac{\partial}{\partial t}\oint\frac{D^{\prime}\mathbf{v}^{\prime}}{4\pi \epsilon_{0}r^{\prime}c^{2}}Idt\qquad\quad\text{(use Eqs. 
 (5), (20) and (21))}\]
\[= \frac{I}{4\pi\epsilon_{0}}\oint\left(-\nabla\frac{D^{\prime}}{r^{\prime}}\bigg{|}_{t^{\prime}\text{ const}}-\nabla t^{\prime}\frac{\partial}{\partial t^{\prime}}\frac{D^{\prime}}{r^{\prime}}-\frac{\partial t^{\prime}}{\partial t}\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{v}^{\prime}}{c^{2}}\frac{D^{\prime}}{r^{\prime}}\right)\right)dt\qquad\text{(similar to [10] p217 line 15 and p218 line 5)}\] \[= \frac{I}{4\pi\epsilon_{0}}\oint\left(\frac{\mathbf{n}^{\prime}-\frac{\mathbf{v}^{\prime}}{c}}{r^{\prime 2}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)}+\frac{\mathbf{n}^{\prime}}{c}\frac{\partial}{\partial t^{\prime}}\frac{D^{\prime}}{r^{\prime}}-\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{v}^{\prime}}{c^{2}}\frac{D^{\prime}}{r^{\prime}}\right)\right)\frac{dt}{1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}}\qquad\text{(use [10] (6-3-2), (6-3-3) and p217 line 16)}\] \[= \frac{I}{4\pi\epsilon_{0}}\oint\left(\frac{\mathbf{n}^{\prime}-\frac{\mathbf{v}^{\prime}}{c}}{r^{\prime 2}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)}+\frac{\mathbf{n}^{\prime}}{c}\frac{\partial}{\partial t^{\prime}}\frac{D^{\prime}}{r^{\prime}}-\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{v}^{\prime}}{c^{2}}\frac{D^{\prime}}{r^{\prime}}\right)\right)dt^{\prime} \tag{23}\] Expanding \(\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{n}^{\prime}}{c}\frac{D^{\prime}}{r^{\prime}}\right)\) with the chain rule and rearranging, we get \[\frac{\mathbf{n}^{\prime}}{c}\frac{\partial}{\partial t^{\prime}}\frac{D^{\prime}}{r^{\prime}}=-\frac{\mathbf{n}^{\prime}-\frac{\mathbf{v}^{\prime}}{c}}{r^{\prime 2}\left(1-\mathbf{n}^{\prime}\cdot\frac{\mathbf{v}^{\prime}}{c}\right)}+\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}+\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{n}^{\prime}}{c}\frac{D^{\prime}}{r^{\prime}}\right).\] We plug it into Eq. (23) and get \[\mathbf{E}_{L}= \frac{I}{4\pi\epsilon_{0}}\oint\left(\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}+\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{n}^{\prime}}{c}\frac{D^{\prime}}{r^{\prime}}\right)-\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{v}^{\prime}}{c^{2}}\frac{D^{\prime}}{r^{\prime}}\right)\right)dt^{\prime}\] \[= \frac{I}{4\pi\epsilon_{0}}\oint\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}dt^{\prime}+\frac{I}{4\pi\epsilon_{0}}\oint d\left(\left(\mathbf{n}^{\prime}-\frac{\mathbf{v}^{\prime}}{c}\right)\frac{D^{\prime}}{cr^{\prime}}\right) \tag{24}\] \[= \frac{I}{4\pi\epsilon_{0}}\oint\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}dt^{\prime}.
\tag{25}\] (loop integral of an exact differential is zero) This is a loop integral of the electric field by the retarded Coulomb Law. We apply a similar method to the magnetic field, \[\mathbf{B}_{L}= \nabla\times\oint\left(\frac{I}{4\pi\epsilon_{0}}\frac{\mathbf{v}^{\prime}}{c^{2}}\frac{D^{\prime}}{r^{\prime}}\right)dt\qquad\text{(use Eqs. (6) and (21))}\] \[= \frac{I}{4\pi\epsilon_{0}}\oint\left(\nabla\times\left(\frac{\mathbf{v}^{\prime}}{c^{2}}\frac{D^{\prime}}{r^{\prime}}\right)\right)dt\qquad\text{(exchange differentiation and integration)}\] \[= \frac{I}{4\pi\epsilon_{0}c^{2}}\oint\left(\frac{D^{\prime}}{r^{\prime}}(\nabla\times\mathbf{v}^{\prime})+\nabla\frac{D^{\prime}}{r^{\prime}}\times\mathbf{v}^{\prime}\right)dt\qquad\text{(differentiation by parts)}\] \[= \frac{I}{4\pi\epsilon_{0}c^{2}}\oint\frac{D^{\prime}}{r^{\prime}}\left(\nabla t^{\prime}\times\frac{\partial\mathbf{v}^{\prime}}{\partial t^{\prime}}\right)dt+\frac{I}{4\pi\epsilon_{0}c^{2}}\oint\left(-\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}-\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{n}^{\prime}}{c}\frac{D^{\prime}}{r^{\prime}}\right)\right)\times\mathbf{v}^{\prime}dt^{\prime}\qquad\text{(chain rule; \(\nabla\frac{D^{\prime}}{r^{\prime}}\) from the analysis of \(\mathbf{E}_{L}\) above)}\] \[= \frac{I}{4\pi\epsilon_{0}c^{2}}\oint-\frac{\mathbf{n}^{\prime}}{c}\frac{D^{\prime}}{r^{\prime}}\times\frac{\partial\mathbf{v}^{\prime}}{\partial t^{\prime}}dt^{\prime}+\frac{I}{4\pi\epsilon_{0}c^{2}}\oint\left(-\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}\times\mathbf{v}^{\prime}-\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{n}^{\prime}}{c}\frac{D^{\prime}}{r^{\prime}}\times\mathbf{v}^{\prime}\right)+\frac{\mathbf{n}^{\prime}}{c}\frac{D^{\prime}}{r^{\prime}}\times\frac{\partial\mathbf{v}^{\prime}}{\partial t^{\prime}}\right)dt^{\prime}\qquad\text{([10] Eq. (6-3-3) and Eq. (17); use the expansion of \(\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{n}^{\prime}}{c}\frac{D^{\prime}}{r^{\prime}}\times\mathbf{v}^{\prime}\right)\) with the chain rule)}\] \[= \frac{I}{4\pi\epsilon_{0}c^{2}}\oint\left(-\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}\times\mathbf{v}^{\prime}\right)dt^{\prime}-\frac{I}{4\pi\epsilon_{0}c^{2}}\oint\frac{\partial}{\partial t^{\prime}}\left(\frac{\mathbf{n}^{\prime}}{c}\frac{D^{\prime}}{r^{\prime}}\times\mathbf{v}^{\prime}\right)dt^{\prime}\] \[= \frac{I}{4\pi\epsilon_{0}c^{2}}\oint\mathbf{v}^{\prime}\times\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}dt^{\prime}+\frac{I}{4\pi\epsilon_{0}c^{2}}\oint d\left(\mathbf{v}^{\prime}\times\left(\mathbf{n}^{\prime}-\frac{\mathbf{v}^{\prime}}{c}\right)\frac{D^{\prime}}{cr^{\prime}}\right) \tag{26}\] \[= \frac{I}{4\pi\epsilon_{0}c^{2}}\oint\mathbf{v}^{\prime}\times\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}dt^{\prime}. \tag{27}\] This is a loop integral of the magnetic field by the retarded Biot-Savart Law. ## 5 Comparison with Previous Results up to \(1/c^{2}\) Following Edward, Kenyon and Lemon [14], we expand the retarded quantities in terms of the current-time values \(r\), \(\mathbf{n}\), \(\mathbf{v}\) and the acceleration \(\mathbf{a}\). Using these approximations, we get (by throwing out terms with higher order than \(1/c^{2}\)), \[\frac{1}{r^{\prime 3}}\approx \frac{1}{r^{3}}\left(1-3\frac{\mathbf{n}\cdot\mathbf{v}}{c}+3\frac{(\mathbf{n}\cdot\mathbf{v})^{2}}{c^{2}}-3\frac{v^{2}-(\mathbf{n}\cdot\mathbf{v})^{2}-r\mathbf{n}\cdot\mathbf{a}}{2c^{2}}\right),\]
together with similar expansions for the other retarded quantities. In the expansion we also use the derivative \[\frac{d}{dt}\frac{\mathbf{n}}{r}=-\frac{\mathbf{v}}{r^{2}}+2\mathbf{n}\frac{d}{dt}(\mathbf{r}\cdot\mathbf{r})^{-\frac{1}{2}}=-\frac{\mathbf{v}}{r^{2}}+2\mathbf{n}\left(-\frac{1}{2r^{3}}\right)\frac{d}{dt}(\mathbf{r}\cdot\mathbf{r})\qquad\text{(chain rule)}\] \[=-\frac{\mathbf{v}}{r^{2}}-2\frac{\mathbf{n}}{r^{3}}\mathbf{r}\cdot\frac{d\mathbf{r}}{dt}\qquad\text{(differentiation by parts)}\] \[=-\frac{\mathbf{v}}{r^{2}}+2\frac{(\mathbf{n}\cdot\mathbf{v})\mathbf{n}}{r^{2}}.\qquad\text{(use \(\mathbf{n}=(\mathbf{0}-\mathbf{r})/r\))}\] Eq. (28), when integrated for the loop, gives Eq. (12). Thus we confirm that Edward, Kenyon, and Lemon's results are consistent with ours up to \(1/c^{2}\). ## 6 Double Check the Math To double-check the math, we rewrite Eq. (24) as \[\frac{I}{4\pi\epsilon_{0}}\oint\frac{\mathbf{n}^{\prime}}{r^{\prime 2}}D^{\prime}dt+\frac{I}{4\pi\epsilon_{0}}\oint\frac{d}{dt}\left(\left(\mathbf{n}^{\prime}-\frac{\mathbf{v}^{\prime}}{c}\right)\frac{D^{\prime}}{cr^{\prime}}\right)dt,\] and if we approximate the differential part in the second term, \(\frac{d}{dt}\left(\left(\mathbf{n}^{\prime}-\frac{\mathbf{v}^{\prime}}{c}\right)\frac{D^{\prime}}{cr^{\prime}}\right)\), up to \(1/c^{2}\) and combine the result with the differential part of Eq. (28), we expect to recover the left side of Eq. (11) to "close the loop". Indeed, we have the following approximations, \[\mathbf{n}^{\prime}= \frac{\mathbf{0}-\mathbf{r}^{\prime}}{r^{\prime}}\qquad\text{(definition of \(\mathbf{n}^{\prime}\))}\] \[\approx \mathbf{n}\left(1-\frac{\mathbf{n}\cdot\mathbf{v}}{c}-\frac{v^{2}-(\mathbf{n}\cdot\mathbf{v})^{2}-r\mathbf{n}\cdot\mathbf{a}}{2c^{2}}\right)+\frac{\mathbf{v}}{c}\qquad\text{(use approximated \(\mathbf{0}-\mathbf{r}^{\prime}\) and \(1/r^{\prime}\))}\] \[\frac{D^{\prime}}{cr^{\prime}}\approx \frac{1}{cr}.\qquad\text{(use approximated \(D^{\prime}\) and \(1/r^{\prime}\); simple because there was \(1/c\))}\] Using these results, we have \[\frac{d}{dt}\left(\left(\mathbf{n}^{\prime}-\frac{\mathbf{v}^{\prime}}{c}\right)\frac{D^{\prime}}{cr^{\prime}}\right)\approx \frac{d}{dt}\left(\frac{\mathbf{n}}{cr}\left(1-\frac{\mathbf{n}\cdot\mathbf{v}}{c}\right)\right)\qquad\text{(use approximated \(D^{\prime}/(cr^{\prime})\), \(\mathbf{n}^{\prime}\) and \(\mathbf{v}^{\prime}/c\))}\] \[= \frac{d}{dt}\left(\frac{1}{c}\frac{\mathbf{n}}{r}-\frac{(\mathbf{n}\cdot\mathbf{v})\mathbf{n}}{c^{2}r}\right),\] and our expectation is met.
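In the same spirit of double-checking, here is a minimal Python sketch (not from the paper; the trajectory, step size and random seed are arbitrary assumptions) that numerically verifies the derivative identity \(d(\mathbf{n}/r)/dt=-\mathbf{v}/r^{2}+2(\mathbf{n}\cdot\mathbf{v})\mathbf{n}/r^{2}\) used above:

```python
import numpy as np

# Numeric spot-check of d/dt (n/r) = -v/r^2 + 2(n.v) n / r^2,
# where n = (0 - r)/r points from the charge toward the observer at O.
def n_over_r(rv):
    r = np.linalg.norm(rv)
    return (-rv / r) / r

rng = np.random.default_rng(0)
r0, v0, a0 = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
traj = lambda t: r0 + v0*t + 0.5*a0*t**2   # an arbitrary smooth trajectory

h = 1e-6
lhs = (n_over_r(traj(h)) - n_over_r(traj(-h))) / (2*h)  # central difference at t = 0
r = np.linalg.norm(r0)
n = -r0 / r
rhs = -v0/r**2 + 2*np.dot(n, v0)*n/r**2
print(np.max(np.abs(lhs - rhs)))            # tiny (~1e-8 or below): identity holds
```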
Thus we have enhanced our confidence that the derivations are "correct". ## 7 Discussion From the moving point charge's point of view, we discover an exact differential in a steady loop current and show that a steady loop current does not radiate. This conclusion holds for an arbitrary steady loop current, without the restrictions that previous analyses placed on the shape of the loop or the velocity of the charge, and without approximation. It is as expected, but the derivation is rather involved. We hope our derivation may shed new light on this century-old result.
2309.04474
Weakly supervised learning for pattern classification in serial femtosecond crystallography
Serial femtosecond crystallography at X-ray free electron laser facilities opens a new era for the determination of crystal structure. However, the data processing of those experiments is facing an unprecedented challenge, because the total number of diffraction patterns needed to determine a high-resolution structure is huge. Machine learning methods are very likely to play important roles in dealing with such a large volume of data. Convolutional neural networks have achieved great success in the field of pattern classification; however, training the networks needs very large datasets with labels. This heavy dependence on labeled datasets will seriously restrict the application of networks, because it is very costly to annotate a large number of diffraction patterns. In this article we present our work on the classification of diffraction patterns by weakly supervised algorithms, with the aim of reducing as much as possible the size of the labeled dataset required for training. Our results show that weakly supervised methods can significantly reduce the need for the number of labeled patterns while achieving comparable accuracy to fully supervised methods.
Jianan Xie, Ji Liu, Chi Zhang, Xihui Chen, Ping Huai, Jie Zheng, Xiaofeng Zhang
2023-07-30T12:42:19Z
http://arxiv.org/abs/2309.04474v2
# Weakly supervised learning for pattern classification in serial femtosecond crystallography ###### Abstract Serial femtosecond crystallography at X-ray free electron laser facilities opens a new era for the determination of crystal structure. However, the data processing of those experiments is facing an unprecedented challenge, because the total number of diffraction patterns needed to determine a high-resolution structure is huge. Machine learning methods are very likely to play important roles in dealing with such a large volume of data. Convolutional neural networks have achieved great success in the field of pattern classification; however, training the networks needs very large datasets with labels. This heavy dependence on labeled datasets will seriously restrict the application of networks, because it is very costly to annotate a large number of diffraction patterns. In this article we present our work on the classification of diffraction patterns by weakly supervised algorithms, with the aim of reducing as much as possible the size of the labeled dataset required for training. Our results show that weakly supervised methods can significantly reduce the need for the number of labeled patterns while achieving comparable accuracy to fully supervised methods. ## 1 Introduction X-ray crystallography at synchrotron radiation light sources plays an important role in the determination of macromolecular structure, but radiation damage has always been a difficult problem in the measurement. Scientists had to increase crystal sizes and put the crystals under cryo-cooled conditions to allow for higher radiation tolerances and thus improve the structural resolution. The development of the X-ray free electron laser (XFEL) makes it possible to produce X-ray pulses with extreme peak brilliance and ultrashort pulse width, thereby enabling the radiation damage to be overcome by the "diffraction-before-destruction" principle [1, 2]. In contrast to traditional X-ray crystallography, serial femtosecond crystallography (SFX) at XFELs is able to measure the atomic structure of microcrystals at room temperature [3, 4, 5]. An XFEL pulse is so intense that the sample is destroyed after interacting with the pulse, and therefore in SFX experiments samples must be continuously replenished by the delivery system (see Fig. 1). X-ray pulses interact with randomly oriented crystals and then generate diffraction frames on a pixel array detector, but actually the ratio of XFEL pulses hitting the samples is very small. For example, only 6.1% of patterns were identified as effective frames in the measurement of Photosystem I [3] at the Linac Coherent Light Source (LCLS) [6], and only 3.4% of frames were found to contain crystal diffraction during the HEWL data acquisition [7] at the European XFEL [8]. In order to determine the structure of microcrystals at the atomic scale, it is usually necessary to collect millions of diffraction frames. In the study of native nanocrystalline granulovirus, a total of 1.5 million collected detector frames yielded 2 Å resolution, and the analysis clearly showed that averaging more diffraction frames improved all figures of merit with a better signal-to-noise ratio [9]. Considering the poor hit rate, most collected frames contain only noise and thus are useless in SFX experiments. Consequently, selecting patterns with crystal diffraction from the tremendous volume of raw frames is an essential step in the processing of SFX data.
Several open-source software packages have been developed for automatic hit-finding and pattern classification, e.g., \(Cheetah\) [10] and _DIALS_ [11]. In practice, the parameters of those toolkits usually need manual tuning due to some complex factors in experiments such as low signal-to-noise ratio, detector artifacts, unstable beams, etc. Machine learning algorithms provide another approach to classifying the millions of diffraction frames, especially convolutional neural networks (CNNs), which have achieved remarkable success in the field of pattern recognition. Compared with traditional programs, a CNN can encode both human expertise and latent characteristics of the images, and is less sensitive to some noise. In recent years several works have demonstrated the performance of CNNs in the classification of diffraction patterns [12, 13, 14]. However, the CNN is a typical supervised algorithm, which means that a large dataset with labels is needed to train the network. This heavy dependency on large labeled datasets has hindered the application of CNNs, because it is very costly to annotate millions of images in each experiment, especially diffraction patterns with complex features. In contrast to supervised learning, training a model on unlabeled datasets is called unsupervised learning. Unfortunately, so far the performance of unsupervised algorithms is not so good in the classification of diffraction patterns. Recently, a kind of method called weakly supervised learning has been proposed and developed rapidly [15], which manages to train models with the combination of small labeled datasets and large unlabeled datasets. Because it is very easy to get unlabeled diffraction patterns, weakly supervised algorithms are promising solutions for reducing the heavy dependence of CNNs on human annotations, thereby promoting the application of CNNs in the processing of SFX data. In this paper we present our study of using three weakly supervised CNN models to identify SFX frames with crystal diffraction. Figure 1: Diagram of XFEL SFX experimental setup. X-ray pulses hit the samples and then produce diffraction patterns on the pixel array detector. Microcrystals are continuously injected into the XFEL beam. ## 2 Methods ### Train, validate and test The training of a CNN model consists of multiple epochs, and each epoch typically includes three steps: training, validation and test. Accordingly, the whole dataset is usually divided into three parts: training set, validation set and test set. In the training steps the model learns features of the input data and updates its weights. The task of the validation steps is to monitor the performance of the model, with the goal of better control over the training process. The generalization performance of a model should be evaluated in test steps before applying it to new data, and the dataset used in this stage must not be fed to the model in the training or validation steps. Test steps are very important to avoid overfitting, which means the model performs extremely well on the training set but poorly on the test set. Overfitting is a very common problem in machine learning, which can be overcome by reducing the complexity of the model or increasing the size of the training set. ### Convolutional neural network The basic architecture of a neural network can be divided into a number of fully connected (FC) layers and activation layers.
Except for the last layer, which is often called the output layer, the output of an FC layer is passed to an activation layer, and the output of an activation layer is fed to the next FC layer. The FC layer acts as a linear mapping, while the activation layer plays the role of a nonlinear mapping. A model consisting of many FC layers is very time-consuming to train, because the number of trainable parameters is large. In CNNs most FC layers are replaced by convolutional layers, thus greatly reducing the number of parameters to be trained. The most remarkable difference between an FC layer and a convolutional layer is that each neuron in the FC layer has its own weights, while neurons in some region of the convolutional layer share the same weights. In addition to weight sharing, pooling layers are also used in CNNs to downsample the outputs of previous layers and thus further reduce the number of trainable parameters. Compared with FC networks, CNNs not only can be trained more efficiently but also are more powerful in the field of pattern recognition. A typical CNN for a classification task can be roughly divided into two parts: a convolutional base and a classifier (Fig. 2). The convolutional base extracts and encodes the latent features of the input data, and then the encoded features are transformed into a one-dimensional (1D) vector by a flatten layer so that they can be fed to the classifier. The output of the classifier is a 1D vector commonly interpreted as approximating a probability distribution, with each element representing the predicted probability of a class. The convolutional base consists of several blocks, each of which is usually constructed by stacking a convolutional layer, a batch normalization (BN) layer [16], an activation layer, and a pooling layer. In the training steps, the BN layer zero-centers and normalizes the output of the convolutional layer, which has been proved to be significant for improving the training process. A classifier usually consists of two or three FC layers and their activation layers. Dropout is widely used to suppress the overfitting of CNNs; it randomly deactivates part of the neurons [17, 18]. Adding one or a few dropout layers on the top of the convolutional base has been proved to be a simple but efficient strategy to prevent overfitting. A CNN is usually trained through the backpropagation (BP) algorithm [19], which divides a training step into two processes: First, the response of each layer is propagated forward, i.e., from the input layer to the output layer, and the loss representing the difference between the output and the label is calculated by a predefined function; the second process starts with the loss value and works backward from the output layer to the input layer, updating the weights of each layer in the direction of decreasing the loss value: \[\mathbf{w}\leftarrow\mathbf{w}-\eta\cdot\nabla_{\mathbf{w}}L, \tag{1}\] where **w** is the weight vector, \(\eta\) is the learning rate and \(L\) is the loss. ### Weakly supervised learning Large datasets with labels are essential for supervised learning, otherwise only overfitting or weak models can be obtained. Generally it is easy to get large datasets but costly to label them, especially for XFEL diffraction patterns, since labeling them requires expertise. This severe dependency is a big obstacle for processing XFEL diffraction patterns with supervised learning models. Since it is difficult to perform classification by fully unsupervised learning, weakly supervised learning should be a better option, which tries to reduce the size of the required labeled dataset as much as possible. Particularly, training a model by the combination of a small labeled dataset and a large unlabeled dataset is a promising solution. In this section we give a brief introduction to several methods to implement weakly supervised learning. #### 2.3.1 Transfer learning Essentially, the features learned by a CNN are encoded in its convolutional base. It is reasonable to assume that similar datasets should have similar characteristics in the latent space, thus reusing and fine-tuning the base of a CNN model is a good solution to problems with only a small labeled dataset. This method is called transfer learning [20], and it is probably the most popular approach when a reusable CNN model is available. A three-step pipeline of transfer learning is shown in Fig. 3. A new CNN can be constructed by concatenating the convolutional base of a well-trained CNN to a randomly initialized classifier, and then be trained with a new dataset in two steps: First, freeze the convolutional base to make its parameters unchangeable and train the CNN; secondly, unfreeze one or two top convolutional blocks and retrain it. The required size of the labeled dataset is considerably smaller than that for training a new CNN from scratch. It should be noted that the performance of transfer learning relies heavily on the similarity between the two datasets. For the diffraction data produced in SFX experiments, the similarity may be affected by several factors such as detectors, experimental methods, and samples. #### 2.3.2 Dimensionality reduction and feature engineering Usually, only some features in the original data are useful for classification whereas the other features are useless. Learning the redundant features in a dataset often requires more complex Figure 2: A typical CNN can be roughly divided into a convolutional base and a classifier. The convolutional base extracts features of the input data and then passes them to the classifier. The output of the classifier is a 1D vector, each element of which represents the predicted probability of a class. The convolutional base consists of several convolutional blocks, and each block is built by stacking a convolution layer, a batch normalization layer, an activation layer, and a pooling layer. It is common to add a dropout layer in each of the top two or three blocks to prevent overfitting.
Since it is difficult to perform classification by fully unsupervised learning, weakly supervised learning should be a better option, which tries to reduce the size of the required labeled dataset as much as possible. Particularly, training a model by the combination of a small labeled dataset and a large unlabeled dataset is a promising solution. In this section we give a brief introduction to several methods to implement weakly supervised learning. #### 2.3.1 Transfer learning Essentially, the features learned by a CNN are encoded in its convolutional base. It is reasonable to assume that similar datasets should have similar characteristics in the latent space, thus reusing and fine tuning the base of a CNN model is a good solution to the problems with only a small labeled dataset. This method is called transfer learning [20], and it is probably the most popular approach when having a reusable CNN model. A three-step pipeline of transfer learning is shown in Fig. 3. A new CNN can be constructed by concatenating the convolutional base of a well-trained CNN to a randomly initialized classifier, and then be trained with a new dataset in two steps: First, freeze the convolutional base to make its parameters unchangeable and train the CNN; secondly, unfreeze one or two top convolutional blocks and retrain it. The required size of the labeled dataset is considerably smaller than training a new CNN from scratch. It should be noted that the performance of transfer learning relies heavily on the similarity between the two datasets. For the diffraction data produced in SFX experiments, the similarity may be affected by several factors such as detectors, experimental methods, and samples. #### 2.3.2 Dimensionality reduction and feature engineering Usually, only some features in the original data are useful for classification whereas the other features are useless. Learning the redundant features in a dataset often requires more complex Figure 2: A typical CNN can be roughly divided into a convolutional base and a classifier. The convolutional base extracts features of the input data and then passes them to the classifier. The output of classifier is a 1D vector, each element of which represents the predicted probability of a class. The convolutional base consists of several convolutional blocks, and each block is built by stacking a convolution layer, a batch normalization layer, an activation layer, and a pooling layer. It is common to add a dropout layer in each of the top two or three blocks to prevent overfitting. models, which need not only longer training time, but also larger size of labeled datasets. Therefore removing the redundant features through dimensionality reduction is widely used in machine learning [21]. It transforms data from a high-dimensional space to a low-dimensional space, making the intrinsic characteristic more evident and thus reducing the demand for labels. Another method to resolve redundant features is feature engineering [22], where the goal is to create some new features from the dataset, which usually have higher signal-to-noise ratio and are easier to be learned by models, thereby also relieving the requirement for labels. In practice, creating new features usually reduces the dimensionality of the original data as well. #### 2.3.3 Domain Adversarial neural network The core idea of transfer learning is that two datasets collected with similar experimental setups should have similar latent characteristic spaces. 
Another natural strategy is to train a CNN model to learn the features of those two datasets simultaneously. That is exactly the idea of the domain adversarial neural network (DANN) [23]. The structure diagram of DANN is shown in Fig. 4, from which it can be seen that DANN contains one convolutional base but two classifiers, namely the label and domain classifiers (accordingly, there are two loss functions). The key design of DANN is the gradient reverse layer (GRL), which acts as an identity transformation in the forward propagation but changes the sign of the gradient, i.e., multiplies it by \(-1\), before passing it to the convolutional base in the backward propagation. In the training steps, both the labeled dataset (source) and the unlabeled dataset (target) are fed to the network; features extracted from both are propagated to the GRL and domain classifier, whereas only the features extracted from the source are passed to the label classifier. The domain classifier is trained to distinguish the features between source and target and thus to decrease the loss; however, the convolutional base is trained to increase the loss of domain classification because the backpropagated gradient is reversed by the GRL. The adversarial relationship between the domain classifier and the convolutional base forces the latter to learn features from the common characteristic spaces of source and target. Figure 3: The proposed steps of transfer learning. The new CNN is constructed by concatenating the convolutional base of another well-trained CNN to a randomly initialized classifier. The new model can be trained with a new dataset in two steps: First, freeze the convolutional base to make its parameters unchangeable and train the model; secondly, unfreeze one or two top convolutional blocks and retrain it. Consequently, after the training of DANN, the network composed of the convolutional base and label classifier should be able to predict the labels of target images. ## 3 Results In this section we first introduce the datasets used in our work and then present the results of the weakly supervised methods described above. The predictions of each method are given in separate tables, with bold values denoting recall (see the definition in Section 3.2), i.e., the main metric we focus on in our study. ### Datasets The datasets used in this research were downloaded from the Coherent X-ray Imaging Data Bank (CXIDB) [24], accession number 76, at [http://cxidb.org/id-76.html](http://cxidb.org/id-76.html), which were collected at the Coherent X-ray Imaging (CXI) [25] and Macromolecular Femtosecond Crystallography (MFX) [26] instruments of LCLS. There are five data files named L498, LG36, LN84, LN83 and LO19, each of which contains 2000 diffraction patterns. Only some of the frames contain valid diffraction signals, while others contain only background. By manually inspecting each frame, Ke _et al_. [12] classified all frames into three categories according to the number of Bragg spots. Frames with ten or more Bragg spots were labeled as "Hit", those with four to nine Bragg spots were labeled as "Maybe", and the rest as "Miss". Because there is no annotation of LG36 and the quality of images in L498 is not good, only the other three datasets are used in our work. Using the same datasets, Ke _et al_. have done excellent work in screening frames with good diffraction signatures via supervised CNN models, showing that even a CNN model with a simple structure outperformed the carefully tuned automatic hit-finding program [12].
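Returning to the DANN architecture of Section 2.3.3, a minimal TensorFlow sketch of the gradient reverse layer is given below. The paper's code uses TensorFlow 2.3, but this particular implementation is our own assumption, not the authors' released code:

```python
import tensorflow as tf

@tf.custom_gradient
def gradient_reverse(x):
    # Forward pass: identity transformation.
    def grad(dy):
        # Backward pass: flip the sign of the incoming gradient.
        return -dy
    return tf.identity(x), grad

class GradientReverseLayer(tf.keras.layers.Layer):
    """Identity in the forward propagation; multiplies the gradient by -1
    in the backward propagation, as required by DANN."""
    def call(self, inputs):
        return gradient_reverse(inputs)

# Usage: insert between the convolutional base and the domain classifier, e.g.
# domain_logits = domain_classifier(GradientReverseLayer()(conv_base(images)))
```

With this layer in place, a single optimizer step on the domain loss pushes the domain classifier toward better domain discrimination while pushing the convolutional base in the opposite direction, which is exactly the adversarial behaviour described above.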
Examples of three kinds of diffraction patterns are given in Fig. 5, where panels a, b and c represent the patterns annotated with "Hit", "Maybe" and "Miss", respectively. In panel b a few less visible Bragg spots are marked out by the red boxes. Because the frames labeled as "Maybe" are probably useful in downstream analysis, we merge "Hit" and "Maybe" into one category in our work and thus we only need to study binary classification. Figure 4: The structure diagram of DANN [23]. There is one convolutional base but two classifiers in this network. The solid arrows represent the forward propagation, while dashed arrows represent the backpropagation of the losses or their gradients. \(L_{l}\) and \(L_{d}\) denote the losses of label prediction and domain prediction respectively; \(\mathbf{w}_{l}\) and \(\mathbf{w}_{d}\) denote the weight vectors of the label classifier and domain classifier respectively. The domain classifier is trained to distinguish the features between source and target and thus to decrease \(L_{d}\); however, the convolutional base is trained to increase \(L_{d}\) because the backpropagated gradient is reversed by the gradient reverse layer. The adversary between the domain classifier and the convolutional base forces the latter to learn features from the common characteristic spaces of source and target. #### 3.1.1 Data pre-processing Data pre-processing on raw images is an essential step in machine learning, which usually reduces noise and makes the raw data easier for models to learn. The pre-processing in our study is described as follows. It is very time-consuming to train CNN models with the raw images of size \(1920\times 1920\). Considering that most diffraction spots are typically located in the central region, central cropping with a size of \(724\times 724\) is performed. This step effectively reduces the size and also preserves as many diffraction signatures as possible (Fig. 6). Due to the large dynamic range of the pixel array detector used in SFX experiments, the pixel values of diffraction patterns were distributed over a wide range, e.g., from 0 to 10000 or even higher. Both large pixel values and wide pixel ranges make the training of CNN models very difficult, therefore it is necessary to normalize every image so that its pixel values have a standard Figure 5: Examples of frames annotated as "Hit" (a), "Maybe" (b), and "Miss" (c). A few less visible Bragg spots are marked out by the red boxes in panel (b). Figure 6: Example of the central cropping with a size of \(724\times 724\). Most of the Bragg spots are located in the central region. deviation of 1 and a mean of 0. Hence, in addition to the central cropping, we perform the same normalization as done by Ke _et al._ [12], i.e., processing each image by global contrast normalization (GCN) and local contrast normalization (LCN) [27]. GCN zero-centers and scales the pixel values to a small range, and then LCN removes the local background and makes the boundaries of Bragg spots more distinctive (Fig. 7). After LCN the size of each image is reduced to \(720\times 720\). The GCN and LCN pre-processing makes the CNN converge faster during training and also makes the trained model more robust. ### Metrics In order to evaluate the performance of a machine learning model during the training epochs, it is important to monitor some metrics in real time. The most commonly used metric in classification tasks is accuracy, i.e., the ratio of samples that are correctly classified.
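To make the pre-processing of Section 3.1.1 concrete, here is a minimal NumPy sketch of the central crop followed by a simple per-image global contrast normalization. The zero-mean/unit-std variant below is an assumption on our part; the exact GCN/LCN formulation of [27] as used by Ke et al. [12] may differ in its constants and in the LCN step, which is omitted here:

```python
import numpy as np

def preprocess(frame, crop=724, eps=1e-8):
    """Central-crop a raw detector frame (e.g. 1920 x 1920) and apply a
    simple global contrast normalization (zero mean, unit standard
    deviation). The LCN step would follow on the result."""
    h, w = frame.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    img = frame[top:top + crop, left:left + crop].astype(np.float64)
    return (img - img.mean()) / (img.std() + eps)
```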
But on an unbalanced dataset, where the class distribution is skewed, e.g., most samples belong to one or two classes (majority samples) while only a small fraction of samples belong to other classes (minority samples), accuracy may be a misleading metric, because a model can still achieve a high accuracy even if it misclassifies all minority samples. In particular, using accuracy as a metric will yield very bad results if minority samples are much more important than majority samples. As mentioned above, the hit rate of an SFX experiment is very low, therefore the frames with diffraction signatures are minority samples. For unbalanced datasets, recall and precision are more useful metrics than accuracy. Usually precision and recall are conflicting metrics, i.e., it is difficult to improve them simultaneously, and which one is more important depends on the problem to be solved. In our study, precision denotes the ratio of correct predictions among all the images predicted as hit, while recall denotes the ratio of correct predictions among all the images annotated as hit. Images with diffraction signals should be identified as correctly as possible, thus the model should achieve a high recall, while a relatively low precision is acceptable. ### Benchmark Firstly, a two-dimensional (2D) CNN is trained with sufficient labeled data as a benchmark. For each dataset, the total 2000 images are divided into 1200, 400, 400 for training, validation, and test, respectively. The architecture of the CNN is shown in Fig. 2, and the results are summarized in Table 1. For comparison, each trained 2D CNN is directly tested on the other two datasets. It can be seen that a 2D CNN trained with sufficient labeled images performs very well on its own dataset, but not so well on other datasets, with many "Hit or Maybe" images being misclassified. For each dataset, we also build an insufficiently trained 2D CNN model by dividing the total Figure 7: LN84, shot 560. The effect of contrast normalization. (a) is the experimental image without any normalization, (b) shows the same image processed by GCN, and (c) shows the same image processed by GCN and LCN.
It seems that the SFX datasets acquired with the same instrument and detector have some hidden similarities and thus transfer learning works well. #### 3.4.2 Dimensionality Reduction The essential difference between "Miss" and "Hit or Maybe" images is that the latter has more Bragg spots, which are characterized by high photon counts. Furthermore, Bragg spots are \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{5}{c}{Prediction} \\ \cline{2-6} & LN83 & \multicolumn{2}{c}{LN84} & \multicolumn{2}{c}{LO19} \\ \cline{2-6} Training & Hit or Maybe & Miss & Hit or Maybe & Miss & Hit or Maybe & Miss \\ \hline LN83 & **96.92\%** & 99.40\% & **90.28\%** & 98.91\% & **70.37\%** & 99.58\% \\ \hline LN84 & **67.69\%** & 100.00\% & **94.91\%** & 97.83\% & **81.48\%** & 100.00\% \\ \hline LO19 & **95.38\%** & 99.40\% & **87.50\%** & 98.91\% & **96.30\%** & 97.06\% \\ \hline \hline \end{tabular} \end{table} Table 1: The accuracy of 2D CNN with sufficient training. \begin{table} \begin{tabular}{c c c} \hline \hline Datasets & Hit or Maybe & Miss \\ \hline LN83 & **51.06\%** & 100.00\% \\ \hline LN84 & **65.07\%** & 99.76\% \\ \hline LO19 & **84.87\%** & 99.56\% \\ \hline \hline \end{tabular} \end{table} Table 2: The accuracy of 2D CNN with insufficient training. often sparsely distributed over the whole image, suggesting that the image may contain some information that is redundant for the purpose of classification. These two facts inspire us to transform the diffraction patterns from 2D space to 1D space. For each image, we first scan the image to find the pixels with maximum and minimum values in each row, and then perform the following subtraction: \[d_{i}=p_{i}^{\text{max}}-p_{i}^{\text{min}}, \tag{2}\] where \(i=1,2,3,...,720\) is the row index, \(p_{i}^{\text{max}}\) and \(p_{i}^{\text{min}}\) denote the maximum and minimum values in the \(i\)-th row, respectively. Consequently, in each dataset all images are converted to 2000 1D vectors: \(\mathbf{D}_{1},\mathbf{D}_{2},\cdots,\mathbf{D}_{2000}\), where \(\mathbf{D}_{j}=(d_{1},d_{2},\cdots,d_{720})\). This transformation is called row-wise decomposition (RWD) in our work. The 1D vectors clearly show the distinguishing features between "Miss", "Hit" and "Maybe" (Fig. 8) The 2000 1D vectors are divided into training set, validation set and test set with 200, 200, 1600 samples, respectively. Then a 1D CNN is trained and tested by those new sets. The predictions \begin{table} \begin{tabular}{c c c c c c c} \hline & \multicolumn{6}{c}{Prediction} \\ \cline{2-7} & \multicolumn{2}{c}{LN83} & \multicolumn{2}{c}{LN84} & \multicolumn{2}{c}{LO19} \\ \cline{2-7} Training & Hit or Maybe & Miss & Hit or Maybe & Miss & Hit or Maybe & Miss \\ \hline LN83 & - & - & **96.68\%** & 92.33\% & **95.82\%** & 93.38\% \\ \hline LN84 & **94.53\%** & 91.82\% & - & - & **94.24\%** & 93.49\% \\ \hline LO19 & **98.18\%** & 97.40\% & **98.01\%** & 91.62\% & - & - \\ \hline \end{tabular} \end{table} Table 3: The accuracy of transfer learning. Figure 8: The 1D vectors produced by row-wise decomposition clearly show the distinguishing features between “Miss”, “Hit” and “Maybe”. of all datasets are shown in Table 4, from which it can be seen that 200 labeled samples are able to generate models with good performance after the dimensionality reduction, while only weak CNN models can be produced before that (see Table 2). 
Additionally, 1D CNN processes 11,123 vectors per second in the test, while the 2D CNN can only process 126 frames per second on the same device (A NVIDIA Tesla A100 GPU card with the memory of 40 GByte). Therefore dimensionality reduction is a very promising approach to screening the diffraction frames in real-time. #### 3.4.3 Dann As described in Section 2.3.3, the training of a DANN model needs two datasets. We compose six pairs of datasets, one for source and another for target (labels are ignored in training), and then train the DANNs. At the end of training, the convolutional base is able to extract features from the common characteristic spaces of the source and target, meanwhile the label classifier is able to identify the extracted features as "Hit or Maybe" or "Miss". Finally, the CNN composed of the convolutional base and the label classifier is tested on the entire target dataset with human annotations. The results for all models are shown in Table 5. The essential premise of DANN is that the source and target datasets share a common \begin{table} \begin{tabular}{c c c} \hline Datasets & Hit or Maybe & Miss \\ \hline LN83 & **95.14\%** & 96.22\% \\ \hline LN84 & **94.95\%** & 86.19\% \\ \hline LO19 & **95.24\%** & 94.04\% \\ \hline \end{tabular} \end{table} Table 4: The accuracy of 1D CNN after dimensionality reduction. \begin{table} \begin{tabular}{c c c c c c c c} \hline & & \multicolumn{6}{c}{Prediction} \\ \cline{3-8} & & \multicolumn{2}{c}{LN83} & \multicolumn{2}{c}{LN84} & \multicolumn{2}{c}{LO19} \\ \cline{3-8} Source & Target & Hit or Maybe & Miss & Hit or Maybe & Miss & Hit or Maybe & Miss \\ \hline \multirow{3}{*}{LN83} & LN84 & **90.77\%** & 98.81\% & **90.19\%** & 91.9\% & - & - \\ \cline{2-8} & LO19 & **92.31\%** & 98.81\% & - & - & **89.67\%** & 98.39\% \\ \hline \multirow{3}{*}{LN84} & LN83 & **92.18\%** & 91.77\% & **92.59\%** & 97.28\% & - & - \\ \cline{2-8} & LO19 & - & - & **91.67\%** & 97.28\% & **85.58\%** & 96.43\% \\ \hline \multirow{3}{*}{LO19} & LN83 & **96.09\%** & 97.36\% & - & - & **92.59\%** & 97.90\% \\ \cline{2-8} & LN84 & - & - & **90.41\%** & 91.53\% & **86.42\%** & 97.00\% \\ \hline \end{tabular} \end{table} Table 5: The accuracy of DANN models. characteristic space. Because the main features of "Hit" and "Miss" frames are clear and simple, i.e., ten or more Bragg spots for "Hit" and three or less Bragg spots for "Miss", it is reasonable to assume that "Hit" and "Miss" frames from both datasets have similar latent features. However, the features of "Maybe" frames can be different. In order to visualize the high-dimensional features learned by the DANN model, the extracted features, i.e., the output of convolutional base (see Fig. 4), are projected into 2D space through t-distributed stochastic neighbor embedding (t-SNE) algorithm [28] using the toolkit scikit-learn v1.0.2 [29]. t-SNE algorithm embeds each high-dimensional instance into low-dimensional space in a way that similar instance are modeled by nearby points while dissimilar instances are modeled by distant points with high probability. The distributions of the projected features of "Miss", "Maybe" and "Hit" frames of LN83 (source, red dots) and LO19 (target, blue dots) are shown in Fig. 9 (panels a, b, and c, respectively), and the same distributions produced from the CNN trained on LN83 are also given as a comparison (panels d, e and f, respectively). 
The more overlap between the distributions of the two embedded features indicates that the model is more capable of learning features from the common space of these two datasets. It is clear from Fig. 9 that DANN is better at learning features from the common spaces than ordinary CNN, which is consistent with the significant improvement of recall on the target dataset (89.67% vs. 70.37%, see Table 5 and Table 1). On the other hand, there are still some distinctions between the characteristic spaces of source and target, especially for "Maybe" frames (See panels b in Fig. 9). Probably that is the reason why DANN performs much better on the target than the CNN trained only by the source, but slightly worse than the model fine-tuned by the target. Figure 9: Comparison of the features learned by the DANN and the CNN trained only on source. The extracted features by deep learning models are usually high-dimensional, hence all the features are embedded into 2D space through t-SNE algorithm for visualization. Red and blue dots represent 2D embeddings of features extracted from the source (LN83) and target(LO19), respectively, and the more overlap between their distributions indicates that the model is more capable of learning features from the common space of these two datasets. Models and labels of the features shown in each panel are: (a) DANN, miss; (b) DANN, maybe; (c) DANN, hit; (d) CNN, miss; (e) CNN, maybe; (f) CNN, hit. ## 4 Discussion The three methods above have their own advantages and disadvantages in classifying diffraction patterns. The comparison in a few aspects is given in Table 6. The RWD algorithm for dimensionality reduction in our work (See section 3.4.2) looks simple and achieves superior performance, but the potential drawback is the lack of general applicability. In other words, the decomposition in this study may not work in other classification tasks of diffraction patterns, in which case we may need to think about a new method for dimensionality reduction or feature engineering. There may be various feature extraction methods for SFX data, but it is not an easy task to determine which method works the best [30]. That is why we mark the pre-processing of RWD as hard in Table 6. Nevertheless, RWD algorithm has two important properties in screening diffraction patterns. First, the 1D CNN model based on RWD achieves a speed of over 11 thousands frames per second in labels prediction, which is more than 80 times faster than 2D CNN models (See section 3.4.2). As the pulse frequency of XFELs increases to tens of thousands or even higher [31], there is a great potential for the RWD algorithm to be developed into an online screening tool for SFX experiments. Secondly, RWD has introduced a new feature that is easier to recognize, and thus appears to be more universal than original features in identifying whether a diffraction images contain physical signals. Without any tuning, we test the RWD-based models and the fully supervised 2D CNN models across datasets, and the result clearly indicates that the former shows a higher level of adaptability and generalization on the datasets collected with different experimental settings (Table 7). It is desirable to train a universal models which can be used to screen images in various experiments, although it may be difficult [12, 30]. Our work indicates that training a model to learn some one-dimensional features may be a promising approach. 
Both transfer learning and DANN rely on another annotated dataset acquired in similar experimental setup, but their training processes are quite different. The domain classifier of a DANN model is designed to be fooled by the domains of source or target, thus its loss and accuracy are supposed to oscillate around certain values. In order to verify this hypothesis, we monitor the loss and accuracy in validation steps and draw their trends in Fig. 10, from which it can be seen that the loss and accuracy of the label prediction converge to plateaus gradually, whereas those two metrics of domain prediction tend to oscillate. It is reasonable to assume that the two plateaus and two oscillations indicate that the DANN model has been trained, and thus the labels of the target are no longer needed for validation. Although the transferred models work slightly better, they need extra labeled patterns for fine tuning and validation. Furthermore, the results of our study indicate that the same instrument and the same detector are able to yield similar latent features in the SFX diffraction frames so that both transfer learning \begin{table} \begin{tabular}{l l l l} \hline & Transfer learning & RWD & DANN \\ \hline Size of labels & Low & Low & None \\ Pre-processing & Easy & Hard & Easy \\ Train & Medium & Easy & Hard \\ Recall & High & High & Medium \\ Resource consuming & Medium & Low & High \\ Speed of prediction & Slow & Fast & Slow \\ \hline \end{tabular} \end{table} Table 6: Comparison of the three weakly supervised methods. and DANN work well. Therefore, for experimental datasets produced at the same SFX instrument and by the same detector, regardless of the samples, we may only need to fully annotate one dataset, while a small number of annotated frames in other datasets is enough to train a good CNN model. ## 5 Conclusion In this paper, we studied three weakly supervised models, i.e., transfer learning, dimensionality reduction, and DANN, to classify SFX diffraction patterns and showed that although these models were trained by only 200 labeled samples or even less, they demonstrated comparable performance to the fully supervised CNN models (i.e., trained by 1200 labeled patterns). With the development of advanced X-ray sources, the processing of experimental data is facing increasing challenges \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & & \multicolumn{6}{c}{Prediction} \\ \cline{3-8} & & LN83 & \multicolumn{2}{c}{LN84} & \multicolumn{2}{c}{LO19} \\ \cline{3-8} Training & Model & Hit or Maybe & Miss & Hit or Maybe & Miss & Hit or Maybe & Miss \\ \hline \multirow{3}{*}{LN83} & 1D CNN & - & - & **96.91\%** & 78.53\% & **90.01\%** & 98.39\% \\ \cline{2-8} & 2D CNN & - & - & **90.28\%** & 98.91\% & **70.37\%** & 99.58\% \\ \hline \multirow{3}{*}{LN84} & 1D CNN & **94.87\%** & 95.79\% & - & - & **91.94\%** & 96.60\% \\ \cline{2-8} & 2D CNN & **67.69\%** & 100.00\% & - & - & **81.48\%** & 100.00\% \\ \hline \multirow{3}{*}{LO19} & 1D CNN & **97.07\%** & 94.03\% & **96.27\%** & 86.06\% & - & - \\ \cline{2-8} & 2D CNN & **95.38\%** & 99.40\% & **87.50\%** & 98.91\% & - & - \\ \hline \end{tabular} \({}^{a}\)2D CNNs in this table are the models trained with 1200 labeled patterns. \end{table} Table 7: Cross-dataset test of RWD-based 1D CNNs and fully supervised 2D CNNs\({}^{a}\) Figure 10: In the validation steps of a DANN model, the loss and accuracy of label prediction converge to plateaus gradually, while those two metrics of domain prediction tend to oscillate. 
## 5 Conclusion

In this paper, we studied three weakly supervised models, i.e., transfer learning, dimensionality reduction, and DANN, to classify SFX diffraction patterns, and showed that although these models were trained with only 200 labeled samples or even fewer, they demonstrated performance comparable to the fully supervised CNN models (i.e., trained with 1200 labeled patterns). With the development of advanced X-ray sources, the processing of experimental data is facing increasing challenges, and machine learning methods are expected to play an important role. This study demonstrates that the label dependence of CNNs can be greatly reduced, thus providing a promising approach to efficient SFX data processing. Besides the three methods described above, we are also studying some other algorithms, e.g., unsupervised pre-training with a generative adversarial network (GAN) [32] and building an ensemble model by combining several weak neural networks [33]. In the future we will continue to work on weakly supervised algorithms, with the ultimate goal of finding weakly supervised solutions to various classification tasks, including not only the recognition of "Hit" or "Miss" images in SFX experiments, but also some problems in similar experiments at XFELs, e.g., selecting single-hit frames in single-particle imaging (SPI) data [34, 14, 35]. There are two XFEL facilities in Shanghai: one is the Shanghai soft X-ray free-electron laser (SXFEL) [36, 37] and the other is the Shanghai High repetition rate XFEL and Extreme Light Facility (SHINE). SXFEL started its commissioning in 2021 and opened to users in 2022 [38]. The repetition rate of SHINE is much higher than that of SXFEL, and therefore the data processing at SHINE will be far more challenging in the future. All the weakly supervised models in our study will be important tools for efficient data analysis at SXFEL and SHINE. All of the code, written in Python 3.8, using the TensorFlow framework 2.3, and executable in JupyterLab 3.2, is available on request by email.

Funding. This work is financially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDC02070100).

Acknowledgments. We acknowledge Ke _et al._ for depositing the datasets on CXIDB for open access. We also acknowledge the help provided by the course "CS286: AI for Science and Engineering" at ShanghaiTech University. Finally, we would like to thank Yaru Yin and Wujun Shi for fruitful discussions.

Disclosures. The authors declare no conflicts of interest.

Data availability. Data underlying the results presented in this paper are available in Ref. [12].
2305.04979
FedHB: Hierarchical Bayesian Federated Learning
We propose a novel hierarchical Bayesian approach to Federated Learning (FL), where our model reasonably describes the generative process of clients' local data via hierarchical Bayesian modeling: constituting random variables of local models for clients that are governed by a higher-level global variate. Interestingly, the variational inference in our Bayesian model leads to an optimisation problem whose block-coordinate descent solution becomes a distributed algorithm that is separable over clients and allows them not to reveal their own private data at all, thus fully compatible with FL. We also highlight that our block-coordinate algorithm has particular forms that subsume the well-known FL algorithms including Fed-Avg and Fed-Prox as special cases. Beyond introducing novel modeling and derivations, we also offer convergence analysis showing that our block-coordinate FL algorithm converges to a (local) optimum of the objective at the rate of $O(1/\sqrt{t})$, the same rate as regular (centralised) SGD, as well as a generalisation error analysis where we prove that the test error of our model on unseen data is guaranteed to vanish as we increase the training data size, thus asymptotically optimal.
Minyoung Kim, Timothy Hospedales
2023-05-08T18:21:41Z
http://arxiv.org/abs/2305.04979v1
# FedHB: Hierarchical Bayesian Federated Learning

###### Abstract

We propose a novel hierarchical Bayesian approach to Federated Learning (FL), where our model reasonably describes the generative process of clients' local data via hierarchical Bayesian modeling: constituting random variables of local models for clients that are governed by a higher-level global variate. Interestingly, the variational inference in our Bayesian model leads to an optimisation problem whose block-coordinate descent solution becomes a distributed algorithm that is separable over clients and allows them not to reveal their own private data at all, thus fully compatible with FL. We also highlight that our block-coordinate algorithm has particular forms that subsume the well-known FL algorithms including Fed-Avg and Fed-Prox as special cases. Beyond introducing novel modeling and derivations, we also offer convergence analysis showing that our block-coordinate FL algorithm converges to a (local) optimum of the objective at the rate of \(O(1/\sqrt{t})\), the same rate as regular (centralised) SGD, as well as a generalisation error analysis where we prove that the test error of our model on unseen data is guaranteed to vanish as we increase the training data size, thus asymptotically optimal.

## 1 Introduction

Federated Learning (FL) aims to enable a set of clients to collaboratively train a model in a privacy-preserving manner, without sharing data with each other or a central server. Compared to conventional centralised optimisation problems, FL comes with a host of statistical and systems challenges - such as communication bottlenecks and sporadic participation. The key statistical challenge is non-i.i.d. data distributions across clients, each of which has a different data collection bias and potentially a different data annotation policy/labeling function (e.g., in the case of any user preference learning). The classic and most popularly deployed FL algorithms are FedAvg [43] and FedProx [33]; however, even when a global model can be learned, it often underperforms on each client's local data distribution in scenarios of high heterogeneity [35; 29; 55]. Studies have attempted to alleviate this by personalising learning at each client, allowing each local model to deviate from the shared global model [51]. However, this remains challenging given that each client may have only a limited amount of local data for personalised learning.

These challenges have motivated several attempts to model the FL problem from a Bayesian perspective. Introducing distributions on model parameters \(\theta\) has enabled various schemes for estimating a global model posterior \(p(\theta|D_{1:N})\) from clients' local posteriors \(p(\theta|D_{i})\), or to regularise the learning of local models given a prior defined by the global model [59; 4; 15]. However, these methods are not complete and principled solutions - having not yet provided full Bayesian descriptions of the FL problem, and having had to resort to ad-hoc treatments to achieve tractable learning. The key difference is that they fundamentally treat network weights \(\theta\) as a random variable shared across all clients. We introduce a _hierarchical_ Bayesian model that assigns each client its own random variable for model weights \(\theta_{i}\), and these are linked via a higher-level random variable \(\phi\) as \(p(\theta_{1:N},\phi)=p(\phi)\prod_{i=1}^{N}p(\theta_{i}|\phi)\).
This has several crucial benefits. Firstly, given this hierarchy, variational inference in our framework decomposes into separable optimisation problems over the \(\theta_{i}\)'s and \(\phi\), enabling a practical Bayesian learning algorithm to be derived that is fully compatible with FL constraints, without resorting to ad-hoc treatments or strong assumptions. Secondly, this framework can be instantiated with different assumptions on \(p(\theta_{i}|\phi)\) to deal elegantly and robustly with different kinds of statistical heterogeneity, as well as for principled and effective model personalisation. Our resulting algorithm, termed Federated Hierarchical Bayes (FedHB), is empirically effective, as we demonstrate in a wide range of experiments on established benchmarks. More importantly, it benefits from rigorous theoretical support. In particular, we provide convergence guarantees showing that FedHB has the same \(O(1/\sqrt{T})\) convergence rate as centralised SGD algorithms, which is not provided by related prior art [59; 15]. We also provide a generalisation bound showing that FedHB is asymptotically optimal, which has not been shown by prior work such as [4]. Furthermore, we show that FedHB subsumes the classic methods FedAvg [43] and FedProx [33] as special cases, and ultimately provides additional justification and explanation for these seminal methods.

## 2 Bayesian FL: General Framework

We introduce two types of latent random variables, \(\phi\) and \(\{\theta_{i}\}_{i=1}^{N}\). Each \(\theta_{i}\) is deployed as the network weights for client \(i\)'s backbone. The variable \(\phi\) can be viewed as a globally shared variable that is responsible for linking the individual client parameters \(\theta_{i}\). We assume conditionally independent and identical priors, \(p(\theta_{1:N}|\phi)=\prod_{i=1}^{N}p(\theta_{i}|\phi)\). Thus the prior for the latent variables \((\phi,\{\theta_{i}\}_{i=1}^{N})\) is formed in a hierarchical manner as (1). The local data for client \(i\), denoted by \(D_{i}\), is generated\({}^{1}\) by \(\theta_{i}\):

Footnote 1: Note that we do not deal with generative modeling of input images \(x\). Inputs \(x\) are always given, and only conditionals \(p(y|x)\) are modeled. See Fig. 1(b) for the in-depth graphical model diagram.

\[\text{(Prior)}\;\;p(\phi,\theta_{1:N})=p(\phi)\prod_{i=1}^{N}p(\theta_{i}|\phi)\quad\text{ (Likelihood)}\;\;p(D_{i}|\theta_{i})=\prod_{(x,y)\in D_{i}}p(y|x,\theta_{i}), \tag{1}\]

where \(p(y|x,\theta_{i})\) is a conventional neural network model (e.g., a softmax link for classification tasks). See the graphical model in Fig. 1(a), where the iid clients are governed by a single random variable \(\phi\). Given the data \(D_{1},\dots,D_{N}\), we infer the posterior, \(p(\phi,\theta_{1:N}|D_{1:N})\propto p(\phi)\prod_{i=1}^{N}p(\theta_{i}|\phi)p(D_{i}|\theta_{i})\), which is intractable in general, and we adopt variational inference to approximate it:

\[q(\phi,\theta_{1:N};L):=q(\phi;L_{0})\prod_{i=1}^{N}q_{i}(\theta_{i};L_{i}), \tag{2}\]

where the variational parameters \(L\) consist of \(L_{0}\) (the parameters of \(q(\phi)\)) and the \(L_{i}\)'s (the parameters of the \(q_{i}(\theta_{i})\)'s from individual clients). Note that although the \(\theta_{i}\)'s are independent across clients under (2), they are modeled differently (emphasised by the subscript \(i\) in the notation \(q_{i}\)), reflecting the different posterior beliefs originating from the heterogeneity of the local data \(D_{i}\).
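To make the generative process in (1) concrete, here is a toy sampling sketch (our illustration only; the Gaussian perturbation and the logistic likelihood are simplifying assumptions, not the concrete models of Sec. 3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instantiation of the hierarchy in (1): a global variate phi is drawn
# once, each client i draws its own weights theta_i around phi, and local
# labels follow p(y | x, theta_i). Inputs x are given, not modeled.
d, n_clients, n_points = 5, 3, 100
phi = rng.normal(size=d)                              # phi ~ p(phi)
for i in range(n_clients):
    theta_i = phi + 0.1 * rng.normal(size=d)          # theta_i ~ p(theta_i | phi)
    X = rng.normal(size=(n_points, d))                # given inputs
    logits = X @ theta_i
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))  # y ~ p(y | x, theta_i)
```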
### From Variational Inference to Federated Learning Algorithm

Using standard variational inference techniques [10; 30], we can derive the ELBO objective function (details in Appendix A). We denote the _negative_ ELBO by \(\mathcal{L}\) (to be minimised over \(L\)):

\[\mathcal{L}(L):=\sum_{i=1}^{N}\Big{(}\mathbb{E}_{q_{i}(\theta_{i})}[-\log p(D_{i}|\theta_{i})]+\mathbb{E}_{q(\phi)}\big{[}\text{KL}(q_{i}(\theta_{i})||p(\theta_{i}|\phi))\big{]}\Big{)}+\text{KL}(q(\phi)||p(\phi)), \tag{3}\]

where we drop the dependency on \(L\) in the notation for simplicity.

Figure 1: Graphical models. (a) Plate view of iid clients. (b) Individual client data with input images \(x\) given and only \(p(y|x)\) modeled. (c) \(\&\) (d): Global prediction and personalisation as probabilistic inference problems (shaded nodes \(=\) _evidences_, red colored nodes \(=\) _targets_ to infer, \(x^{*}=\) test input in global prediction, \(D^{p}=\) training data for personalisation and \(x^{p}=\) test input).

Instead of optimising (3) over the parameters \(L\) jointly, as is usual practice, we consider block-wise optimisation, also known as _block-coordinate optimisation_ [56], specifically alternating two steps: (i) updating/optimising all \(L_{i}\)'s, \(i=1,\ldots,N\), while fixing \(L_{0}\), and (ii) updating \(L_{0}\) with all \(L_{i}\)'s fixed. That is,

* Optimisation over \(L_{1},\ldots,L_{N}\) (\(L_{0}\) fixed). \[\min_{\{L_{i}\}_{i=1}^{N}}\;\sum_{i=1}^{N}\Big{(}\mathbb{E}_{q_{i}(\theta_{i})}[-\log p(D_{i}|\theta_{i})]+\mathbb{E}_{q(\phi)}\big{[}\text{KL}(q_{i}(\theta_{i})||p(\theta_{i}|\phi))\big{]}\Big{)}.\] (4) As (4) is completely separable over \(i\), we can optimise each summand independently: \[\min_{L_{i}}\;\mathcal{L}_{i}(L_{i}):=\mathbb{E}_{q_{i}(\theta_{i};L_{i})}[-\log p(D_{i}|\theta_{i})]+\mathbb{E}_{q(\phi;L_{0})}\big{[}\text{KL}(q_{i}(\theta_{i};L_{i})||p(\theta_{i}|\phi))\big{]}.\] (5) So (5) constitutes the local update/optimisation for client \(i\). Note that each client \(i\) needs to access only its own private data \(D_{i}\), without data from others, and is thus fully compatible with FL.
* Optimisation over \(L_{0}\) (\(L_{1},\ldots,L_{N}\) fixed). \[\min_{L_{0}}\;\mathcal{L}_{0}(L_{0}):=\text{KL}(q(\phi;L_{0})||p(\phi))-\sum_{i=1}^{N}\mathbb{E}_{q(\phi;L_{0})q_{i}(\theta_{i};L_{i})}[\log p(\theta_{i}|\phi)].\] (6) This constitutes the server update criterion, with the latest \(q_{i}(\theta_{i};L_{i})\)'s from the local clients held fixed. Remarkably, the server need not access any local data at all, which is suitable for FL. This nice property originates from the independence assumption in our approximate posterior (2).

**Interpretation.** First, the server's loss function (6) tells us that the server needs to update \(q(\phi;L_{0})\) in such a way that (i) it puts mass on those \(\phi\) that have high compatibility scores \(\log p(\theta_{i}|\phi)\) with the current local models \(\theta_{i}\sim q_{i}(\theta_{i})\), thus aiming to be aligned with the local models, and (ii) it does not deviate from the prior \(p(\phi)\). The clients' loss function (5) indicates that each client \(i\) needs to minimise the class prediction error on its own data \(D_{i}\) (first term) and, at the same time, to stay close to the current global standard \(\phi\sim q(\phi)\) by reducing the KL divergence from \(p(\theta_{i}|\phi)\) (second term).
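Schematically, one communication round of this block-coordinate scheme can be written as follows (a structural sketch only; `local_step` and `server_step` stand for whatever optimisers are used for (5) and (6), and the client objects are hypothetical):

```python
def run_round(clients, L0, local_step, server_step):
    """One federated round of the block-coordinate descent on Eq. (3)."""
    # Step (i): every client updates its own L_i given the fixed global L0,
    # minimising Eq. (5) on its private data D_i (never shared).
    for c in clients:
        c.L = local_step(c.data, c.L, L0)
    # Step (ii): the server updates L0 by minimising Eq. (6),
    # using only the clients' variational parameters L_i.
    return server_step(L0, [c.L for c in clients])
```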
### Formalisation of Global Prediction and Personalisation Tasks

Two important tasks in FL are _global prediction_ and _personalisation_. The former evaluates the trained model on novel test data sampled from a distribution possibly different from the training data. Personalisation is the task of adapting the trained model to a new dataset called the personalised data. In our Bayesian model, these two tasks can be formally defined as Bayesian inference problems.

**Global prediction.** The task is to predict the class label of a novel test input \(x^{*}\) which may or may not come from the same distributions as the training data \(D_{1},\ldots,D_{N}\). Under our Bayesian model, it can be turned into a probabilistic inference problem \(p(y^{*}|x^{*},D_{1:N})\). Let \(\theta\) be the local model that generates the output \(y^{*}\) given \(x^{*}\). Exploiting conditional independence from Fig. 1(c),

\[p(y^{*}|x^{*},D_{1:N})=\iint p(y^{*}|x^{*},\theta)\;p(\theta|\phi)\;p(\phi|D_{1:N})\;d\theta d\phi \tag{7}\]
\[\approx\iint p(y^{*}|x^{*},\theta)\;p(\theta|\phi)\;q(\phi)\;d\theta d\phi\;=\int p(y^{*}|x^{*},\theta)\;\bigg{(}\int p(\theta|\phi)\;q(\phi)d\phi\bigg{)}\;d\theta, \tag{8}\]

where in (8) we use \(p(\phi|D_{1:N})\approx q(\phi)\) (see Appendix B for details). The inner integral (in parentheses) in (8) either admits a closed form (Sec. 3.1) or can be approximated (e.g., by Monte-Carlo estimation).

**Personalisation.** This formally refers to the task of learning a prediction model \(\hat{p}(y|x)\) given an unseen (personal) training dataset \(D^{p}\) that comes from some unknown distribution \(p^{p}(x,y)\), so that the personalised model \(\hat{p}\) performs well on novel (in-distribution) test points \((x^{p},y^{p})\sim p^{p}(x,y)\). Evidently we need to exploit (and benefit from) the model trained in the FL training stage. To this end, many existing approaches simply resort to _fine-tuning_, that is, training on \(D^{p}\) warm-starting from the FL-trained model. However, a potential issue is the lack of a solid principle for balancing the initial FL-trained model against fitting the personal data, so as to avoid both underfitting and overfitting. In our Bayesian framework, personalisation can be seen as another posterior inference problem with the _additional evidence_ of the personal training data \(D^{p}\). Prediction on a test point \(x^{p}\) amounts to inferring:

\[p(y^{p}|x^{p},D^{p},D_{1:N})=\int p(y^{p}|x^{p},\theta)\;p(\theta|D^{p},D_{1:N})\;d\theta. \tag{9}\]

So, it boils down to the task of posterior inference \(p(\theta|D^{p},D_{1:N})\) given both the personal data \(D^{p}\) and the FL training data \(D_{1:N}\). Under our hierarchical model, by exploiting conditional independence from the graphical model (Fig. 1(d)), we can link the posterior to our FL-trained \(q(\phi)\) as follows:

\[p(\theta|D^{p},D_{1:N})\approx\int p(\theta|D^{p},\phi)\ p(\phi|D_{1:N})\ d\phi\;\approx\int p(\theta|D^{p},\phi)\ q(\phi)\ d\phi\;\approx\;p(\theta|D^{p},\phi^{*}), \tag{10}\]

where we disregard the impact of \(D^{p}\) on the higher-level \(\phi\) given the joint evidence, \(p(\phi|D^{p},D_{1:N})\approx p(\phi|D_{1:N})\), due to the dominance of \(D_{1:N}\) compared to the smaller \(D^{p}\). See Appendix B for details. The last part of (10) makes an approximation using the mode \(\phi^{*}\) of \(q(\phi)\), which is reasonable for our two modeling choices for \(q(\phi)\) to be discussed in Sec. 3.1 and Sec. 3.2.
Since dealing with \(p(\theta|D^{p},\phi^{*})\) involves the difficult marginalisation \(p(D^{p}|\phi^{*})=\int p(D^{p}|\theta)p(\theta|\phi^{*})d\theta\), we adopt variational inference, introducing a tractable variational distribution \(v(\theta)\approx p(\theta|D^{p},\phi^{*})\). Following the usual variational inference derivations, we have the negative ELBO objective (for personalisation):

\[\min_{v}\ \mathbb{E}_{v(\theta)}[-\log p(D^{p}|\theta)]+\text{KL}(v(\theta)||p(\theta|\phi^{*})). \tag{11}\]

Once we have the optimised \(v\), our predictive distribution becomes (\(S=\) the number of MC samples):

\[p(y^{p}|x^{p},D^{p},D_{1:N})\approx\frac{1}{S}\sum_{s=1}^{S}p(y^{p}|x^{p},\theta^{(s)}),\;\;\text{where}\;\;\theta^{(s)}\sim v(\theta), \tag{12}\]

which simply requires feeding the test input \(x^{p}\) forward through the sampled networks \(\theta^{(s)}\) and averaging. Thus far, we have discussed a general framework, deriving how the variational inference for our Bayesian model fits gracefully into the FL problem. In the next section, we define specific density families for the prior (\(p(\phi)\), \(p(\theta_{i}|\phi)\)) and posterior (\(q(\phi)\), \(q_{i}(\theta_{i})\)) as our proposed concrete models.

## 3 Bayesian FL: Two Concrete Models

We propose two different model choices that we find the most interesting: **Normal-Inverse-Wishart** (Sec. 3.1) and **Mixture** (Sec. 3.2). To avoid distraction, we keep this section concise, putting only the final results and discussions here and leaving all mathematical details to Appendices C and D.

### Normal-Inverse-Wishart (NIW) Model

We define the prior as a conjugate form of Gaussian and Normal-Inverse-Wishart. With \(\phi=(\mu,\Sigma)\),

\[p(\phi)=\mathcal{NIW}(\mu,\Sigma;\Lambda)=\mathcal{N}(\mu;\mu_{0},\lambda_{0}^{-1}\Sigma)\cdot\mathcal{IW}(\Sigma;\Sigma_{0},\nu_{0}), \tag{13}\]
\[p(\theta_{i}|\phi)=\mathcal{N}(\theta_{i};\mu,\Sigma),\;\;i=1,\ldots,N, \tag{14}\]

where \(\Lambda=\{\mu_{0},\Sigma_{0},\lambda_{0},\nu_{0}\}\) collects the parameters of the NIW. Although \(\Lambda\) can be learned via data marginal likelihood maximisation (e.g., empirical Bayes), for simplicity we leave it fixed as\({}^{2}\): \(\mu_{0}=0\), \(\Sigma_{0}=I\), \(\lambda_{0}=1\), and \(\nu_{0}=d+2\), where \(d\) is the number of parameters in \(\theta_{i}\) or \(\mu\). Next, our choice of the variational density family for \(q(\phi)\) is the NIW, not just because it is the most popular parametric family for a pair of a mean vector and a covariance matrix \(\phi=(\mu,\Sigma)\), but also because it admits closed-form expressions in the ELBO function due to the conjugacy, as we derive in Appendix C.1.

Footnote 2: This choice ensures that the mean of \(\Sigma\) equals \(I\), and \(\mu\) is distributed as a 0-mean Gaussian with covariance \(\Sigma\).

\[q(\phi):=\mathcal{NIW}(\phi;\{m_{0},V_{0},l_{0},n_{0}\})=\mathcal{N}(\mu;m_{0},l_{0}^{-1}\Sigma)\cdot\mathcal{IW}(\Sigma;V_{0},n_{0}). \tag{15}\]

Although the scalar parameters \(l_{0}\), \(n_{0}\) can be optimised together with \(m_{0}\), \(V_{0}\), their impact is less influential, and we find that they make the ELBO optimisation somewhat cumbersome. So we fix \(l_{0}\), \(n_{0}\) at some near-optimal values by exploiting the conjugacy of the NIW under a Gaussian likelihood (details in Appendix C), and regard \(m_{0},V_{0}\) as the variational parameters, \(L_{0}=\{m_{0},V_{0}\}\). We restrict \(V_{0}\) to be diagonal for computational tractability.
The density family for the \(q_{i}(\theta_{i})\)'s could be a Gaussian, but we find that it is computationally more attractive and numerically more stable to adopt a mixture of two spiky Gaussians, which leads to MC-Dropout [23]. That is,

\[q_{i}(\theta_{i})=\prod_{l}\big{(}p\cdot\mathcal{N}(\theta_{i}[l];m_{i}[l],\epsilon^{2}I)+(1-p)\cdot\mathcal{N}(\theta_{i}[l];0,\epsilon^{2}I)\big{)}, \tag{16}\]

where (i) \(m_{i}\) is the only variational parameter (\(L_{i}=\{m_{i}\}\)), (ii) \(\cdot[l]\) indicates a column/layer in the neural network parameters, where \(l\) goes over the layers and columns of the weight matrices, (iii) \(p\) is the (user-specified) hyperparameter where \(1-p\) corresponds to the dropout probability, and (iv) \(\epsilon\) is a small constant (e.g., \(10^{-4}\)) that makes the two Gaussians spiky, close to delta functions.

**Client update.** We apply the general client update optimisation (5) to the NIW model. Following the approximation of [23] for the KL divergence between a mixture of Gaussians (16) and a Gaussian (14), we have the client local optimisation (details in Appendix C):

\[\min_{m_{i}}\ \mathcal{L}_{i}(m_{i}):=-\log p(D_{i}|\tilde{m}_{i})+\frac{p}{2}(n_{0}+d+1)(m_{i}-m_{0})^{\top}V_{0}^{-1}(m_{i}-m_{0}), \tag{17}\]

where \(\tilde{m}_{i}\) is the dropout version of \(m_{i}\), i.e., a reparametrised sample from (16). Note that \(m_{0}\) and \(V_{0}\) are fixed during the optimisation. Interestingly, (17) generalises Fed-Avg [43] and Fed-Prox [33]: with \(p=1\) (i.e., no dropout) and setting \(V_{0}=\alpha I\), (17) reduces to the client update formula of Fed-Prox, where the constant \(\alpha\) controls the impact of the proximal term.

**Server update.** The general server optimisation (6) admits the closed-form solution (Appendix C):

\[m_{0}^{*}=\frac{p}{N+1}\sum_{i=1}^{N}m_{i},\ \ V_{0}^{*}=\frac{n_{0}}{N+d+2}\Bigg{(}(1+N\epsilon^{2})I+m_{0}^{*}(m_{0}^{*})^{\top}+\sum_{i=1}^{N}\rho(m_{0}^{*},m_{i},p)\Bigg{)}, \tag{18}\]

where \(\rho(m_{0},m_{i},p)=pm_{i}m_{i}^{\top}-pm_{0}m_{i}^{\top}-pm_{i}m_{0}^{\top}+m_{0}m_{0}^{\top}\). Note that the \(m_{i}\)'s are fixed at the clients' latest variational parameters. It is interesting to see that \(m_{0}^{*}\) in (18) generalises the well-known aggregation step of averaging local models in Fed-Avg [43] and related methods: when \(p=1\) (no dropout), it almost\({}^{3}\) equals client model averaging. Also, since \(\rho(m_{0}^{*},m_{i},p=1)=(m_{i}-m_{0}^{*})(m_{i}-m_{0}^{*})^{\top}\) when \(p=1\), \(V_{0}^{*}\) essentially estimates the sample scatter matrix with \((N+1)\) samples, namely the clients' \(m_{i}\)'s and the server's prior \(\mu_{0}=0\), measuring how much they deviate from the center \(m_{0}^{*}\). Dropout is known to help regularise the model and lead to better generalisation [23], and with \(p<1\) our (18) forms a principled optimal solution.

Footnote 3: Only the constant 1 is added to the denominator, which comes from the prior and has a regularising effect.
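For concreteness, the client step (17) and the diagonal version of the server update (18) can be sketched as follows (a PyTorch-style illustration under our own simplifying assumptions: flattened parameters, diagonal \(V_{0}\), and dropout realised inside the model; not the authors' released code):

```python
import torch

def client_step(model, batch, m0, v0_diag, p_keep, n0, optimizer):
    """One local step for Eq. (17): NLL on dropout-perturbed weights plus
    the proximal penalty (p/2)(n0+d+1) (m_i-m0)^T V0^{-1} (m_i-m0)."""
    x, y = batch
    optimizer.zero_grad()
    nll = torch.nn.functional.cross_entropy(model(x), y)   # dropout acts inside
    m_i = torch.nn.utils.parameters_to_vector(model.parameters())
    d = m_i.numel()
    prox = 0.5 * p_keep * (n0 + d + 1) * ((m_i - m0) ** 2 / v0_diag).sum()
    (nll + prox).backward()
    optimizer.step()

def server_update(client_ms, p_keep, n0, eps=1e-4):
    """Closed-form server update, Eq. (18), keeping only the diagonal of V0."""
    M = torch.stack(client_ms)                    # (N, d) client means m_i
    N, d = M.shape
    m0 = p_keep * M.sum(dim=0) / (N + 1)
    # Diagonal of m0 m0^T + sum_i rho(m0, m_i, p):
    rho_diag = (p_keep * M ** 2 - 2 * p_keep * M * m0 + m0 ** 2).sum(dim=0)
    v0_diag = n0 / (N + d + 2) * ((1 + N * eps ** 2) + m0 ** 2 + rho_diag)
    return m0, v0_diag
```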
**Global prediction.** The inner integral of (8) becomes a multivariate Student-\(t\) distribution. Then the predictive distribution for a new test input \(x^{*}\) can be estimated as\({}^{4}\):

\[p(y^{*}|x^{*},D_{1:N})\approx\frac{1}{S}\sum_{s=1}^{S}p(y^{*}|x^{*},\theta^{(s)}),\ \ \mbox{where}\ \ \theta^{(s)}\sim t_{n_{0}-d+1}\bigg{(}\theta;m_{0},\frac{(l_{0}+1)V_{0}}{l_{0}(n_{0}-d+1)}\bigg{)}, \tag{19}\]

where \(t_{\nu}(a,B)\) is the multivariate Student-\(t\) with location \(a\), scale matrix \(B\), and d.o.f. \(\nu\).

Footnote 4: In practice we use a single sample (\(S=1\)) for computational efficiency.

**Personalisation.** With the given personalisation training data \(D^{p}\), we follow the general framework in (11) to find \(v(\theta)\approx p(\theta|D^{p},\phi^{*})\) in a variational way, where \(\phi^{*}\) is obtained from (40). We adopt the same spiky mixture form (16) for \(v(\theta)\), which leads to a learning objective similar to (17).

### Mixture Model

Our motivation for the mixture model is to make the prior \(p(\theta,\phi)\) more flexible by having multiple different prototypes, diverse enough to cover the heterogeneity in data distributions across clients. We consider:

\[p(\phi)=\prod_{j=1}^{K}\mathcal{N}(\mu_{j};0,I),\quad p(\theta_{i}|\phi)=\sum_{j=1}^{K}\frac{1}{K}\mathcal{N}(\theta_{i};\mu_{j},\sigma^{2}I), \tag{20}\]

where \(\phi=\{\mu_{1},\ldots,\mu_{K}\}\) contains \(K\) networks (prototypes) that can broadly cover the clients' data distributions, and \(\sigma\) is a hyperparameter that captures the perturbation scale, chosen by the user or learned from data. Note that we put equal mixing proportions \(1/K\) due to symmetry. That is, a priori, each client can take any of the \(\mu_{j}\)'s equally likely. For the variational densities, we define:

\[q_{i}(\theta_{i})=\mathcal{N}(\theta_{i};m_{i},\epsilon^{2}I),\quad q(\phi)=\prod_{j=1}^{K}\mathcal{N}(\mu_{j};r_{j},\epsilon^{2}I), \tag{21}\]

where the \(\{r_{j}\}_{j=1}^{K}\) (\(L_{0}\)) and \(m_{i}\) (\(L_{i}\)) are the variational parameters, and \(\epsilon\) is a small constant (e.g., \(10^{-4}\)).

**Client update.** For our model choice, the general client update (5) reduces to (details in Appendix D):

\[\min_{m_{i}}\ \mathbb{E}_{q_{i}(\theta_{i})}[-\log p(D_{i}|\theta_{i})]-\log\sum_{j=1}^{K}\exp\bigg{(}-\frac{||m_{i}-r_{j}||^{2}}{2\sigma^{2}}\bigg{)}. \tag{22}\]

It is interesting to see that (22) can be viewed as a generalisation of Fed-Prox [33], where the proximal regularisation term of Fed-Prox is extended to _multiple_ global models \(r_{j}\), penalising the local model \(m_{i}\) for straying away from these prototypes. If we use a single prototype (\(K=1\)), the optimisation (22) reduces exactly to the local update objective of Fed-Prox. Since log-sum-exp is approximately equal to max, the regularisation term in (22) effectively focuses on the global prototype \(r_{j}\) closest to the current local model \(m_{i}\), which is intuitively well aligned with our motivation.

**Server update.** The general form (6) can be approximately turned into (see Appendix D for derivations):

\[\min_{\{r_{j}\}_{j=1}^{K}}\frac{1}{2}\sum_{j=1}^{K}||r_{j}||^{2}-\sum_{i=1}^{N}\log\sum_{j=1}^{K}\exp\bigg{(}-\frac{||m_{i}-r_{j}||^{2}}{2\sigma^{2}}\bigg{)}. \tag{23}\]

Interestingly, (23) generalises the well-known aggregation step of averaging local models in Fed-Avg and related methods: especially when \(K=1\), (23) reduces to a quadratic optimisation, admitting the optimal solution \(r_{1}^{*}=\frac{1}{N+\sigma^{2}}\sum_{i=1}^{N}m_{i}\). The extra term \(\sigma^{2}\) can be explained by incorporating an extra _zero_ local model originating from the prior (interpreted as a _neutral_ model) with the discounted weight \(\sigma^{2}\) rather than \(1\).
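A compact NumPy sketch of this server-side aggregation (our illustration, not the authors' code) is given below; it implements the EM steps stated in (24) just after, and reduces to the closed form \(r_{1}^{*}\) above when \(K=1\):

```python
import numpy as np

def server_em_update(client_means, prototypes, sigma2, n_steps=1):
    """EM steps for the mixture server update (Eq. 24).

    client_means: (N, d) array of client variational means m_i.
    prototypes:   (K, d) array of current prototypes r_j.
    """
    m, r = np.asarray(client_means), np.asarray(prototypes)
    N = m.shape[0]
    for _ in range(n_steps):
        # E-step: responsibilities c(j|i) from squared distances.
        d2 = ((m[:, None, :] - r[None, :, :]) ** 2).sum(-1)   # (N, K)
        logits = -d2 / (2.0 * sigma2)
        logits -= logits.max(axis=1, keepdims=True)           # numerical stability
        c = np.exp(logits)
        c /= c.sum(axis=1, keepdims=True)                     # (N, K)
        # M-step: responsibility-weighted average, shrunk by sigma2/N.
        num = (c[:, :, None] * m[:, None, :]).mean(axis=0)    # (K, d)
        den = sigma2 / N + c.mean(axis=0)                     # (K,)
        r = num / den[:, None]
    return r
```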
Although (23) for \(K>1\) can be solved by standard gradient descent, we apply the Expectation-Maximisation (EM) algorithm\({}^{5}\) [18] instead:

\[\text{(E-step)}\ c(j|i)=\frac{e^{-||m_{i}-r_{j}||^{2}/(2\sigma^{2})}}{\sum_{j=1}^{K}e^{-||m_{i}-r_{j}||^{2}/(2\sigma^{2})}},\quad\text{(M-step)}\ r_{j}^{*}=\frac{\frac{1}{N}\sum_{i=1}^{N}c(j|i)\cdot m_{i}}{\frac{\sigma^{2}}{N}+\frac{1}{N}\sum_{i=1}^{N}c(j|i)}. \tag{24}\]

Footnote 5: Instead of performing several EM steps until convergence, in practice we find that only one EM step is sufficient.

The M-step (server update) has the intuitive meaning that the new prototype \(r_{j}\) becomes the _weighted_ average of the local models \(m_{i}\), where the weights \(c(j|i)\) are determined by the proximity between \(m_{i}\) and \(r_{j}\) (i.e., those \(m_{i}\)'s that are closer to \(r_{j}\) contribute more, and vice versa). This can be seen as an extension of the aggregation step in Fed-Avg to the multiple-prototype case.

**Global prediction.** We slightly modify our general approach so that individual client data are dominantly explained by the most relevant model \(r_{j}\), by introducing a gating function from the mixture of experts [27; 28]. See Appendix D for details.

**Personalisation.** With \(v(\theta)\) of the same form as \(q_{i}(\theta_{i})\), the VI learning becomes similar to (22).

## 4 Theoretical Analysis

We provide two theoretical results for our Bayesian FL algorithm. (**Convergence analysis**) As a special block-coordinate optimisation algorithm, our method is shown to converge to a (local) optimum of the training objective (3). (**Generalisation error bound**) We theoretically show how well this optimal model, trained on empirical data, performs on unseen test data points. Due to the space limit, full details and proofs are given in Appendices E and F, and we only state the theorems and remarks here.

**Theorem 4.1** (Convergence analysis).: _We denote the objective function in (3) by \(f(x)\) where \(x=[x_{0},x_{1},\dots,x_{N}]\) corresponds to the variational parameters \(x_{0}:=L_{0}\), \(x_{1}:=L_{1}\), \(\dots\), \(x_{N}:=L_{N}\). Let \(\eta_{t}=\overline{L}+\sqrt{t}\) for some constant \(\overline{L}\), and \(\overline{x}^{T}=\frac{1}{T}\sum_{t=1}^{T}x^{t}\), where \(t\) is the batch iteration counter, \(x^{t}\) is the iterate at \(t\) obtained by following our FL algorithm, and \(N_{f}\) (\(\leq N\)) is the number of participating clients at each round. With Assumptions 1-3 in Appendix E, the following holds for any \(T\):_

\[\mathbb{E}[f(\overline{x}^{T})]-f(x^{*})\leq\frac{N+N_{f}}{N_{f}}\cdot\frac{\frac{\sqrt{T}+\overline{L}}{2}D^{2}+R_{f}^{2}\sqrt{T}}{T}=O\Big{(}\frac{1}{\sqrt{T}}\Big{)}, \tag{25}\]

_where \(x^{*}\) is the (local) optimum, \(D\) and \(R_{f}\) are some constants, and the expectation is taken over the randomness in minibatches and the selection of participating clients._

_Remark_.: The theorem says that \(\overline{x}^{t}\) converges to the optimal point \(x^{*}\) in expectation at the rate of \(O(1/\sqrt{t})\). This rate asymptotically equals that of the conventional (non-block-coordinate, holistic) SGD algorithm.

**Theorem 4.2** (Generalisation error bound).: _Assume that the variational density family for \(q_{i}(\theta_{i})\) is rich enough to subsume Gaussian. Let \(d^{2}(P_{\theta_{i}},P^{i})\) be the expected squared Hellinger distance between the true class distribution \(P^{i}(y|x)\) and the model's \(P_{\theta_{i}}(y|x)\) for client \(i\)'s data._
_The optimal solution \((\{q_{i}^{*}(\theta_{i})\}_{i=1}^{N},q^{*}(\phi))\) of the optimisation problem (3) satisfies:_

\[\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q_{i}^{*}(\theta_{i})}[d^{2}(P_{\theta_{i}},P^{i})]\ \leq\ O\bigg{(}\frac{1}{n}\bigg{)}+C\cdot\epsilon_{n}^{2}+C^{\prime}\bigg{(}r_{n}+\frac{1}{N}\sum_{i=1}^{N}\lambda_{i}^{*}\bigg{)}, \tag{26}\]

_with high probability, where \(C,C^{\prime}>0\) are constants, \(\lambda_{i}^{*}=\min_{\theta\in\Theta}||f_{\theta}-f^{i}||_{\infty}^{2}\) is the best error within our backbone network family \(\Theta\), and \(r_{n},\epsilon_{n}\to 0\) as the training data size \(n\rightarrow\infty\)._

_Remark_.: The theorem implies that the optimal solution of (3) (attainable by our block-coordinate FL algorithm) is _asymptotically optimal_, since the RHS of (26) converges to \(0\) as the training data size \(n\rightarrow\infty\).

## 5 Related Work

Due to the lack of space, here we point out only the key differences between the proposed approach and existing methods closely related to ours, leaving all references and detailed discussions to Appendix I.

**Bayesian or ensemble FL approaches.** Some recent studies have tried to tackle the FL problem using Bayesian or ensemble-based methods. As we mentioned earlier, the key difference is that most methods do not introduce a Bayesian _hierarchy_ in a principled manner. Instead, they ultimately treat network weights \(\theta\) as a random variable _shared_ across all clients. On the other hand, our approach assigns an individual \(\theta_{i}\) to each client \(i\), governed by a common prior \(p(\theta_{i}|\phi)\). The non-hierarchical approaches mostly resort to ad hoc heuristics and/or strong assumptions in their algorithms. For instance, **FedPA** [4] aims to establish the product-of-experts decomposition, \(p(\theta|D_{1:N})\propto\prod_{i=1}^{N}p(\theta|D_{i})\), to allow client-wise inference of \(p(\theta|D_{i})\). However, this decomposition does not hold in general unless the strong assumption of an uninformative prior \(p(\theta)\propto 1\) is made. **FedBE** (Bayesian Ensemble) [15] aims to build the global posterior distribution \(p(\theta|D_{1:N})\) from the individual posteriors \(p(\theta|D_{i})\) in somewhat ad hoc ways. **FedEM** [40] forms a seemingly reasonable hypothesis that local client data distributions can be identified as mixtures of a fixed number of base distributions (with different mixing proportions). Although it involves sophisticated probabilistic modeling, this method is not a Bayesian approach. **pFedBayes** [59] can be seen as an implicit regularisation-based method to approximate \(p(\theta|D_{1:N})\) from individual posteriors \(p(\theta|D_{i})\). To this end, they introduce the so-called global distribution \(w(\theta)\), which essentially serves as a _regulariser_ to prevent local posteriors from deviating from it. The introduction of \(w(\theta)\) and its update strategy appears to be a hybrid treatment rather than a purely Bayesian perspective. **FedPop** [31] has a hierarchical Bayesian model structure similar to ours, but their model is limited to a linear deterministic model for the shared variate.

**Other Bayesian FL algorithms.** Some approaches [47; 53; 37; 22] have proposed hierarchical Bayesian models that are similar to our model in (graphical model) structure.
However, these algorithms have significant practical limitations and can only run on simple linear models or single-hidden-layer MLPs, mainly due to their use of computationally expensive MCMC sampling [47; 53] or strong reliance on prior-posterior conjugacy [22]. Furthermore, the EM-based optimisation adopted in some approaches [37; 22] can considerably diminish the Bayesian uncertainty modeling effect. Other recent Bayesian methods adopt expectation-propagation (EP) approximations [5; 24]. In particular, the EP update steps are performed locally with the client data. However, neither of these two works is a hierarchical Bayesian model - unlike our individual client modeling, they have a single model \(\theta\) shared across clients, without individual modeling of client data, thus following FedPA-like inference of \(p(\theta|D_{1:N})\). The consequence is that they lack a systematic way to distinctly model global and local parameters for global prediction and personalised prediction, respectively.

Figure 2: Hyperparameter sensitivity analysis and comparison with simple ensemble baselines.

## 6 Evaluation

We evaluate the proposed hierarchical Bayesian models on several FL benchmarks: **CIFAR-100**, **MNIST**, **Fashion-MNIST**, and **EMNIST**. We also report results on the challenging corrupted CIFAR (**CIFAR-C-100**) (in Appendix G), which renders the client data more heterogeneous in both input images and class distributions. Our implementation\({}^{6}\) is based on [44], where MobileNet [26] is used as the backbone, and follows the body-update strategy: the classification head (the last layer) is randomly initialised and fixed during training, with only the network body updated (and both body and head updated during personalisation). We report all results based on this body-update strategy since we observe that it considerably outperforms the full update for our models and the other competing methods. The hyperparameters are: (**NIW**) \(\epsilon=10^{-4}\) and \(p=1-0.001\) (see the ablation study below for other values); (**Mixture**) \(\sigma^{2}=0.1\), \(\epsilon=10^{-4}\), mixture order \(K=2\) (see Appendix G.2 for other values), and the gating network has the same architecture as the main backbone, but with the output cardinality changed to \(K\). Other hyperparameters, including the batch size (\(50\)), learning rate (\(0.1\) initially, decayed by \(0.1\)), and the number of epochs in personalisation (\(5\)), are the same as those in [44].

Footnote 6: We provide detailed pseudocodes in Appendix H.1. The codes to reproduce the results are in the Supplement.

**CIFAR-100.** Following [44], the client data distributions are heterogeneous (non-iid), formed by sharding-based class sampling [43]. More specifically, we partition the data instances in each class into non-overlapping equal-sized shards, and assign \(s\) randomly sampled shards (over all classes) to each of \(N\) clients; a code sketch of this split is given below. Thus the number of shards per user \(s\) controls the degree of data heterogeneity: small \(s\) leads to more heterogeneity, and vice versa. The number of clients is \(N=100\) (each having \(500\) training and \(100\) test samples), and we denote by \(f\) the fraction of participating clients. So, \(N_{f}=\lfloor N\cdot f\rfloor\) clients are randomly sampled at each round to participate in training. Smaller \(f\) makes the FL more challenging, and we test two settings: \(f=1.0\) and \(0.1\). Lastly, the number of epochs for the client local update at each round is denoted by \(\tau\), where we test \(\tau=1\) and \(10\), and the total number of rounds is determined by \(\tau\) as \(\lfloor 320/\tau\rfloor\) for fairness. Note that smaller \(\tau\) incurs more communication cost but often leads to higher accuracy. For the competing methods FedBE [15] and FedEM [40], we set the number of ensemble components or base models to 3. For FedPA [4], the shrinkage parameter is \(\rho=0.01\).
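The sharding-based split just described can be sketched as follows (an illustrative NumPy implementation under our own reading of the protocol, not the authors' code):

```python
import numpy as np

def shard_split(labels, n_clients=100, shards_per_client=10, seed=0):
    """Sharding-based non-iid split: sort indices by class, cut them into
    equal-sized shards, and deal `shards_per_client` random shards to each
    client, so small shard counts yield highly skewed class distributions."""
    rng = np.random.default_rng(seed)
    order = np.argsort(labels, kind="stable")     # group indices by class
    n_shards = n_clients * shards_per_client
    shards = np.array_split(order, n_shards)
    perm = rng.permutation(n_shards)
    return [np.concatenate([shards[j] for j in
                            perm[i * shards_per_client:(i + 1) * shards_per_client]])
            for i in range(n_clients)]

# Example: 100 clients over a CIFAR-100-like label vector (50k train images).
labels = np.repeat(np.arange(100), 500)
clients = shard_split(labels, n_clients=100, shards_per_client=10)
print(len(clients), len(clients[0]))              # 100 clients, 500 images each
```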
**MNIST/F-MNIST/EMNIST.** Following the standard protocols, we set the number of clients \(N=100\), the number of shards per client \(s=5\), the fraction of participating clients per round \(f=0.1\), and the number of local training epochs per round \(\tau=1\) (total number of rounds 100) or \(5\) (total number of rounds 20) for MNIST and F-MNIST. For EMNIST, we have \(N=200\), \(f=0.2\), and \(\tau=1\) (total number of rounds 300). We follow the standard Dirichlet-based client data splitting. For the competing methods FedBE [15] and FedEM [40], we use three-component models. The backbone is an MLP with a single hidden layer of 256 units for MNIST/F-MNIST, while we use a standard ConvNet with two hidden layers for EMNIST.

Table 1: (CIFAR-100) Global prediction and personalisation accuracy.

**Main results and interpretation.** In Tables 1 and 2 (also Table 3 in Appendix G), we compare our methods (NIW and Mixture with \(K\!=\!2\)) against the popular FL methods, including FedAvg [43], FedBABU [44], and FedProx [33], as well as the recent Bayesian/ensemble methods FedPA [4], FedBE [15], pFedBayes [59], FedEM [40], and FedPop [31] (see Sec. 5 and Appendix I). We run the competing methods (implementations based on their public code, or our own implementation if unavailable) with default hyperparameters (e.g., \(\mu=0.01\) for FedProx) and report the results. First of all, our two models (NIW and Mix.) consistently perform the best (by large margins most of the time) in terms of both global prediction and personalisation for nearly all FL settings on the two datasets. This is attributed to the principled Bayesian modeling of the underlying FL data generative process in our approaches, which can be seen as a rigorous generalisation and extension of existing intuitive algorithms such as FedAvg and FedProx. In particular, the superiority of our methods over the other Bayesian/ensemble approaches verifies the effectiveness of modeling client-wise latent variables \(\theta_{i}\) against the commonly used shared-\(\theta\) modeling. Our methods are especially robust in scenarios of significant client data heterogeneity, e.g., CIFAR-C-100 personalisation on data with unseen corruption types in Appendix G (Table 3).

**(Ablation) Hyperparameter sensitivity.** We test the sensitivity to some key hyperparameters in our models. For NIW, we have \(p=1-p_{drop}\), where \(p_{drop}\) is the MC-dropout probability; we used \(p_{drop}=0.001\) in the main experiments. In Fig. 2(a) we report the performance of NIW for different values (\(p_{drop}=0,10^{-4},10^{-2}\)) on CIFAR-100 with the (\(s=100,f=0.1,\tau=1\)) setting. We see that the performance is not very sensitive to \(p_{drop}\) unless it is too large (e.g., \(0.01\)). For the Mixture model, different mixture orders \(K=2,5,10\) are contrasted in Fig. 2(b). As seen, having more mixture components does no harm (no overfitting), but we do not see further improvement over \(K=2\) in our experiments (see also the results on CIFAR-C-100 in Table 5 in Appendix G).
**Further results and analysis.** In the Appendix, we provide further empirical results and analysis: (i) performance on the challenging corrupted CIFAR (CIFAR-C-100) dataset (Appendix G.1), (ii) a comparison between our mixture model and simple ensemble baselines (Fig. 2(b) and Appendix G.3), and (iii) a computational complexity analysis and actual running times (Appendix H.2 and H.3).

## 7 Conclusion

We have proposed a novel hierarchical Bayesian approach to FL in which the block-coordinate descent solution to the variational inference leads to a viable algorithm for FL. Our method not only justifies previous FL algorithms that are intuitive but theoretically less underpinned, but also generalises them further via a principled Bayesian approach. With strong theoretical support in terms of convergence rate and generalisation error, our approach is also empirically shown to be superior to recent FL approaches by a large margin on several benchmarks under various FL settings.

Table 2: (MNIST / Fashion-MNIST / EMNIST) Global prediction and personalisation accuracy.
2308.15888
Generalizing Level Ranking Constraints for Monotone and Convex Aggregates
In answer set programming (ASP), answer sets capture solutions to search problems of interest and thus the efficient computation of answer sets is of utmost importance. One viable implementation strategy is provided by translation-based ASP where logic programs are translated into other KR formalisms such as Boolean satisfiability (SAT), SAT modulo theories (SMT), and mixed-integer programming (MIP). Consequently, existing solvers can be harnessed for the computation of answer sets. Many of the existing translations rely on program completion and level rankings to capture the minimality of answer sets and default negation properly. In this work, we take level ranking constraints into reconsideration, aiming at their generalizations to cover aggregate-based extensions of ASP in a more systematic way. By applying a number of program transformations, ranking constraints can be rewritten in a general form that preserves the structure of monotone and convex aggregates and thus offers a uniform basis for their incorporation into translation-based ASP. The results open up new possibilities for the implementation of translators and solver pipelines in practice.
Tomi Janhunen
2023-08-30T09:04:39Z
http://arxiv.org/abs/2308.15888v1
# Generalizing Level Ranking Constraints

###### Abstract

In answer set programming (ASP), answer sets capture solutions to search problems of interest and thus the efficient computation of answer sets is of utmost importance. One viable implementation strategy is provided by translation-based ASP where logic programs are translated into other KR formalisms such as Boolean satisfiability (SAT), SAT modulo theories (SMT), and mixed-integer programming (MIP). Consequently, existing solvers can be harnessed for the computation of answer sets. Many of the existing translations rely on program completion and level rankings to capture the minimality of answer sets and default negation properly. In this work, we take level ranking constraints into reconsideration, aiming at their generalizations to cover aggregate-based extensions of ASP in a more systematic way. By applying a number of program transformations, ranking constraints can be rewritten in a general form that preserves the structure of monotone and convex aggregates and thus offers a uniform basis for their incorporation into translation-based ASP. The results open up new possibilities for the implementation of translators and solver pipelines in practice.

## 1 Introduction

Answer set programming (ASP, [9]) offers rich rule-based languages for knowledge representation (KR) and reasoning. Given some search or optimization problem of interest, its _encoding_ in ASP is a logic program whose answer sets capture solutions to the problem. Thus the efficient computation of answer sets is of utmost importance. One viable implementation strategy is provided by _translation-based_ ASP where logic programs are translated into other KR formalisms such as Boolean satisfiability (SAT, [6]), SAT modulo theories (SMT, [5]), or mixed-integer programming (MIP, [33]). Consequently, existing solver technology can be harnessed for the computation of answer sets.

The semantics of answer set programming rests on _stable models_ [15] that incorporate a notion of minimality and give a declarative semantics for default negation. Capturing these aspects in satisfaction-based formalisms such as pure SAT is non-trivial; see, e.g., [18, 23]. There are also various syntactic aggregations [3] that enable compact encodings but whose translation is potentially expensive if there is no respective primitive in the target formalism. A typical translation consists of several steps such as (i) _normalization_ [8], (ii) _instrumentation_ for loop prevention [7, 18, 24], and (iii) _completion_ [11]. The first step concerns the removal of syntactic extensions that have been introduced to increase the expressive power of ASP in favor of _normal_ rules. The second step either transforms the program or adds suitable constraints so that the difference between stable and _supported models_ disappears. The third step captures supported models by transforming rules into equivalences. Ideally, the syntactic details of the target language are deferred during translation and incorporated only at the very end, either after or while forming the completion. This strategy realizes a _cross-translation_ approach [20] in analogy to modern compiler designs.

Many of the existing translations [21, 26, 27] essentially rely on _level ranking constraints_ formalized by Niemela [28] as formulas in difference logic [29]. Such constraints describe _level numbers_ that order the atoms of a normal program in such a way that stable models can be distinguished among the supported ones [12].
Thus, level numbers are essential when it comes to capturing the minimality of stable models and the semantics of default negation properly. As shown in [18], level numbers can be made unique so that they match the levels of atoms obtained by applying the _immediately true_ operator \(\mathbf{T}_{P}\) iteratively. Uniqueness can also be enforced in terms of _strong_ level ranking constraints [28]. Unique level numbers are also highly desirable when aiming at one-to-one correspondences with stable models, e.g., when counting solutions to problems or carrying out probabilistic inference [13].

In this work, we take level ranking constraints into reconsideration, aiming at generalizations that cover aggregate-based extensions of ASP in a more systematic way. So far, only normal programs are truly covered [18, 28] and the normalization of input programs is presumed. The generalization for weight constraint programs (WCPs), as sketched by Liu et al. [26], concerns only weak constraints and is confined to translations into MIP. However, the idea of avoiding or delaying normalization is interesting as such, opening up new possibilities for ordering the translation steps discussed above. For instance, if \(\mathrm{NORM}(\cdot)\) and \(\mathrm{LRC}(\cdot)\) stand for translations based on normalization and level ranking constraints, respectively, it would be highly interesting to compare \(\mathrm{LRC}(\mathrm{NORM}(P))\) with potential generalizations \(\mathrm{LRC}(P)\) that express level ranking constraints at an aggregated level. Such representations are expected to be more compact and to favor level rankings with fewer variables. The resulting formulas can also be _Booleanized_ afterwards [17], if translations toward SAT are desirable, or rewritten in some other form that complies with the intended back-end formalism.

In the sequel, we use translations \(\mathrm{LRC}(\mathrm{NORM}(P))\) of increasingly complex programs \(P\) as guidelines when lifting level ranking constraints to aggregates. The idea is to cover program classes involving standard aggregations subject to recursion. It turns out that the structure of monotone and convex aggregates can be preserved to a high degree, offering a uniform basis for their incorporation into translation-based ASP. On the one hand, the resulting generalizations exploit _ordered completion_ [4] in the reformulation of _weak_ level ranking constraints but, on the other hand, make a novel contribution when imposing the uniqueness of level rankings with _strong_ ones.

The rest of this article is structured as follows. We begin by recalling the basic notions of _logic programs_ in Section 2, including the usual syntactic fragments, stable and supported model semantics, and other concepts relevant for this work. Then, in Section 3, we explain the details of ranking constraints in their standard form corresponding to _normal_ logic programs. Actually, we present them in a slightly rewritten form in order to pave the way for their generalization for monotone aggregates in Section 4. Therein, we begin the analysis from the case of (positive) cardinality and weight rules and, eventually, incorporate negative conditions into ranking constraints. To illustrate the generality of the results even further, we investigate how certain convex aggregates are also covered via appropriate program transformations in Section 5. Finally, the conclusions of this work are presented in Section 6.
## 2 Preliminaries

In the sequel, we will consider _propositional logic programs_ that are finite sets of rules of the forms (1)-(5) below. In the rules, \(a\), \(a_{i}\)'s, \(b_{j}\)'s, and \(c_{k}\)'s are _propositional atoms_ (or _atoms_ for short) and \(\sim\) denotes _default negation_. The rules of the forms (1)-(5) are known as _normal_, _choice_, _cardinality_, _weight_, and _disjunctive_ rules, respectively. Each rule has a _head_ and a _body_ separated by the \(\leftarrow\) sign, and the rough intuition is that if the _condition(s)_ formed by the rule body are satisfied, then the respective head atom \(a\) in (1)-(4), or some of the head atoms \(a_{1},\ldots,a_{h}\) in (5), can be derived.

\[a \leftarrow b_{1},\ldots,b_{n},{\sim}c_{1},\ldots,{\sim}c_{m}. \tag{1}\] \[\{a\} \leftarrow b_{1},\ldots,b_{n},{\sim}c_{1},\ldots,{\sim}c_{m}.\] (2) \[a \leftarrow l\leq\{b_{1},\ldots,b_{n},{\sim}c_{1},\ldots,{\sim}c_{m}\}.\] (3) \[a \leftarrow w\leq\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}},{\sim}c_{1}=w_{c_{1}},\ldots,{\sim}c_{m}=w_{c_{m}}\}.\] (4) \[a_{1}\mid\ldots\mid a_{h} \leftarrow b_{1},\ldots,b_{n},{\sim}c_{1},\ldots,{\sim}c_{m}. \tag{5}\]

The choice regarding the head of (2) is optional while (5) insists on deriving at least one head atom \(a_{i}\) in a minimal way, as detailed in Definition 1. A positive body condition \(b_{j}\) holds if \(b_{j}\) can be derived by some other rules whereas \({\sim}c_{k}\) holds if \(c_{k}\) cannot be derived. A cardinality rule (3) demands that at least \(l\) of such conditions are met to activate the rule. Weight rules (4) are similar, but the body conditions \(b_{j}\) and \({\sim}c_{k}\) are valued by their respective non-negative integer weights \(w_{b_{j}}\) and \(w_{c_{k}}\) when it comes to reaching the bound \(w\). In the sequel, we use the shorthands \(\mathrm{B}^{+}(r)=\{b_{1},\ldots,b_{n}\}\), \(\mathrm{B}^{-}(r)=\{c_{1},\ldots,c_{m}\}\), and \(\mathrm{B}(r)=\{b_{1},\ldots,b_{n}\}\cup\{{\sim}c_{1},\ldots,{\sim}c_{m}\}\) when referring to the body conditions occurring in a rule \(r\). The set of head atoms in \(r\) is denoted by \(\mathrm{H}(r)\) and, for entire programs \(P\), we define \(\mathrm{H}(P)=\bigcup_{r\in P}\mathrm{H}(r)\).

Typical (syntactic) classes of logic programs are as follows: _normal_ logic programs (NLPs) consist of normal rules (1) and the same can be stated about _disjunctive_ logic programs (DLPs) and disjunctive rules (5), which are normal as a special case (\(h=1\)). The class of _weight constraint programs_ (WCPs) [31] is essentially based on normal rules (1) and the aggregated rule types in (2)-(4), out of which weight rules alone are expressive enough to represent the class of WCPs. Contemporary ASP systems--aligned with the ASP-core-2 language standard [10]--support these fragments so well that programmers can mix rule types freely in their encodings. When the fragment is not important, we may refer to _logic programs_ or _programs_ for short. Finally, we say that a rule is _positive_\({}^{1}\) if \(m=0\) and it is of the forms (1), or (3)-(5). An entire program is called positive if its rules are all positive.

Footnote 1: Note that the head \(\{a\}\) of a choice rule embeds hidden (double) negation since it can be expressed as \(a\leftarrow{\sim}{\sim}a\).

The _signature_ \(\mathrm{At}(P)\) of a logic program \(P\) is the set of atoms that occur in the rules of \(P\).
An _interpretation_ \(I\subseteq\mathrm{At}(P)\) of \(P\) tells which atoms \(a\in\mathrm{At}(P)\) are _true_ (\(a\in I\), also denoted \(I\models a\)) whereas the others are _false_ (\(a\in\mathrm{At}(P)\setminus I\), denoted \(I\not\models a\)). Atoms are also called _positive literals_. Any _negative literal_ \({\sim}c\), where \(c\) is an atom, is understood classically, i.e., \(I\models{\sim}c\) iff \(I\not\models c\). The relation \(\models\) extends to the bodies of normal/choice/disjunctive rules \(r\) as follows: \(I\models\mathrm{B}(r)\) iff \(\mathrm{B}^{+}(r)\subseteq I\) and \(\mathrm{B}^{-}(r)\cap I=\emptyset\). The body \(l\leq\mathrm{B}(r)\) of a cardinality rule \(r\) is satisfied in \(I\) iff \(l\leq|\mathrm{B}^{+}(r)\cap I|+|\mathrm{B}^{-}(r)\setminus I|\). More generally, the body of a weight rule \(r\) in (4) is satisfied in \(I\) iff the _weight sum_ \(\mathrm{WS}_{I}(b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}},{\sim}c_{1}=w_{c_{1}},\ldots,{\sim}c_{m}=w_{c_{m}})=\sum_{b\in\mathrm{B}^{+}(r)\cap I}w_{b}+\sum_{c\in\mathrm{B}^{-}(r)\setminus I}w_{c}\) is at least \(w\). For rules \(r\), we have \(I\models r\) iff the satisfaction of the body implies the satisfaction of the head, except that choice rules (2) are always satisfied. An interpretation \(I\subseteq\mathrm{At}(P)\) is a (_classical_) _model_ of a program \(P\), denoted \(I\models P\), iff \(I\models r\) for each \(r\in P\). A model \(M\models P\) is \({\subseteq}\)_-minimal_ iff there is no \(M^{\prime}\models P\) such that \(M^{\prime}\subset M\). The set of \(\subseteq\)-minimal models of \(P\) is denoted by \(\mathrm{MM}(P)\). If \(P\) is positive and non-disjunctive, then \(|\mathrm{MM}(P)|=1\) and the respective _least model_ of \(P\) is denoted by \(\mathrm{LM}(P)\).

**Definition 1** (Stable models [15, 16, 31]).: _For a program \(P\) and an interpretation \(I\subseteq\mathrm{At}(P)\), the reduct \(P^{I}\) of \(P\) with respect to \(I\) contains_

1. _a rule_ \(a\leftarrow\mathrm{B}^{+}(r)\) _for each normal rule (_1_) such that_ \(\mathrm{B}^{-}(r)\cap I=\emptyset\)_, and for each choice rule (_2_) such that_ \(a\in I\) _and_ \(\mathrm{B}^{-}(r)\cap I=\emptyset\)_;_
2. _a rule_ \(a\leftarrow l^{\prime}\leq\mathrm{B}^{+}(r)\) _for each cardinality rule (_3_) and the bound_ \(l^{\prime}=\max(0,l-|\mathrm{B}^{-}(r)\setminus I|)\)_;_
3. _a rule_ \(a\leftarrow w^{\prime}\leq\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}\) _for each weight rule (_4_) and the bound_ \(w^{\prime}=\max(0,w-\mathrm{WS}_{I}({\sim}c_{1}=w_{c_{1}},\ldots,{\sim}c_{m}=w_{c_{m}}))\)_; and_
4. _a rule_ \(a_{1}\mid\ldots\mid a_{h}\leftarrow\mathrm{B}^{+}(r)\) _for each disjunctive rule (_5_) such that_ \(\mathrm{B}^{-}(r)\cap I=\emptyset\)_._

_An interpretation \(M\subseteq\mathrm{At}(P)\) is a stable model of the program \(P\) iff \(M\in\mathrm{MM}(P^{M})\)._

**Example 1**.: _Consider a cardinality rule \(a\gets 1\leq\{b_{1},\ldots,b_{n}\}\) together with the choice rules \(\{b_{1}\}.\ \ldots\ \{b_{n}\}.\) Besides the empty stable model \(\emptyset\), these rules induce \(2^{n}-1\) stable models \(M=\{a\}\cup N\) with \(\emptyset\subset N\subseteq\{b_{1},\ldots,b_{n}\}\): the head \(a\) is set true whenever at least one of \(b_{1},\ldots,b_{n}\) is chosen to be true._

In the sequel, we mostly concentrate on non-disjunctive programs \(P\). Then, the stability of \(M\subseteq\mathrm{At}(P)\) can also be captured with the fixed point equation \(M=\mathrm{LM}(P^{M})\).
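As an executable illustration of Definition 1 restricted to normal rules (ours, not from the paper; the general aggregate cases are analogous), a rule can be represented as a triple of a head atom, a positive body, and a negative body:

```python
def reduct(program, I):
    """Gelfond-Lifschitz reduct P^I for normal rules (head, pos, neg):
    drop rules with some c in B^-(r) true in I, delete negative literals."""
    return [(h, pos) for (h, pos, neg) in program if not (neg & I)]

def least_model(positive_program):
    """Least model of a positive normal program via fixpoint iteration."""
    M = set()
    while True:
        derived = {h for (h, pos) in positive_program if pos <= M}
        if derived <= M:
            return M
        M |= derived

def is_stable(program, M):
    return least_model(reduct(program, M)) == M

# Example: a <- ~b.  b <- ~a.   has exactly two stable models, {a} and {b}.
P = [("a", set(), {"b"}), ("b", set(), {"a"})]
assert is_stable(P, {"a"}) and is_stable(P, {"b"}) and not is_stable(P, {"a", "b"})
```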
Moreover, the well-known \(\mathbf{T}_{P}\) operator, when applied to an interpretation \(I\subseteq\operatorname{At}(P)\), produces the set of atoms \(a\in\operatorname{At}(P)\) that are _immediately true_ under \(I\), i.e., for which there is a positive rule \(r\) having \(a\) as the head and whose body is satisfied by \(I\). It follows that \(M\models P\) holds for a non-disjunctive program \(P\) iff \(\mathbf{T}_{P^{M}}(M)\subseteq M\) and \(M\) is a _supported model_ of \(P\) iff \(M=\mathbf{T}_{P^{M}}(M)\). Given a supported model \(M\), the support is provided by the set of rules \(\operatorname{SuppR}(P,M)\subseteq P\) whose bodies are satisfied by \(M\). Since \(\operatorname{LM}(P^{M})\) is obtained as the least fixed point \(\mathbf{T}_{P^{M}}\uparrow^{\infty}(\emptyset)\), each stable model of \(P\) is also supported. We write \(\operatorname{SM}(P)\) and \(\operatorname{SuppM}(P)\) for the sets of stable and supported models of \(P\), respectively. Thus \(\operatorname{SM}(P)\subseteq\operatorname{SuppM}(P)\) holds in general. Next we recall some concepts related to modularity. First, given a WCP \(P\), the set of _defining rules_ for an atom \(a\in\operatorname{H}(P)\) is \(\operatorname{Def}_{P}(a)=\{r\in P\mid a\in\operatorname{H}(r)\}\). Thus \(P\) can be partitioned as \(\bigcup_{a\in\operatorname{H}(P)}\operatorname{Def}_{P}(a)\). Second, the _positive dependency graph_ of \(P\) is \(\operatorname{DG}^{+}(P)=\langle\operatorname{At}(P),\succeq_{P}\rangle\) where \(a\succeq_{P}b\) holds for \(a,b\in\operatorname{At}(P)\), if \(a\in\operatorname{H}(r)\) and \(b\in\operatorname{B}^{+}(r)\) for some rule \(r\in\operatorname{Def}_{P}(a)\). A _strongly connected component_ (SCC) of \(\operatorname{DG}^{+}(P)\) is a maximal subset \(S\subseteq\operatorname{At}(P)\) such that all distinct atoms \(a,b\in S\) depend on each other via directed paths in \(\operatorname{DG}^{+}(P)\). For an atom \(a\in\operatorname{H}(P)\), the SCC of \(a\) is denoted by \(\operatorname{SCC}(a)\). As shown in [30], each SCC \(S\) of a WCP \(P\) gives rise to a _program module_\(P_{S}=\bigcup_{a\in S}\operatorname{Def}_{P}(a)\) where pure body atoms \(b\in\operatorname{At}(P_{S})\setminus S\) are treated as _input atoms_ taking any truth value, intuitively defined by choice rules \(\{b\}\). This yields a set of stable models \(\operatorname{SM}(P_{S})\) for each module \(P_{S}\) based on Definition 1. Given two stable models \(M\in\operatorname{SM}(P)\) and \(N\in\operatorname{SM}(Q)\), we say that \(M\) and \(N\) are mutually _compatible_, if they agree on the truth values of atoms in \(\operatorname{At}(P)\cap\operatorname{At}(Q)\), i.e., \(M\cap\operatorname{At}(Q)=N\cap\operatorname{At}(P)\). The _module theorem_ of [30] states that the stable models of \(P\) can be obtained as mutually compatible collections of stable models \(M_{1},\ldots,M_{n}\) for the program modules \(P_{S_{1}},\ldots,P_{S_{n}}\) induced by the SCCs \(S_{1},\ldots,S_{n}\) of \(P\). Finally, some notions of equivalence should be introduced. Logic programs \(P\) and \(Q\) are _weakly equivalent_, denoted \(P\equiv Q\), iff \(\operatorname{SM}(P)=\operatorname{SM}(Q)\). They are _strongly equivalent_, denoted \(P\equiv_{\mathrm{s}}Q\), iff \(P\cup R\equiv Q\cup R\) for any other context program \(R\)[22]. Then \(P\equiv_{\mathrm{s}}Q\) implies \(P\equiv Q\) but not vice versa. Strong equivalence can be characterized by using only contexts formed by _unary_ positive rules \(a\gets b\), or semantically by using SE-models [32]. 
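The SCC decomposition that underlies the module theorem, and later the scoping of the ranking constraints, is straightforward to compute. The following sketch builds \(\operatorname{DG}^{+}(P)\) from the Rule encoding above and extracts its SCCs with Kosaraju's algorithm; it is an illustration, not the implementation of [30].

```python
# Positive dependency graph DG+(P) and its SCCs (Kosaraju's algorithm),
# from which the modules P_S (the defining rules of atoms in S) follow.
from collections import defaultdict

def sccs(P):
    atoms, fwd, rev = set(), defaultdict(set), defaultdict(set)
    for r in P:
        atoms |= set(r.head) | set(r.pos) | set(r.neg)
        for a in r.head:
            for b in r.pos:          # a >=_P b for some r in Def_P(a)
                fwd[a].add(b)
                rev[b].add(a)

    def dfs(graph, start, seen, out):
        stack = [(start, iter(graph[start]))]
        seen.add(start)
        while stack:
            node, it = stack[-1]
            nxt = next((w for w in it if w not in seen), None)
            if nxt is None:
                stack.pop()
                out.append(node)
            else:
                seen.add(nxt)
                stack.append((nxt, iter(graph[nxt])))

    order, seen = [], set()
    for v in atoms:
        if v not in seen:
            dfs(fwd, v, seen, order)
    comps, seen = [], set()
    for v in reversed(order):        # reverse finishing order
        if v not in seen:
            comp = []
            dfs(rev, v, seen, comp)
            comps.append(frozenset(comp))
    return comps

def module(P, S):
    """The program module P_S: the defining rules of the atoms in S."""
    return [r for r in P if set(r.head) & S]
```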
To address the correctness of various translations, however, more fine-grained relations become necessary. The signature \(\operatorname{At}(P)\) of a logic program \(P\) can be split into _visible_ and _hidden_ parts \(\operatorname{At}_{\mathrm{v}}(P)\) and \(\operatorname{At}_{\mathrm{h}}(P)\), respectively. Given a stable model \(M\in\operatorname{SM}(P)\), only its visible projection \(M\cap\operatorname{At}_{\mathrm{v}}(P)\) is relevant when comparing \(P\) with other programs. Thus, \(P\) and \(Q\) are _visibly equivalent_, denoted \(P\equiv_{\mathrm{v}}Q\), iff \(\operatorname{At}_{\mathrm{v}}(P)=\operatorname{At}_{\mathrm{v}}(Q)\) and \(M\cap\operatorname{At}_{\mathrm{v}}(P)=N\cap\operatorname{At}_{\mathrm{v}}(Q)\) holds for each pair of models \(M\in\operatorname{SM}(P)\) and \(N\in\operatorname{SM}(Q)\) in a bijective correspondence [18]. There is a generalization of both \(\equiv_{\mathrm{v}}\) and \(\equiv_{\mathrm{s}}\), viz. _visible strong equivalence_ \(\equiv_{\mathrm{vs}}\), that incorporates context programs \(R\) that _respect the hidden atoms_ of \(P\) and \(Q\) for comparisons [8]. The correctness of normalization has been addressed in this sense. E.g., for a weight rule \(r\) in (4), \(\{r\}\equiv_{\mathrm{vs}}\operatorname{NORM}(\{r\})\), which means that \(r\) can be safely substituted by \(\operatorname{NORM}(\{r\})\) in contexts respecting the hidden atoms introduced by the normalization.

## 3 Level Rankings and Ranking Constraints

When a stable model \(M\subseteq\operatorname{At}(P)\) of a _non-disjunctive_ logic program \(P\) is constructed using the reduct \(P^{M}\) and the \(\mathbf{T}_{P^{M}}\) operator, the atoms true in the model \(M\) get divided into _levels_ \(M_{i}=(\mathbf{T}_{P^{M}}\uparrow^{i}(I))\setminus(\mathbf{T}_{P^{M}}\uparrow^{i-1}(I))\) where \(i>0\) and \(I\subseteq\operatorname{At}(P)\setminus\operatorname{H}(P)\) is a set of input atoms. By default \(I=\emptyset\) and \(M_{0}=\emptyset\), but if \(I\neq\emptyset\), then \(M_{0}=I\). For finite programs \(P\), the index \(i\) is bounded from above by \(|\operatorname{At}(P)|\). Based on this division of atoms, it is possible to read off a _level ranking_ \(\#:\operatorname{At}(P)\to\mathbb{N}\cup\{\infty\}\) for the atoms of the program [28]: the rank \(\#a=i\), if \(a\in M_{i}\), and \(\#a=\infty\), if \(a\not\in M\). A _level numbering_ \(\#\) [18] extends any level ranking for the supporting rules \(r\in\operatorname{SuppR}(P,M)\) by the equality2 \(\#r=\max\{\#b\mid b\in\operatorname{B}^{+}(r)\}+1\). Intuitively, the level \(\#r\) of a rule \(r\) indicates when \(r\) can be applied to derive its head and, consequently, \(\#a=\min\{\#r\mid r\in\operatorname{Def}_{P}(a)\cap\operatorname{SuppR}(P,M)\}\). By these interconnections, we may use level rankings and numberings interchangeably in the sequel. If \(r\not\in\operatorname{SuppR}(P,M)\), then \(\#r=\infty\). The value \(\infty\) emphasizes that an atom is never derived or a rule never becomes applicable. The other option is to restrict the domain of \(\#\) to \(M\cup\operatorname{SuppR}(P,M)\), for which finite values exist, but some big value greater than any level rank is useful in practice. E.g., given an SCC \(S\) of the program \(P\), the level ranks \(\#a\) of atoms \(a\in S\) can be effectively constrained by \(0<\#a<|S|+1\); cf. (6) below.

Footnote 2: This holds for rules whose body is essentially normal (1) while generalizations for more complex bodies follow.
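Level rankings can be read off mechanically from the iteration of \(\mathbf{T}_{P^{M}}\); a sketch on top of the reduct and satisfaction helpers above:

```python
# Reading off a level ranking: iterate T_{P^M} from the input atoms I
# and record the step at which each true atom first appears; atoms that
# are never derived keep the rank infinity.
INF = float("inf")

def level_ranking(P, M: frozenset, I: frozenset = frozenset()):
    PM = reduct(P, M)                      # a positive program
    ranks = {a: INF for a in M}
    ranks.update({a: 0 for a in I})        # M_0 = I
    level, i = frozenset(I), 0
    while True:
        i += 1
        nxt = level | {r.head[0] for r in PM if body_satisfied(r, level)}
        for a in nxt - level:
            ranks[a] = i                   # a belongs to the level M_i
        if nxt == level:
            return ranks
        level = nxt
```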
Many existing translations of logic programs into SAT, SMT, and MIP rely on program completion [11]. The idea is to translate a (normal) logic program \(P\) into classical equivalences that capture the _supported models_ of the program. The purpose of level ranking constraints [28], however, is to distinguish the stable ones among them by incorporating the requirement that there is a level ranking \(\#\) for a model \(M\in\operatorname{SuppM}(P)\). These constraints can be expressed, e.g., as formulas in _difference logic_ (DL). This SMT-style logic [29] enriches propositional formulas with difference constraints of the form \(x-y\leq k\) where \(x,y\) are real/integer variables and \(k\) is a constant. The evaluation of a difference atom \(x-y\leq k\) is based on an assignment \(\tau:\mathcal{V}\to\mathbb{Z}\) on the set of variables \(\mathcal{V}\) in use. Given \(\tau\), the constraint \(x-y\leq k\) is satisfied by \(\tau\), denoted by \(\tau\models x-y\leq k\), iff \(\tau(x)-\tau(y)\leq k\). A _DL-interpretation_ is a pair \(\langle I,\tau\rangle\) where \(I\) is a standard propositional interpretation and \(\tau\) an assignment. A formula \(\phi\) of DL is satisfied by \(\langle I,\tau\rangle\), denoted \(\langle I,\tau\rangle\models\phi\), if \(\phi\) evaluates to true under \(I\) by the usual propositional rules extended by the evaluation of difference atoms subject to \(\tau\).

A difference constraint \(x_{b}-x_{a}\leq-1\) (i.e., \(x_{a}>x_{b}\)) can express that \(a\) is derived _after_ \(b\), under the assumption that \(x_{a}\) and \(x_{b}\) store the level ranks of \(a\) and \(b\), respectively. Based on this idea, we introduce formulas for the representation of level ranks. Their _scope_ is specified in terms of a set of atoms \(S\subseteq\operatorname{At}(P)\) to be discussed in further detail below. \[(1\leq x_{a}\leq|S|+1), \neg a\to(x_{a}\geq|S|+1), \tag{6}\] \[\operatorname{dep}(a,b) \leftrightarrow b\wedge(x_{a}>x_{b}),\] (7) \[\operatorname{gap}(a,b) \leftrightarrow b\wedge(x_{a}>x_{b}+1). \tag{8}\]

By the two formulas in (6), level ranks are positive and fixed to \(|S|+1\) if an atom \(a\) is false. In addition, we introduce two kinds of new atoms to help with the formulation of the actual level ranking constraints. First, the atom \(\operatorname{dep}(a,b)\), defined by (7), denotes an _active_ dependency of a head atom \(a\) on a positive body atom \(b\), i.e., \(b\) must be true. Such dependencies are deployed by Bomanson et al. [7], but we use a definition in terms of the difference constraint. Second, the atom \(\operatorname{gap}(a,b)\), as defined by (8), means a similar relationship except that \(b\) is derived so early that it is not critical for determining the exact level rank of \(a\). Note that \(\operatorname{gap}(a,b)\) implies \(\operatorname{dep}(a,b)\) in general but not vice versa. In particular, if \(\operatorname{dep}(a,b)\) is true and \(\operatorname{gap}(a,b)\) is false, then \(b\) must be true and \(x_{a}=x_{b}+1\), indicating that \(a\) is derived right after \(b\). Such body atoms \(b\) from the preceding level are relevant when \(a\) is derived by some rule \(r\in\mathrm{Def}_{P}(a)\) at level \(x_{a}\). In the following, we present a reformulation of level ranking constraints [28] by exploiting the dependency relations from (7) and (8). Our further goal is to incorporate the idea of _ordered completion_ [4] for the sake of a more compact representation.
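As an executable companion, the formulas (6)-(8) can be handed to any SMT solver with integer arithmetic. The sketch below assumes the z3 Python bindings, writes difference constraints as plain integer inequalities, and already mechanizes the completion (cf. (9)-(11) introduced next) of the one-rule program \(a\gets a\) treated in Example 2 below.

```python
# Encoding (6)-(8) for a single dependency a >=_P b inside a scope S of
# size |S|; z3's integer arithmetic stands in for a difference-logic
# back end (x_a - x_b <= k is a plain integer inequality here).
from z3 import And, Bool, Implies, Int, Not, Solver

def dep_gap_formulas(a, b, size_S):
    A, B = Bool(a), Bool(b)
    xa, xb = Int("x_" + a), Int("x_" + b)
    dep, gap = Bool(f"dep({a},{b})"), Bool(f"gap({a},{b})")
    return [And(1 <= xa, xa <= size_S + 1),        # (6), left formula
            Implies(Not(A), xa >= size_S + 1),     # (6), right formula
            dep == And(B, xa > xb),                # (7)
            gap == And(B, xa > xb + 1)]            # (8)

# Example 2 below, mechanized: the completion of the program {a <- a}
# together with (6)-(8) leaves no room for making a true.
s = Solver()
s.add(dep_gap_formulas("a", "a", 1))
s.add(Bool("a") == Bool("dep(a,a)"))               # (9)+(10) for a <- a
s.add(Implies(Bool("dep(a,a)"), Not(Bool("gap(a,a)"))))   # (11)
s.add(Bool("a"))                                   # try M = {a} ...
print(s.check())                                   # ... reports "unsat"
```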
Given an atom \(a\in\mathrm{At}(P)\), its completion is based on the set \(\mathrm{Def}_{P}(a)=\{r_{1},\ldots,r_{k}\}\) of its defining rules. In the sequel, the _applicability_ of a rule \(r_{i}\) is denoted by a new atom \(\mathrm{app}(r_{i})\). \[a \leftrightarrow \mathrm{app}(r_{1})\vee\cdots\vee\mathrm{app}(r_{k}), \tag{9}\] \[\mathrm{app}(r_{i}) \leftrightarrow \bigwedge_{b\in\mathrm{B}^{+}(r_{i})\cap S}\mathrm{dep}(a,b) \wedge(\mathrm{B}^{+}(r_{i})\setminus S)\wedge\neg\mathrm{B}^{-}(r_{i})\ \ \ \ \ (1\leq i\leq k),\] (10) \[\mathrm{app}(r_{i}) \rightarrow \bigvee_{b\in\mathrm{B}^{+}(r_{i})\cap S}\neg\mathrm{gap}(a,b) \ \ \ \ (1\leq i\leq k,\ \ \mathrm{B}^{+}(r_{i})\cap S\neq\emptyset),\] (11) \[\mathrm{app}(r_{i}) \rightarrow (x_{a}\leq 1)\ \ \ \ \ (1\leq i\leq k,\ \ \mathrm{B}^{+}(r_{i})\cap S=\emptyset). \tag{12}\]

Intuitively, the equivalence (9) sets the head atom \(a\) true if and only if at least one of its defining rules \(r_{i}\) is applicable. This, in turn, is defined by the equivalence (10), insisting that the atoms in \(\mathrm{B}^{+}(r_{i})\cap S\) have been previously derived and all remaining positive and negative body conditions are satisfied. This formulation embeds both _weak_ level ranking constraints [28] and ordered completion [4], but relative to the set \(S\). The constraint (11) is the counterpart of the _strong_ level ranking constraints [28], enforcing the minimality of the level ranks assigned to atoms. Besides this, the formula (12) resets the level of the head atom \(a\) to \(1\) when \(a\) can be derived by applying an _externally supporting_ rule \(r_{i}\) with \(\mathrm{B}^{+}(r_{i})\cap S=\emptyset\).

Regarding the scope \(S\), it is natural to assume that the head atom \(a\) is usually included. Also, a few special cases deserve further attention. (i) If \(S=\mathrm{At}(P)\), then the completion becomes fully ordered, i.e., \(\mathrm{B}^{+}(r_{i})\setminus S\) becomes empty in (10) and the formula (12) is generated only for \(r_{i}\in\mathrm{Def}_{P}(a)\) with an empty \(\mathrm{B}^{+}(r_{i})\). Moreover, if all atoms of \(\mathrm{At}(P)\) are completed using (9)-(12), the resulting formulas capture stable models directly, including the level ranks of atoms. (ii) If \(S=\mathrm{SCC}(a)\), then the ordering becomes local to the component \(S\). Then, if all atoms of \(S\) are completed, the formulas capture stable models \(M\) for the _program module_ \(P_{S}\) induced by the component \(S\) [30]. It should be emphasized that the input atoms in \(\mathrm{At}(P_{S})\setminus S\) are not subject to completion and they may vary freely. Therefore, given a set of facts \(I\subseteq\mathrm{At}(P_{S})\setminus S\) as an actual _input_ for \(P_{S}\), the stable models of \(P_{S}\) become solutions to \(M=\mathrm{LM}(P_{S}^{M}\cup I)\) whose levels \(i\) are determined by \(\mathbf{T}_{P_{S}^{M}}\uparrow^{i}(I)\). (iii) Finally, if \(S=\emptyset\) and \(a\not\in S\) as an exception, equations (9) and (10) capture the standard completion of \(a\), the formula (11) becomes void, and the formula (12) ensures that \(x_{a}=1\) whenever \(a\) is true.

**Example 2**.: _As a minimal example, consider \(a\gets a\) as the only rule \(r_{1}\) of a program \(P\) and the SCC \(S=\{a\}=\mathrm{SCC}(a)\).
We obtain the following formulas: \((1\leq x_{a}\leq 2)\), \(\neg a\rightarrow(x_{a}\geq 2)\), \(\mathrm{dep}(a,a)\leftrightarrow a\wedge(x_{a}>x_{a})\), \(\mathrm{gap}(a,a)\leftrightarrow a\wedge(x_{a}>x_{a}+1)\), \(a\leftrightarrow\mathrm{app}(r_{1})\), \(\mathrm{app}(r_{1})\leftrightarrow\mathrm{dep}(a,a)\), \(\mathrm{app}(r_{1})\rightarrow\neg\mathrm{gap}(a,a)\). They can be satisfied by falsifying \(a\) and all new atoms, as well as by setting \(x_{a}=2\), indicating that \(M_{1}=\emptyset\) is stable. On the other hand, \(M_{2}=\{a\}\) is not stable, which can be realized by an attempt to make \(a\) true in the formulas listed above. Thus \(\mathrm{app}(r_{1})\) and \(\mathrm{dep}(a,a)\) must be true, too, and \(\mathrm{gap}(a,a)\) false. By further inspection of the formulas, it follows that \(x_{a}>x_{a}\) is true and \(x_{a}>x_{a}+1\) is false, both indicating a contradiction. \(\blacksquare\)_

The case \(S=\mathrm{SCC}(a)\) is the most general one and deserves a justification of correctness, due to the reformulations done in view of [28] and the limitations of ordered completion [4] with regard to (11).

**Definition 2**.: _Given a normal logic program \(P\) and a scope \(S\subseteq\operatorname{At}(P)\) of completion, the tight ordered completion (TOC) of \(P\)_ relative _to \(S\) is the set of formulas (6) for \(a\in S\), (7) and (8) for \(a,b\in S\) whenever \(a\succeq_{P}b\), and (9)-(12) for each \(a\in\operatorname{At}(P)\) and \(r_{i}\in\operatorname{Def}_{P}(a)\)._

The TOC of \(P\) relative to \(S\) is denoted by \(\operatorname{TOC}^{S}(P)\), and we omit \(S\) from the notation \(\operatorname{TOC}^{S}(P)\) if \(S=\operatorname{At}(P)\). It is worth noting that the length \(\|\operatorname{TOC}^{S}(P)\|\) stays linear in \(\|P_{S}\|\).

**Theorem 1**.: _Let \(P\) be a normal logic program, \(S\) an SCC of \(P\), and \(P_{S}\) the module of \(P\) induced by \(S\)._

1. _If_ \(M\subseteq\operatorname{At}(P_{S})\) _is a stable model of_ \(P_{S}\) _for an input_ \(I\subseteq\operatorname{At}(P_{S})\setminus S\) _and_ \(\#:M\cap S\to\mathbb{N}\) _the respective level ranking, then there is a model_ \(\langle N,\tau\rangle\) _for_ \(\operatorname{TOC}^{S}(P)\) _such that_ \(M=N\cap\operatorname{At}(P_{S})\)_,_ \(\tau(x_{a})=\#a\) _for each_ \(a\in M\cap S\)_, and_ \(\tau(x_{a})=|S|+1\) _for each_ \(a\in S\setminus M\)_._
2. _If_ \(\langle N,\tau\rangle\) _is a model of_ \(\operatorname{TOC}^{S}(P)\)_, then_ \(M=N\cap\operatorname{At}(P_{S})\) _is a stable model of_ \(P_{S}\) _for the input_ \(I=N\cap(\operatorname{At}(P_{S})\setminus S)\) _and for each_ \(a\in M\cap S\)_,_ \(\#a=\tau(x_{a})-\tau(z)\) _is the respective level rank._

As a preparatory step toward generalizations for aggregated rules, our final example in this section illustrates \(\operatorname{TOC}^{S}\) in the context of a cardinality rule (3) that is normalized before completion.

**Example 3**.: _Let us assume that an atom \(a\) is defined by a single cardinality rule \(a\gets 1\leq\{b_{1},\ldots,b_{n}\}\) as part of a larger program \(P\) having an SCC \(S=\operatorname{SCC}(a)\) such that \(\{b_{1},\ldots,b_{n}\}\subseteq S\).
The rule is compactly expressible even without auxiliary atoms in terms of \(n\) positive normal rules_ \[a\gets b_{1}.\ \ \ldots\ a\gets b_{n}.\] _The tight ordered completion produces the following formulas for the joint head atom \(a\in S\):_ \[a\leftrightarrow\operatorname{app}^{1}(a)\vee\cdots\vee \operatorname{app}^{n}(a), \tag{13}\] \[\operatorname{app}^{1}(a)\leftrightarrow\operatorname{dep}(a,b_{1} )\,\ldots,\ \operatorname{app}^{n}(a)\leftrightarrow\operatorname{dep}(a,b_{n}),\] (14) \[\operatorname{app}^{1}(a)\to\neg\operatorname{gap}(a,b_{1})\,\ldots,\ \operatorname{app}^{n}(a)\to\neg \operatorname{gap}(a,b_{n}). \tag{15}\]

_In the above, we adopt the convention that \(\operatorname{app}^{i}(a)\) denotes the application of \(r_{i}\in\operatorname{Def}_{P}(a)\). Since each \(b_{i}\in S\), the respective rules \(a\gets b_{i}\) may not contribute to external support via (12). \(\blacksquare\)_

## 4 The Case of Monotone Aggregates

Cardinality rules (3) and weight rules (4) with _lower bounds_ are widely used examples of monotone aggregates, in particular if the (anti-monotone) effect of negative literals is disregarded in the sense of stable models (cf. Definition 1). The level number \(\#a\) of an atom \(a\in\operatorname{At}(P)\) is generalized in a straightforward way when _positive_ cardinality/weight rules are incorporated into the definition of the \(\mathbf{T}_{P}\) operator [30]. As before, \(\#a\) is the least value \(i\in\mathbb{N}\) such that \(a\in\mathbf{T}_{P}\uparrow^{i}(\emptyset)\) for positive programs \(P\). Default negation is analogously treated via the reduct, i.e., given a stable model \(M\subseteq\operatorname{At}(P)\), the operator \(\mathbf{T}_{P^{M}}\) can be used to assign level ranks for \(a\in\operatorname{At}(P)\).

The goal of this section is to generalize tight ordered completion for rules involving monotone aggregates. The resulting formulas can be used to enforce stability in various settings where the semantics is no longer based on stable models themselves. The normalization [7] of cardinality rules is used to guide our intuitions about the intended generalization of tight ordered completion. Besides this, to enable compact representations of aggregates as propositional formulas, we extend the language of difference logic by pseudo-Boolean constraints of the form \(c_{1}a_{1}+\cdots+c_{m}a_{m}\geq b\) where \(a_{1},\ldots,a_{m}\) are atoms, \(c_{1},\ldots,c_{m}\) their respective integer coefficients, and \(b\) an integer bound. Obviously, given an interpretation \(\langle I,\tau\rangle\) in DL, we define \(\langle I,\tau\rangle\models c_{1}a_{1}+\cdots+c_{m}a_{m}\geq b\) iff \(\sum_{I\models a_{i}}c_{i}\geq b\), since the truth values of \(a_{1},\ldots,a_{m}\) are determined by \(I\) independently of \(\tau\). Let us begin with an example that concentrates on a corner case (\(l=1\) and \(m=0\)) of (3) from Example 1.

**Example 4**.: _Recalling formulas (13)-(15) from Example 3, we pull them back to the setting of the original cardinality rule \(a\gets 1\leq\{b_{1},\ldots,b_{n}\}\) where \(b_{1},\ldots,b_{n}\) depend recursively on the head \(a\).
Based on a connecting formula \(\operatorname{app}^{1}(a)\vee\cdots\vee\operatorname{app}^{n}(a)\leftrightarrow \operatorname{app}(a)\) on the applicability of the \(n\) rules in the normalization versus the applicability of the original rule, we rewrite (13)-(15) as follows:_ \[a\leftrightarrow\operatorname{app}(a), \tag{16}\] \[\operatorname{app}(a)\leftrightarrow(\operatorname{dep}(a,b_{1}) +\cdots+\operatorname{dep}(a,b_{n})\geq 1),\] (17) \[\operatorname{app}(a)\rightarrow(\operatorname{gap}(a,b_{1})+ \cdots+\operatorname{gap}(a,b_{n})<1), \tag{18}\] _where the new atoms \(\operatorname{dep}(a,b_{1}),\ldots,\operatorname{dep}(a,b_{n})\) and \(\operatorname{gap}(a,b_{1}),\ldots,\operatorname{gap}(a,b_{n})\) are still to be interpreted subject to formulas (6)-(8), as in the context of Example 3._

Note that the formula (18) expresses a _dynamic_ check, i.e., it works for any subset \(B\) of \(\{b_{1},\ldots,b_{n}\}\) of atoms _true_ and _derived earlier_ than \(a\). If the cardinality rule is applied (i.e., \(|B|\geq 1\)), \(\operatorname{gap}(a,b_{i})\) must be false for each \(b_{i}\in B\), amounting to the effect of the individual implications in (15). The formulas in Examples 3 and 4 are based on different auxiliary atoms denoting the applicability of rules. The connecting formula \(\operatorname{app}^{1}(a)\vee\cdots\vee\operatorname{app}^{n}(a)\leftrightarrow \operatorname{app}(a)\) describes their intended semantic interconnection for propositional interpretations \(M\) and \(N\), i.e., \(M\models\operatorname{app}^{1}(a)\vee\cdots\vee\operatorname{app}^{n}(a)\) iff \(N\models\operatorname{app}(a)\). Because (7) and (8) disconnect all integer variables from the formulas under consideration, the following proposition concentrates on the propositional parts of DL-interpretations that are intended to satisfy TOC formulas in the end.

**Proposition 1**.: _The formulas (13)-(15) and (16)-(18) constrain the respective interpretations \(\langle M,\tau\rangle\) and \(\langle N,\tau\rangle\) equivalently, as conveyed by the satisfaction of the formula \(\operatorname{app}^{1}(a)\vee\cdots\vee\operatorname{app}^{n}(a)\leftrightarrow \operatorname{app}(a)\)._

Proof.: Assuming the connecting formula, formulas (13) and (16) become equivalent. Formulas (13) and (14) imply \(a\leftrightarrow\operatorname{dep}(a,b_{1})\vee\cdots\vee\operatorname{dep}(a, b_{n})\), which is equivalent to (17) under (16). Finally, \(\operatorname{gap}(a,b_{1})+\cdots+\operatorname{gap}(a,b_{n})<1\) is the same as \(\neg\operatorname{gap}(a,b_{1})\wedge\cdots\wedge\neg\operatorname{gap}(a, b_{n})\). These conditions are equally enforced by (15) and (18) when the connecting formula is satisfied.

Proposition 1 indicates that the formulas (13)-(15) introduced for the normalizing rules can be safely substituted by the formulas (16)-(18) for the original rule. In this way, the aggregated condition is restored as a subformula in (17), while its negation is embedded in (18). Recall that the truth values of the atoms \(\operatorname{gap}(a,b_{i})\) are determined by (8). If (18) were not satisfied by \(\langle N,\tau\rangle\), at least one \(\operatorname{gap}(a,b_{i})\) atom would have to be true, i.e., \(N\models b_{i}\) and \(\tau\models(x_{a}>x_{b_{i}}+1)\), assuming the satisfaction of (8). Thus \(b_{i}\) would be derived so early that the derivation of \(a\) would be feasible earlier, and the value of \(x_{a}\) could be decreased.
Consequently, the joint effect of the formulas (17) and (18) is that \(x_{a}=\min\{x_{b_{i}}+1\mid N\models b_{i}\}\) holds, which is in harmony with the characterization of [18] when applied to the normalizing rules \(a\gets b_{1}\). \(\ldots a\gets b_{n}\). Before addressing arbitrary cardinality rules, we draw the reader's attention to the other extreme.

**Example 5**.: _When \(l=n\) and \(m=0\) in (3), the rule can be directly cast as a positive normal rule \(a\gets b_{1},\ldots,b_{n}\). Still assuming that \(\{b_{1},\ldots,b_{n}\}\subseteq\operatorname{SCC}(a)\), the TOC formulas resulting from (9)-(11) are \(a\leftrightarrow\operatorname{app}^{1}(a)\), \(\operatorname{app}^{1}(a)\leftrightarrow\operatorname{dep}(a,b_{1})\wedge \cdots\wedge\operatorname{dep}(a,b_{n})\), and \(\operatorname{app}^{1}(a)\rightarrow\bigvee_{1\leq i\leq n}\neg\operatorname{ gap}(a,b_{i})\)._ The corresponding aggregated formulas can be seen in the formulas (17) and (18) if the bound \(1\) is substituted by \(n\). The resulting _strong_ level ranking constraint ensures that at least one body atom \(b_{i}\) is derived _just_ before \(a\) and \(x_{a}=\max\{x_{b_{i}}\mid 1\leq i\leq n\}+1\).

The preceding example reveals our plan when it comes to covering more general lower bounds \(1<l<n\) in (3), still pertaining to the positive case \(m=0\) and \(\{b_{1},\ldots,b_{n}\}\subseteq\operatorname{SCC}(a)\). In the sequel, we write \(\subseteq_{l}\) to denote the \(l\)-subset relation restricted to subsets of size \(l\) _exactly_. Due to monotonicity, the satisfaction of the rule body \(l\leq\{b_{1},\ldots,b_{n}\}\) essentially depends on the \(l\)-subsets of \(\{b_{1},\ldots,b_{n}\}\). Thus, the cardinality rule (3) with \(m=0\) can be normalized by introducing a positive rule \(a\gets B\) for each \(B\subseteq_{l}\{b_{1},\ldots,b_{n}\}\). The number of such rules, \(\binom{n}{l}\), is at its maximum when \(l\) is roughly \(n/2\).3 In spite of the exponential growth, the resulting normalization serves the purpose of understanding the effect of \(l\) on the required TOC formulas. To update equations (13)-(15) for this setting, we need a new atom \(\operatorname{app}^{B}(a)\) for every \(B\subseteq_{l}\{b_{1},\ldots,b_{n}\}\) to capture the individual applicabilities of the respective positive rules \(a\gets B\):

Footnote 3: Note that \(\binom{n}{\lfloor n/2\rfloor}\geq 2^{\lfloor n/2\rfloor}\) from \(n>4\) onward.

\[a \leftrightarrow \bigvee_{B\subseteq_{l}\{b_{1},\ldots,b_{n}\}}\operatorname{app }^{B}(a), \tag{19}\] \[\operatorname{app}^{B}(a) \leftrightarrow \bigwedge_{b\in B}\operatorname{dep}(a,b)\ \ (\text{for }B\subseteq_{l}\{b_{1},\ldots,b_{n}\}),\] (20) \[\operatorname{app}^{B}(a) \rightarrow \bigvee_{b\in B}\neg\operatorname{gap}(a,b)\ \ (\text{for }B\subseteq_{l}\{b_{1},\ldots,b_{n}\}). \tag{21}\]

The connecting formula \(\bigvee_{B\subseteq_{l}\{b_{1},\ldots,b_{n}\}}\operatorname{app}^{B}(a) \leftrightarrow\operatorname{app}(a)\) links the above back to the original rule \(a\gets l\leq\{b_{1},\ldots,b_{n}\}\), suggesting the revisions of (16)-(18) for any lower bound \(1\leq l\leq n\): \[a \leftrightarrow \operatorname{app}(a), \tag{22}\] \[\operatorname{app}(a) \leftrightarrow (\operatorname{dep}(a,b_{1})+\cdots+\operatorname{dep}(a,b_{n}) \geq l),\] (23) \[\operatorname{app}(a) \rightarrow (\operatorname{gap}(a,b_{1})+\cdots+\operatorname{gap}(a,b_{n} )<l). \tag{24}\]

Most importantly, the length of the formulas (22)-(24) stays linear in \(n\), in contrast with their alternatives (19)-(21) based on \(l\)-subsets.
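The equivalence of the \(l\)-subset reading and the counting reading of the body, which is the content of Proposition 2 below, is easy to confirm by brute force over all truth assignments:

```python
# Brute-force sanity check: over every truth assignment to the atoms
# dep(a,b_1),...,dep(a,b_n), the l-subset disjunction behind (19)-(21)
# agrees with the counting form dep(a,b_1)+...+dep(a,b_n) >= l of (23).
from itertools import combinations, product

def subset_form(deps, l):
    n = len(deps)
    return any(all(deps[i] for i in B)
               for B in combinations(range(n), l))

def counting_form(deps, l):
    return sum(deps) >= l

n = 4
for l in range(1, n + 1):
    assert all(subset_form(d, l) == counting_form(d, l)
               for d in product([False, True], repeat=n))
print("subset and counting forms agree for n =", n)
```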
The aggregate-based formulation covers all \(l\)-subsets of \(\{b_{1},\ldots,b_{n}\}\) and their supersets that also satisfy the body of (3) with \(m=0\) by monotonicity.

**Proposition 2**.: _The formulas (19)-(21) and (22)-(24) constrain the respective interpretations \(\langle M,\tau\rangle\) and \(\langle N,\tau\rangle\) equivalently, as conveyed by the satisfaction of the formula \(\bigvee_{B\subseteq_{l}\{b_{1},\ldots,b_{n}\}}\operatorname{app}^{B}(a) \leftrightarrow\operatorname{app}(a)\)._

The proof is similar to that of Proposition 1 and amounts to showing that the (big) disjunctive formula \(\bigvee_{B\subseteq_{l}\{b_{1},\ldots,b_{n}\}}\bigwedge_{b\in B}\operatorname {dep}(a,b)\) is expressible as \((\operatorname{dep}(a,b_{1})+\cdots+\operatorname{dep}(a,b_{n})\geq l)\). Similar aggregation is achieved in (24) in terms of \(\operatorname{gap}(a,b_{1}),\ldots,\operatorname{gap}(a,b_{n})\). An important observation is that \(\langle N,\tau\rangle\not\models\) (24) if and only if \(N\models\operatorname{app}(a)\) and \(\exists B\subseteq_{l}\{b_{1},\ldots,b_{n}\}\) such that for each \(b\in B\), \(N\models b\), \(N\models\operatorname{gap}(a,b)\), and \(\tau\models(x_{a}>x_{b}+1)\). Since \(B\) reaches the bound \(l\), the value of \(x_{a}\) could be decreased to \(\max\{\tau(x_{b})\mid b\in B\}+1\), or even further if more than \(l\) atoms satisfy \(\operatorname{gap}(a,\cdot)\). Thus the satisfaction of (24) means that no \(B\subseteq_{l}(\{b_{1},\ldots,b_{n}\}\cap N)\) of true atoms could be used to decrease the value of \(x_{a}\). The net effect is that \(x_{a}\) has the critical minimum value. Since (23) is also satisfied, there are at least \(l\) true atoms derived before \(a\), but sufficiently many of them are derived _just before_ \(a\). For those atoms \(b\) we have \(x_{a}=x_{b}+1\), \(\operatorname{dep}(a,b)\) true, but \(\operatorname{gap}(a,b)\) false!

**Example 6**.: _Consider the rule \(a\gets 2\leq\{b_{1},b_{2},b_{3},b_{4}\}\) in the context of a model \(\langle N,\tau\rangle\) where \(N=\{a,b_{1},b_{3},b_{4},\operatorname{app}(a)\}\), \(\tau(x_{b_{1}})=\tau(x_{b_{4}})=2\), \(\tau(x_{b_{3}})=1\), and \(\tau(x_{b_{2}})=6\) by default as \(N\not\models b_{2}\), see (6). Now the rule body is satisfied by three 2-subsets \(B_{1}=\{b_{1},b_{3}\}\), \(B_{2}=\{b_{1},b_{4}\}\), and \(B_{3}=\{b_{3},b_{4}\}\), justifying the level rank \(\tau(x_{a})=3\), since \(\max\{\tau(x_{b})\mid b\in B_{i}\}=2\) for each \(1\leq i\leq 3\). We have \(\operatorname{gap}(a,b_{i})\) true only for \(i=3\) and thus (24) is respected for \(l=2\). But, if \(\tau(x_{a})=4\) had been alternatively set, the count of such \(b_{i}\)'s would be 3, falsifying (24)._

### Weights

So far, we have established a relatively general form of TOC formulas (22)-(24) that cover cardinality rules (3) when \(m=0\). Before addressing negative body conditions (\(m>0\)) and settings where SCCs play a major role, we take the weights of literals into consideration, as already present in weight rules (4) when \(m=0\). Consequently, we have to substitute the \(l\)-subsets used in (19)-(21) by _weighted_ subsets \(B\subseteq_{w}\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}\). Such a subset \(B\) can be formally defined in terms of the condition \(\text{WS}_{B}(\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\})\geq w\) from Section 2. It is clear by monotonicity that if \(B\subseteq_{w}\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}\), then \(B^{\prime}\subseteq_{w}\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}\) for every \(B^{\prime}\) with \(B\subseteq B^{\prime}\subseteq\{b_{1},\ldots,b_{n}\}\).
A weighted set \(B\subseteq_{w}\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}\) is defined to be \(\subseteq\)-minimal with respect to \(w\) if for no \(B^{\prime}\subset B\), \(B^{\prime}\subseteq_{w}\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}\). We use \(\subseteq_{w}^{\min}\) to indicate such \(\subseteq\)-minimal weighted subsets of \(\{b_{1},\ldots,b_{n}\}\). Assuming orthogonal generalizations of (19)-(21) for a _positive_ weight rule (4) and the weighted subsets \(B\subseteq_{w}^{\min}\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}\) of its body, we rather incorporate weights into the formulas (22)-(24) as follows: \[a \leftrightarrow \text{app}(a), \tag{25}\] \[\text{app}(a) \leftrightarrow (w_{b_{1}}\times\text{dep}(a,b_{1})+\cdots+w_{b_{n}}\times\text {dep}(a,b_{n})\geq w),\] (26) \[\text{app}(a) \rightarrow (w_{b_{1}}\times\text{gap}(a,b_{1})+\cdots+w_{b_{n}}\times\text {gap}(a,b_{n})<w). \tag{27}\]

**Proposition 3**.: _The formulas (19)-(21), revised for weighted subsets \(B\) subject to the bound \(w\), and the formulas (25)-(27) constrain the respective interpretations \(\langle M,\tau\rangle\) and \(\langle N,\tau\rangle\) equivalently, as conveyed by the satisfaction of the equivalence \(\bigvee_{B\subseteq_{w}^{\min}\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}} \text{app}^{B}(a)\leftrightarrow\text{app}(a)\)._

Proof.: Due to the high similarity with respect to Proposition 2, we just point out the equivalence of the formulas \(\bigvee_{B\subseteq_{w}^{\min}\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}} \bigwedge_{b\in B}\text{dep}(a,b)\) and \((w_{b_{1}}\times\text{dep}(a,b_{1})+\cdots+w_{b_{n}}\times\text{dep}(a,b_{n}) \geq w)\). The equivalence involving \(\text{gap}(a,b_{1}),\ldots,\text{gap}(a,b_{n})\) is analogous but negated.

Again, \(\langle N,\tau\rangle\not\models\) (27) implies \(N\models\text{app}(a)\) and, for some \(B\subseteq_{w}^{\min}\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\}\), \(N\models B\) and for every \(b\in B\), \(\tau\models(x_{a}>x_{b}+1)\). Then, the value of \(x_{a}\) could be decreased to \(\max\{\tau(x_{b})\mid b\in B\}+1\). Thus the formula (27) makes \(\tau(x_{a})\) minimal as before.

**Example 7**.: _Let us consider a positive weight rule \(a\gets 7\leq\{b_{1}=7,b_{2}=5,b_{3}=3,b_{4}=2,b_{5}=1\}\) in the context of a model \(\langle N,\tau\rangle\) where \(N\) sets all body atoms \(b_{1},\ldots,b_{5}\) true and \(\tau\) the level numbers_ \[\tau(x_{a})=5,\quad\tau(x_{b_{1}})=5,\quad\tau(x_{b_{2}})=4,\quad\tau(x_{b_{3} })=3,\quad\tau(x_{b_{4}})=2,\quad\tau(x_{b_{5}})=1.\] _The \(\subseteq\)-minimal satisfiers of the body are \(B_{1}=\{b_{1}\}\), \(B_{2}=\{b_{2},b_{3}\}\), and \(B_{3}=\{b_{2},b_{4}\}\). The only atom in \(B_{1}\) has a weight that reaches the bound \(7\) alone, but it is derived too late to affect the derivation of \(a\). Both \(B_{2}\) and \(B_{3}\) yield the same value \(\max\{\tau(x_{b})\mid b\in B_{i}\}=4\) and hence justify the one higher value \(5\) assigned to \(x_{a}\). Interestingly, there is also an atom \(b_{5}\) that is derived early, but whose weight is irrelevant for satisfying the rule body or for deriving \(a\) any earlier. In fact, this weighted atom could be safely deleted from the rule (under strong equivalence)._

_As regards the satisfaction of (27), the relevant body atoms are \(b_{3}\), \(b_{4}\), and \(b_{5}\), for which the atom \(\text{gap}(a,b_{i})\) is made true by (8).
The sum of the respective weights, \(3+2+1\), is less than \(7\)._

_Also, note that the level numbers assigned by \(\tau\) to \(b_{1},\ldots,b_{5}\) can be easily arranged with positive rules, e.g., by the fact \(b_{5}\). and the chain of rules \(b_{1}\gets b_{2}\). \(b_{2}\gets b_{3}\). \(b_{3}\gets b_{4}\). \(b_{4}\gets b_{5}\). Given the respective program \(P\), the operator \(\mathbf{T}_{P}\) should be applied \(5\) times to make \(a\) true. \(\blacksquare\)_

### Negative Conditions

Negative body conditions form the missing pieces when it comes to fully covering WCPs with level ranking constraints as embedded in tight ordered completion. To this end, our strategy is based on rewriting and ideas used in [8], where the correctness of normalization is first shown for positive programs and then generalized for programs with negation. In a nutshell, negative literals in (4) can be replaced by new atoms \(\overline{c_{1}},\ldots,\overline{c_{m}}\) that respectively denote that \(c_{1},\ldots,c_{m}\) cannot be derived. These atoms are subsequently defined by (atomic) normal rules \(\overline{c_{1}}\leftarrow\sim\!\!c_{1}\). \(\ldots\) \(\overline{c_{m}}\leftarrow\sim\!\!c_{m}\). The outcome is a set of rules that is visibly strongly equivalent with the original weight rule (4). The completions of \(\overline{c_{1}},\ldots,\overline{c_{m}}\) are \(\overline{c_{1}}\leftrightarrow\neg c_{1},\ldots,\overline{c_{m}}\leftrightarrow \neg c_{m}\), enabling the substitution of \(\overline{c_{1}},\ldots,\overline{c_{m}}\) by \(\neg c_{1},\ldots,\neg c_{m}\) in any formulas of interest. In this way, \(\overline{c_{1}},\ldots,\overline{c_{m}}\) can be readily forgotten under classical semantics.

The transformation described above leaves the SCCs of the program intact, because positive dependencies are not affected. Thus, besides taking care of negative body conditions, our next rewriting step recalls the scope \(S\subseteq\operatorname{At}(P)\) from (10)-(12): we present TOC formulas to cover WCPs split into modules based on SCCs. We say that a WCP is _pure_ if it contains weight rules (4) only.

**Definition 3**.: _Let \(P\) be a pure WCP and \(S\subseteq\operatorname{At}(P)\) an SCC of \(P\) used as the scope of completion. The tight ordered completion of \(P\) relative to \(S\), denoted \(\operatorname{TOC}^{S}(P)\), consists of the formulas listed below:_

* _If_ \(|S|>1\)_, then for each_ \(a\in S\)_:_ 1. _the formulas (6);_ 2. _the formulas (7) and (8) for each_ \(b\in S\) _such that_ \(a\succeq_{P}b\)_; plus_ 3.
_the following formulas, based on the definition_ \(\operatorname{Def}_{P}(a)=\{r_{1},\ldots,r_{k}\}\) _in the program_ \(P\)_:_ \[a \leftrightarrow \operatorname{app}^{1}(a)\vee\cdots\vee\operatorname{app}^{k}(a),\] (28) \[\operatorname{app}^{i}(a) \leftrightarrow \operatorname{int}^{i}(a)\vee\operatorname{ext}^{i}(a),\] (29) \[\operatorname{int}^{i}(a) \leftrightarrow (\sum_{b\in\mathrm{B}^{+}(r_{i})\cap S}(w_{b}\times\operatorname {dep}(a,b))+\sum_{b\in\mathrm{B}^{+}(r_{i})\setminus S}(w_{b}\times b)-\sum_{c\in\mathrm{B}^{-}(r_{i})}(w_{c}\times c)\geq w_{r_{i}}-w_{i}),\] (30) \[\operatorname{int}^{i}(a) \rightarrow (\sum_{b\in\mathrm{B}^{+}(r_{i})\cap S}(w_{b}\times\operatorname {gap}(a,b))+\sum_{b\in\mathrm{B}^{+}(r_{i})\setminus S}(w_{b}\times b)-\sum_{c\in\mathrm{B}^{-}(r_{i})}(w_{c}\times c)<w_{r_{i}}-w_{i})\vee\operatorname{ext}^{i}(a),\] (31) \[\operatorname{ext}^{i}(a) \leftrightarrow (\sum_{b\in\mathrm{B}^{+}(r_{i})\setminus S}(w_{b}\times b)-\sum_{c\in\mathrm{B}^{-}(r_{i})}(w_{c}\times c)\geq w_{r_{i}}-w_{i}),\] (32) \[\operatorname{ext}^{i}(a) \rightarrow (x_{a}\leq 1),\] (33) _where_ \(w_{i}=\sum_{c\in\mathrm{B}^{-}(r_{i})}w_{c}\) _is the adjustment to the bound_ \(w_{r_{i}}\) _of_ \(r_{i}\)_._

* _If_ \(|S|=1\) _and_ \(S=\{a\}=\operatorname{SCC}(a)\)_, then (28)-(32) are replaced by the standard completion_ \[\operatorname{app}^{i}(a)\leftrightarrow(\sum_{b\in\mathrm{B}^{+}(r_{i})}(w_ {b}\times b)-\sum_{c\in\mathrm{B}^{-}(r_{i})}(w_{c}\times c)\geq w_{r_{i}}-w_{ i}).\] (34)

In the equations of Definition 3, the treatment of the negative literals occurring in a defining rule \(r_{i}\) is justified by their contribution \(\sum_{c\in\mathrm{B}^{-}(r_{i})}(w_{c}\times(1-c))=w_{i}-\sum_{c\in\mathrm{B}^{-}(r_{i})}(w_{c}\times c)\) toward the bound \(w_{r_{i}}\). Weight rules can also create external support in more flexible ways, i.e., if the bound \(w_{r_{i}}\) can be reached by satisfying positive body conditions outside the SCC \(S\) in question or negative body conditions. Moreover, a single weight rule \(r_{i}\) may justify the head \(a\) either internally or externally, as formalized by (29), which is different from the case of normal rules. Formulas (30) and (32) capture this distinction. Note that \(\mathrm{ext}^{i}(a)\) implies \(\mathrm{int}^{i}(a)\). The constraint (31) generalizes (11) while (33) is the analog of (12). The consequent of (31) is weakened by the condition \(\mathrm{ext}^{i}(a)\), since the other disjunct is falsified if \(r_{i}\) provides external support, \(x_{a}=1\) holds, and all atoms \(\mathrm{dep}(a,\cdot)\) and \(\mathrm{gap}(a,\cdot)\) associated with \(a\) are falsified.

The reader may have noticed that the formulas concerning \(\mathrm{int}^{i}(a)\) and \(\mathrm{ext}^{i}(a)\) share a potentially large subexpression \(s_{i}=\sum_{b\in\mathrm{B}^{+}(r_{i})\setminus S}(w_{b}\times b)-\sum_{c\in \mathrm{B}^{-}(r_{i})}(w_{c}\times c)\). Certain back-end formalisms, such as DL and MIP, enable the representation of this expression only once using an integer variable. The following theorem states the correctness of \(\mathrm{TOC}(P)\), obtained as the union of \(\mathrm{TOC}^{S}(P)\) for the SCCs \(S\) of \(P\).
The claimed one-to-one correspondence, however, must take into account the fact that a satisfying assignment \(\langle M,\tau\rangle\) in DL can be replicated into infinitely many copies by substituting \(\tau\) in \(\langle M,\tau\rangle\) by a function \(\tau^{\prime}(x)=\tau(x)+k\) for any \(k\in\mathbb{Z}\). The remedy is to introduce a special variable \(z\) which is assumed to hold \(0\) as its value, so that the values of the other variables are set relative to the value of \(z\). The current difference constraints mentioning only one variable must be rewritten using \(z\). For instance, \(1\leq x_{a}\leq|S|+1\) in (6) is expressed by the conjunction of \(z-x_{a}\leq-1\) and \(x_{a}-z\leq|S|+1\), and this is how the variable \(z\) gets introduced.

**Theorem 2**.: _Let \(P\) be a WCP with SCCs \(S_{1},\ldots,S_{q}\) and \(P_{S_{1}},\ldots,P_{S_{q}}\) the respective modules of \(P\). Then \(P\) and the set of formulas \(F=\bigcup_{j=1}^{q}\mathrm{TOC}^{S_{j}}(P)\) are visibly equivalent (up to assigning \(z=0\))._

## 5 Generalizations Toward Convex Aggregates

Weight rules (4) can be generalized by introducing upper bounds \(u\) besides lower bounds \(l\): \[a\gets l\leq\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}},\sim c_{1}=w_{c_{1}}, \ldots,\sim c_{m}=w_{c_{m}}\}\leq u. \tag{35}\]

This gives rise to a _convex_ condition, which is easier to explain for positive rules (\(m=0\)). If the condition can be satisfied by setting the atoms of \(B_{1}\subseteq\{b_{1},\ldots,b_{n}\}\) true and the same holds for a superset \(B_{2}\subseteq\{b_{1},\ldots,b_{n}\}\) of \(B_{1}\), then every intermediate set \(B^{\prime}\) such that \(B_{1}\subseteq B^{\prime}\subseteq B_{2}\) satisfies the condition, too. It is easy to check that this is a property of (35), since \(B_{1}\subseteq B_{2}\) implies \(\mathrm{WS}_{B_{1}}(\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\})\leq\mathrm{WS }_{B_{2}}(\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}}\})\) in general. However, the bounds play a role here: upper bounds jeopardize monotonicity in general, but convexity is still guaranteed. Thus, we use WCPs based on (35) to understand the role of level ranking constraints in the context of convex aggregates. The effect of negative literals is anti-monotonic, but their semantics is determined by the reduct as usual.

Simons et al. [31] present a transformation that checks the upper bound of (35) with another weight rule. The rules below adopt this idea, but using a constraint and new atoms that are in line with (34) and \(r_{i}\in\mathrm{Def}_{P}(a)\): \[\mathrm{app}^{i}(a) \leftarrow l\leq\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}},\sim c_{1}=w_{c_{1} },\ldots,\sim c_{m}=w_{c_{m}}\}. \tag{36}\] \[\mathrm{vub}^{i}(a) \leftarrow u+1\leq\{b_{1}=w_{b_{1}},\ldots,b_{n}=w_{b_{n}},\sim c_{1}=w_{ c_{1}},\ldots,\sim c_{m}=w_{c_{m}}\}.\] (37) \[\leftarrow \mathrm{app}^{i}(a),\mathrm{vub}^{i}(a). \tag{38}\]

For the moment, this relaxes the notion of applicability for \(r_{i}\), but the constraint (38) makes sure that the upper bound is not _violated_. Since no atom depends positively on \(\mathrm{vub}^{i}(a)\), the resulting TOC formula is analogous to (34) with \(\mathrm{vub}^{i}(a)\) as its head. The constraint (38) can be expressed as \(\neg(\mathrm{app}^{i}(a)\wedge\mathrm{vub}^{i}(a))\). The net effect is that if \(\mathrm{app}^{i}(a)\) is true, then \((\sum_{b\in\mathrm{B}^{+}(r_{i})}(w_{b}\times b)-\sum_{c\in\mathrm{B}^{-}(r_{i })}(w_{c}\times c)\leq u-w_{i})\) must hold.
Thus (30)-(32) can be revised for (35) by replacing the previous lower bound \(w_{r_{i}}\) with \(l\) and by incorporating \(u-w_{i}\) into (30) and (32) as an upper bound; either as two pseudo-Boolean constraints or as a combined one with two bounds. The respective upper bound does not play a role in (31), which concerns the criticality of the lower bound, and this check does not interfere with the satisfaction of the upper bound due to convexity.

### Abstract Constraint Atoms

Based on the preceding analysis of weight rules, we will rephrase our approach for an arbitrary convex aggregate \(\operatorname{Aggr}(B)\) that takes a set of (body) atoms \(B\) as input and accepts a certain subset \(\mathcal{S}\) of the powerset \(\mathbf{2}^{B}\) by evaluating to true. This set must satisfy the convexity condition, i.e., if \(S_{1},S_{2}\in\mathcal{S}\), then \(S\in\mathcal{S}\) for each intermediate set \(S_{1}\subseteq S\subseteq S_{2}\), too. Moreover, we let \(\operatorname{Aggr}^{*}(B)\) stand for the _monotonic_ (upward) _closure_ of \(\operatorname{Aggr}(B)\) based on the signature \(B\). The set of satisfiers \(\mathcal{S}\subseteq\mathbf{2}^{B}\) of the latter is extended to the set of satisfiers \(\mathcal{S}^{*}=\{S\mid\exists S^{\prime}\in\mathcal{S},\;S^{\prime}\subseteq S \subseteq B\}\) for the former. The syntax of logic programs can be extended by introducing aggregated conditions as rule bodies in analogy to (3) and (4): \[a\leftarrow\operatorname{Aggr}(b_{1},\ldots,b_{n},\sim c_{1},\ldots,\sim c_{m }). \tag{39}\]

Let \(P\) be a logic program consisting of rules of the form (39) and \(M\subseteq\operatorname{At}(P)\) a _model_ of \(P\) that satisfies all rules (39) in the standard sense, i.e., if the body is satisfied by \(M\), then the head \(a\in M\). The reduct \(P^{M}\) can be formed by including a positive rule \(a\leftarrow\operatorname{Aggr}^{*}(b_{1},\ldots,b_{n},\,c_{1}\in M\,?\,\bot: \top,\;\ldots,\;c_{m}\in M\,?\,\bot:\top)\) for each rule (39) such that \(M\models\operatorname{Aggr}(b_{1},\ldots,b_{n},\sim c_{1},\ldots,\sim c_{m})\). In the above, we exploit _conditional substitutions_ \(c\,?\,v\colon u\) yielding the value \(v\) if the condition \(c\) is true and the value \(u\) otherwise. Thus, given a set of input atoms \(I\subseteq\operatorname{At}(P)\setminus\operatorname{H}(P)\), we can calculate the least model of \(P^{M}\) and assign the level ranks \(\#a=i\) of atoms \(a\in\operatorname{H}(P)\) based on the membership \(a\in(\mathbf{T}_{P^{M}}\uparrow^{i}(I))\setminus(\mathbf{T}_{P^{M}}\uparrow^{i-1}(I))\). In this way, level rankings and the TOC formulas built on them extend to rules with
Using them level rankings can be made unique which is desirable, e.g., when counting answer sets. A further by-product is that any WCP can be translated into a _tight_ WCP if the formulas in \(\operatorname{TOC}(P)\) are expressed with rules rather than formulas, also justifying "_tight_" as part of TOC. Although the results of this article are theoretical by nature, they enable new kinds of strategies when it comes to implementing the search of stable models using existing solver technology for SAT, SMT, and MIP. E.g., the presented TOC formulas offer a common ground for the translators in the lp2* family [20]. We leave _non-convex_ aggregates as future work for two main reasons. First, there is no consensus about their semantics when recursive definitions are enabled [3]. The ASP-core-2 language standard assumes the _stratified_ setting only whereas the Clingo system implements one particular semantics for recursive non-convex aggregates [14]. Second, there is also evidence [2] that the removal of non-convex aggregates tends to produce disjunctive rules which go beyond level rankings in the first place. One potential solution is provided by the _decomposition_ of non-convex aggregates into their maximal convex regions, cf. [19, 25]. AcknowledgmentsThis research is partially supported by the Academy of Finland projects AI-ROT (#335718), XAILOG (#345633), and ETAIROS (#352441).
2303.06238
Light-induced Nonlinear Spin Hall Current in Single-layer WTe$_2$
In this theoretical investigation, we analyze light-induced nonlinear spin Hall currents in a gated single-layer 1T$'$-WTe$_2$, flowing transversely to the incident laser polarization direction. Our study encompasses the exploration of the second- and third-order rectified spin Hall currents using an effective low-energy Hamiltonian and employing Kubo's formalism. We extend our analysis to a wide frequency range spanning both transparent and absorbing regimes, investigating the influence of light frequency below and above the optical band gap. Additionally, we investigate the influence of an out-of-plane gate potential on the system, which disrupts inversion symmetry and effectively manipulates both the strength and sign of the nonlinear spin Hall responses. We predict a pronounced third-order spin Hall current relative to its second-order counterpart. The predicted nonlinear spin currents show a strong anisotropic dependence on the laser polarization angle. The outcomes of our study contribute to a generalized framework for nonlinear response theory within the spin channel and will impact the development of the emerging field of opto-spintronics.
Pankaj Bhalla, Habib Rostami
2023-03-10T23:08:32Z
http://arxiv.org/abs/2303.06238v2
# In-gap Nonlinear Spin Hall Current by Two-color Excitation in Single-layer WTe\({}_{2}\)

###### Abstract

In this work, we theoretically study the nonlinear spin currents in a gated single-layer 1T\({}^{\prime}\)-WTe\({}_{2}\) with strong spin-orbit coupling driven by an intense laser pulse. We investigate both second- and third-order spin currents under the influence of an external displacement field. We use an effective low-energy Hamiltonian for the modeling and employ Kubo's formalism based on Green's function method. We obtain a large third-order two-color rectified spin Hall current generated by the interference of two co-linearly polarised light beams that flows transverse to the incident laser polarisation direction. To gain a better understanding of this phenomenon, we also calculated the second-order rectified spin current by a single-color light beam and compared it with the two-color spin current. To further explore this effect, we analyzed the impact of the out-of-plane gate potential, which breaks the inversion symmetry of the system and efficiently manipulates the strength and sign of the nonlinear spin Hall responses. As the most striking result of this study, we predict an in-gap nonlinear spin current induced by light without inter-band optical absorption. The in-gap nonlinear charge current has been discussed previously, and our finding generalizes the effect to the spin channel and to third-order response theory.

## I Introduction

Electronic transport in two-dimensional (2D) quantum materials has emerged as a central topic for new physics and novel technologies in the spintronic industry and future energy-saving devices [1; 2]. The second-order nonlinear photocurrent in novel quantum materials is attracting interest from both applied and fundamental points of view [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. In particular, one of the main aims is to realize topological protection of nonlinear current in topological quantum materials such as Dirac and Weyl systems [5; 6; 25; 26]. Experimental measurements in 2D WTe\({}_{2}\) show that the nonlinear Hall effect arises due to the momentum derivative of the Berry curvature, the so-called Berry curvature dipole [3], which can be tuned with the out-of-plane potential bias via the top and bottom gate voltages [27; 28].

The counter-propagation of electrons with opposite spins [29] gives rise to the celebrated spin Hall effect on applying a bias voltage between two contacts [30; 31; 32; 33]. The phenomenon arises due to the strong spin-orbit coupling, a key element in the electronic band structure of these materials, and generates a spin current [34; 31]. A generalization to the nonlinear regime is a rapidly growing field of research. For instance, we recall theoretical and experimental studies on photoinduced second-order spin currents in quantum materials with spin-orbit coupling [35; 36; 37; 38; 39; 40]. Measurements of the nonlinear Hall effect [41] using infrared pulse excitation and of the high-temperature intrinsic spin-Hall effect [42] in two-dimensional WTe\({}_{2}\) motivate us to look for the nonlinear generalization of the spin Hall current in this exotic quantum material with a rich ground-state phase diagram.

Figure 1: Schematic setups for the light-induced nonlinear rectified spin current in a 2D WTe\({}_{2}\) device. The left panel refers to the second-order spin current that arises in response to the single-color light beam. The right panel stems from the interference of two-color light beams and represents the third-order rectified spin current. Here \(\mathbf{E}_{1}(t)\) and \(\mathbf{E}_{2}(t)\) refer to the electric fields, and \(U\) corresponds to the out-of-plane gate potential.
The generation of the rectified spin current has been widely studied in traditional semiconductors and quantum-well structures, where a linear wave-vector term in the Bloch Hamiltonian stemming from the spin-orbit interaction leads to spin splitting in the band structure and plays a significant role [43; 44; 45; 46; 47; 48]. The nonlinear spin current has yet to be discussed or explored for monolayer WTe\({}_{2}\), which has a different crystalline symmetry; this is the main aim of this work. The time-reversal-symmetric 1T\({}^{\prime}\)-WTe\({}_{2}\) structure yields a second-order current when the inversion symmetry is broken [16; 49] by an out-of-plane potential bias that allows tuning the gap associated with each valley and spin [50]. This results in a phase transition driven by the interplay of the spin-orbit coupling and the out-of-plane bias [51].

The second-order rectified current \(j_{dc}^{(2)}\sim E(\omega)E^{*}(\omega)\) can be generated in non-centrosymmetric materials. There is, however, a growing interest in the third-order rectified current that can be achieved in the presence of an extra direct electric field, \(j_{dc}^{(3)}\sim E_{dc}E(\omega)E^{*}(\omega)\), which can induce a rectified current in centrosymmetric systems [52, 53]. The topological properties of the third-order rectified current are also attracting attention [4; 54]. Another third-order rectification process is obtained in response to a two-color driving field. The two-color field is formally written as \[\mathbf{E}(t)=\mathbf{E}_{1}e^{i(\omega_{1}t+\phi_{1})}+\mathbf{E}_{2}e^{i(\omega_{2}t+\phi_ {2})}+c.c.\, \tag{1}\] where \(\mathbf{E}_{1}\) and \(\mathbf{E}_{2}\) are the electric fields associated with frequencies \(\omega_{1}\) and \(\omega_{2}\), and \(\phi_{1}\) and \(\phi_{2}\) are the respective phases. Two-color optical rectification is a process that involves the interference of a single-frequency beam with a second-harmonic laser beam, \(j_{dc}^{(3)}\sim E_{1}(\omega_{1})E_{1}(\omega_{1})E_{2}^{*}(\omega_{2})\) with \(\omega_{2}=2\omega_{1}\), which was first introduced by Manykin and Alfanasev [55]. The process has been extensively exploited in semiconductors and two-dimensional materials by illuminating a combination of monochromatic beams of frequency \(\omega\) and \(2\omega\), which leads to one-photon and two-photon absorption transitions [56; 57; 58; 59; 60; 61]. With this process, the generation of an electrical current can be achieved by adjusting the relative phases of the two laser beams; the current varies sinusoidally as \(\sin(2\phi_{\omega}-\phi_{2\omega})\) due to the asymmetrical distribution of carriers in momentum space [56; 62; 63]. For instance, co-linearly polarized beams cannot induce a third-order two-color electrical rectified current due to the zero phase difference [57]. However, the spin current, which is proportional to \(\cos(2\phi_{\omega}-\phi_{2\omega})\), can be obtained in the co-linear configuration.
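The phase dependencies quoted above are easy to visualize numerically. The following sketch evaluates the two-color field of Eq. (1) for colinear beams with \(\omega_{2}=2\omega_{1}\) (up to the overall factor from the \(+c.c.\) term) and uses the time average of \(E(t)^{3}\) as a simple proxy for the rectifying asymmetry of the field; all amplitudes and frequencies are arbitrary illustrative choices, and the \(\cos(2\phi_{\omega}-\phi_{2\omega})\) dependence emerges directly.

```python
# Illustration of the two-color field of Eq. (1) with w2 = 2*w1 and of the
# relative-phase dependence discussed above; amplitudes, frequency, and
# the time grid are arbitrary illustrative choices.
import numpy as np

w = 2 * np.pi                                   # fundamental frequency
t = np.linspace(0, 4, 4000, endpoint=False)     # four full periods

def two_color(t, phi1, phi2, E1=1.0, E2=0.5):
    """Real colinear field built from Eq. (1), up to an overall factor."""
    return (E1 * np.exp(1j * (w * t + phi1))
            + E2 * np.exp(1j * (2 * w * t + phi2))).real

# <E^3> is a simple proxy for the field asymmetry that drives
# rectification; it evaluates to (3/4) E1^2 E2 cos(2*phi1 - phi2).
for dphi in (0.0, np.pi / 2, np.pi):
    E = two_color(t, phi1=0.0, phi2=-dphi)
    print(f"2*phi_w - phi_2w = {dphi:4.2f}  ->  <E^3> = {np.mean(E**3):+.3f}")
```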
In this article, we theoretically investigate the nonlinear rectified spin current in monolayer WTe\({}_{2}\) (see Fig. 1), based on second-order and third-order response theories. We utilize a many-body diagrammatic perturbation technique. We show that the transverse nonlinear charge current vanishes due to the symmetry conditions, but a nonlinear spin current remains finite in both the second- and third-order rectification mechanisms. The second-order spin current is dominantly driven by the current-injection mechanism, which is finite in the interband frequency regime \(\hbar\omega>\Delta_{op}\), where \(\Delta_{op}\) is the optical gap based on the Pauli exclusion principle at zero electronic temperature. However, the two-color spin current mechanism is more complex, with non-trivial features. Intriguingly, we obtain a sub-gap (i.e., \(\hbar\omega<\Delta_{op}\)) two-color spin response that can be finite in the non-absorbing regime. In other words, the effect is not due to the transport of the photoexcited carrier population. Instead, it arises due to virtual excitations, which can be finite even in the sub-gap regime in time-reversal broken systems [64; 65]. We find that the subgap response is non-zero even in the limit \(\eta\to 0\), where \(\eta\) characterizes the adiabatic switching of the driving term at finite frequency. We further provide a quantitative analysis of the second- and third-order spin susceptibilities and discuss their dependence on the laser frequency, the external displacement field (the gate potential), the chemical potential, and the laser polarisation angle.

## II Theory and method

We utilize an effective continuum \(\mathbf{k}\cdot\mathbf{p}\) model Hamiltonian for the 1T\({}^{\prime}\) phase of monolayer WTe\({}_{2}\), which incorporates the p-orbital of Te and the d-orbital of W. This Hamiltonian nicely describes the low-energy bands and has the following representation [50; 66]: \[\hat{\mathcal{H}}= \{Ak^{2}\hat{\sigma}_{0}+(\delta+Bk^{2})\hat{\sigma}_{z}+v_{y}k_ {y}\hat{\sigma}_{y}+U\hat{\sigma}_{x}\}\otimes\hat{s}_{0}+v_{x}k_{x}\hat{\sigma}_{x}\otimes\hat{s}_{y}, \tag{2}\] where \(\hat{\sigma}_{i=x,y,z}\) and \(\hat{s}_{i=x,y,z}\) are Pauli matrices in the orbital and spin basis, respectively, and \(\hat{\sigma}_{0}\) (\(\hat{s}_{0}\)) is the identity matrix in the orbital (spin) basis. The parameter \(2\delta\) stands for the gap at the \(\Gamma\)-point, and \(v_{y}\) characterizes the anisotropy in momentum space. The parameters \(2A=1/m_{p}-1/m_{d}\) and \(2B=1/m_{p}+1/m_{d}\), where \(m_{p}\) and \(m_{d}\) define the effective masses of the p-orbital of Te and the d-orbital of W, respectively. The parameter \(U\) represents the displacement field, i.e., the coupling between the out-of-plane electric field and the orbitals, which breaks the inversion symmetry of the system; \(v_{x}\) is the spin-orbit coupling strength, and the wave vector is \(\mathbf{k}=(k_{x},k_{y})\) with \(k=|\mathbf{k}|\). The parameter values are taken from _ab initio_ band-structure calculations [51] and fitted to the experimental predictions to obtain the spin-orbit coupling gap \(\delta_{\rm soc}=45\) meV at the \(Q=\sqrt{|\delta|/2B}\) Dirac points in momentum space [67].
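Because the only spin matrix in Eq. (2) is \(\hat{s}_{y}\), the Hamiltonian is block-diagonal in the basis that diagonalizes \(\hat{s}_{y}\), leaving a 2\(\times\)2 orbital problem per spin sector \(s=\pm 1\). The sketch below exploits this to compute the spin-resolved bands; the numerical parameter values are illustrative placeholders, not the fitted _ab initio_ numbers of Refs. [50; 51].

```python
# Spin-sector Bloch Hamiltonian of Eq. (2): in the basis diagonalizing
# s_y, each spin s = +/-1 sees the 2x2 orbital matrix below. Parameter
# values are illustrative placeholders (arbitrary units), not fitted ones.
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_spin_sector(kx, ky, s, A=1.0, B=1.0, delta=-0.5, vx=1.0, vy=1.0, U=0.1):
    k2 = kx**2 + ky**2
    return (A * k2 * s0 + (delta + B * k2) * sz
            + vy * ky * sy + (U + s * vx * kx) * sx)

def bands(kx, ky, **pars):
    """Spin-resolved band energies eps^lambda_{k,s} of the model."""
    return {s: np.linalg.eigvalsh(h_spin_sector(kx, ky, s, **pars))
            for s in (+1, -1)}

# The analytic bands A k^2 +/- sqrt((delta+B k^2)^2 + (vy ky)^2
# + (U + s vx kx)^2) are reproduced; the gate potential U splits the
# gaps of the two spin sectors oppositely along k_x.
print(bands(0.5, 0.0))   # near k = Q = sqrt(|delta|/(2B)) on the k_x axis
```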
The second-order photogalvanic spin current thus follows \[j_{a}^{\rm PG}=\sum_{bc}\frac{\chi_{abc}^{(2),\rm spin}(\omega,-\omega)}{(i\omega)(-i\omega)}E_{b}(\omega)E_{c}^{*}(\omega). \tag{4}\] Similarly, the two-color third-order photogalvanic spin current reads \[j_{a}^{\rm 2c-PG}=\sum_{bcd}\frac{\chi_{abcd}^{(3),\rm spin}(\omega,\omega,-2\omega)}{(i\omega)(i\omega)(-i2\omega)}E_{b}(\omega)E_{c}(\omega)E_{d}^{*}(2\omega), \tag{5}\] where \(\chi_{abc}^{(2)}(\omega_{1},\omega_{2},s)\) and \(\chi_{abcd}^{(3)}(\omega_{1},\omega_{2},\omega_{3},s)\) respectively stand for the second- and third-order spin-resolved susceptibility tensor elements in the two-dimensional x-y coordinate system. In addition, \(\mathbf{E}(\omega)\) and \(\mathbf{E}(2\omega)\) stand for the field components of the two colors with frequencies \(\omega\) and \(2\omega\). We employ the diagrammatic perturbative approach, and the response functions are represented by the Feynman diagrams in Figs. 2 and 3. The solid lines are non-interacting fermionic propagators, while solid circles represent current vertices. The second-order spin-resolved current is given by the correlation of three paramagnetic current operators (one-photon current couplings) \(\hat{j}_{a}=(-e/\hbar)\partial_{k_{a}}\hat{\mathcal{H}}\), i.e., \(\chi^{(2),P}_{abc}\sim\langle\hat{j}_{a}\hat{j}_{b}\hat{j}_{c}\rangle\), which can be diagrammatically depicted as in Fig. 2(a) and formally reads \[\chi^{(2),P}_{abc}(\omega_{1},\omega_{2},s)=\sum_{\mathcal{P}}\sum_{\{\lambda_{i}\},\mathbf{k}}\frac{j_{a}^{\lambda_{1}\lambda_{2}}(\mathbf{k},s)j_{b}^{\lambda_{2}\lambda_{3}}(\mathbf{k},s)j_{c}^{\lambda_{3}\lambda_{1}}(\mathbf{k},s)}{\hbar\omega_{\Sigma}+\varepsilon_{\mathbf{k},s}^{\lambda_{2}\lambda_{1}}}\times\left\{\frac{f^{\lambda_{2}\lambda_{3}}(\mathbf{k},s)}{\hbar\omega_{1}+\varepsilon_{\mathbf{k},s}^{\lambda_{2}\lambda_{3}}}-\frac{f^{\lambda_{3}\lambda_{1}}(\mathbf{k},s)}{\hbar\omega_{2}+\varepsilon_{\mathbf{k},s}^{\lambda_{3}\lambda_{1}}}\right\}, \tag{6}\] where \(\omega_{\Sigma}=\omega_{1}+\omega_{2}\), \(\varepsilon_{\mathbf{k},s}^{\lambda_{i}\lambda_{j}}=\varepsilon_{\mathbf{k},s}^{\lambda_{i}}-\varepsilon_{\mathbf{k},s}^{\lambda_{j}}\) is the energy difference between bands with a given spin index \(s=\pm 1\), and \(f^{\lambda_{i}\lambda_{j}}(\mathbf{k},s)=f(\varepsilon_{\mathbf{k},s}^{\lambda_{i}})-f(\varepsilon_{\mathbf{k},s}^{\lambda_{j}})\) refers to the difference between the Fermi functions of the corresponding bands. Here \(f(\varepsilon)=[1+e^{\beta(\varepsilon-\mu)}]^{-1}\) is the Fermi-Dirac distribution function, \(\mu\) is the chemical potential, and \(\beta=1/k_{B}T\), with \(k_{B}\) the Boltzmann constant and \(T\) the electron temperature. The one-photon coupling vertex is \(j_{a}^{\lambda_{i}\lambda_{j}}(\mathbf{k},s)=\langle u_{\mathbf{k},s}^{\lambda_{i}}|\hat{j}_{a}|u_{\mathbf{k},s}^{\lambda_{j}}\rangle\), with \(|u_{\mathbf{k},s}^{\lambda_{i}}\rangle\) being an eigenvector of the Hamiltonian. Note that \(\sum_{\mathcal{P}}\) stands for the intrinsic permutation symmetry \((b,\omega_{1})\Longleftrightarrow(c,\omega_{2})\), and \(\hbar\omega_{i}\rightarrow\hbar\omega_{i}+i\eta\) with the parameter \(\eta\to 0^{+}\). The diamagnetic contribution to the second-order response (Fig. 2(b)-(d)) is given by
\[\chi^{(2),D}_{abc}(\omega_{1},\omega_{2},s)=-\sum_{\mathcal{P}}\sum_{\{\lambda_{i}\},\mathbf{k}}\Big\{\xi^{\lambda_{1}\lambda_{1}}_{abc}f(\varepsilon_{\mathbf{k},s}^{\lambda_{1}})+j_{a}^{\lambda_{1}\lambda_{2}}(\mathbf{k},s)\kappa^{\lambda_{2}\lambda_{1}}_{bc}(\mathbf{k},s)\frac{f^{\lambda_{1}\lambda_{2}}(\mathbf{k},s)}{\hbar\omega_{1}+\varepsilon_{\mathbf{k},s}^{\lambda_{1}\lambda_{2}}}+\frac{f^{\lambda_{1}\lambda_{2}}(\mathbf{k},s)}{2}\times\frac{j_{b}^{\lambda_{2}\lambda_{1}}(\mathbf{k},s)\kappa^{\lambda_{1}\lambda_{2}}_{ac}(\mathbf{k},s)+j_{a}^{\lambda_{2}\lambda_{1}}(\mathbf{k},s)\kappa^{\lambda_{1}\lambda_{2}}_{bc}(\mathbf{k},s)}{\hbar\omega_{\Sigma}+\varepsilon_{\mathbf{k},s}^{\lambda_{1}\lambda_{2}}}\Big\}, \tag{7}\] where the two-photon and three-photon current couplings are \(\hat{\kappa}_{ab}=(-e/\hbar)^{2}\partial_{k_{a}}\partial_{k_{b}}\hat{\mathcal{H}}\) and \(\hat{\xi}_{abc}=-(1/2)(e/\hbar)^{3}\partial_{k_{a}}\partial_{k_{b}}\partial_{k_{c}}\hat{\mathcal{H}}\), respectively. Similarly, following the diagrammatic scheme in Fig. 3(a), the third-order spin response originating from the paramagnetic current operator is formally given by \(\chi^{(3),P}_{abcd}\sim\langle\hat{j}_{a}\hat{j}_{b}\hat{j}_{c}\hat{j}_{d}\rangle\). In principle, there are seven distinct Feynman diagrams involving multi-photon current couplings [68]. However, for our Hamiltonian model, only four additional diagrams can contribute, as depicted in Fig. 3(b,c,d,f), which can be written in terms of the following correlation functions: \(\langle\hat{\kappa}_{ab}\hat{\kappa}_{cd}\rangle\), \(\langle\hat{j}_{a}\hat{j}_{b}\hat{\kappa}_{cd}\rangle\), \(\langle\hat{\kappa}_{ab}\hat{j}_{c}\hat{j}_{d}\rangle\) and \(\langle\hat{j}_{a}\hat{\kappa}_{bc}\hat{j}_{d}\rangle\). We provide the explicit form of the third-order response in Appendix A for brevity.

Figure 2: Feynman diagrams for the second-order response. (a) corresponds to the paramagnetic contribution, (b)-(d) to the diamagnetic contributions. Here solid lines indicate the fermionic propagators, dashed lines refer to external photons, and solid circles denote current vertices.

Figure 3: Feynman diagrams for the third-order response. (a) refers to the paramagnetic contribution, (b)-(h) to the diamagnetic contributions.

### Nonlinear spin Hall current

The \(n^{\text{th}}\)-order nonlinear response functions to external fields are dictated by the susceptibility tensor components \(\chi^{(n)}_{abc\cdots}\). The tensor components are restricted by the space inversion, time reversal, rotational, and mirror symmetries of the crystalline quantum material. Specifically, for a system having a mirror symmetry \(\mathcal{M}_{a}\) with respect to the plane perpendicular to an arbitrary spatial axis \(a\) and the spin polarization direction, the susceptibility tensor components with an odd number of '\(a\)' spatial indices do not contribute to the charge current. This arises due to the cancellation of the up- and down-spin components, \(\chi^{(n)}=\chi^{(n)}_{\uparrow}+\chi^{(n)}_{\downarrow}=0\). However, these tensor components do contribute to the spin response function \(\chi^{(n),spin}=\chi^{(n)}_{\uparrow}-\chi^{(n)}_{\downarrow}=2\chi^{(n)}_{\uparrow}\).
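To make the band-structure input of these response functions concrete, the following minimal Python sketch (ours, not part of the paper) diagonalizes the spin-diagonal \(2\times 2\) blocks of the Hamiltonian in Eq. (2) and scans for the direct gap of one spin sector as the gate potential \(U\) is varied. The parameter values are the DFT-fitted ones quoted later in the caption of Fig. 4; the k-grid range and the exact correspondence between the extracted gap and the quoted \(\delta_{\rm SOC}=45\) meV are our assumptions and may depend on conventions.

```python
import numpy as np

# Pauli matrices in the orbital basis
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# DFT-fitted parameters (eV and Angstrom units) quoted in the Fig. 4 caption
A, B, delta, vx, vy = -2.64, 5.28, -0.33, 0.09, 2.6

def hamiltonian(kx, ky, s, U):
    """2x2 block of Eq. (2) for spin s = +/-1 in the spin-diagonal basis."""
    k2 = kx**2 + ky**2
    return A*k2*s0 + (delta + B*k2)*sz + vy*ky*sy + (U + s*vx*kx)*sx

def direct_gap(s, U, kmax=0.4, n=201):
    """Minimal direct gap (eV) of one spin sector on a coarse k-grid."""
    ks = np.linspace(-kmax, kmax, n)
    return min(np.diff(np.linalg.eigvalsh(hamiltonian(kx, ky, s, U)))[0]
               for kx in ks for ky in ks)

# The gap of one valley should shrink and reopen as U sweeps through the
# spin-orbit scale, mirroring the 2|U - delta_SOC| behavior discussed below.
for U in (0.0, 0.02, 0.045):
    print(f"U = {1e3*U:4.0f} meV: spin-up gap = {1e3*direct_gap(+1, U):5.1f} meV")
```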
In the time-reversal symmetric WTe\({}_{2}\), with the inversion symmetry broken by the vertical gate potential, the mirror symmetry with respect to the \(x\)-axis, i.e., \(\mathcal{M}_{x}\), reduces the number of non-vanishing tensor elements to four and eight for the second-order and third-order responses, respectively. The second-order components are \(\chi^{(2)}_{xyy},\chi^{(2)}_{xxx},\chi^{(2)}_{yyx},\chi^{(2)}_{yxy}\). For the third-order response, the non-vanishing tensor elements are the following: \[\chi^{(3)}_{yxxx},\chi^{(3)}_{xyyy},\chi^{(3)}_{xxxy},\chi^{(3)}_{xyxx},\chi^{(3)}_{xxyx},\chi^{(3)}_{yxyy},\chi^{(3)}_{yyxy},\chi^{(3)}_{yyyx}. \tag{8}\] To demonstrate the polarization dependence, we decompose the nonlinear spin Hall current in the longitudinal and transverse basis as \(\mathbf{j}^{\text{spin}}=j^{\text{spin}}_{||}\hat{\mathbf{\epsilon}}_{||}+j^{\text{spin}}_{\perp}\hat{\mathbf{\epsilon}}_{\perp}\). Here, the unit vectors are orthogonal, \(\hat{\mathbf{\epsilon}}_{||}\cdot\hat{\mathbf{\epsilon}}_{\perp}=0\). Accordingly, the longitudinal (parallel) and transverse (perpendicular) components of the spin current can be formulated using the relations \(j^{\text{spin}}_{||}=\mathbf{j}^{\text{spin}}\cdot\hat{\mathbf{\epsilon}}_{||}\) and \(j^{\text{spin}}_{\perp}=\mathbf{j}^{\text{spin}}\cdot\hat{\mathbf{\epsilon}}_{\perp}\), respectively. In this study, we focus on the transverse (Hall) components of the nonlinear rectified spin current due to the single-color driving field \(\mathbf{E}(t)=|E_{1}|e^{i\phi_{1}}\hat{\mathbf{\epsilon}}(\theta)e^{i\omega t}+c.c.\) and the two-color light field \(\mathbf{E}(t)=\hat{\mathbf{\epsilon}}(\theta)\left[|E_{1}|e^{i(\omega t+\phi_{1})}+|E_{2}|e^{i(2\omega t+\phi_{2})}+c.c.\right]\), where \(\hat{\mathbf{\epsilon}}_{||}=\hat{\mathbf{\epsilon}}(\theta)=\cos\theta\hat{\mathbf{x}}+\sin\theta\hat{\mathbf{y}}\) is the linear polarization unit vector, with \(\theta\) being the polarization angle. The corresponding light-induced single-color and two-color nonlinear spin currents are denoted by \(j^{\text{PG}}_{\perp}\) and \(j^{\text{2c-PG}}_{\perp}\), respectively. The symmetry-based argument discussed above leads to the following polarization dependence of the transverse components of the nonlinear spin currents: \[j^{\text{PG}}_{\perp}=\frac{|E_{1}|^{2}}{\omega^{2}}\text{Re}\left[\sin^{2}\theta\,\chi^{(2)}_{xyy}+\cos^{2}\theta\,\chi^{(2)}_{xxx}\right]\sin\theta, \tag{9}\] and the two-color spin Hall current is given by \[j^{\text{2c-PG}}_{\perp}=\frac{|E_{1}|^{2}|E_{2}|}{\omega^{3}}\text{Im}\Big[e^{i\Delta\phi}\Big(\chi^{(3)}_{1}\sin^{4}\theta-\chi^{(3)}_{2}\cos^{4}\theta+\chi^{(3)}_{3}\sin^{2}2\theta\Big)\Big], \tag{10}\] where \(\chi^{(3)}_{1}=\chi^{(3)}_{xyyy}\), \(\chi^{(3)}_{2}=\chi^{(3)}_{yxxx}\), and \[\chi^{(3)}_{3}=\frac{\chi^{(3)}_{xxxy}+\chi^{(3)}_{xxyx}+\chi^{(3)}_{xyxx}-\chi^{(3)}_{yxyy}-\chi^{(3)}_{yyxy}-\chi^{(3)}_{yyyx}}{4}. \tag{11}\] The derivation of the nonlinear spin currents is provided in Appendix B. Further, \(\Delta\phi=(2\phi_{1}-\phi_{2})\) is the phase difference between the single- and second-harmonic frequency beams. Here, two sets of terms, proportional to \(\text{Re}[\chi^{(3)}]\sin\Delta\phi\) and \(\text{Im}[\chi^{(3)}]\cos\Delta\phi\), contribute to the third-order rectified spin Hall current, depending on the choice of the phase difference. It has been found that for linearly polarized beams with \(\Delta\phi=\pm\pi/2\), the nonlinear Hall current of Eq. (10) attains its maximum value, as discussed in Ref. [57] for the case of graphene.
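To illustrate the angular structure of Eqs. (9) and (10), the short sketch below (ours, with made-up susceptibility values used purely for illustration) evaluates the two transverse currents as a function of the polarization angle; it also makes explicit that for \(\Delta\phi=0\) only \(\text{Im}[\chi^{(3)}]\) contributes to the two-color current.

```python
import numpy as np

def j_pg(theta, chi_xyy, chi_xxx, E1=1.0, omega=1.0):
    """Second-order rectified spin Hall current of Eq. (9)."""
    return (abs(E1)**2/omega**2)*np.real(
        np.sin(theta)**2*chi_xyy + np.cos(theta)**2*chi_xxx)*np.sin(theta)

def j_2c_pg(theta, chi1, chi2, chi3, dphi=0.0, E1=1.0, E2=1.0, omega=1.0):
    """Two-color third-order rectified spin Hall current of Eq. (10)."""
    return (abs(E1)**2*abs(E2)/omega**3)*np.imag(
        np.exp(1j*dphi)*(chi1*np.sin(theta)**4
                         - chi2*np.cos(theta)**4
                         + chi3*np.sin(2*theta)**2))

theta = np.linspace(0.0, 2*np.pi, 361)
# Made-up susceptibility values, purely for illustrating the angular shapes
print(j_pg(theta, chi_xyy=1.0+0.3j, chi_xxx=0.2+0.1j).max())
print(j_2c_pg(theta, chi1=0.5+1.0j, chi2=0.1+0.4j, chi3=0.05+0.2j).max())
```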
In this study, we consider zero phase difference, \(\Delta\phi=0\). The following section presents numerical results in gated single-layer 1T\({}^{\prime}\)-WTe\({}_{2}\) for the second- and third-order spin susceptibilities versus the laser frequency and the gate potential \(U\). Afterward, we analyze the anisotropic polarization dependence of the nonlinear spin Hall susceptibilities.

## III Numerical results and discussion

We provide numerical results for the second- and third-order spin Hall currents induced by monochromatic and two-color laser excitations, respectively. There is no second-order spin current (\(\propto\chi^{(2)}_{yxx}\)) in the Hall measurement geometry along the \(\hat{y}\)-direction with the field along the \(\hat{x}\)-direction, due to the cancellation of the currents induced by up and down spins, but the second-order Hall charge current remains finite. Unlike the second-order current, the third-order spin Hall current \(j^{(3),\text{spin}}_{a}\) follows different symmetry properties and thus can be obtained in both \(\hat{x}\) and \(\hat{y}\) Hall measurement geometries. Specifically, the third-order spin Hall effect can be observed in both inversion symmetric and time-reversal symmetric materials. Here, we discuss the two Hall components of the susceptibility, \(\chi^{(3),\text{spin}}_{yxxx}\) and \(\chi^{(3),\text{spin}}_{xyyy}\), subject to the mirror symmetry possessed by 1T\({}^{\prime}\)-WTe\({}_{2}\), in response to the linearly polarized two-color optical beams. The two beams at frequencies \(\omega\) and \(2\omega\) interfere, resulting in the third-order rectified spin current. In Fig. 4(a)-(c), we demonstrate the density plots of the second- and third-order spin Hall susceptibility tensor components \(\chi^{(2),\rm spin}_{xyy}\), \(\chi^{(3),\rm spin}_{yxxx}\) and \(\chi^{(3),\rm spin}_{xyyy}\) for 1T\({}^{\prime}\)-WTe\({}_{2}\), having spin texture in the momentum space, as a function of \(U/\delta_{\rm SOC}\) and \(\hbar\omega/\delta_{\rm SOC}\), where \(\delta_{\rm SOC}\) is the spin-orbit coupling gap. Here, we mainly focus on the rectified (dc) spin current with single-color and two-color laser beams, contributed by the real part of the second-order and the imaginary part of the third-order response functions, respectively. We observe that the second-order spin Hall susceptibility component \(\chi^{(2),\rm spin}_{xyy}\), in response to the electric field along the \(\hat{y}\) direction, gives interband resonances at an incident energy \(\hbar\omega\geq 2\Delta_{\pm}\), with \(2\Delta_{\pm}=2|U\pm\delta_{\rm SOC}|\) being the band gap at \(Q\) and \(Q^{\prime}\), as indicated by the dark red and dark blue colors in Fig. 4(a). However, the strength and position of the resonances change with the out-of-plane gate potential, thus shifting to different frequency regimes. The resulting feature is intriguing because of the band closing and reopening around the Dirac points at valleys \(Q\) and \(Q^{\prime}=-Q\) due to the gate potential \(U\). Furthermore, the spin current generation occurs due to the imbalance of carrier excitation at different valleys. Since spin-up and spin-down currents are equal in magnitude and flow in opposite directions, the charge Hall current vanishes and we obtain a pure second-order spin Hall current by irradiating the linearly polarized single-color beam of light. One of the most striking results of this study is the in-gap light-induced nonlinear spin current when \(\hbar\omega<\Delta_{\pm}\).
Unlike the usual shift and injection current mechanisms, which are related to the imaginary part \(\rm Im[1/(\Delta_{\pm}-\hbar\omega-i0^{+})]\), the in-gap current is given by the real part \(\rm Re[1/(\Delta_{\pm}-\hbar\omega-i0^{+})]\). In the clean limit, the imaginary part leads to a delta function \(\delta(\Delta_{\pm}-\hbar\omega)\), implying that a current can be induced only after absorbing a photon and generating a photo-excited carrier. However, the in-gap current is related to the principal value of \(1/(\Delta_{\pm}-\hbar\omega-i0^{+})\) and thus can be finite even without generating a photo-excited carrier density. This mechanism has been discussed for the second-order charge current in time-reversal symmetry breaking systems [64, 65], and its thermodynamic grounds are discussed in detail in [69]. For the case of zero phase difference, the two-color rectified spin current is given by the real part of the third-order conductivity (see Appendix B): \[j_{\perp}^{\rm 2c-PG}=\mathrm{Re}\left[\hat{\mathbf{\epsilon}}_{\perp}\cdot\mathbf{\sigma}^{(3)}(\omega,\omega,-2\omega)\,\hat{\mathbf{\epsilon}}_{\parallel}\hat{\mathbf{\epsilon}}_{\parallel}\hat{\mathbf{\epsilon}}_{\parallel}\right]|E(\omega)|^{2}|E(2\omega)|. \tag{12}\] The real part of the two-color third-order conductivity contains principal value terms that can formally explain the emergence of the in-gap current. Here, we show that one can generate a third-order in-gap spin current in a time-reversal symmetric system. However, there is no in-gap effect in the second-order spin response due to the cancellation of the effect for the spin-up and spin-down components.

Figure 4: Top: (a)-(c) Density plots for the second-order and third-order spin Hall response as a function of the normalized incident energy and normalized out-of-plane gate potential with respect to the spin-orbit coupling gap. The dotted lines in plot (c) refer to the values for the cross-section plots in the second- and third-row panels, and the color bar refers to the magnitude of the spin response. Second row: (d)-(f) represent the rectified spin Hall response \(\chi^{(2)}_{xyy}\), \(\chi^{(3)}_{yxxx}\) and \(\chi^{(3)}_{xyyy}\) at out-of-plane gate potential \(U=\delta_{\rm SOC}\) and \(U=2\delta_{\rm SOC}\). Third row: (g)-(i) represent the response at an incident energy \(\hbar\omega=\delta_{\rm SOC}\) and \(\hbar\omega=2\delta_{\rm SOC}\). Here, we set the chemical potential \(\mu=0\) eV and set other parameters based on the DFT calculations [51] as \(v_{x}=0.09\) eV Å, \(v_{y}=2.6\) eV Å, \(\delta=-0.33\) eV, \(B=5.28\) eV Å\({}^{2}\), \(A=-2.64\) eV Å\({}^{2}\), and \(T=1\) K. Further, we normalize the response by setting \(\chi^{(2)}_{0}=e^{3}/\hbar\) and \(\chi^{(3)}_{0}=e^{4}/\hbar\).

To illustrate these features in more detail, we plot the cross-section curves at different values of the scaled back-gate potential \(U/\delta_{\mathrm{SOC}}\) and the incident photon energy \(\hbar\omega/\delta_{\mathrm{SOC}}\). For \(U=0\), the inversion symmetry of the system remains intact, which results in a vanishing second-order response \(\chi_{xyy}^{(2)}\), consistent with the symmetry argument. For finite but small \(U\ll\delta_{\mathrm{SOC}}\), the inversion symmetry breaks down, leading to a finite \(\chi_{xyy}^{(2)}\).
As seen in Fig. 4(d), the effective band gap \(\Delta_{-}\) vanishes, and the bottom of the conduction band and the top of the valence band touch each other at the \(Q\) point when \(U=\delta_{\mathrm{SOC}}\); the optical response therefore reveals the gapless nature of the system and is finite even at tiny values of the light frequency. By further increasing the frequency, we see a step-like feature in the second-order response at \(\hbar\omega=2\Delta_{+}=4U\). We have generated the same curve for the case of \(U=2\delta_{\mathrm{SOC}}\), where the effective gaps at the different valleys are \(\Delta_{+}/\delta_{\mathrm{SOC}}=3\) and \(\Delta_{-}/\delta_{\mathrm{SOC}}=1\). This leads to resonant jumps at \(\hbar\omega/\delta_{\mathrm{SOC}}=6\) and \(\hbar\omega/\delta_{\mathrm{SOC}}=2\). In Fig. 4(e)-(f), we find the emergence of resonant peaks in the third-order response due to the two-photon and one-photon absorption processes, which correspond to \(\hbar\omega=\Delta_{\pm}\) and \(\hbar\omega=2\Delta_{\pm}\), respectively. Among these, the two peaks associated with the larger effective gap \(\Delta_{+}\) show similar behavior, such as resonances for different back-gate potentials. However, the other two, associated with \(\Delta_{-}\), reveal a different feature, namely the presence and absence of the resonance peak at \(U=\delta_{\mathrm{SOC}}\) and \(U=2\delta_{\mathrm{SOC}}\), respectively. We have constructed vertical cross-section plots to investigate the behavior of the nonlinear spin susceptibilities for fixed values of the photon energy \(\hbar\omega\) and varying values of \(U\). The second-order and third-order effects are depicted in Fig. 4(g) and Fig. 4(h)-(i), respectively. Contrary to the second-order case, the third-order response remains finite at \(U=0\). Furthermore, we observe a strong and non-monotonic dependence of the nonlinear spin current on the back-gate potential \(U\). This behavior implies that small changes in the back-gate potential can induce significant changes in the resulting nonlinear spin response, which could have important implications for the design and optimization of spin-based devices. Further, we present the numerical results for the polarization dependence of the transverse spin currents at a fixed value of \(\hbar\omega=\delta_{\mathrm{SOC}}\) in the polar plots of Fig. 5. The polarization dependence of the nonlinear spin current depends strongly on the crystalline symmetry and orientation of the monolayer WTe\({}_{2}\). The second-order spin Hall current follows the anisotropic behavior \(j_{\perp}^{PG}\propto\text{Re}[\chi_{xyy}^{(2)}]\sin^{3}\theta\). The third-order spin current is proportional to the imaginary part of the \(\chi^{(3)}\) components for zero phase difference between the two-color laser pulses. The anisotropic polarization dependence of the third-order spin current is more complex due to the competing behavior of the different tensor components of the susceptibility multiplied by \(\sin^{4}\theta\), \(\cos^{4}\theta\), and \(\sin^{2}2\theta\). Finally, we provide a numerical estimation of the spin Hall photocurrents at a particular incident energy, using the microscopic parameters obtained from DFT calculations and experiments [66, 67, 50].
At \(\hbar\omega=90\) meV, which is twice the spin-orbit coupling gap \(\delta_{\text{SOC}}=45\) meV, with the out-of-plane gate potential \(U=90\) meV and an electric field \(E\) of the order of \((10^{6}-10^{9})\) V/m, the magnitudes of the second- and third-order rectified spin Hall conductivities come out to be of the order of \((10^{-1}-10^{2})\sigma_{0}\) and \((10^{-4}-10^{2})\sigma_{0}\), respectively, where \(\sigma_{0}=2e/h\) is the unit of spin conductivity. Here, the back-gate potential \(U\) is obtained using the relation \(U\approx eE_{z}d\), setting the vertical field \(E_{z}\approx 0.1\) V/nm, the dielectric constant \(\epsilon\approx 3\), and the separation between the top and bottom contacts \(d\approx 0.31\) nm [70, 71]. Compared to the linear dc spin conductivity, the estimated rectified nonlinear spin Hall conductivity is feasible to measure experimentally.

## IV Conclusion

Our study focuses on the investigation of the rectified nonlinear spin Hall current in the 1T\({}^{\prime}\)-WTe\({}_{2}\) material, utilizing both single-color and two-color laser beams. We have conducted an extensive analysis of the effects of the displacement field, light intensity, and polarization direction on the nonlinear spin Hall current to gain a comprehensive understanding of this phenomenon. Our findings indicate that the nonlinear response exhibits interband resonances arising from one-photon and two-photon absorption processes. It is worth noting that the two-color rectified response was found to be significantly stronger in magnitude than the single-color response when the displacement field was twice the spin-orbit coupling gap. Remarkably, we have also discovered the presence of an in-gap third-order spin current that is finite even when the light frequency is within the electronic bandgap. This discovery has significant implications for future research and technological advancements. Our study provides valuable insights into the intrinsic nonlinear spin Hall effect in the 1T\({}^{\prime}\)-WTe\({}_{2}\) material and offers a path toward designing advanced optoelectronic devices capable of operating in nonlinear regimes for spintronics applications.

## Acknowledgment

This work was supported by Nordita and the Swedish Research Council (VR Starting Grant No. 2018-04252). Nordita is partially supported by NordForsk. PB acknowledges the computational facilities provided by SRM-AP.
2302.09222
A review of codebooks for CSI feedback in 5G new radio and beyond
Codebooks have been indispensable for wireless communication standards since the first release of Long-Term Evolution in 2009. They offer an efficient way to acquire the channel state information (CSI) for multiple antenna systems. Nowadays, a codebook is not limited to a set of pre-defined precoders; it refers to a CSI feedback framework, which is increasingly sophisticated. In this paper, we review the codebooks in 5G New Radio (NR) standards. The codebook timeline and the evolution trend are shown. Each codebook is elaborated with its motivation, the corresponding feedback mechanism, and the format of the precoding matrix indicator. Some insights are given to help grasp the underlying reasons and intuitions of these codebooks. Finally, we point out some unresolved challenges of the codebooks for future evolution of the standards. In general, this paper provides a comprehensive review of the codebooks in 5G NR and aims to help researchers understand the CSI feedback schemes from a standard and industrial perspective.
Ziao Qin, Haifan Yin
2023-02-18T03:46:35Z
http://arxiv.org/abs/2302.09222v2
# A Review of Codebooks for CSI Feedback in 5G New Radio and Beyond

###### Abstract

Codebooks have been indispensable for wireless communication standards since the first release of Long-Term Evolution in 2009. They offer an efficient way to acquire the channel state information (CSI) for multiple antenna systems. Nowadays, a codebook is not limited to a set of pre-defined precoders; it refers to a CSI feedback framework, which is increasingly sophisticated. In this paper, we review the codebooks in 5G New Radio (NR) standards. The codebook timeline and the evolution trend are shown. Each codebook is elaborated with its motivation, the corresponding feedback mechanism, and the format of the precoding matrix indicator. Some insights are given to help grasp the underlying reasons and intuitions of these codebooks. Finally, we point out some unresolved challenges of the codebooks for future evolution of the standards. In general, this paper provides a comprehensive review of the codebooks in 5G NR and aims to help researchers understand the CSI feedback schemes from a standard and industrial perspective.

MIMO; FDD; codebook; CSI; 5G NR.

## I Introduction

Since the first release of the NR technical specifications, R15, in late 2017, the fifth generation (5G) mobile communication has been deployed all over the world. To meet the ever-growing user requirements, the 5G NR specification keeps evolving, and R17 was finalized in 2022. According to the 3rd Generation Partnership Project (3GPP), R18 will be officially referred to as "5G Advanced". In fact, 5G NR technology evolves from Long-Term Evolution (LTE), which jointly provides the overall 5G radio-access solution with NR [1]. Multiple-Input Multiple-Output (MIMO) has been an integral technology for improving system performance since 4G LTE R8, released in 2009. In 5G NR, this technology has evolved to massive MIMO [2] with an increasing scale of the antenna array. Massive MIMO provides higher transmission diversity, higher spatial multiplexing gain, and higher transmission directivity. Hence, higher spectral efficiency and better reliability can be achieved [3]. Particularly, the key to the high transmission directivity brought by massive MIMO is beamforming, which enables multi-user spatial multiplexing. To achieve accurate beamforming, channel state information (CSI) is the indispensable premise. At the base station (BS) side, the downlink (DL) CSI can be acquired from the feedback information of the users (UEs), i.e., the CSI report [4]. Note that the CSI report is more critical in frequency division duplex (FDD) mode than in time division duplex (TDD) mode [5]. The reported CSI enables the BS to calculate the precoding matrix for beamforming and user scheduling. In 3GPP standards, the CSI report is enabled by codebooks. At first, a codebook referred to a set of pre-defined precoders, a.k.a. codewords, and the UEs fed back the indices of the codewords to the base station. With the development of the standard, the meaning of a codebook has extended to the whole CSI report mechanism, which helps the base station compute the precoding matrix with the feedback from the UEs. The CSI report framework includes the procedure of a particular CSI reference signal (CSI-RS) transmitted by the BS and a series of feedback information from the UEs. Even though 5G NR evolves from LTE, the CSI acquisition framework in NR is quite different. Particularly, in LTE the CSI acquisition framework is coupled with the transmission modes (TMs).
For example, a codebook-based feedback mode is defined in TM6, also known as the closed-loop scheme. At the same time, the open-loop scheme is also supported in LTE, which means no CSI report is needed for precoding. In 5G NR, however, the CSI report framework is decoupled from the TM and relies on the CSI report configurations instead. In this way, better flexibility and scalability for the CSI report are achieved. More specifically, the CSI report framework configuration consists of two parts, i.e., the report resources setting and the report type setting [6]. The report resources setting specifies the periodic report manner and the occupied bandwidth part (BWP) according to different usages of the reference signal. For example, the CSI reference signal specializes in CSI calculation [7]. And the report type is configured based on the report resources configuration. It mainly reports the CSI indicators and the corresponding codebook configuration. Particularly, the layer indicator (LI) and the rank indicator (RI) specify the optimum layer with the best quality and the maximum number of transmission layers, respectively. Correspondingly, the precoding matrix indicator (PMI) is utilized by the base station to reconstruct or calculate the DL precoders. The essence of the CSI report framework lies in the codebook design, which determines the precoding matrix obtained from the CSI report. The corresponding PMI indicates the specific channel characteristics under a chosen codebook scheme. In fact, since 4G, the codebooks have been evolving towards characterizing more detailed channel information with less time-frequency overhead. The number of supported types of codebook has increased over time to six in 5G NR R17, to accommodate different system requirements and to maintain backward compatibility. In this paper, we focus on discussing the codebook evolution, the corresponding PMI report, and the precoding matrix mapping in 5G NR. We first elaborate the codebook evolution timeline and the future developing trend. The relationship and comparison between codebooks are presented. We also explain the physical meanings of important parameters used to describe a codebook. Then a thorough analysis of the PMI format, the mapping relationship from the PMI to the precoding matrix, and the PMI report strategies of the codebooks are given. In the end, we discuss some open problems in codebooks for 5G Advanced and the sixth generation (6G) wireless technology, including the support of high mobility, cell-free massive MIMO, and ultramassive MIMO. To the best of our knowledge, this is the first paper that provides an overview of the practical codebooks widely adopted by industry. Since academia and industry diverge a lot nowadays, this paper serves as a bridge over the increasing gap between academia and industry from the perspective of CSI feedback, with the hope of bringing the practical limitations and the ideas of industry to the attention of academic researchers. The rest of the paper is organized into eight sections. First, we review the codebook evolution history and elaborate the physical meanings of the configured parameters in Sec. II. Then, from Sec. III to Sec. VI, each codebook is reviewed with details of the PMI format and how the next generation NodeB (gNB) calculates the precoding matrix from the reported PMI. In Sec. VII, the codebooks for 5G beyond are discussed. In the end, Sec. VIII concludes the paper.
## II Codebook evolution

Since the first release of LTE in 2009, codebooks have been evolving due to the advances of multi-antenna technology and the growing performance requirements. An illustration of the codebook evolution is shown in Figure 1. In the beginning, two different codebooks were supported. Class A Codebook is based on a classical closed-loop feedback. The precoder is based on the feedback from the UEs and comes from a discrete Fourier transform (DFT) matrix [8], and the pilot sequences are not precoded at the BS. On the contrary, Class B Codebook relies on precoded pilots, and the UE selects the precoder after estimating the precoded effective channel, e.g., selecting the precoder index (port number) corresponding to the largest amplitude of the pilot-based channel estimation. The BS may choose the reported precoder for DL data transmission. Note that the precoders here may not be limited to DFT vectors as in Class A Codebook. In the first version of 5G NR released in 2018, new types of codebook were introduced as derivatives of Class A and Class B. Type I Codebook and Type II Codebook evolve from Class A Codebook. Meanwhile, Type II Port Selection Codebook inherits the basic idea of Class B Codebook. Type II Codebook aims to provide more details of the spatial signature than Type I Codebook at the cost of heavier feedback overhead. In 2020, 5G NR R16 introduced Enhanced Type II Codebook and Enhanced Type II Port Selection Codebook. The most significant characteristic of Enhanced Type II Codebook is the support of subband-wise calculation of the PMI, while the feedback overhead is balanced through a joint spatial and frequency domain compression. Such a compression is enabled by a larger number of BS antennas and a broader supported bandwidth. And Enhanced Type II Port Selection Codebook is evolved from Type II Port Selection Codebook likewise. In the recent R17, Further Enhanced Type II Port Selection Codebook is proposed in order to further improve the performance of Enhanced Type II Port Selection Codebook with the help of partial reciprocity between the uplink (UL) and downlink (DL) channels [9, 10]. In 5G Advanced and 6G, we believe some other enhanced features will be enabled by future codebooks, for example, to support high mobility transmission, cell-free MIMO, ultramassive MIMO, etc. In the following of the paper, we provide a comprehensive review of each codebook from the perspective of the PMI report mechanism and the calculation of the corresponding precoding matrix. Before elaborating on the details of each codebook, some terminologies and symbols used to describe the codebooks are explained below:
* layers: The streams in MIMO-enabled spatial multiplexing, i.e., signals transmitted simultaneously in the same time/frequency resources.
* subband: Several consecutive resource blocks (RBs). The bandwidth of a subband may be configured as 4, 8, 16 RBs, etc.
* beam: A certain spatial direction, normally corresponding to a column vector from a one-dimensional or two-dimensional DFT matrix.
* antenna port: This terminology is not related to a physical "port" anymore. The symbols transmitted on the same antenna port can be assumed to share the same effective channel.
* \(v\): The layer limitation configured by the gNB and indicated by the RI.
* \(N_{\rm{AP}}\): The number of antenna ports at the gNB.
* \(N_{1},N_{2},O_{1},O_{2}\): \(N_{1}\) and \(N_{2}\) denote the number of antenna elements in the horizontal and vertical direction, respectively.
\(O_{1}\) and \(O_{2}\) are the oversampling factors in the horizontal and vertical direction, respectively.
* \(N_{g}\): The number of antenna panels.
* \(L\): The number of reported beams in a certain codebook.
* \(N_{3}\): The number of subbands in a BWP.

In the following sections, we will discuss the PMI report and the precoding matrix calculation of these codebooks.

Fig. 1: Codebook evolution from LTE to 5G NR and beyond.

## III Type I Codebook

In R17, two sub-types of Type I Codebook are supported, i.e., Type I Codebook with Single-Panel and Type I Codebook with Multi-Panel. The main difference between the two codebooks is the number of supported transmit antenna panels. First, we discuss Type I Codebook with Single-Panel.

### _Type I Codebook with Single-Panel_

Type I Codebook with Single-Panel is relatively straightforward, as the reported PMI reflects the information of a single beam, including the beam selection and the co-phasing information among the dual-polarized antennas. Under the assumption of a uniform planar array (UPA) at the gNB as in [11], the chosen beam is selected from the set of 2D DFT vectors with spatial oversampling, indicated by \((N_{1},N_{2},O_{1},O_{2})\). These parameters are specified by Table 5.2.2.2.1-2 in [6]. Figure 2 demonstrates the physical meanings of the PMI. Define the PMI vector as \(\mathbf{I}=\left[\begin{array}{cc}\mathbf{I}_{1}&\mathbf{I}_{2}\end{array}\right]\), where \(\mathbf{I}_{1}\) reports the chosen beam information and \(\mathbf{I}_{2}\) indicates the corresponding phase information. \(\mathbf{I}_{1}\) includes two indicators, \(i_{1,1}\) and \(i_{1,2}\). The indicator \(i_{1,1}\) maps the horizontal beam index \(m_{1}\) and the indicator \(i_{1,2}\) maps the vertical beam index \(m_{2}\). The second part of the PMI, \(\mathbf{I}_{2}=i_{2}\), maps the co-phasing information by \(\varphi_{n}=e^{j\pi n/2}\). We should note that \(n\) is binary, except that when \(\upsilon=1\), \(n\in\{0,1,2,3\}\). When the layer limitation \(\upsilon\leq 2\), the beam choice is indicated jointly by \(i_{1,1},i_{1,2},i_{2}\), otherwise by \(i_{1,1},i_{1,2}\). The 2D antenna array structure and the PMI format of Type I Codebook are illustrated in Figure 2. The 2D beam \(\mathbf{w}_{m_{1},m_{2}}\) is the Kronecker product of the vertical beam \(\mathbf{u}_{m_{2}}\) and the horizontal beam \(\mathbf{g}_{m_{1}}\). And the neighboring 2D beam \(\widetilde{\mathbf{w}}_{m_{1},m_{2}}\) consists of a different horizontal beam \(\widetilde{\mathbf{g}}_{m_{1}}\) and the same vertical beam as \(\mathbf{w}_{m_{1},m_{2}}\). The indices \(m_{1}\) and \(m_{2}\) are indicated by \(i_{1,1}\) and \(i_{1,2}\), respectively. Figure 2 also demonstrates how to calculate the precoding matrix from the reported PMI, and the equation is given by \[\mathbf{W}^{(\upsilon)}=\frac{1}{\sqrt{N_{\mathrm{AP}}}}\left[\begin{array}{ccc}\mathbf{w}_{m_{1},m_{2}}&&\\ &\ddots&\\ &&\widetilde{\mathbf{w}}_{m_{1},m_{2}}\end{array}\right]\mathbf{W}_{2}^{(\upsilon)}.\] This procedure is relatively straightforward and only includes one beam and the co-phasing information. More specifically, the 2D beam \(\mathbf{w}_{m_{1},m_{2}}\) is frequency-irrelevant and reported in wideband mode. In contrast, the co-phasing matrix \(\mathbf{W}_{2}^{(\upsilon)}\) is frequency-dependent and needs to be reported per subband. The column vectors of \(\mathbf{W}_{2}^{(\upsilon)}\) characterize the co-phasing information of each layer, which is specified in Tables 5.2.2.2.1-5 to 5.2.2.2.1-12 in [6].
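For concreteness, the following Python sketch (ours; the indexing is a simplified illustration and not the exact 3GPP mapping) constructs the oversampled 2D DFT beam as the Kronecker product described above and assembles a rank-1 single-panel precoder with the co-phasing factor \(\varphi_{n}\).

```python
import numpy as np

def dft_beam(m1, m2, N1, N2, O1, O2):
    """Oversampled 2D DFT beam: Kronecker product of the vertical beam u_{m2}
    and the horizontal beam g_{m1}."""
    g = np.exp(2j*np.pi*m1*np.arange(N1)/(O1*N1))  # horizontal beam g_{m1}
    u = np.exp(2j*np.pi*m2*np.arange(N2)/(O2*N2))  # vertical beam u_{m2}
    return np.kron(u, g)

def type1_rank1_precoder(m1, m2, n, N1, N2, O1, O2):
    """Rank-1 single-panel precoder: the same beam on both polarizations,
    co-phased by phi_n = exp(j*pi*n/2) and normalized by 1/sqrt(N_AP)."""
    w = dft_beam(m1, m2, N1, N2, O1, O2)
    phi_n = np.exp(1j*np.pi*n/2)
    n_ap = 2*N1*N2  # dual-polarized antenna ports
    return np.concatenate([w, phi_n*w])/np.sqrt(n_ap)

# Example: (N1, N2) = (4, 2) array with oversampling (O1, O2) = (4, 4)
W = type1_rank1_precoder(m1=3, m2=1, n=2, N1=4, N2=2, O1=4, O2=4)
print(W.shape, np.linalg.norm(W))  # (16,), unit norm
```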
Only if \(N_{\mathrm{AP}}>16\) and \(\upsilon\in\{3,4\}\) does the matrix \(\mathbf{W}_{2}^{(\upsilon)}\) also map a beam choice between the beam \(\mathbf{w}_{m_{1},m_{2}}\) and another beam \(\widetilde{\mathbf{w}}_{m_{1},m_{2}}\), which is defined in Table 5.2.2.2.1 of [6]. Evolved from LTE, this codebook is rather simple and works well in strong line-of-sight (LoS) scenarios. The number of reported coefficients is smaller compared to other codebooks, and the computational complexity of calculating the precoding matrix from the PMI is the lowest. Due to the concise structure of Type I Codebook with Single-Panel, the layer limitation can be up to eight. However, the performance of this codebook is limited, particularly so in multipath scenarios, since only one beam is used for signal transmission.

### _Type I Codebook with Multi-Panel_

This codebook may be applied when the antenna array at the gNB consists of multiple antenna panels instead of one. In order to facilitate the implementation, the number of antenna ports \(N_{\mathrm{AP}}\) is limited to the set \(\{8,16,32\}\) and the layer limitation is reduced to \(\upsilon\leq 4\). In this codebook, additional co-phasing information needs to be reported. Compared to Type I Codebook with Single-Panel, a new indicator vector \(\mathbf{i}_{1,4}\) is introduced in Type I Codebook with Multi-Panel. The dimension of the vector \(\mathbf{i}_{1,4}\) is associated with the number of antenna panels \(N_{g}\) and the codebook mode \(C_{m}\), varying from one to three. It indicates the inter-panel co-phasing information and the dual-polarization co-phasing information. The other indices in \(\mathbf{I}_{1}\) are consistent with Type I Codebook with Single-Panel. However, the indicator \(\mathbf{I}_{2}\) is reported in a different manner. When \(C_{m}=2\), it consists of three indices, \(i_{2,0}\), \(i_{2,1}\) and \(i_{2,2}\). Otherwise, it only includes one index \(i_{2}\), as in Type I Codebook with Single-Panel. The precoding matrix \(\mathbf{W}^{(\upsilon)}\) of Type I Codebook with Multi-Panel is similar to that of Type I Codebook with Single-Panel. The main difference lies in the co-phasing matrix \(\mathbf{W}_{2}^{(\upsilon)}\), which is jointly indicated by \(\mathbf{I}_{2}\) and \(\mathbf{i}_{1,4}\). It relies on the parameter combination \((N_{g},C_{m},\upsilon)\), which is specified in Table 5.2.2.2.2-1 of [6]. Particularly, the additional co-phasing information is quantified by \(a_{p}=e^{j\pi/4}e^{j\pi p/2}\) and \(b_{n}=e^{-j\pi/4}e^{j\pi n/2}\). The indices \(n\in\{n_{0},n_{1},n_{2}\}\) are indicated by \(\mathbf{I}_{2}\) and \(p\in\{p_{1},p_{2}\}\) are indicated by \(\mathbf{i}_{1,4}\). In general, the two sub-types of Type I Codebook mentioned above are both able to provide the beam information and the co-phasing information. They are particularly applicable in single-user MIMO (SU-MIMO) scenarios. Besides, Type I Codebook lays the foundation of the other codebooks in subsequent releases of 5G NR. However, the drawbacks of Type I Codebook are also explicit. Due to the large bandwidth in 5G NR, the channels of different subbands differ a lot. Type I Codebook only allows for one spatial beam, which is used in the whole BWP. Hence, it has limited capability to characterize a multipath channel. As a result, the spectral efficiency performance of Type I Codebook is unsatisfactory, especially in massive MIMO.

Fig. 2: The 2D antenna port structure, PMI format and precoding matrix calculation of Type I Codebook.
Therefore, other types of codebooks are naturally proposed as enhancements, such as Type II Codebook.

## IV Type II Codebook

Type II Codebook was first proposed in 5G NR R15 as an upgrade of Type I Codebook, in order to better characterize the multipath channel. One of the most significant improvements of Type II Codebook is the support of multiple beams. Each beam and its corresponding coefficient reflect a path with a certain angle, and up to four beams can be reported in Type II Codebook. As a result, this codebook outperforms Type I Codebook in most scenarios, nevertheless at the cost of increased feedback overhead.

### _PMI format_

The PMI report for Type II Codebook turns out to be more complicated than for Type I Codebook. As a tradeoff between the performance and the feedback overhead / complexity, the layer limitation \(v\) is two. Figure 3 demonstrates the PMI format, which covers four kinds of beam information, i.e., the beam choice, the beam with the maximum amplitude, the beam amplitudes, and the beam phases. The chosen \(L\in\{2,3,4\}\) beams are indicated by \(\textbf{i}_{1,1}\) and \(i_{1,2}\). We should note that all layers share the same beam choice. The indicator \(\textbf{i}_{1,1}\) contains two indices \(q_{1},q_{2}\), where \(q_{1}\) and \(q_{2}\) map the oversampling parameters in the horizontal and vertical direction, respectively. The indicator \(i_{1,2}\) indicates how to choose \(L\) beams from the DFT vector set of size \(N_{1}N_{2}\). The value of \(i_{1,2}\) varies from \(0\) to \(C_{N_{1}N_{2}}^{L}-1\), where \(C_{N_{1}N_{2}}^{L}\) represents the number of possibilities of selecting \(L\) different beams from all \(N_{1}N_{2}\) beams. In general, the gNB is equipped with dual-polarized antennas. In Type II Codebook, the same set of \(L\) beams is shared by both polarizations. As a result, \(2L\) beam coefficients corresponding to the \(L\) chosen beams are reported. These coefficients include the amplitudes and the phases. In order to reduce the complexity, only the phase information is reported in a subband manner, while the amplitudes can be reported in a subband manner or a wideband manner (the reported amplitude for a certain beam is identical for all the subbands in the whole BWP), depending on the configuration. The wideband amplitude indicator \(\textbf{i}_{1,4,l}\) is a vector with \(2L\) entries, which are denoted by \(k_{l,i}^{(1)}\), where \(i\in\{0,\cdots,2L-1\}\) indicates the beam index and \(k_{l,i}^{(1)}\in\{0,1,\ldots,7\}\). The wideband amplitude of beam \(i\) at layer \(l\) is computed by \(p_{l,i}^{(1)}=\left(1/\sqrt{2}\right)^{7-k_{l,i}^{(1)}}\). The phase information reported in every subband is indicated by \(\textbf{i}_{2,1,l}\). Its element \(c_{l,i}\) quantizes the phase in an N-phase shift keying (N-PSK) manner as \(e^{j2\pi c_{l,i}/N_{\text{psk}}}\), where \(N_{\text{psk}}\in\{4,8\}\). The indicator \(i_{1,3,l}\) maps the index of the beam with the maximum amplitude at layer \(l\). In fact, subband amplitude report can be supported in Type II Codebook, and it is indicated by a binary parameter \(I_{s}\). Specifically, \(I_{s}=1\) means that the subband amplitude report is enabled, while \(I_{s}=0\) means it is not. If \(I_{s}=1\), an additional indicator vector \(\textbf{i}_{2,2,l}\) is reported to quantize the subband amplitude information, with its entries being \(k_{l,i}^{(2)}\in\{0,1\}\). The reported subband amplitude is thus \(p_{l,i}^{(2)}=\left(1/\sqrt{2}\right)^{1-k_{l,i}^{(2)}}\).
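As a small illustration of the amplitude and phase quantization just described, the sketch below (ours, not from the specification text) de-quantizes a single Type II beam coefficient from its reported indices, using the formulas for \(p_{l,i}^{(1)}\), \(p_{l,i}^{(2)}\), and the N-PSK phase given above.

```python
import numpy as np

def type2_coefficient(k1, c, n_psk=8, k2=None):
    """De-quantize one Type II beam coefficient from its reported indices:
    wideband amplitude p1 = (1/sqrt(2))**(7 - k1), optional subband amplitude
    p2 = (1/sqrt(2))**(1 - k2), and N-PSK phase exp(j*2*pi*c/n_psk)."""
    p1 = (1/np.sqrt(2))**(7 - k1)
    p2 = 1.0 if k2 is None else (1/np.sqrt(2))**(1 - k2)
    return p1*p2*np.exp(2j*np.pi*c/n_psk)

# Example: k1 = 6 (a strong beam), subband amplitude bit k2 = 1,
# and 8-PSK phase index c = 3
print(type2_coefficient(k1=6, c=3, n_psk=8, k2=1))
```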
### _PMI report compression_

Type II Codebook supports multiple beams and subband-wise report. As a result, the feedback overhead becomes heavier compared to Type I Codebook. To balance the report accuracy and the overhead, Type II Codebook introduces a PMI report compression mechanism. The amplitude coefficient of the beam with the strongest amplitude, \(k_{l,i_{l}^{*}}^{(1)}\), and the corresponding phase \(c_{l,i_{l}^{*}}\) are not reported, where the beam index \(i_{l}^{*}\) is indicated by \(i_{1,3,l}\). When Type II Codebook is configured to wideband mode, only the non-zero wideband amplitudes \(k_{l,i}^{(1)}\) and the corresponding phases \(c_{l,i}\) are reported to the gNB. The number of non-zero coefficients of each layer is \(M_{\text{max}}^{l}<2L\). If subband mode is configured, the subband coefficient report is slightly different. The \(M_{\text{vr}}^{l}\) stronger subband coefficients are phase-quantized in an N-PSK manner, where \(N_{\text{psk}}\in\{4,8\}\), and the remaining \(M_{\text{max}}^{l}-M_{\text{vr}}^{l}\) non-zero subband coefficients are phase-quantized with \(N_{\text{psk}}=4\). The rest of the \(2L-M_{\text{vr}}^{l}\) subband coefficients are not reported, since they are very close to zero. In general, the core idea of the PMI report compression lies in feeding back the information of the predominant beams, and the feedback overhead is reduced by ignoring the weak beams.

### _Precoding matrix calculation_

The precoding matrix calculation in Type II Codebook is different from Type I Codebook. The main difference is that Type II Codebook supports multiple beams. Figure 3 shows how to map the precoding matrix from the PMI in Type II Codebook. In general, the precoding matrix \(\mathbf{W}^{(l)}\) is a weighted summation of multiple beams and is calculated by \[\mathbf{W}^{(l)}=\left[\begin{array}{cc}\mathbf{B}&\\ &\mathbf{B}\end{array}\right]\left(\left(\mathbf{A}_{w}^{(l)}\mathbf{A}_{s}^{(l)}\mathbf{P}_{s}^{(l)}\right)\otimes\mathbf{I}_{N_{1}N_{2}}\right).\]

Fig. 3: The PMI format and the precoding matrix of Type II Codebook at layer \(l\). For each beam, four types of beam information are reported, i.e., the beam choice \(\mathbf{w}_{m_{1}^{(i)},m_{2}^{(i)}}\), the wideband amplitude \(p_{l,i}^{(1)}\), the subband phase \(c_{l,i}\) and the subband amplitude \(p_{l,i}^{(2)}\).
In Figure 3, the chosen \(L\) beams are denoted by \(\mathbf{B}\). The block matrix \(\mathrm{diag}\left\{\mathbf{B},\mathbf{B}\right\}\) is introduced to represent the beams for both polarizations. The matrix \(\mathbf{B}\) is composed of \(L\) beams, and each beam \(\mathbf{w}_{m_{1}^{(i)},m_{2}^{(i)}}\) is similar to the vector \(\mathbf{w}_{m_{1},m_{2}}\) in Type I Codebook. However, the indices \(m_{1}^{(i)},m_{2}^{(i)}\) are mapped from \(\mathbf{i}_{1,1}\) and \(i_{1,2}\), as illustrated in Figure 3, and the beam choice indices \(n_{1}^{(i)}\), \(n_{2}^{(i)}\) are calculated from \(i_{1,2}\) through the algorithm in Sec. 5.2.2.2.3 of [6]. The wideband amplitude of each layer is defined by a diagonal matrix \(\mathbf{A}_{w}^{(l)}\), whose element \(p_{l,i}^{(1)}\) is indicated by \(\mathbf{i}_{1,4,l}\). The wideband amplitude matrix \(\mathbf{A}_{w}^{(l)}\) is always reported. The diagonal matrix \(\mathbf{A}_{s}^{(l)}\) is the subband amplitude matrix, which is valid only if subband mode is supported. The diagonal elements of \(\mathbf{A}_{s}^{(l)}\) are indicated by \(\mathbf{i}_{2,2,l}\). Correspondingly, the subband phase information is characterized by \(\mathbf{P}_{s}^{(l)}\). The elements of \(\mathbf{P}_{s}^{(l)}\) are mapped from \(\mathbf{i}_{2,1,l}\). The matrix \(\mathbf{I}_{N_{1}N_{2}}\) is an \(N_{1}N_{2}\times N_{1}N_{2}\) identity matrix. Overall, Type II Codebook shows many significant improvements, especially the support of subband amplitudes and multiple beams. As a result, the CSI feedback is more accurate, which facilitates the gNB to cancel inter-user interference and allocate resources. This is also why Type II Codebook is more suitable for multi-user MIMO (MU-MIMO) than Type I Codebook. Even though Type II Codebook introduces a PMI report compression scheme to reduce the overhead, the feedback still scales with the bandwidth and the number of UEs. This problem is particularly acute in FDD mode with a large number of gNB antennas. Nowadays, the increasing number of antennas and the wider bandwidth call for new codebooks with high accuracy and low feedback overhead. Fortunately, a better-performing codebook called "Enhanced Type II Codebook" is proposed in 5G NR R16.

## V Enhanced Type II Codebook

The codebooks discussed before were proposed in 5G NR R15. With the evolution of 5G NR, the frequency-sensitive and multipath channel environment requires a codebook with better performance, capturing both the spatial domain and the frequency domain structures of the channel. Hence, Enhanced Type II Codebook is proposed in 5G NR R16 as an upgrade of Type II Codebook. It is particularly suitable in a multipath scattering environment with diverse angle spread and delay spread, when the UE is capable of complex signal processing. The most significant merit of Enhanced Type II Codebook lies in the feedback reduction in the spatial and frequency domains. This is enabled by the channel sparsity in both spatial and frequency domains in wideband massive MIMO. Figure 4 gives a demonstration of the feedback overhead compression. In the spatial domain, \(L\) beams are chosen to characterize the angular structure of the channel, as in Type II Codebook. However, the subband amplitude is always reported in Enhanced Type II Codebook. In the frequency domain, a delay matrix \(\mathbf{F}^{(l)}\) is introduced to map the phase information of all \(N_{3}\) subbands with \(M_{v}\leq N_{3}\) basis vectors. Hence, the subband amplitudes and phases of all beams over all \(N_{3}\) subbands are reported in \(\mathbf{W}_{\mathrm{sb}}^{(l)}\) with the help of \(M_{v}\) IDFT vectors. Due to the DFT-based compression in the spatial domain and the IDFT-based compression in the frequency domain, Enhanced Type II Codebook has a reduced feedback overhead compared with its predecessor. According to Table 5.2.2.2.5-1 in [6], eight compression configurations, denoted by the parameter combination \((L,p_{v},\beta)\), are supported for Enhanced Type II Codebook. The number of basis vectors in the frequency domain is calculated by \(M_{v}=\left\lceil p_{v}\frac{N_{3}}{R}\right\rceil\), where \(p_{v}\in\{1/4,1/8\}\) is the average number of basis vectors used per subband in the frequency domain, and \(\beta\in\{1/4,1/2,3/4\}\) is the feedback overhead compression ratio from the full dimension to the reduced dimension. The parameter \(R\) is either one or two, depending on the higher-layer configurations. Therefore, in the spatial and frequency domains, a total of \(LM_{v}\) basis vectors are utilized to characterize the precoding matrix.

### _PMI format_

The PMI format in Enhanced Type II Codebook is more complicated than in Type II Codebook.
As illustrated in Figure 4, the PMI format includes the beam indicators \(\mathbf{i}_{1,1},i_{1,2}\), the delay indicators \(i_{1,5},i_{1,6,l}\), the bitmap indicator \(i_{1,7,l}\), the strongest beam indicator \(i_{1,8,l}\), the wideband amplitude indicator \(\mathbf{i}_{2,3,l}\), the feedback amplitude indicator \(\mathbf{i}_{2,4,l}\) and the feedback phase indicator \(\mathbf{i}_{2,5,l}\). On one hand, the beam indicators are similar to the ones in Type II Codebook. The beam selection is mapped by \(\mathbf{i}_{1,1},i_{1,2}\), as in Type II Codebook. The wideband amplitude indicator \(\mathbf{i}_{2,3,l}\) consists of two coefficients, \(k_{l,0}^{(1)}\) and \(k_{l,1}^{(1)}\). They quantize the wideband amplitude in each polarization direction with 4 bits according to the mapping relationship in Table 5.2.2.2.5-2 of [6]. The quantized wideband amplitude in each polarization direction is denoted by \(p_{l,0}^{(1)}\) and \(p_{l,1}^{(1)}\). Compared with the wideband amplitude indicator \(\mathbf{i}_{1,4,l}\) in Type II Codebook, the amplitude quantization in Enhanced Type II Codebook increases from 3 bits to 4 bits.

Fig. 4: The compression in spatial and frequency domain and the PMI format of Enhanced Type II Codebook.

Moreover, the subband beam information is always available in Enhanced Type II Codebook. It is reported in the angle-delay domain. The coefficients \(k_{l,i,f}^{(2)}\) of \(\mathbf{i}_{2,4,l}\) quantize the feedback amplitude \(p_{l,i,f}^{(2)}\) with 3 bits, outperforming the 1-bit quantization of the subband amplitude in Type II Codebook. Corresponding to the feedback amplitude \(p_{l,i,f}^{(2)}\), the coefficients \(\phi_{l,i,f}\) of the indicator \(\mathbf{i}_{2,5,l}\) quantize the feedback phase \(c_{l,i,f}\) in a 4PSK manner. The indicator \(i_{1,8,l}\) records the index of the strongest subband coefficient at layer \(l\), similar to the indicator \(i_{1,3,l}\) in Type II Codebook. On the other hand, due to the compression in the frequency domain and the report of delay information, several new indicators \(i_{1,5},i_{1,6,l},\mathbf{i}_{1,7,l}\) are introduced. The subband amplitude and phase information is reported in the \(M_{v}\) dimension instead of \(N_{3}\), due to the frequency domain compression. The frequency basis vectors are determined by a vector \(\mathbf{n}_{3,l}\in\mathbb{C}^{1\times M_{v}}\). Each element of this vector, denoted by \(n_{3,l}^{(f)}\in\left\{0,1,\cdots,N_{3}-1\right\},f\in\left\{0,\cdots,M_{v}-1\right\}\), indicates the delay information of the corresponding frequency basis vector through the relationship \(\tau_{n_{f},l}^{(f)}=e^{j2\pi n_{f}n_{3,l}^{(f)}/N_{3}}\), where \(n_{f}\) is the subband index. The vector \(\mathbf{n}_{3,l}\) is computed based on the indicators \(i_{1,5},i_{1,6,l}\) that are fed back by the UE, according to the algorithm in Sec. 5.2.2.2.5 of [6]. Denote the index of the strongest frequency basis vector at layer \(l\) by \(f_{l}^{*}\). The frequency basis vector \(\mathbf{n}_{3,l}\) is reorganized with respect to \(f_{l}^{*}\) such that \(n_{3,l}^{(f)}=\left(n_{3,l}^{(f)}-n_{3,l}^{(f_{l}^{*})}\right)\bmod N_{3}\). Thus, \(n_{3,l}^{(f_{l}^{*})}=0\) after remapping. Likewise, the frequency basis vector index \(f\) is reorganized with respect to \(f_{l}^{*}\) such that \(f=(f-f_{l}^{*})\bmod M_{v}\), and therefore \(f_{l}^{*}=0\).

### _PMI report compression_

Although the problem of feedback overhead is alleviated by the IDFT-based frequency domain compression in Enhanced Type II Codebook, the PMI report still consumes valuable time-frequency resources.
In order to further reduce the overhead, some PMI compression mechanisms are introduced. First, the index of the strongest beam at layer \(l\) is denoted by \(i_{l}^{*}\). The coefficients of \(\mathbf{i}_{2,4,l},\mathbf{i}_{2,5,l}\) corresponding to \(i_{l}^{*},f_{l}^{*}\), as well as the wideband amplitude \(\mathbf{i}_{2,3,l}\) with index equal to \(\lfloor i_{l}^{*}/L\rfloor\), are not reported. Then, similar to Type II Codebook, only the non-zero coefficients of \(\mathbf{i}_{2,4,l}\) and \(\mathbf{i}_{2,5,l}\) are reported. The indicator \(\mathbf{i}_{1,7,l}\) serves as a bitmap of size \(1\times 2LM_{v}\), showing whether the UE reports the corresponding coefficients in \(\mathbf{W}_{\text{sb}}^{(l)}\) or not. Since some values in \(\mathbf{W}_{\text{sb}}^{(l)}\) are negligible, this bitmap helps reduce the feedback overhead. The number of reported coefficients over all layers is denoted by \(M_{\text{nz}}=\sum\limits_{l=1}^{v}M_{\text{nz}}^{l}\). The number of non-zero coefficients \(M_{\text{nz}}^{l}\) is equal to the sum of the coefficients of the bitmap indicator \(\mathbf{i}_{1,7,l}\) at layer \(l\). As a result, \(2LvM_{v}-M_{\text{nz}}\) coefficients of \(\mathbf{i}_{2,4,l},\mathbf{i}_{2,5,l}\) are not reported, where \(v\) is the number of layers. Since in each layer only relative values with respect to the coefficient with the maximum amplitude are needed for feedback, the number of all reported coefficients \(k_{l,i,f}^{(2)},c_{l,i,f}\) is thus \(M_{\text{nz}}-v\).

### _Precoding matrix calculation_

In general, the precoding matrix calculation in Enhanced Type II Codebook has a lot in common with Type II Codebook. The precoding matrix \(\mathbf{W}^{(n_{f},l)}\) is similar to \(\mathbf{W}^{(l)}\) in Figure 3. The main difference lies in the frequency domain compression and the mapping of the delay information. Figure 4 demonstrates the relationship between the PMI and the precoding matrix \(\mathbf{W}^{(n_{f},l)}\). The beam selecting matrix \(\mathbf{B}\) is consistent with Type II Codebook. However, the wideband amplitude matrix \(\mathbf{A}_{w}^{(l)}\) is different, as it is composed of a block diagonal matrix with the two blocks reflecting the wideband amplitudes for both polarizations, instead of reusing the same set of wideband amplitudes among the two polarizations as in Type II Codebook. The reconstruction of the subband phase and amplitude is also quite different from Type II Codebook, because of the frequency domain compression with IDFT basis vectors. The amplitude and phase information of all subbands is transformed to the angle-delay domain, quantized, and fed back to the gNB. Then the gNB reconstructs the information by the reverse transformation with the quantized coefficients. Generally speaking, Enhanced Type II Codebook is more sophisticated than Type II Codebook. Despite the complexity, Enhanced Type II Codebook shows great potential in improving the system spectral efficiency. The detailed PMI report in Enhanced Type II Codebook characterizes much more channel structure information, especially in the delay domain. The key lies in the exploitation of the multipath angle-delay structure of wideband massive MIMO by means of DFT and IDFT transformations. Thanks to the feedback overhead reduction in the frequency domain, the maximum number of layers in Enhanced Type II Codebook increases to four, and the maximum number of beams \(L\) increases from four to six compared to Type II Codebook.
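The following toy sketch (ours; the actual codebook quantization and basis-window restrictions are omitted) illustrates the frequency-domain compression idea behind this codebook: the per-subband coefficients of one beam are projected onto the \(M_{v}\) dominant columns of an \(N_{3}\)-point DFT basis (the delay taps), and the gNB-side reconstruction is performed from the reported delay indices and coefficients.

```python
import numpy as np

def fd_compress(c_subband, Mv):
    """Toy frequency-domain compression: transform one beam's per-subband
    coefficients to the delay domain and keep the Mv dominant taps."""
    N3 = len(c_subband)
    F = np.exp(2j*np.pi*np.outer(np.arange(N3), np.arange(N3))/N3)/np.sqrt(N3)
    taps = F.conj().T @ c_subband          # delay-domain coefficients
    keep = np.argsort(-np.abs(taps))[:Mv]  # indices of the Mv strongest taps
    return keep, taps[keep]

def fd_reconstruct(keep, vals, N3):
    """gNB-side reconstruction from the reported delay indices/coefficients."""
    F = np.exp(2j*np.pi*np.outer(np.arange(N3), np.arange(N3))/N3)/np.sqrt(N3)
    return F[:, keep] @ vals

rng = np.random.default_rng(0)
N3, Mv = 16, 4
# One dominant delay tap plus weak noise across the N3 subbands
c = np.exp(-2j*np.pi*3*np.arange(N3)/N3) + 0.1*rng.standard_normal(N3)
keep, vals = fd_compress(c, Mv)
err = np.linalg.norm(c - fd_reconstruct(keep, vals, N3))/np.linalg.norm(c)
print(f"kept {Mv}/{N3} taps, relative reconstruction error {err:.3f}")
```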
In fact, higher frequency bands and larger antenna arrays are expected to play a major role in 5G NR and beyond. In such cases, the angle and delay structure of the channel is more pronounced and should be captured by the codebooks in order to facilitate the CSI feedback. Enhanced Type II Codebook is undoubtedly a good choice in this circumstance. However, the feedback overhead of Enhanced Type II Codebook is still a serious problem, especially when the number of antennas and the bandwidth are large. Achieving more accurate CSI feedback with less overhead is a continuing pursuit of the industry.

## VI Port selection codebooks

A category of codebooks called port selection codebooks is also supported in the 3GPP standards, starting with Type II Port Selection Codebook introduced in R15. For ease of exposition, the codebooks discussed before are referred to as non-port selection codebooks in our paper. The main difference between the port selection codebooks and the previously described codebooks lies in the beam selection mechanisms. More specifically, in the non-port selection codebooks, the UE finds the spatial beams by computing the inner product between the DL CSI or precoders and the 2D DFT vectors with oversampling. One or several strong beams are then reported by the UE. In port selection codebooks, however, the gNB transmits precoded reference signals (pilots) with different precoders, where each precoder represents a certain beam and is associated with an antenna port. The UE selects several antenna ports by pilot-based measurements and reports the corresponding coefficients. As a result, the beams are determined by antenna port selection. In port selection codebooks, all \(N_{\mathrm{AP}}\) antenna ports are grouped by a port sampling parameter \(d\), and the beam selection is indicated by a binary port choice. Figure 5 compares the differences between port selection codebooks and non-port selection codebooks. In general, the core idea of port selection codebooks lies in the fact that the UE reports a port selection decision rather than a beam, and the UE is not aware of the specific beam related to a certain antenna port. After the gNB receives the reported port choice, it finds the beams corresponding to the selected antenna ports, and then reconstructs the DL precoder or CSI with the port-related quantized coefficients reported by the UE. Note that the beams are not limited to the 2D DFT vectors. They may also take the form of the eigenvectors of the channel covariance matrix, which generally outperform the DFT vectors. Such beams are enabled by the low-rankness property of the channel covariance matrix [12, 13], which facilitates the compression of the CSI using channel statistics. The advantages of port selection codebooks are twofold. First, the form of the beams is decoupled from the UE feedback. Hence, the topology of the antenna array at the gNB is no longer limited to UPA, and the beams are more flexible to accommodate different antenna topologies and algorithms. On the contrary, in non-port selection codebooks, the gNB and the UE assume the beams to be 2D DFT vectors only, which may not work well under antenna topologies other than UPA. Second, the computational complexity is reduced at the UE side in exchange for extra beam calculation complexity at the gNB side. This is due to the binary port selection decision rather than the complex 2D beam searching procedure in non-port selection codebooks.
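As a minimal illustration of the binary port choice, the sketch below builds one-hot port selection vectors from the index mapping \(q^{(i)}=i_{1,1}d+i\) detailed below. It is illustrative only; the wrap-around via modulo is our simplification.

```python
import numpy as np

def port_selection_matrix(i_1_1, d, L, n_ap):
    """One-hot port selection per polarization. The reported indicator i_{1,1}
    picks a port sample group (0 <= i_{1,1} <= ceil(N_AP/(2d)) - 1), and port
    q(i) = i_{1,1}*d + i is selected for beam i."""
    half = n_ap // 2                       # antenna ports per polarization
    W = np.zeros((half, L))
    for i in range(L):
        q = (i_1_1 * d + i) % half         # selected port index (wrap is illustrative)
        W[q, i] = 1.0                      # one-hot selection vector w_{q(i)}
    return W

# e.g. 32 ports, sampling d = 2, L = 4 beams, reported i_{1,1} = 3
W_ps = port_selection_matrix(i_1_1=3, d=2, L=4, n_ap=32)
```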
In 5G NR R17, three port selection codebooks are supported, i.e., Type II Port Selection Codebook, Enhanced Type II Port Selection Codebook and Further Enhanced Type II Port Selection Codebook, which will be discussed below.

Fig. 5: CSI report of port selection codebooks and non-port selection codebooks at the BS side and the UE side.

### _Type II and Enhanced Type II Port Selection Codebook_

These two codebooks are proposed together with the corresponding non-port selection codebooks in R15 and R16, respectively. We focus on analyzing the port selection indicators of both codebooks. First, the port sampling parameter \(d\) is configured by the gNB. The indicator \(i_{1,1}\) denotes the selected port sample group at each polarization. This indicator is different from the one in non-port selection codebooks. The value of \(i_{1,1}\) varies from zero to \(\lceil N_{\mathrm{AP}}/(2d)\rceil-1\). The port selection index \(q^{(i)}\) is mapped by \(i_{1,1}\) as \(q^{(i)}=i_{1,1}d+i\). Finally, the \(q^{(i)}\)-th entry of the reported port selection vector \(\mathbf{w}_{q^{(i)}}\in\mathbb{C}^{\frac{N_{\mathrm{AP}}}{2}\times 1}\) is one and the rest are zero. The remaining indicators and the precoding matrix calculation of these two codebooks are consistent with the corresponding non-port selection codebooks. Therefore, the PMI of these two codebooks can be obtained in a way similar to the corresponding non-port selection codebooks, and the details are omitted.

### _Further Enhanced Type II Port Selection Codebook_

This port selection codebook is first supported in the recent 5G NR R17. Its most intriguing characteristic lies in the exploitation of the partial reciprocity of the channel in FDD massive MIMO. Even though complete channel reciprocity does not hold in FDD, the frequency-irrelevant parameters, e.g., the multipath angle and delay distributions of the downlink and uplink channels, are very close. Such a property is exploited in Further Enhanced Type II Port Selection Codebook and the feedback overhead is reduced. The core idea of Further Enhanced Type II Port Selection Codebook is elaborated in [9]. This codebook is enabled by a joint spatial-frequency domain precoding scheme for the transmission of the downlink CSI-RS. The joint spatial-frequency precoders are computed based on the uplink channel estimates and the partial reciprocity is exploited therein. The choices of these wideband precoders are flexible, e.g., the DFT vectors, the eigenvectors of the joint spatial and frequency domain channel covariance matrix, etc., depending on different ways of implementation [9]. Thanks to the partial channel reciprocity, Further Enhanced Type II Port Selection Codebook has the potential to achieve better system performance with fewer feedback coefficients, and the computational complexity at the UE side is also greatly reduced. Further Enhanced Type II Port Selection Codebook extends the number of beams to \(\alpha N_{\mathrm{AP}}/2\), where \(\alpha\) is the ratio of the chosen antenna ports to the total antenna ports. The maximum number of reported beams is 6 according to Table 5.2.2.7-1 in [6]. However, in the frequency domain, the number of frequency basis vectors \(M_{v}\in\{1,2\}\) is smaller than in Enhanced Type II Port Selection Codebook. The PMI format of Further Enhanced Type II Port Selection Codebook is similar to Enhanced Type II Port Selection Codebook. However, it may report more beams compared to its predecessor.
In the frequency domain, the PMI report of Further Enhanced Type II Port Selection Codebook is quite different. In particular, the frequency basis indicating vector \(\mathbf{n}_{3}\in\mathbb{C}^{1\times M_{v}}\) is defined as in Enhanced Type II Codebook; however, it is identical across layers, rather than layer-dependent. A new indicator \(i_{1,6}\) reflects the non-zero values of \(\mathbf{n}_{3}\). Generally speaking, port selection codebooks lead to a more flexible beam set than non-port selection codebooks. To be more specific, the beams can be chosen from DFT vectors, the eigenvectors of the covariance matrix, etc., and are not limited to a certain antenna array topology. On the contrary, the non-port selection codebooks generally assume a UPA or ULA topology at the gNB, and the beams are generated from DFT vectors. However, the port selection codebooks require a more intelligent gNB algorithm to find the proper beams based on limited information, e.g., the partial reciprocity. In the non-port selection codebooks, since the UEs have the DL channel estimates, the beams are readily obtained with DFT transformations.

## VII Codebooks for the future

In our previous discussion, we elaborated on the codebooks and the corresponding PMI report mechanisms of all codebooks supported in 5G NR so far. The key properties of these codebooks are summarized in Table I, which compares them in terms of the number of reported spatial beams, the subband coefficient quantization manner, the feedback overhead, and the computational complexity. Note that the feedback overhead is quantified by the number of all reported coefficients and indicators through all subbands. The complexity refers to the precoding matrix calculation complexity at the gNB. In fact, different codebooks might be adopted according to different system requirements. For example, in SU-MIMO, Type I Codebook may be sufficient due to its simplicity. However, Type II Codebook and the succeeding codebooks support multiple beams and thereby may outperform Type I Codebook in MU-MIMO due to better mitigation of multi-user interference. Moreover, in wideband massive MIMO, Enhanced Type II Codebook can effectively reduce the feedback overhead compared to Type II Codebook. Under strict feedback and complexity constraints at the UE side, port selection codebooks are preferable to non-port selection codebooks. In particular, in wideband FDD massive MIMO, Further Enhanced Type II Codebook may be a better choice due to the reduced feedback overhead and increased system performance brought by the exploitation of partial channel reciprocity. Nowadays, new multiple antenna technologies are emerging and the application scenarios are extending. They call for suitable codebooks to accommodate specific scenarios and system requirements. In the remainder of the paper, we discuss some unresolved challenges of codebooks for the future and some promising solutions to these challenges.

### _Enhanced Codebook for mobile scenarios_

One of the major challenges in massive MIMO is the mobility problem. The shorter coherence time in mobility scenarios leads to a serious degradation of the spectral efficiency [14]. Recently, some research works have focused on this problem from a theoretical perspective [15, 16, 17, 18].
In industry, the topic of mobility enhancement has been considered and discussed in R17 from the perspective of mobility management, with a type of non-zero power (NZP) CSI-RS for mobility management, as well as synchronization signal block (SSB)-based handover between cells. In the future R18 release, or 5G-Advanced, the enhancement of mobility performance will be an integral part and has been added to the agenda [19]. In the 5G NR standards, however, no codebook has ever been specifically designed for non-negligible UE mobility, which causes serious deterioration of the system spectral efficiency. The main reason lies in the fast variation of the channel and, in particular, the Doppler of the paths. Unfortunately, the codebooks supported in R17 cannot solve this challenge. They are not designed to characterize the Doppler frequency shifts of the multipath components of the channel, nor can they report timely CSI for high mobility scenarios. In order to design an enhanced codebook for mobility scenarios, we believe that three constraints should be considered. First, the codebook should characterize the Doppler frequency shift information of the channel. Second, the time-varying channel demands a timely CSI feedback framework; introducing a channel prediction scheme in the CSI report may be a solution. Third, compatibility with the existing codebooks is also vital and will facilitate the implementation. The mobility enhanced codebook proposed in [20] is a candidate, which provides an effective approach to obtain the CSI in high mobility scenarios by applying a joint-angle-delay-Doppler (JADD) channel prediction scheme. The core idea is to track the multipath Doppler frequency shifts with a few channel samples. Moreover, the timely CSI feedback is enabled by a partial reciprocity based wideband precoding scheme for the pilots and the feedback-based CSI prediction at the gNB.

### _Codebook for cell-free massive MIMO_

Recently, a distributed multiple antenna system, or cell-free massive MIMO system, has drawn much attention from academia [21] and industry. Compared to cell-centric massive MIMO, cell-free massive MIMO aims to serve the UEs simultaneously through widely distributed access points (APs) instead of a centralized antenna array at the base station [22]. Cell-free massive MIMO mainly shows its advantage in exploiting diversity against shadow fading, at the expense of high backhaul requirements. It may lead to performance improvements in terms of the coverage probability, the energy efficiency and the spectral efficiency [23]. In cell-free massive MIMO, the channel environments between the distributed APs and a certain UE are quite different. There is little correlation between the distributed BS antennas, which makes the CSI compression more challenging. In fact, the codebooks mentioned in this paper all rely on the channel structure, e.g., the spatial-domain structure of the multipath angular response and the frequency-domain structure of the multipath delay response. The structure makes the channel correlated and therefore compressible. In cell-free massive MIMO, however, such structures are not available. Hence, it is more challenging to characterize the channel parameters of each path, and the current codebook framework may not be suitable for cell-free massive MIMO. Therefore, the major challenge of designing a codebook for cell-free massive MIMO lies in how to reduce the feedback overhead, which scales with the number of distributed antennas.
### _Codebook for Ultramassive MIMO Systems_

Nowadays, 6G is widely discussed and many emerging technologies are considered candidates for 6G [24, 25], including ultramassive MIMO, which helps to meet the extremely high rate requirements. One of the trickiest problems of ultramassive MIMO is the CSI acquisition due to the massive antenna arrays. We believe that the challenges are mainly reflected in the following three aspects. First, the channel propagation characteristics change fundamentally. Due to the increasing antenna dimension, the channel radiating environment tends to exhibit a near-field effect. Hence, current codebooks, which are based on far-field radiating conditions, may fail to characterize the channel environment. Second, the dimensions of the CSI and the precoder increase significantly. Therefore, the complexity of precoding and signal processing in ultramassive MIMO increases exponentially. Third, future codebooks for ultramassive MIMO should be easy to implement in real communication systems. Some state-of-the-art methods such as artificial intelligence (AI) [26] and compressed sensing [27] also seem promising for acquiring the high-dimensional yet accurate CSI in ultramassive MIMO. Nevertheless, how these methods should be standardized and deployed in real communication systems is a problem that needs to be solved in the future.

## VIII Conclusion

In this paper, we discussed the codebook evolution from the 3GPP standard point of view. We first summarized the timeline and trend of codebook evolution. The physical meanings of the codebook parameters were given for a better grasp of the codebooks. Then we elaborated on the feedback schemes and the PMI formats of all codebooks in 5G NR. We also compared the performance of the codebooks in terms of the number of supported beams, the subband quantization and feedback manner, the feedback overhead, and the complexity at the gNB. Finally, the remaining issues of codebook design for high mobility scenarios were discussed, and the open problems of codebooks for cell-free massive MIMO and ultramassive MIMO were raised.
2303.01557
BenchDirect: A Directed Language Model for Compiler Benchmarks
The exponential increase of hardware-software complexity has made it impossible for compiler engineers to find the right optimization heuristics manually. Predictive models have been shown to find near optimal heuristics with little human effort, but they are limited by a severe lack of diverse benchmarks to train on. Generative AI has been used by researchers to synthesize benchmarks into existing datasets. However, the synthetic programs are short, exceedingly simple and lacking diversity in their features. We develop BenchPress, the first ML compiler benchmark generator that can be directed within source code feature representations. BenchPress synthesizes executable functions by infilling code that conditions on the program's left and right context. BenchPress uses active learning to introduce new benchmarks with unseen features into the dataset of Grewe's et al. CPU vs GPU heuristic, improving its acquired performance by 50%. BenchPress targets features that have been impossible for other synthesizers to reach. In 3 feature spaces, we outperform human-written code from GitHub, CLgen, CLSmith and the SRCIROR mutator in targeting the features of Rodinia benchmarks. BenchPress steers generation with beam search over a feature-agnostic language model. We improve this with BenchDirect, which utilizes a directed LM that infills programs by jointly observing source code context and the compiler features that are targeted. BenchDirect achieves up to 36% better accuracy in targeting the features of Rodinia benchmarks, it is 1.8x more likely to give an exact match and it speeds up execution time by up to 72% compared to BenchPress. Both our models produce code that is difficult to distinguish from human-written code. We conduct a Turing test which shows our models' synthetic benchmarks are labelled as 'human-written' as often as human-written code from GitHub.
Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim Hazelwood, Ajitha Rajan, Hugh Leather
2023-03-02T20:17:24Z
http://arxiv.org/abs/2303.01557v1
# BenchDirect: A Directed Language Model for Compiler Benchmarks

###### Abstract.

The exponential increase of hardware-software complexity has made it impossible for compiler engineers to find the right optimization heuristics manually. Predictive models have been shown to find near optimal heuristics with little human effort, but they are limited by a severe lack of diverse benchmarks to train on. Generative AI has been used by researchers to synthesize benchmarks into existing datasets. However, the synthetic programs are short, exceedingly simple and lacking diversity in their features. We develop BenchPress, the first ML compiler benchmark generator that can be directed within source code feature representations. BenchPress synthesizes executable functions by infilling code that conditions on the program's left and right context. BenchPress uses active learning to introduce new benchmarks with unseen features into the dataset of Grewe's et al. CPU vs GPU heuristic, improving its acquired performance by 50%. BenchPress targets features that have been impossible for other synthesizers to reach. In 3 feature spaces, we outperform human-written code from GitHub, CLgen, CLSmith and the SRCIROR mutator in targeting the features of Rodinia benchmarks. BenchPress steers generation with beam search over a feature-agnostic language model. We improve this with BenchDirect, which utilizes a directed LM that infills programs by jointly observing source code context and the compiler features that are targeted. BenchDirect achieves up to 36% better accuracy in targeting the features of Rodinia benchmarks, it is 1.8\(\times\) more likely to give an exact match and it speeds up execution time by up to 72% compared to BenchPress. Both our models produce code that is difficult to distinguish from human-written code. We conduct a Turing test which shows our models' synthetic benchmarks are labelled as 'human-written' as often as human-written code from GitHub.

Footnote 1: [https://github.com/fivosts/BenchPress](https://github.com/fivosts/BenchPress). This work was supported by the Engineering and Physical Sciences Research Council (grant EP/L0510351/1), EPSRC Centre for Doctoral Training in Pervasive Parallelism at the University of Edinburgh, School of Informatics. This work was supported by the Royal Academy of Engineering under the Research Fellowship scheme.

## 1. Introduction

Predictive modeling for compiler optimisation heuristics has been shown to outperform human experts and reduce development time in previous studies (Sutton et al., 2015; Ghahramani et al., 2016; Ghahramani et al., 2016; Ghahramani et al., 2016). Predictive models learn such heuristics by training on source-level benchmarks or on static code features extracted at (1) the syntax level, by traversing the Abstract Syntax Tree (AST), or (2) the Intermediate Representation (IR) level, with the help of compiler passes, as shown in Figure 1.
However, predictive modeling's effectiveness is restricted by an acute shortage of benchmarks, both in quantity and feature diversity (Sutton et al., 2015; Ghahramani et al., 2016; Ghahramani et al., 2016), degrading their performance. There have been some recent generative approaches that leverage the rise of deep learning and language modeling to mitigate this shortage by automatically generating synthetic programs to enhance existing human-written benchmarks (Sutton et al., 2015; Ghahramani et al., 2016; Ghahramani et al., 2016). While they could provide elegant solutions to improve training data for predictive models, these synthetic benchmarks tend to be short and repetitive, with few new features compared to existing benchmarks (Ghahramani et al., 2016). To generate programs, they either use static programming language specifications with fuzzing or sample programs from distributions learnt by machine learning models. Their common characteristic is that they generate random benchmarks that are likely to conform to the language's grammar, but they are highly unlikely to synthesize benchmarks that are both human-like and not already included in existing datasets. What is needed is a systematic method to search for missing programs whose features would be likely to improve the performance of trained downstream tasks. We aim to address this with BenchPress, a targeted benchmark generator that can generate compiler benchmarks with a desired set of features. In this work, we focus on generating OpenCL benchmarks, as predictive modeling for heterogeneous systems is a rapidly advancing field and training examples for it are very sparse. We develop BenchPress (Bendt, 2012), a BERT-based OpenCL benchmark generator (Chen et al., 2017; Chen et al., 2018) that targets and synthesizes benchmarks in desired parts of the feature space. We use active learning to choose parts of the feature space and beam search to steer BenchPress's generated samples towards the requested features. We train BenchPress with OpenCL code samples that we collect by mining BigQuery (Han et al., 2017) and GitHub directly using its API (Han et al., 2017). We support composite data types and calls to user-defined functions in our dataset and benchmark generation. BenchPress is a bidirectional generative model and learns to generate code in any part of a sequence by jointly considering left and right context. We achieve this with a new learnt token, the [HOLE], which hides a sequence from the input, whose length is unknown to BenchPress during training. BenchPress learns to fill a [HOLE] by iteratively predicting an arbitrary number of tokens that are likely to lead to a compiling function. We further develop BenchDirect, an extension of BenchPress with a synthesizer conditioned on the features of the complete function. At inference time, this allows us to fill each [HOLE] with code that is more likely to bring us closer to the requested features. BenchPress outperforms CLgen in the task of undirected program generation from a fixed input feed, generating 10\(\times\) more unique OpenCL kernels that are 7.5\(\times\) longer on average, with a compilation rate of 86% compared to CLgen's 2.33%. BenchPress strongly outperforms the benchmark synthesizers CLgen, CLSmith (Chen et al., 2017; Chen et al., 2017), and human-written code from GitHub in reaching close to the features of Rodinia benchmarks, developed by compiler experts.
The extended synthesizer, by directly filling holes with code that is useful for reaching the targeted features, makes this process 6% to 72% faster, 6% to 36% more accurate and 1.8\(\times\) more likely to perfectly reach these features. Finally, BenchPress uses active learning, specifically query by committee (Chen et al., 2017), to search the feature space and find missing features to improve Grewe's et al. (Grewe, 2017) CPU vs GPU heuristic. Enhancing the heuristic's dataset with BenchPress's benchmarks improves the heuristic's speedup relative to the optimal static decision by 50%, increasing it from 4% to 6%, when the maximum possible speedup for this task is 12%. In this paper, we present the following contributions:

1. We are the first to develop a feature-space agnostic code generator directed towards desired program features.
2. We develop an automated approach to rank the feature space of downstream tasks with active learning.
3. We enable bidirectional source code generation by inserting [HOLE] tokens in any part of a sequence.

### New Contributions

The contributions of this study, different from our previous work, are summarized as follows:

1. We develop BenchDirect, the first bi-directional language model for code infilling that is directed in compiler feature spaces. Compared to the random benchmark generation of BenchPress's language model, BenchDirect jointly conditions on code context and target features to directly generate candidates that satisfy them. We conduct an extensive evaluation between BenchPress and BenchDirect and we show the latter achieves up to 36% better accuracy in targeting the features of Rodinia benchmarks across 3 feature spaces, while at the same time it requires up to 72% less time.
2. We evaluate the human-likeness of BenchPress's, BenchDirect's, CLgen's and CLSmith's benchmarks as a means to measure their quality. We find benchmarks generated by BenchPress and BenchDirect to be labelled as 'human-written' as often as code from GitHub by participants in a Turing test.

## 2. Motivation

Figure 2 shows a two-dimensional slice of the Grewe's et al. (Grewe, 2017) feature space: number of computational instructions vs number of memory instructions. Figure 2 also shows how the OpenCL benchmarks found in the Rodinia suite map into this plane, represented as purple diamonds. We find much of this two-dimensional space is uncovered. 54 of the 58 Rodinia examples cluster in the lower left corner, the rest of the space having only four examples. Any optimization decision for programs in this area of the space would not be accurate due to the lack of representative examples. CLgen attempted to address this problem by automatically generating more training examples. However, the generated kernels lacked feature diversity and provided even poorer coverage of the feature space.

Figure 1. Training pipeline of a predictive model.

Figure 2. # Memory operations and # computational instructions for (a) Rodinia benchmarks in purple diamonds and (b) CLgen's samples in red dots. Generating samples with missing features is vital for predictive modeling's performance.

Figure 2 represents their position in the 2D space as red dots. Almost all of them are concentrated in a corner covering a small percentage of the feature space. While CLgen can generate hundreds of millions of unique kernels, almost all of them will fail to compile. As the probability of having at least one illegal token in the kernel body increases with the number of tokens, only tiny kernels are valid.
In our experiments in Section 5, the longest compiling CLgen kernel had 8 lines and 102 tokens. Given the small number of tokens in valid kernels, there is a high degree of repetitiveness in the generated corpus, not only in terms of features but also in terms of structure and functionality. As a result, this approach is not well suited to augmenting the training set with diverse feature benchmarks. There is a compelling need to generate training points for uncovered regions of the feature space, and we attempt to address this need with BenchPress. In the following sections, we discuss our approach and evaluation of BenchPress, comparing it to the existing state of the art for feature space coverage.

## 3. Approach

We present BenchPress, a deep learning model for directed compiler benchmark generation. BenchPress is the first directed synthesizer for compiling functions with features targeted by a user or a downstream task. BenchPress consists of an undirected language model that is trained on source code and a beam search sampler that steers its generation. Given a downstream task, our model uses active learning to search desired features and direct its program generation towards areas of high importance for the task. We further extend BenchPress's underlying language model into a directed synthesizer by encoding compiler features into the model's training process. This enables token generation to attend directly to the targeted features, significantly optimising steerable synthesis. We name this architecture BenchDirect. BenchPress and BenchDirect share a BERT-based language model (Benn et al., 2017), which we transform into a generative model. There are two key features in our language model that enable directed, bi-directional program generation. First, we develop a new token, namely the [HOLE], and we train BenchPress to iteratively fill holes of unknown length at any part of an input sequence by conditioning it on the left and right context of the [HOLE]. As an extension to this, BenchDirect's language model includes a Transformer-based encoder (Zhu et al., 2017) that incorporates target compiler features into token classification. This allows tokens to be selected not only with respect to the input's source code context, but also given the compiler features that are targeted. Figure 3 illustrates an overview of our approach. BenchPress consists of three main components:

1. Learning corpus collection and processing.
2. Directed source code language modeling.
3. Feature space search and benchmark generation.

We discuss each step in the following four subsections. In our last subsection, we discuss BenchDirect's directed language model, which distinguishes it from our base architecture, BenchPress. Our codebase and experimental data are publicly available1 for researchers to use.

Footnote 1: [https://github.com/fivosts/BenchPress](https://github.com/fivosts/BenchPress)

### Learning Corpus

Modeling source code accurately requires large amounts of data (Zhu et al., 2017), similar to other deep learning tasks. We develop a tool to collect data from BigQuery's GitHub dataset (Krizhevsky et al., 2014). We also use GitHub's API (Krizhevsky et al., 2014) and directly mine extra repositories that are not included in BigQuery. There are a few innovations in how we pre-process the code compared to previous works. First, we inline included header files recursively into source files to resolve type dependencies. Additionally, we automatically extract custom data types (e.g., struct, typedef) and utility functions found in the unprocessed corpus and place them into header files that are accessible throughout BenchPress's pipeline.
This way, we resolve most type dependencies while retaining the functionality and semantics of the original, human-written programs. These two steps enable us to significantly increase the amount of compiling kernels we end up with in our training dataset. Second, we isolate kernels into single instances because BenchPress is trained on complete functions. From the previous steps, the type dependencies of each kernel are known and we automatically provide them to the compiler, retaining their compilability. Finally, we compile all kernels with Clang and reject those that do not compile. Next, we re-write identifiers by randomly sampling the alphabet, eliminating spurious naming patterns in the corpus. All kernels are padded to BenchPress's sequence length, and kernels that are longer than this are truncated to fit. This helps BenchPress train its later indices' positional embeddings more effectively, for which we have less training information compared to earlier indices. Finally, we derive a tokenizer by parsing the AST of all source code. We reserve tokens for all OpenCL keywords and all intrinsic OpenCL function name identifiers found in the official OpenCL specifications (Zhu et al., 2017). We analyze the dataset and tokenize by word the most common function names and custom data type identifiers that we have collected. We encode all literals and infrequently used custom types and functions character by character to avoid exploding the size of the vocabulary. We define 5 meta tokens: [START], [END], [PAD], [HOLE], [ENDHOLE]. The derived tokenizer holds in total 2,201 unique tokens.

Figure 3. BenchPress's high-level approach.

### Language Modeling

BenchPress is based on BERT (Devlin et al., 2017), a Transformer-based model originally designed for natural language modeling. BERT is trained to predict words that have been randomly hidden by [MASK] tokens. This way, BERT learns fitting words with respect to their position in a sequence and also the left and right context, i.e., the text sequence before and after the masked token to be predicted. This type of training helps BERT learn what words mean within a given context, improving downstream tasks that rely on that knowledge. While this is a useful property, it is not enough to turn BERT into a generative model. We also want to be able to extend a kernel by inserting an arbitrary number of tokens in arbitrary positions. We could iteratively add a [MASK] token to get one extra token at a time, until we have a full statement. This would be limiting. Each time, the new token would be selected based on its probability of completing a plausible kernel. Every intermediate kernel in the iterative process would have to be plausible or almost plausible, which is not a general way of augmenting kernels. Clusters of [MASK] tokens could allow us to insert multiple tokens in each iteration. This is still unsatisfactory. The number of [MASK] tokens in the cluster biases the kind of code that will be generated: if we ask such a generator to produce five tokens, it will give us a five-token statement that could be expected to close this gap, not a five-token sequence that could be the start of a much longer statement.
We could place the left and right context at the edges of a sequence and fill the intermediate positions with [MASK] tokens. BenchPress could then predict a vocabulary token or a stop token for each [MASK], allowing for arbitrary sequences. We test this configuration and sample a trained model with a fixed input feed. BenchPress is unable to learn the [MASK]s' left and right context conditionally when many [MASK]s are in a sequence, which leads to zero samples that compile or even resemble reasonable code. What we do instead is to extend BERT's functionality with a new pair of learnt tokens, the [HOLE] and the [ENDHOLE]. [HOLE] follows the same logic as [MASK]; however, the number of tokens that have been hidden behind it is unknown to the model during training. The model only learns to predict the first token of an arbitrarily long missing sequence. At inference time, we iteratively predict the first token of the remaining sequence and re-insert it just before the [HOLE]. This way, BenchPress learns to generate arbitrarily large code sequences within any part of a sequence. Figure 4 shows how a [HOLE] is inserted into a function to create a datapoint. A random starting index and a random length are selected. The choice of index and length is only restricted by a potential overlap of the prospective hidden sequence with any of the other meta tokens, or by the maximum hole length, which is defined as a training parameter of the architecture as a percentage of each function's length. When the specifications of a hole have been settled, the hidden sequence is discarded. Only its first token is kept as the target prediction for that hole. A hole can also represent an empty sequence, i.e., hiding 0 tokens. In this case, the target prediction during training is [ENDHOLE]. The training instances are randomly generated on demand; the entire space of possible instances is too large to be pre-generated. In this paper, we only insert 1 hole per training instance for BenchPress to learn. Multiple holes could be used during training, but this is not needed for BenchPress's current benchmark generation task.

### Benchmark Generation

BenchPress's synthesizer operates as a generative model with the help of the [HOLE] / [ENDHOLE] tokens. It receives an input with 1 or more [HOLE] tokens and returns a completed benchmark. For each [HOLE], BenchPress predicts one token that fits in the sequence at the [HOLE]'s index, with respect to its left and right context. If the predicted token is not [ENDHOLE], it moves the [HOLE] and all subsequent tokens one position to the right and inserts the predicted token at the initial target index. This intermediate kernel is iteratively provided as an input for the next token prediction, and the process is repeated until BenchPress predicts [ENDHOLE]. This marks that a [HOLE] is complete and the final sample is returned, as shown in Figure 5. On its own, this process only augments kernels given their existing left and right context. In that sense, BenchPress's language model is undirected with respect to the features that are targeted. We make BenchPress the first synthesizer to target desired parts of a feature space with beam search sampling. We generate a set of kernels from an empty input, we select the ones closer to the target features, and we insert holes to generate new edited kernels iteratively.

Figure 4. When a [HOLE] is inserted into a kernel at a random index, it hides a random number of tokens, unknown to BenchPress. In this example, BenchPress learns to predict the first hidden token, p.

Figure 5. During sampling, BenchPress receives an input and iteratively predicts the fitting tokens. BenchPress predicts [ENDHOLE] to indicate a [HOLE] is complete.
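The two procedures just described, hole insertion during training and iterative infilling during sampling, can be summarized with the following sketch. It is a minimal illustration of the description above rather than BenchPress's actual implementation; `model.predict` is a stand-in for the language model's token prediction.

```python
import random

def make_training_instance(tokens, max_hole_ratio=0.9):
    """Hide a random-length span behind one [HOLE]; the training target is
    the first hidden token, or [ENDHOLE] for an empty hole."""
    max_len = max(1, int(max_hole_ratio * len(tokens)))
    length = random.randint(0, max_len)          # holes may hide 0 tokens
    start = random.randint(0, len(tokens) - length)
    hidden = tokens[start:start + length]
    masked = tokens[:start] + ["[HOLE]"] + tokens[start + length:]
    target = hidden[0] if hidden else "[ENDHOLE]"
    return masked, target

def fill_hole(model, tokens):
    """Iterative infilling: predict the token at the [HOLE], re-insert it
    before the hole, and stop when the model emits [ENDHOLE]."""
    i = tokens.index("[HOLE]")
    while True:
        nxt = model.predict(tokens)              # stand-in for the LM
        if nxt == "[ENDHOLE]":
            tokens.pop(i)                        # drop the [HOLE] marker
            return tokens
        tokens.insert(i, nxt)                    # the hole shifts one to the right
        i += 1
```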
Given a target feature vector, BenchPress samples a starting, fixed input feed 'kernel void [HOLE]' and yields a collection of starting benchmarks. We reject benchmarks that do not compile, and for the remaining ones we measure the Euclidean distance between their feature vectors and the target features. We select the _top-K_ candidates that have the shortest distance from the target and we use them as inputs for the next generation. To improve diversity among promoted benchmarks, we introduce randomness in the selection of _top-K_ candidates: each _top-K_ sample has a fixed probability \(p=0.15\) of being replaced by another random candidate of its generation. BenchPress lazily creates multiple different input instances for each selected candidate by placing a random [HOLE] of random length in order to synthesize a new sample. BenchPress generates a successive collection of benchmarks, of which the \(K\) compiling ones with the shortest distance from the target are again selected with \(p\)-randomness and used as inputs. This search continues until a sample achieves a distance of 0 from the target, or until a threshold of generations (i.e., beam search depth) is exhausted. BenchPress returns the closest benchmark to the target's features, along with all of beam search's intermediate benchmarks, which cover the model's traversal of the feature space starting from the origin and ending near the target features. For the benchmark synthesis process, we use categorical sampling with temperature to sample BenchPress's probabilities. The sampling temperature and beam search's width \(K\) and depth are defined as sampling parameters. In the worst case, BenchPress's directed program generation is slow, ranging from a few seconds to one hour, as it typically requires thousands of random language model inferences. However, BenchPress is the first program synthesizer that can target a set of desired program features. BenchDirect speeds up targeting features significantly, as its directed language model requires far fewer samples per beam search iteration to produce samples close to the target features. Often, BenchDirect can target the feature space within a single inference step from an empty input.
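The directed beam search described above can be sketched as follows. This is a schematic rendering, not the project's real code: `model.fill`, `insert_random_hole`, `compiles` and `extract_features` are assumed stand-ins for BenchPress's language model, hole placement, compiler check and feature extractor.

```python
import random
import numpy as np

def directed_beam_search(model, insert_random_hole, compiles, extract_features,
                         target, K=32, width=2048, depth=50, p_replace=0.15):
    """Steer generation towards `target` features with beam search."""
    target = np.asarray(target, dtype=float)
    beam, best = ["kernel void [HOLE]"], None
    for _ in range(depth):
        # generate a workload of candidates by editing the current beam
        candidates = [model.fill(insert_random_hole(src))
                      for src in beam for _ in range(max(1, width // len(beam)))]
        scored = sorted((float(np.linalg.norm(extract_features(c) - target)), c)
                        for c in candidates if compiles(c))
        if not scored:
            continue
        if best is None or scored[0][0] < best[0]:
            best = scored[0]
        if best[0] == 0.0:               # exact feature match: stop early
            break
        top_k = [c for _, c in scored[:K]]
        rest = [c for _, c in scored[K:]]
        # p-randomness: each survivor may be swapped for a random candidate
        beam = [random.choice(rest) if rest and random.random() < p_replace else c
                for c in top_k]
    return best                          # (distance, kernel) closest to the target
```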
### Feature Space Search

A steerable synthesizer allows the generation of benchmarks with desired features. However, the automatic selection of those parts of the feature space that are worth targeting is challenging and depends on the downstream task. BenchPress attempts to solve this by searching the feature space with query by committee (ShenchPress, 2017), a well-known active learning technique. We implement a committee of (a) 7 NN, (b) 7 k-NN and (c) 7 K-means models. We set their initial state by passively training on a small portion of the downstream task's data. We sample the committee with thousands of random points in the space, we collect the predicted labels and measure the entropy for each sample. The entropy shows the level of uncertainty among the committee about the predicted label of a given point and is defined as:

\[H=-\sum_{l\in L}p(l)\log\big(p(l)\big)\tag{1}\]

where \(L\) is the set of all predicted labels and \(p(l)\) is the probability of label \(l\) in the committee's prediction set for a given input. The highest entropy point is an important feature vector to target, and BenchPress steers benchmark generation towards it with the approach explained in Section 3.3. We collect the labels of the generated benchmarks and incrementally train the committee with them. Then, we sample it to find the next highest entropy point. We continue this process until we saturate the feature space. BenchPress's committee is agnostic to the downstream task or the feature space, and its I/O dimensions are hyper-parameters selected with respect to the task's feature and prediction dimensions.
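Equation (1) and the committee sampling loop can be illustrated with the short sketch below; the committee members are assumed to expose a `predict` method, and all names are our own.

```python
import math
from collections import Counter

def vote_entropy(member_labels):
    """Eq. (1): entropy of the committee's predicted labels for one point."""
    n = len(member_labels)
    return -sum((c / n) * math.log(c / n)
                for c in Counter(member_labels).values())

def next_target(committee, candidate_points):
    """Query by committee: pick the point with the highest disagreement."""
    return max(candidate_points,
               key=lambda x: vote_entropy([m.predict(x) for m in committee]))
```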
### Directed Language Modeling

BenchPress's synthesizer presented thus far is feature agnostic. The language model infills source code given the input context left and right of the [HOLE]. BenchPress is only able to steer program generation through a costly beam search on the model's output: we generate a large number of random code candidates and we feed those that are closer to the target features back into the model's input with new holes for further edits. Given that BenchPress's language model is undirected, it often needs hundreds of thousands of code candidates to increase the chance of finding a few with the right features. This is inefficient and unsustainable on complex compiler tasks. Instead of randomly trying to fill the space with new benchmarks to get closer to the target features, an approach that targets them directly during synthesis is needed. In the best case, this would help collect a benchmark with the right features in a single inference. To this end, we develop BenchDirect, a steerable program generator that extends BenchPress's undirected language model into a directed one. Along with the masked source code input, BenchDirect also encodes its compiler features before masking. Its classification head selects tokens to fill a [HOLE] by jointly observing the code context and the encoded features. This leads to selecting tokens that are likely to generate a kernel that is (a) compiling, similarly to BenchPress, but also (b) matching the target features provided in the input. Even if BenchDirect cannot target a set of features within a few attempts, combined with our beam search sampling, further edits can be made efficiently until it does. BenchDirect's extended feature encoder is based on the Transformer (ShenchPress, 2017) and is shown in Figure 6. We encode a vector of numerical compiler features using an embedding layer with positional encoding, followed by a Transformer-Encoder. We reduce the dimensions of the Transformer's output using a Fully Connected layer to match the BERT language model's hidden state representation of its input source code. Both hidden states are concatenated and fed to a Fully Connected layer with GELU (ShenchPress, 2017) activation to extract correlated features. Finally, a decoding Fully Connected layer projects the joint hidden state into the vocabulary space. The feature encoder's input consists of 134 positions divided into three fixed segments. Each represents one feature space used in our evaluation: (a) 8 positions for Grewe's et al. features, (b) 56 for Autophase and (c) 70 for InstCount features. BenchDirect can support multiple spaces and it only needs to be trained once to direct benchmark synthesis on any of them. To steer generation in a new feature space, we simply need to extend a new segment in the Transformer-Encoder's input and apply fine-tuning, using the new space's feature extractor to collect data from our training corpus. BenchDirect is trained with the same approach described in Subsection 3.2. We sample randomly one OpenCL kernel and introduce a [HOLE] to provide it to the language model's input. The model learns to predict the first token of the hidden sequence using the cross categorical entropy loss function. Introducing compiler features in training is the distinction in this process. When an OpenCL kernel is sampled, its compiler features are also collected. The model receives a pair of inputs, \((src_{i},fv)\), and one output, \(token_{i}\), where \(i\) is the index at which the [HOLE] is located. It is important to note that we do not feed the feature vectors of all three feature spaces to the encoder at the same time. Instead, we uniformly select one, we set its values in the respective segment of the encoder's input and we [PAD] all other positions such that gradients are not applied. Over training time, the model observes datapoints from all feature spaces for every kernel. Padding all feature spaces but one allows the trained model to learn how to direct synthesis in each one of them independently. Providing vectors from all spaces as one datapoint would possibly allow the model to learn correlations between them, but this is not useful to us. What is more, directed synthesis on one of the feature spaces would then be impossible: the model would have been trained to observe all three feature vectors for one given source code input, which means we would have to know the mapping function among all feature spaces to translate a target feature vector to all supported ones for the encoder's input. Instead, keeping one feature space per datapoint leads to the encoder's weights being tuned to perform accurately on each space separately. Parts of the network (e.g., the FC layers) are jointly trained to optimise the encoding of all feature spaces. Other parts, such as the \((Q,K,V)\) matrices, are grouped in vectors, one for each index separately, and are only trained when their respective positions are not padded. An alternative solution would be to use many Transformer-Encoders, one per feature space, and train each separately. During generation, the appropriate Transformer would be manually selected given the desired feature space. Although this is a valid approach, there is no evidence to suggest it would perform better than one Transformer model large enough to learn all segments separately. During sampling, BenchDirect receives a source code input and the target features as its input. Given the code context and the [HOLE] position, the model attempts to select those tokens that will produce a compiling kernel with features as close as possible to the target in the respective feature space. At its best, we hope BenchDirect can receive an empty code input and provide the target benchmark in a single inference step. At the very least, the beam search sampler will go through fewer iterations and fewer inferences per generation compared to BenchPress.
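The one-feature-space-per-datapoint scheme can be illustrated as below. The segment order inside the 134-position input and the padding value are our assumptions; only the segment sizes (8, 56, 70) come from the text.

```python
import numpy as np

# assumed segment order: Grewe (8) + Autophase (56) + InstCount (70) = 134
SEGMENTS = {"grewe": (0, 8), "autophase": (8, 64), "instcount": (64, 134)}

def encode_feature_input(space, feature_vector, pad_value=0.0):
    """Fill one feature space's segment and [PAD] the rest, so each
    datapoint carries exactly one feature space."""
    x = np.full(134, pad_value, dtype=np.float32)
    lo, hi = SEGMENTS[space]
    assert len(feature_vector) == hi - lo, "vector must match its segment size"
    x[lo:hi] = feature_vector
    return x
```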
### Platforms We trainBenchPress and conduct all our experiments on two 64-bit systems each having one Intel Xeon E5-2620 16-core CPU, 2x Nvidia GeForce GTX 1080 GPU and 32 Gigabytes of RAM. We use Ubuntu 18.04, PyTorch 1.9.1 (Krizhevsky et al., 2012), CUDA version 11.4 and Nvidia driver version 510.47.03. We use Clang-10 asBenchPress's compiler and LLVM-10 to compile and execute InstCount and Autophase (Hendt et al., 2017) extracting tools. For compatibility reasons, we are required to use Clang LibTooling from LLVM-6 to execute Grewe's et al. (Grew et al., 2017) feature extractor. ### Language Modeling for source code We collect OpenCL code from GitHub and split it into single function instances. We ensure no kernels that come from benchmarks suites used in the evaluation are included in our corpus. We preprocess text, re-write variables and reject OpenCL kernels that do not compile. In total we mine 63,918 OpenCL kernels across 12,860 GitHub repositories and we successfully compile 19,637 of them (31% compilation rate). We trainBenchPress on our OpenCL Corpus for 10M steps with a batch size of 32. ForBenchPress's BERT model parameters, we select 2 hidden layers, 12 attention heads. We set intermediate size, hidden size and max position embeddings to 768. We set the maximum length of holes to be 90% of a kernel's token length, i.e. a hole can hide almost all tokens of a training instance. We optimize the model using Adam optimizer with a learning rate that reaches a maximum of \(45x10^{-6}\) after 20,000 warmup steps and decays linearly Figure 6. BenchDirect’s directed language model design. over the remaining training steps. We train BenchPress's language model to a final loss value of 0.28. ### Feature Spaces Compiler predictive models use static code features to represent programs and learn optimisation heuristics. A vector of independent characteristics represent a single program. Each of them are typically an integer or float value. Features are extracted at the Syntax level by traversing the AST or at the IR level using the compiler's middle end (e.g. LLVM-IR). A feature space is the collection of all possible program feature vectors. BenchPress is a generative model that can be steered to generate samples for a desired part of the feature space. We evaluate BenchPress on three source feature representations we find across the literature, (a) Syntax-level Grewe's et al. features (Grewe and others, 2016), (b) IR-level LLVM-InstCount (Han et al., 2017) and (c) IR-level Autombase (Han et al., 2017). Grewe's et al. features are extracted with Clang's LibTooling and used to train their predictive model on the CPU vs GPU task for OpenCL kernels. This feature space holds 8 dimensions. 4 dimensions describe the number of 1) computational, 2) relational, 3) atomic and 4) memory access instructions. The feature space also counts the different type of memory instructions, local memory or coalesced. Finally, the computational to memory and coalesced to memory ratios are defined. InstCount is a standard pass provided by LLVM-IR framework and used in Compiler Gym by Cummins et al. (Cummins et al., 2017). InstCount holds 70 dimensions: 67 dimensions each counting all 67 LLVM-IR instruction types and total number of 1) instructions, 2) basic blocks and 3) functions. Autombase by Huang et al. (Huang et al., 2017) holds 56 dimensions. 
### Feature Spaces

Compiler predictive models use static code features to represent programs and learn optimisation heuristics. A vector of independent characteristics represents a single program; each characteristic is typically an integer or float value. Features are extracted at the syntax level by traversing the AST, or at the IR level using the compiler's middle end (e.g., LLVM-IR). A feature space is the collection of all possible program feature vectors. BenchPress is a generative model that can be steered to generate samples for a desired part of the feature space. We evaluate BenchPress on three source feature representations we find across the literature: (a) syntax-level Grewe's et al. features (Grewe and others, 2016), (b) IR-level LLVM-InstCount (Han et al., 2017) and (c) IR-level Autophase (Han et al., 2017). Grewe's et al. features are extracted with Clang's LibTooling and used to train their predictive model on the CPU vs GPU task for OpenCL kernels. This feature space holds 8 dimensions. 4 dimensions describe the number of 1) computational, 2) relational, 3) atomic and 4) memory access instructions. The feature space also counts the different types of memory instructions, local memory or coalesced. Finally, the computational to memory and coalesced to memory ratios are defined. InstCount is a standard pass provided by the LLVM-IR framework and used in Compiler Gym by Cummins et al. (Cummins et al., 2017). InstCount holds 70 dimensions: 67 dimensions each counting one of the 67 LLVM-IR instruction types, plus the total numbers of 1) instructions, 2) basic blocks and 3) functions. Autophase by Huang et al. (Huang et al., 2017) holds 56 dimensions. While many of the features used in Autophase are shared with InstCount, they introduce new ones, such as the number of input arguments to PHI nodes or the total number of memory instructions. On the other hand, they do not include the count of some LLVM instructions that are not considered to contribute to a program's representation, e.g., the CatchPad instruction.

### Analysis of BenchPress and CLgen language models

CLgen (Cummins et al., 2017) is the current state of the art in OpenCL benchmark generation. Its synthetic benchmarks improve the accuracy of Grewe's et al. predictive model (Grewe and others, 2016) by 1.27\(\times\). However, Goens et al. (Goens et al., 2016) perform a case study and show evidence that CLgen's synthetic benchmarks do not improve the quality of training data and, consequently, the performance of predictive models. They show that a predictive model in fact performs worse with synthetic benchmarks as opposed to human-written benchmarks or code from GitHub. This study motivates us to perform an analysis of BenchPress's language model, BERT, against CLgen in the task of undirected program generation. In this first experiment, we reproduce CLgen using the authors' artifacts and we sample it with a fixed input 'kernel void' to collect a dataset of unique OpenCL kernels. We use BenchPress on the same generative task and sample the model with the same fixed input 'kernel void [HOLE]' to obtain another dataset of unique benchmarks. In this experiment, we focus on the language model's inference performance. We compare both generative models on their throughput and their ability to create compiling code, feature distribution and code size. In this experiment, we do not direct program generation. BenchPress generates compiling kernels in a single inference step.

### Targeted Benchmark Generation

Next, we evaluate BenchPress's ability to steer generation towards desired program features. We use well-established compiler benchmarks as our reference and target their features within this space. These benchmarks usually perform intensive operations, such as matrix multiplications or FFT analysis; they contain hundreds of computational and memory instructions and are specifically fine-tuned by experts to exercise compilers from different angles. As a result, we believe features in these benchmarks provide a good target to assess BenchPress's ability to target complex features. We choose target benchmarks within the Rodinia suite (Grew and others, 2016; Grew and others, 2016), as it is widely used in the literature (Cummins et al., 2017; Grew and others, 2016). Similar to the training corpus, we collect the suite's source files, we inline header files and dependent OpenCL libraries into them, we split kernels into single source files and reject those that do not compile. In total, we collect 61 target Rodinia benchmarks, out of which 58 compile. For the remaining benchmarks, we collect their features using the feature extractors for the Grewe's et al., InstCount and Autophase feature spaces (Grewe and others, 2016; Huang et al., 2017; Han et al., 2017). We target the feature vectors of these benchmarks and request BenchPress to generate at least one matching benchmark for each. We end up with three collective synthetic benchmark datasets, one for each feature space, that contain code with features matching the Rodinia benchmarks.
For each Rodinia benchmark's target feature vector, we measure the minimum Euclidean distance to it achieved by BenchPress, code from GitHub, CLgen and CLSmith (Grew and others, 2016; Grew and others, 2016). For GitHub's and CLSmith's kernels, we use SRCIROR (Grew and others, 2016) to apply code mutations exhaustively with beam search. To make our experiment more intuitive, we use two datasets for GitHub: a) GitHub, consisting of all OpenCL kernels we collected, and b) GitHub-768, a proper subset of GitHub which contains only the kernels that do not exceed BenchPress's sequence length of 768 tokens. Since the size of BenchPress's benchmarks is restricted to the architecture's sequence length, we feel it is important to make this distinction in order to present a view of BenchPress's actual performance on features that may be unreachable within the current sequence length. For example, it may be impossible to generate 2,000 computational instructions within 768 tokens. For such cases, we believe GitHub-768, with its equally restricted sequence length, allows for a fairer comparison. For all three feature spaces, we weed out the Rodinia benchmarks that have an exact matching sample (i.e., a Euclidean distance of 0) in GitHub-768. Since we already have matching samples for them, we do not need to target them with BenchPress or any other generative model. However, we do not skip benchmarks whose features exist only in GitHub's full dataset, as we wanted to explore the feasibility of using BenchPress to generate a sample with the same features but a smaller sequence length. Applying this restriction, we end up with 22 Rodinia benchmarks for the Grewe's et al., 52 for the InstCount and 36 for the Autophase feature spaces. We sample BenchPress for a maximum of 50 beam search iterations, unless a benchmark matching the target features is produced. We set a workload size of 2048 samples per iteration. Among those that compile, our beam search sampler propagates the closest 32 candidates to the next generation, placing new holes into them.
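The evaluation metric of this experiment, the minimum Euclidean distance achieved per Rodinia target, can be computed as in the sketch below; the dictionary layout and names are illustrative.

```python
import numpy as np

def min_distance_per_target(target_vectors, generated_vectors):
    """For each target benchmark, the distance of the closest generated sample."""
    G = np.asarray(generated_vectors, dtype=float)
    return {name: float(np.linalg.norm(G - np.asarray(t, dtype=float), axis=1).min())
            for name, t in target_vectors.items()}
```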
It measures the execution time per device across thousands of runs, and it rejects kernels that produce runtime errors, do not modify any of the inputs (no output) or modify them differently on each run (not deterministic). For (a) the 7 human-written benchmark suites, (b) BenchPress, (c) CLgen and (d) GitHub, we execute their kernels with CLDrive using a range of different _local_ and _global size_ configurations. We label each instance with the fastest measured device (the CPU or the GPU), in the same way Cummins et al. (Grewe et al., 2016) and Grewe et al. (Grewe et al., 2016) performed their evaluation.

### Directed Language Modeling

BenchPress demonstrates strong performance compared to state-of-the-art program synthesizers, and its benchmarks outperform even human-written benchmarks from GitHub in two tasks: (a) targeting the features of Rodinia benchmarks and (b) improving the accuracy of a compiler heuristic model. However, its undirected language model requires up to hundreds of thousands of inferences for its beam search sampler to minimize its samples' distance from the target features. This process can be inefficient, which we strive to address with a directed language model, namely BenchDirect. We repeat the experiment of Section 4.5 to evaluate BenchDirect's accuracy and execution time in targeting the features of Rodinia benchmarks compared to BenchPress. We target the features of Rodinia benchmarks in all three feature spaces for a range of different workload sizes: 32, 64, 128, 256, 512, 1024 and 2048. A large workload size leads to a significant time overhead but is required to ensure high accuracy for BenchPress's undirected language model. This may not be the case for BenchDirect's directed synthesizer, which could speed up directed generation without compromising its accuracy. In this experiment, we explore how this parameter affects accuracy and total execution time for both models. We re-train BenchPress and BenchDirect for 8M steps to a final loss of 0.14 using the same BERT hyper-parameters described in Section 4.2, except for their max position embeddings, which we set to 512 instead of 768 to reduce training time. For BenchDirect's Transformer-Encoder, we set an embedding size of 512, 4 attention heads and 2 hidden layers, and we set its Fully Connected layers to 1024 features. During sampling, we set the threshold of maximum beam search iterations to 5. Reducing the models' sequence length to 512 and the sampler's iteration threshold to 5 leads to a performance reduction compared to BenchPress's accuracy in Section 4.5. However, it saves valuable compute time. Both BenchPress and BenchDirect are equally restricted by this reduction, therefore the validity of this comparative study's results is not hurt.
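For concreteness, the following is a schematic (and heavily simplified) PyTorch sketch of how a directed synthesizer of this kind might condition a BERT-style infilling model on a target feature vector. This is not BenchDirect's actual implementation; only the hyper-parameters quoted above (embedding size 512, 4 heads, 2 layers, 1024-wide feed-forward) come from the text, and the wiring, prepending an encoded feature "token" to the token embeddings, is an illustrative assumption.

```python
import torch
import torch.nn as nn

class FeatureConditionedInfiller(nn.Module):
    """Sketch: encode the target feature vector and prepend it to the token
    embeddings of a BERT-style masked/infilling language model."""
    def __init__(self, vocab_size: int, feat_dim: int, d_model: int = 512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.feat_proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=1024, batch_first=True)
        self.feat_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm = nn.TransformerEncoder(layer, num_layers=12)  # stand-in for BERT
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, target_features):
        f = self.feat_encoder(self.feat_proj(target_features).unsqueeze(1))
        x = torch.cat([f, self.tok_emb(tokens)], dim=1)  # feature slot first
        return self.head(self.lm(x)[:, 1:, :])           # per-token logits

model = FeatureConditionedInfiller(vocab_size=1000, feat_dim=8)
logits = model(torch.randint(0, 1000, (2, 16)), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 16, 1000])
```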
### Human Likeness of Generated Code

A great challenge for neural synthesizers is to produce programs that are human-like, that is, programs that follow the basic structural and syntactic forms that make code easy for humans to read and understand. The human likeness of a synthetic program reflects its quality and how well it serves its functionality. To this end, we conduct a case study to measure the likeness of BenchPress's generated benchmarks to human-written code. We devise a double-blind Turing test in which we show human participants random samples from BenchPress, BenchDirect, CLgen, CLSmith and also human-written code from GitHub. Participants are shown randomly selected benchmarks from the stored datasets and are asked to label them as human- or AI-written. We release our Turing test publicly in the form of a web application2.

Footnote 2: [https://humanora.co.uk](https://humanora.co.uk)

## 5. Results and Analysis

In this section, we present our experiments' results and compare BenchPress with state-of-the-art techniques in OpenCL benchmark synthesis. We present case studies of (a) BenchPress's throughput as a generative model compared to CLgen, (b) its ability to steer benchmark generation towards desired features and (c) its performance in searching the feature space to enhance a downstream task's performance.

### Analysis of BenchPress and CLgen language models

We perform an analysis of BenchPress and CLgen as language models and compare them in generating a collection of benchmarks from a fixed input feed, 'kernel void [HOLE]' and 'kernel void' respectively. We compare the two approaches by measuring (a) the generative models' throughput and (b) the quality of their generated benchmarks in terms of code size and features. In this experiment, we do not use any directed search or iterative approach for BenchPress's generation. We perform this evaluation to measure how BERT, BenchPress's underlying language model, compares with CLgen as a generative model. Table 1 presents the aggregate measurements for the generated benchmarks using both approaches.

_Compilation rate and code quality_. BenchPress generates over 10\(\times\) more unique compiling benchmarks than CLgen. This result is observed despite BenchPress generating 8\(\times\) fewer unique benchmarks than CLgen. The compilation rate with BenchPress is 86%, while CLgen has an exceedingly small rate of 2.33%. BenchPress's largest sample is 750 tokens, compiling to 161 LLVM-IR instructions. This is a 7.5\(\times\) and 5\(\times\) increase in the number of tokens and the number of LLVM-IR instructions compared to CLgen's largest kernel. The only drawback of BenchPress compared to CLgen is that it is considerably slower in generating candidates. This is because the transformer-based architecture in BenchPress has significantly more parameters than CLgen's LSTM. Additionally, BenchPress tends to generate longer kernels than CLgen, necessitating more inference steps and a longer generation time. In Figures 7a and 7b, we show the frequency distribution of the number of tokens and the number of LLVM-IR instructions for compiling kernels in both datasets. To visualize our results better, we focus on synthesized kernels with token lengths \(\leq 100\) and instruction counts \(\leq 25\), where the vast majority of benchmarks are found. Most of BenchPress's benchmarks have 20 to 80 tokens and 3 to 16 LLVM-IR instructions. The majority of CLgen's benchmarks have 5 to 45 tokens and only up to 4 LLVM-IR instructions. 94% of CLgen's generated benchmarks have only 1 instruction when compiled to LLVM-IR. We analyze the dataset to explain this phenomenon and find that CLgen generates a lot of comments, repeated dead statements and awkward, non-human-like code such as multiple semi-colons. These results agree with the case study by Goens et al. (Goens et al., 2017), which shows that the AST depth distribution of CLgen's code is significantly narrower compared to code from GitHub or standard benchmarks.

_Feature space coverage_. To further enhance our comparison, we perform an analysis of the feature space coverage of BenchPress's and CLgen's synthesized programs in all three feature spaces.
Feature coverage is the most critical metric when evaluating the effectiveness of a benchmark synthesizer for predictive modeling. We use Principal Component Analysis (PCA-2) to represent the feature spaces in an easy-to-visualize 2-dimensional space. In Figures 8a, 8b and 8c we show the extent of the feature space covered by candidates from the two approaches. CLgen's samples are clustered around the origin, while there is one outlier for Autophase and two for the Grewe et al. and InstCount features. Candidates generated by BenchPress are more scattered, achieving a much wider coverage of the feature space.

### Targeted Benchmark Generation

We use beam search to generate samples that target desired parts of the feature space. We compare BenchPress with human-written benchmarks from GitHub and synthetic benchmarks from CLgen and CLSmith in targeting the features of Rodinia benchmarks in three feature spaces. We use the SRCIROR code mutator with beam search to collect GitHub and CLSmith benchmarks with closer features. For each target benchmark, we gather one OpenCL kernel per evaluated dataset whose features have the minimum available Euclidean distance from the target features. Figures 9a, 9b and 9c show the relative proximity of each such candidate to its target. This proximity is the complement of the relative distance of the two kernels, i.e., 1 minus the distance between the two kernels in the feature space relative to the distance of the Rodinia kernel from the axes origin. This allows us to express the quality of the match on an intuitive 0% to 100% scale: 100% means the two kernels have the same features, 0% means the best kernel is as close to the target as an empty kernel. We mark perfect matches with a white asterisk (*).

\begin{table} \begin{tabular}{l l l l l l l} & \# unique & \# compiling & compilation & max & max inst & time per \\ & benchmarks & benchmarks & rate & tokens & (LLVM-IR) & sample (ms) \\ \hline BenchPress & 190,460 & 142,607 & 86\% & 750 & 161 & 162 \\ CLgen & 1,564,011 & 13,035 & 2.33\% & 102 & 32 & 103 \\ \end{tabular} \end{table} Table 1. Throughput comparison between BenchPress and CLgen on generated OpenCL benchmarks when BenchPress does not use feature-directed program generation.

Figure 7. Probability distribution of (a) token length and (b) LLVM-IR instruction count among BenchPress's and CLgen's generated benchmarks. BenchPress's benchmarks presented here are generated in a single inference step without iteratively directing program synthesis.

_Performance on syntactic features_. In the Grewe et al. feature space, BenchPress generates kernels that are the closest in features for all 22 Rodinia benchmarks compared to CLgen and CLSmith, and for 20 out of 22 compared to GitHub and GitHub-768. BenchPress synthesizes an exact match (100% relative proximity) for 14 target benchmarks. We pick out and discuss a few examples from our results. The absolute distance achieved for 'nw-1' and 'ellipse_opt' is 1.0. For both targets, almost all features match except for one missing instruction (a coalesced memory access and an atomic instruction, respectively). For 'hotspot', GitHub and BenchPress both produce a candidate kernel with exactly matching features. However, BenchPress generates the matching candidate kernel in 421 tokens, unlike GitHub's closest benchmark, which has 798 tokens.
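As an aside, the relative proximity score used throughout Figures 9a-9c is straightforward to compute from the feature vectors; a minimal sketch (the feature values below are illustrative, not measured data):

```python
import numpy as np

def relative_proximity(candidate_feats, target_feats):
    """1 minus the candidate-target distance relative to the target's
    distance from the axes origin, clipped to [0, 1], as a percentage."""
    c = np.asarray(candidate_feats, float)
    t = np.asarray(target_feats, float)
    dist = np.linalg.norm(c - t)
    norm = np.linalg.norm(t)  # target's distance from the origin
    return 100.0 * max(0.0, 1.0 - dist / norm)

print(relative_proximity([385, 137], [385, 137]))  # 100.0: exact feature match
print(relative_proximity([0, 0], [385, 137]))      # 0.0: as close as an empty kernel
```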
For the two target benchmarks where BenchPress's candidates were not the closest, 'com_dwt-3' and 'gpu-1', only GitHub contains better samples. We find both benchmarks to be fairly large (901 and 5,200 tokens, respectively), and BenchPress cannot reach their features within 768 tokens. For the same reason, GitHub-768, CLgen and CLSmith do worse than BenchPress on these targets.

_Performance on LLVM-IR features_. Autophase and InstCount features are extracted from the LLVM-IR of a program that has been compiled with the -O1 flag to apply basic optimisations such as dead code elimination. BenchPress occasionally generates repeating operations that a compiler will remove, or numerical operations that may be reduced to simple assignments. Owing to these optimisations, we find that targeting benchmarks in these two feature spaces is more challenging than in Grewe et al.'s syntax-level feature space. With InstCount features, BenchPress generates candidates whose features completely match 2 out of the 52 Rodinia benchmarks. Among the remaining 50, BenchPress outperforms CLgen, CLSmith, GitHub and GitHub-768 for all target benchmarks, achieving higher proximity. SRCIROR significantly improves GitHub, leading GitHub+SRCIROR to achieve better proximity than BenchPress for 18 out of 52 Rodinia benchmarks. On Autophase features, BenchPress generates candidates matching the same 2 target benchmarks, while outperforming CLgen, CLSmith and GitHub on 30 out of 36 Rodinia benchmarks in total. GitHub+SRCIROR performs better than BenchPress for 8 out of 36 target benchmarks and produces an exact match for 'hotspotKernel'.

We previously explained the importance of having diverse features in compiler benchmarks, and we showed, in Figure 2, how sparse Rodinia benchmarks are in Grewe et al.'s reduced feature space and how CLgen fails to provide any additional features. Now we introduce into this 2-dimensional space all of BenchPress's kernels that are generated while performing directed space search to target Rodinia benchmarks, and we present them in Figure 10. BenchPress densely populates the space around the target benchmarks that are clustered around the lower left corner. We find that BenchPress's samples progressively converge to the target benchmark features over successive generations. For example, BenchPress targets 'com_dwt-3' at 385 computational and 137 memory instructions, starting from the axes origin and attempting to reach its features from different directions. One of the directions prevails but does not manage to reach the target exactly. The same happens for the top right point, 'gpu-1'. BenchPress's samples get closer, developing a straight line from the origin to 1,000 computational and 100 memory instructions. At this point BenchPress is restricted by its sequence length and cannot grow its samples further. This is depicted by its attempt to reduce the distance by swapping the two instruction types within the same token length, forming a line with a negative slope. We argue that the area of Grewe et al.'s feature space that BenchPress can cover within 768 tokens is the triangle formed by the intersections of the axes with the extension of the negative-slope line developed by BenchPress's samples.
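As a rough illustration of the kind of syntax-level counts that the Grewe et al. feature space builds on (the real extractor operates on the compiler's representation, not on raw source text), consider the crude sketch below; the regexes and example kernel are illustrative only.

```python
import re

COMP_OPS = re.compile(r"[+\-*/%]")            # arithmetic operators
MEM_OPS = re.compile(r"\b\w+\s*\[[^\]]+\]")   # array subscripts ~ memory accesses

def crude_syntactic_features(kernel_src: str) -> dict:
    """A toy approximation of (computational, memory) instruction counts."""
    return {"comp": len(COMP_OPS.findall(kernel_src)),
            "mem": len(MEM_OPS.findall(kernel_src))}

src = ("kernel void add(global float* a, global float* b) {"
       " int i = get_global_id(0); a[i] = a[i] + b[i]; }")
print(crude_syntactic_features(src))  # {'comp': 3, 'mem': 3}
# Note: the two pointer declarations ('float*') are miscounted as
# multiplications -- exactly why real extractors work on LLVM-IR instead.
```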
_Summary -_ BenchPress _vs GitHub vs CLgen vs CLSmith_. 6 of the targeted Rodinia benchmarks exceed BenchPress's maximum sequence length of 768 tokens. In the LLVM-IR feature spaces, care must be taken to generate code that will not be removed by compiler optimisations; this is a difficult challenge for source code generative models. However, our results demonstrate that BenchPress can generate OpenCL kernels that approach target human-written benchmarks more closely than GitHub code and CLgen candidates. Our experiments also show that BenchPress is dramatically better in all cases than CLgen, the current state of the art in OpenCL synthetic benchmark generation. We further elaborate on BenchPress's performance in the next subsections.

Figure 8. PCA-2 representation of the feature space coverage of BenchPress and CLgen for the (a) Grewe et al., (b) InstCount and (c) Autophase feature spaces. In this experiment, BenchPress's generation is undirected and no iterative space search is performed.

Figure 9. Relative proximity to each Rodinia benchmark of the candidate kernel with the closest features. We report the best match for seven datasets (BenchPress's, CLgen's, GitHub's and GitHub-768's datasets, also combined with exhaustive mutations with SRCIROR) over three feature spaces ((a) Grewe et al., (b) InstCount and (c) Autophase). Relative proximity is 1 minus the distance of the two kernels in the feature space relative to the distance of the Rodinia benchmark from the axes origin. 100% means an exact match in features and is highlighted with a white asterisk (*). A score towards 0% indicates the closest match is closer to the axes origin than the benchmark, i.e., a very small or empty kernel.

### Active Learning for Feature Selection

We combine BenchPress's ability to generate benchmarks targeting desired features with active learning in order to generate benchmarks that improve the training of the Grewe et al. heuristic. We evaluate this against passive training with CLgen, GitHub code, and BenchPress with randomly selected target features. All approaches augment the same baseline training set taken from (Brock et al., 2018), containing 7 benchmark suites3. Table 2 shows the effect of each approach on the predictive power of the heuristic. Training only on human-written benchmarks improves the heuristic's performance by 4%, as shown in Table 2's first row. To understand the maximum achievable improvement in the heuristic, we compute the best speedup (12%) that is achieved if the model always chooses the optimal device, as opposed to always picking the GPU. For 71% of the benchmarks, the GPU is the optimal device, so no speedup improvement is possible. For the remaining 29% of benchmarks, predicting the 'CPU' label correctly with Grewe et al.'s heuristic results in a speedup improvement.

Footnote 3: The benchmarks have been updated with a wider range of global and local sizes.

BenchPress using active learning (BenchPress-AL) clearly outperforms all other approaches in terms of average speedup, improving it to 6%. When trained on BenchPress with passive/random feature selection (BenchPress-P), the speedup achieved is only 1%. To our surprise, the same speedup is achieved with GitHub, which is worse than training only on the original benchmark suites. We further analyze the dataset collected from GitHub code and find it to be imbalanced, with 90% of its training instances labelled 'GPU'. This leads to the model having a higher precision of 0.85, i.e., predicting correctly that a kernel should execute on the GPU, but falling short when it comes to correctly predicting the 'CPU' label.
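Table 2's speedup column, the geometric mean over benchmarks of the speedup obtained by running on the predicted device rather than statically on the GPU, can be reproduced with a few lines; the runtimes below are made-up illustrative values.

```python
from math import prod

def geomean_speedup(runtimes, predictions):
    """Geometric-mean speedup of a device-mapping heuristic over the
    static all-GPU baseline. `runtimes`: benchmark -> {"cpu": s, "gpu": s};
    `predictions`: benchmark -> "cpu" | "gpu" (illustrative data layout)."""
    ratios = [runtimes[b]["gpu"] / runtimes[b][predictions[b]] for b in runtimes]
    return prod(ratios) ** (1.0 / len(ratios))

runtimes = {"k1": {"cpu": 2.0, "gpu": 1.0},   # GPU-friendly kernel
            "k2": {"cpu": 1.0, "gpu": 3.0}}   # CPU-friendly kernel
print(geomean_speedup(runtimes, {"k1": "gpu", "k2": "cpu"}))  # ~1.73 (optimal mapping)
print(geomean_speedup(runtimes, {"k1": "gpu", "k2": "gpu"}))  # 1.0 (the baseline itself)
```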
Training the heuristic with CLgen actually leads to a slowdown: it is 1% slower to execute kernels on the predicted devices than to statically execute everything on the GPU, the baseline device. We analyze CLgen's dataset and observe the opposite pattern to the one found in GitHub's dataset: 63% of its training data execute faster on the CPU than on the GPU. This is a direct consequence of CLgen generating small benchmarks that are poor in features: the CPU may generally be slower than the GPU, but the large overhead of transferring data to the GPU makes the CPU the better choice for small workloads. CLgen containing too many CPU-labeled kernels explains the heuristic's low precision and specificity, as it becomes biased towards selecting the CPU very often, leading to a slowdown.

Our main motivation behind using active learning is that it gives BenchPress the ability to directly target those parts of the feature space that will maximize a downstream task's performance. To assess the active learner's performance, we compare the Grewe et al. heuristic's speedup when trained on BenchPress's benchmarks that target areas of the feature space selected by the active learner versus benchmarks that target random features. In both cases, we execute BenchPress for the same amount of time, 10 sampling epochs (i.e., performing steered generation for 10 target feature vectors). In Figure 11, we show the speedup achieved by the heuristic when trained on the data collected at each step.

\begin{table} \begin{tabular}{l c c c c} & Speedup \% & Precision & Recall & Specificity \\ \hline Benchmarks & +4\% & 0.81 & 0.86 & 0.61 \\ BenchPress-AL & +6\% & 0.84 & 0.86 & 0.64 \\ BenchPress-P & +1\% & 0.84 & 0.85 & 0.48 \\ CLgen & -1\% & 0.52 & 0.86 & 0.43 \\ GitHub & +1\% & 0.85 & 0.83 & 0.61 \\ \end{tabular} \end{table} Table 2. Grewe et al. heuristic model's performance, precision, recall, and specificity when trained on each technique. Speedup is the geometric mean of speedups over all benchmarks relative to the optimal static decision, i.e., running on the GPU. Precision, recall, and specificity treat GPU labels as positive and CPU labels as negative.

Figure 11. BenchPress's performance enhancement of the Grewe et al. heuristic model when using active learning compared to passively targeting random parts of the feature space over the course of 10 sampling epochs.

Figure 10. Number of memory operations and number of computational instructions for (a) Rodinia benchmarks in purple diamonds, (b) CLgen's samples in red dots and (c) BenchPress's benchmarks in green crosses, after performing directed search for all Rodinia benchmarks.

Using active learning to target features, BenchPress's dataset improves the heuristic's speedup by 50% after 5 sampling steps, from 4% to 6%. Targeting random features never leads to a speedup higher than 1%. BenchPress could still reach the same speedup by targeting random features if an infinite amount of time were available; our active learner ensures that missing features are targeted quickly, improving the state of the art within 5 sampling epochs.
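The text does not pin down the active learner's acquisition function, so the sketch below shows one standard way such a feature-selection step could be instantiated: query-by-committee over the benchmarks labeled so far, returning the candidate feature vector the committee disagrees on most for BenchPress to target next. All names and data here are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pick_next_target(X_labeled, y_labeled, candidate_targets, committee=5, seed=0):
    """Query-by-committee sketch: bootstrap-train a committee of decision
    trees on (features -> CPU/GPU label) and return the candidate feature
    vector with the highest committee disagreement."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(committee):
        idx = rng.integers(0, len(X_labeled), len(X_labeled))  # bootstrap resample
        tree = DecisionTreeClassifier().fit(X_labeled[idx], y_labeled[idx])
        votes.append(tree.predict(candidate_targets))
    votes = np.stack(votes)                      # (committee, n_candidates)
    majority = votes.mean(axis=0) > 0.5
    disagreement = (votes != majority).mean(axis=0)
    return candidate_targets[int(np.argmax(disagreement))]

X = np.random.default_rng(1).random((40, 4))     # toy labeled feature vectors
y = (X[:, 0] > 0.5).astype(int)                  # toy CPU(0)/GPU(1) labels
cands = np.random.default_rng(2).random((200, 4))
print(pick_next_target(X, y, cands))             # next feature vector to target
```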
### Directed Language Modeling

We target the features of Rodinia benchmarks using BenchPress and BenchDirect. Both models use beam search over their synthesizer to minimize their samples' distance from the target features. At the end of each search, we select the generated kernel whose features have the minimum Euclidean distance from the target benchmark. We perform this experiment for multiple beam search candidate sizes: 32, 64, 128, 256, 512, 1024 and 2048. On the left side of Figures 12a, 12b and 12c, we show the Pareto fronts of the average relative proximity achieved over all Rodinia benchmarks versus the total number of inferences.

Figure 12. Pareto fronts of the average relative proximity versus total inferences in targeting Rodinia benchmarks over three feature spaces ((a) Grewe et al., (b) InstCount and (c) Autophase). Higher relative proximity and fewer inferences are better; therefore optimal, i.e., Pareto-dominant, points are those towards the top left. We annotate the workload size configuration per Pareto point. On the right, we show BenchDirect's acquired speedup and accuracy gain over BenchPress for the same workload size setting.

Relative proximity is defined in Section 5.2 as a percentage expressing how close a feature vector is to the target features relative to the axes origin. Inferences are calculated as the number of beam search iterations needed to target all benchmarks, multiplied by the workload size. Each datapoint is annotated with its workload size configuration. On the right side of Figures 12a, 12b and 12c, we show BenchDirect's improvement in accuracy and execution time compared to BenchPress for each workload size setting. BenchDirect outperforms BenchPress in average relative proximity and total inferences for all workload size configurations, across all three feature spaces. Taking the average proximity and the execution time as a design space, the datapoints that are optimal with respect to these two metrics belong exclusively to BenchDirect, while there is no BenchPress configuration that optimises either metric.
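Extracting the Pareto-dominant configurations plotted in Figure 12 amounts to a simple sweep; a minimal sketch with made-up (inferences, proximity) points:

```python
def pareto_front(points):
    """Keep the Pareto-dominant (inferences, proximity) pairs: fewer
    inferences and higher proximity are both better (assumes distinct
    inference counts)."""
    front = []
    for inferences, proximity in sorted(points):   # ascending inferences
        if not front or proximity > front[-1][1]:  # strictly better proximity
            front.append((inferences, proximity))
    return front

points = [(1e4, 82.0), (2e4, 88.5), (4e4, 88.0), (8e4, 91.0), (3e4, 86.0)]
print(pareto_front(points))  # [(10000.0, 82.0), (20000.0, 88.5), (80000.0, 91.0)]
```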
The effect of BenchDirect's directed language model in targeting features is especially pronounced when the workload size is small. BenchDirect's synthesizer conditions directly on the target features and provides, within very few attempts, candidates that match or are very close to them. This means that a dramatic reduction in the number of benchmarks per beam search does not drastically hamper the model's accuracy. The same is not true for BenchPress. While BenchDirect offers an average speedup of 10.2% and an improvement in average relative proximity of 10.1% for workloads greater than or equal to 512, for smaller workloads the speedup reaches up to 36% in all three feature spaces and the accuracy gain up to 72% on InstCount features. This indicates that BenchDirect remains consistent in the number of iterations needed to achieve high accuracy, while BenchPress suffers in both respects. Both models achieve peak accuracy when they use a workload size of 2048. This is expected, as generating more candidates increases the probability of getting closer to the target features. Using this configuration on both models, we show in Figures 13a, 13b and 13c the best relative proximity achieved for each target benchmark in all three feature spaces.

Figure 13. Relative proximity to each Rodinia benchmark of the candidate kernel with the closest features. We show the best match for BenchDirect and BenchPress. Relative proximity is defined in Figure 9.

Similarly to Figures 9a, 9b and 9c, candidates whose Euclidean distance from the target is 0 (i.e., a perfect match feature-wise) are marked with a white asterisk (*). For a selection of Rodinia target benchmarks, we show in Figure 14 how the minimum distance from the target is reduced over the course of 5 beam search iterations for both models.

BenchDirect generates 1.8\(\times\) more candidates that exactly match the target features compared to BenchPress. Specifically, it matches 21 targets on the Grewe et al. features, 14 on InstCount and 10 on Autophase, compared to BenchPress's 17, 3 and 5 exact matches, respectively. Overall, BenchDirect gets closer to the target than BenchPress. Its samples are closer, or as close, for 45 out of 58 Rodinia targets on the Grewe et al. features, 47 out of 52 on InstCount and 49 out of 52 on Autophase. BenchPress provides better candidates for 13, 5 and 3 targets on the Grewe et al., InstCount and Autophase features, respectively. Even though it is expected that BenchDirect misses some target features due to the experiment's randomness, we pick out a few such examples to discuss why this happens. The largest performance gap in favour of BenchPress is observed on ellipse and ellipse_opt on InstCount features. These two benchmarks are very large, containing multiple thousands of instructions, and are therefore difficult kernels to target. We examine both models' generated samples over all 5 beam search iterations. In both cases, we find BenchDirect's closest candidate on the first iteration to be 8% closer to the target than BenchPress's. After measuring the distance distribution from the target for both models' samples, we find that BenchDirect is 93% more likely than BenchPress to generate a sample with a lower distance on the first beam search iteration.

Figure 14. A qualitative comparison between BenchDirect and BenchPress for the backprop-2, gpu-4 and particle_naive Rodinia benchmarks in all three feature spaces. We show for both language models the minimum distance achieved (y-axis) from the target over the course of five beam search iterations (x-axis).

BenchDirect does indeed start off better on these two target benchmarks. However, at every inference step BenchDirect tries to match the target features in a single [HOLE] infill. As these two kernels are very large, this is a challenging task, leading most of its produced candidates to have syntactic errors and leaving it with only a few benchmarks that compile. Even though its first iteration's samples are closer than BenchPress's, each successive iteration makes it increasingly difficult for BenchDirect to produce a compiling kernel that also reduces the minimum distance. For that reason, BenchPress's random, cautious steps lead to benchmarks that are eventually closer. We notice this pattern in all targets for which BenchPress produced a better candidate. For these targets, it is likely that breaking the difficulty down into smaller steps by using intermediate feature vectors would have helped BenchDirect reach the target features gradually but more accurately.

### Human Likeness of Code

We conduct an empirical evaluation of BenchPress, BenchDirect, CLgen and CLSmith to measure the human-likeness of their samples by devising a Turing test in the form of a web application. Human-likeness is a desirable property for programs synthesized by generative models, as it indicates that samples are likely to resemble the functionality of human-written benchmarks. Each participant is shown a benchmark picked randomly from one of the 5 following datasets: (a) BenchPress, (b) BenchDirect, (c) CLgen, (d) CLSmith, and (e) GitHub. They are then asked to label the benchmark as written by a human or an AI. During this test, we show only the benchmarks that were selected in experiments 5.2 and 5.4, i.e., the closest samples per dataset to Rodinia for all 3 feature spaces. This results in 168 samples per presented dataset.
In total, we collect data from 77 participants who declare familiarity with programming. Table 3 shows how often users tag a sample from each dataset as 'human-written'. We notice that human-written code from GitHub is classified as 'AI-written' in 49% of the tests. We believe this is due to two reasons. First, the dataset from GitHub contains large OpenCL kernels with long, unnatural expressions or loops that have been manually unrolled for optimisation reasons, making them hundreds of lines long; such kernels are usually labelled 'AI-written'. Second, a participant may be suspicious of statements that do not look simple enough to be written by a human, and therefore tend to select the 'AI-written' label more often. Participants label samples from BenchPress as 'human-written' in 53% of its tests and samples from BenchDirect in 49% of its tests. While both scores are similar, it is likely that BenchDirect produces statements that would not be written by a human slightly more often than BenchPress, because it tends to generate longer sequences than BenchPress when trying to reach outliers of the feature space in a single inference step. CLgen's samples may look human-like, but most of them are short, no longer than 3-4 lines; often they contain no workloads or loops and come with unused arguments. This is why it scores lower, at 38%. Finally, CLSmith is the most obvious case of unstructured and complicated code, being classified as 'human-written' in only 29% of its tests. This fuzzer generates kernels by producing random expressions that conform to OpenCL's grammar, leading to random code whose functionality is unclear.

## 6. Conclusion

Predictive models for compilers have been shown to outperform compiler experts, but they are restricted by the amount and quality of training data they are exposed to. What is needed is an approach that can synthesize benchmarks and enrich datasets with missing features. In this paper we propose BenchPress, a powerful code generator that uses active learning to search the feature space and steers generation towards desired features. BenchPress generates 10\(\times\) more and 7.5\(\times\) larger undirected benchmarks with a 37\(\times\) greater compilation rate than CLgen - a state-of-the-art compiler benchmark generator - from a fixed input feed. BenchPress outperforms CLgen, CLSmith, code from GitHub and mutations applied with SRCIROR in generating OpenCL kernels that target the features of Rodinia benchmarks developed by human experts. BenchPress applies active learning to enrich Grewe et al.'s dataset with benchmarks carrying missing features, improving the heuristic's speedup by 50%. We further extend BenchPress's language model into a synthesizer directed by compiler features. This directed model produces 1.8\(\times\) more exact matches to target features, speeds up the generation process by up to 36% and improves its accuracy by up to 72%, and we show that both of our techniques outperform all other synthetic benchmark generation techniques in producing high-quality programs that are indistinguishable from human-written benchmarks. We hope this work demonstrates a sustainable method for directing the feature space search of program generation, and that BenchPress's release to researchers will enable research in related domains.

## 7. Related Work

BenchPress is inspired by BERT, a representation model by Devlin et al. (Devlin et al., 2019).
Contrary to previous techniques, BERT learns from unlabeled text data by jointly conditioning on both left and right context. Its architecture has since been applied to a wide variety of difficult machine learning tasks, including programming languages. In CuBERT (Kanade et al., 2020), Kanade et al. apply BERT over Python programs and evaluate it on finding typical mutation faults. In CodeBERT (Feng et al., 2020), Feng et al. fine-tune BERT to perform NL-PL and PL-NL transformations. In this work, we extend BERT to a bidirectional generative model with the help of the [HOLE] token.

\begin{table} \begin{tabular}{l c c c} & Score \% & \#Human & \#Total \\ \hline GitHub & 51\% & 139 & 270 \\ BenchPress & 53\% & 55 & 103 \\ BenchDirect & 49\% & 60 & 122 \\ CLgen & 38\% & 36 & 95 \\ CLSmith & 29\% & 26 & 89 \\ \end{tabular} \end{table} Table 3. Score of 'human-likeness' expressed as the percentage of code examples from each dataset that were tagged as 'human-written' by users.

Cummins et al. [5] develop CLgen, a deep learning generator for OpenCL programs based on an LSTM [21]. They try to tackle the compiler benchmark shortage by providing synthetic benchmarks as training data for compiler heuristics. The authors report that the Grewe et al. [16] heuristic model improved its performance by \(1.27\times\) when trained on their synthetic benchmarks. However, Goens et al. [14] show that training with CLgen's synthetic samples leads to a slowdown compared to training on human-written benchmarks only. To explain this, they measure the AST depth of CLgen's samples and show it is \(3\times\) smaller than that of human-written benchmarks and code from GitHub, and that the samples are poor in features and therefore unrealistic. This motivates us to develop BenchPress, which produces \(10\times\) more unique kernels that are \(7.5\times\) larger on average.

In 2019, Nye et al. developed SketchAdapt [26], which uses a generator-synthesizer pair [10; 2] to generate program sketches given I/O specifications. The synthesizer samples sketches and the generator fills <HOLE> tokens with statements. SketchAdapt performs better than other architectures [10; 2]; however, it samples only from a pre-defined pool of operations, which restricts its diversity. Bruen et al. [8] propose a Tree2Tree approach to code generation using a VAE. They encode AST nodes using Tree-LSTMs (Tai et al. [33]) and train their model on C++ functions. They test their approach against a VAE with an LSTM Seq2Seq model. They use their model as a synthesizer by sampling random AST representations, which they extend into new programs. Their Seq2Seq model achieves a compilation rate of up to \(6\%\) with greedy search; this happens because the model greedily selects the most probable labels, leading to repetitive samples. When sampling with temperature, their Tree2Tree architecture is able to generate a wider variety of samples, but only achieves a compilation rate of \(22\%\), which translates to a few functions. Gupta et al. [17] develop SED, a two-stage generator. A synthesizer receives I/O specifications and generates programs likely to satisfy them, and a neural debugger applies program repair to reform them into functions that match the specifications. Gupta et al. evaluate three synthesizer architectures and measure (a) the correctness of generated programs across tests and (b) the accuracy of their debugger in repairing code.
While SED is an innovative work, Karel is a small-scale language, and SED's generative performance on a complex programming language is not evaluated. Faustino et al. develop AnghaBench [7] to tackle the benchmark shortage [5; 38]. AnghaBench is a collection of C programs mined from GitHub. To make them compilable, they use the Psyche-C [25] type inference engine to apply type reconstruction and resolve dependencies. Structs, unions and other composite data types are omitted or re-declared with primitive types. Their benchmarks compile, but cannot be executed. Compared to AnghaBench, BenchPress resolves type dependencies of composite types and user-defined functions without changing the functionality or semantics of programs.
2310.19220
From Stream to Pool: Pricing Under the Law of Diminishing Marginal Utility
Dynamic pricing models often posit that a $\textbf{stream}$ of customer interactions occur sequentially, where customers' valuations are drawn independently. However, this model is not entirely reflective of the real world, as it overlooks a critical aspect, the law of diminishing marginal utility, which states that a customer's marginal utility from each additional unit declines. This causes the valuation distribution to shift towards the lower end, which is not captured by the stream model. This motivates us to study a pool-based model, where a $\textbf{pool}$ of customers repeatedly interacts with a monopolist seller, each of whose valuation diminishes in the number of purchases made according to a discount function. In particular, when the discount function is constant, our pool model recovers the stream model. We focus on the most fundamental special case, where a customer's valuation becomes zero once a purchase is made. Given $k$ prices, we present a non-adaptive, detail-free (i.e., does not "know" the valuations) policy that achieves a $1/k$ competitive ratio, which is optimal among non-adaptive policies. Furthermore, based on a novel debiasing technique, we propose an adaptive learn-then-earn policy with a $\tilde O(k^{2/3} n^{2/3})$ regret.
Titing Cui, Su Jia, Thomas Lavastida
2023-10-30T01:53:37Z
http://arxiv.org/abs/2310.19220v3
# From Stream to Pool: Dynamic Pricing Beyond i.i.d. Arrivals

###### Abstract

The dynamic pricing problem has been extensively studied under the **stream** model: A stream of customers arrives sequentially, each with an independently and identically distributed valuation. However, this formulation is not entirely reflective of the real world. In many scenarios, high-valuation customers tend to make purchases earlier and leave the market, leading to a _shift_ in the valuation distribution. Thus motivated, we consider a model where a **pool** of \(n\) non-strategic unit-demand customers interact repeatedly with the seller. Each customer monitors the price intermittently according to an independent Poisson process and makes a purchase if the observed price is lower than her _private_ valuation, whereupon she leaves the market permanently. We present a minimax _optimal_ algorithm that efficiently computes a non-adaptive policy which guarantees a \(1/k\) fraction of the optimal revenue, given any set of \(k\) prices. Moreover, we present an adaptive _learn-then-earn_ policy based on a novel _debiasing_ approach, and prove an \(\tilde{O}(kn^{3/4})\) regret bound. We further improve the bound to \(\tilde{O}(k^{3/4}n^{3/4})\) using martingale concentration inequalities.

## 1 Introduction

Pricing with unknown demand is a fundamental challenge in revenue management. Consider the sale of new clothing lines. Each customer visits the (online or offline) store intermittently, depending on their availability, and makes a purchase if the observed price is lower than her valuation. As each customer typically needs only one unit, she exits the market once a purchase is made. Since the product is newly introduced, the seller has little information about customers' valuations to inform their pricing strategy upfront.

Most existing work on dynamic pricing employs what we call a **stream** model: A stream of customers arrives sequentially, each with an independent, identically distributed (i.i.d.) valuation. Demand uncertainty is well understood under this model; see, e.g., [Kleinberg and Leighton, 2003, Besbes and Zeevi, 2009] and [Babaioff et al., 2015]. However, the stream model is lacking in many real-world scenarios. In the above clothing example, the demands over time are _neither identically distributed nor independent_. They are not identically distributed, since a high-value customer tends to make a purchase early and subsequently leave the market, resulting in a _shift_ of the valuation distribution towards the lower end. It should be noted that demand _non-stationarity_ has been extensively studied ([Besbes and Zeevi, 2011], [Besbes and Saure, 2014] and [Den Boer, 2015]). However, the non-stationarity in these works is _exogenous_: It serves to incorporate external factors such as seasonality or promotions, and does not depend on the seller's actions. In contrast, the non-stationarity in our example is _endogenously_ determined by the seller's actions.

Orthogonal to non-stationarity, the _independence_ assumption is also questionable. In the stream model, the demand in every time period is independent of the previous prices (even if the demand function is non-stationary over time). However, this is not true in the previous example. To see why the demand rate depends on previous prices, suppose all customers have a valuation of \(0.5\) and _always_ monitor the price. Then, at time \(1\), the demand rate at any price below \(0.5\) is \(0\) if and only if the price has _ever_ been lower than \(0.5\): at the first such moment, every customer purchases and exits the market.
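To make this endogeneity concrete, here is a small Monte-Carlo sketch of the pool model (all parameters are illustrative, not from the paper): holding the price fixed, per-period sales decay as customers purchase and exit, in contrast to the flat demand of the stream model.

```python
import numpy as np

rng = np.random.default_rng(1)

def pool_sales(prices, n=10_000, lam=5.0, horizon=1.0):
    """Sales per (equal-length) price period in the pool model: each of n
    customers monitors the price at Poisson(lam) times and buys the first
    time the observed price is below her valuation, then leaves."""
    k = len(prices)
    edges = np.linspace(0.0, horizon, k + 1)
    sales = np.zeros(k, dtype=int)
    for v in rng.uniform(0, 1, n):               # private valuations
        t = rng.exponential(1 / lam)
        while t < horizon:
            j = np.searchsorted(edges, t, side="right") - 1
            if prices[j] < v:
                sales[j] += 1                    # unit demand: customer exits
                break
            t += rng.exponential(1 / lam)
    return sales

print(pool_sales([0.5, 0.5, 0.5, 0.5]))  # decreasing, roughly [3550, 1020, 290, 85]
```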
This problem is also related to _Reinforcement Learning_ (RL) for _Partially Observable Markov Decision Processes (POMDPs)_. In fact, we can encode the state using (i) the remaining time and (ii) the remaining customers in each valuation group. Moreover, we only observe the total number of sales (across all valuation groups) in any interval of time, which gives only partial information about the current state. However, known results for learning POMDPs are not applicable since they (i) require special structures that do not hold in our problem, (ii) usually rely on revisiting the states, which is infeasible here as the state evolution is unidirectional, and (iii) do not leverage the special structure of our problem. We provide a detailed discussion in the literature review; see Section 1.2.

In order to address these challenges, we consider a single-item revenue maximization problem where a **pool** of unit-demand, non-strategic customers interact repeatedly with a single seller. Each customer monitors the price intermittently according to an independent Poisson process and makes a purchase if she observes a price lower than her private valuation, whereupon she leaves the market permanently. We design an efficient algorithm that computes a nearly optimal non-adaptive policy for the unknown demand. Furthermore, we also propose a learn-then-earn policy with vanishing regret.

### Our Contribution

We initiate the study of dynamic pricing under a _pool-based_ model and present the following results.

1. **A Novel Model.** We introduce a novel _pool_-based pricing model: Each customer monitors the price according to an independent Poisson process, makes a purchase when the observed price is below the valuation, and leaves the market _permanently_. In contrast to the _stream_-based model in most existing work, our model better encapsulates the key features of many retailing scenarios where the customers have unit demand. We show that this problem is tractable when the instance is known, through the following results. a) **Price Monotonicity.** We show that the price sequence in any optimal non-adaptive policy is non-increasing; see Proposition 2.3. b) **Optimal Non-adaptive Policy.** We present an efficient algorithm that computes the optimal non-adaptive policy; see Theorem 2.5.

2. **Optimal Algorithm for Non-adaptive Policy.** We first consider _non-adaptive_ policies, i.e., policies that predetermine how the price changes, regardless of observed demands. These policies are particularly compelling and practical because of their operational simplicity. We provide a _complete_ settlement of this setting by showing the following results. a) **A \(k\)-Competitive Algorithm.** We present an efficient algorithm that takes a family of instances as input and returns _one_ non-adaptive policy. We show that our algorithm is \(k\)-competitive for any family of \(k\)-price instances, i.e., the output policy is guaranteed to procure a \((1/k)\)-fraction of the expected revenue achievable by any (possibly adaptive) policy with _full_ knowledge of the true instance; see Theorem 3.1. b) **A \((1+\log\rho)\)-Competitive Algorithm.** The above guarantee is weak for large \(k\). To mitigate this, we propose a variant of our algorithm that restricts its attention to a subset of prices. We show that this algorithm is \((1+\log\rho)\)-competitive, where \(\rho\) is the ratio between the highest and lowest prices; see Theorem 3.5. c) **Optimality.** Our algorithm achieves the (maximin) optimal competitive ratio.
Specifically, for each \(k\geq 1\), we construct a family of \(k\)-price instances on which no non-adaptive policy guarantees more than a \(1/k\) fraction of the optimal revenue on _all_ instances in this family; see Theorem 3.6.

3. **Adaptive Policy with Sublinear Regret.** We present an adaptive policy with \(\tilde{O}(k^{3/4}n^{3/4})\) regret against the optimal _non-adaptive_ policy that knows the size of each valuation group, given any set of \(k\) prices. This is achieved by combining the following components. a) **Learn-then-earn via Debiasing.** We propose a _learn-then-earn_ policy that estimates the size of each valuation group. Unlike the stream model (which, in this case, is equivalent to _multi-armed bandits_ (MAB)), we face an additional challenge of _confounding_ observations: At price \(p\), customers with valuations greater than \(p\) may also make a purchase, but we do not observe the valuations of those who made purchases. We devise an unbiased estimator that circumvents this issue by accounting for the (estimated) number of remaining customers from each valuation group. A naive analysis gives an \(\tilde{O}(kn^{3/4})\) regret bound; see Theorem 4.2. b) **\(o(k)\) Regret via Martingale Concentration.** Unlike in the stream model, in our problem a naive analysis only yields _linear_ dependence on \(k\). This is essentially because the confounding effect _accumulates_ over time. As a key technical step, we construct a supermartingale and use the Azuma-Hoeffding inequality to show that the estimation error scales as \(\tilde{O}(\sqrt{k})\). This leads to an improved regret bound of \(\tilde{O}(k^{3/4}n^{3/4})\).

### Literature Review

Our work is related to the following lines of research.

**Dynamic Pricing in the Stream Model.** The stream model has been extensively studied since the seminal work of [11], which focused on characterizing the optimal policy under a known demand model. The problem is particularly intriguing when the demand model is unknown, where the seller must balance learning and earning [13]. Various fundamental aspects have also been investigated, including finite inventory (Besbes and Zeevi, 2009; Babaioff et al., 2015), joint inventory-pricing control (Chen and Simchi-Levi, 2004), customer choice models (Broder and Rusmevichientong, 2012), personalization (Ban and Keskin, 2021), and non-stationarity (Besbes and Zeevi, 2011), just to name a few. For a comprehensive overview, the reader can refer to the survey by (den Boer, 2015). Although the stream model is broadly applicable in many contexts, in this work we aim to understand the pricing problem from an alternative perspective through the pool-based model.

**Pricing with Repeated Interactions.** In the stream model, each customer interacts with the seller only once. On the other hand, there is substantial literature where customers engage with the seller multiple times, as in our model. However, these studies differ from ours in two critical ways: (i) they are dedicated primarily to analyzing customers' strategic behavior, often assuming known model dynamics, and (ii) they focus on characterizing the market equilibrium rather than finding policies with provable guarantees. For example, (Besanko and Winston, 1990) considered a pool-based model similar to ours but focused on characterizing the subgame perfect Nash equilibrium. (Su, 2007) assumed that the customers are impatient, available from the beginning, and strategically wait for markdowns.
(Correa et al., 2016) also considered the pool-based model, but focused on _pre-announced_ pricing policies for forward-looking customers. (Wang, 2016) studied the reference effect in intertemporal pricing, where customer utility depends on past prices.

**Markdown Pricing.** As we will soon see, any non-adaptive policy in our problem has a non-increasing price sequence. In revenue management, these policies are often referred to as _price skimming_ or _markdown_ policies. Existing work usually assumes that the demand model is known. The pool-based model has been extensively studied in the special case of \(\lambda=\infty\); see, e.g., Section 5.5.1 of (Talluri and Van Ryzin, 2006). Furthermore, (Smith and Achabal, 1998; Caro and Gallien, 2012; Heching et al., 2002) considered markdown optimization under known demand. There is also a recent line of research that studies markdown policies with _unknown_ demand models; see, e.g., (Chen, 2021; Jia et al., 2021) and (Jia et al., 2022). Unlike our work, these works view monotonicity as a _constraint_ rather than as a property of the model's optimal solution.

**Partially Observable Reinforcement Learning.** Our problem can be reformulated as a _Markov Decision Process_ (MDP). In fact, we can characterize the state by (i) the remaining customers in each valuation group and (ii) the remaining time. However, a key challenge is that the seller only observes the total demand, not the demand from each valuation group. One may introduce a prior distribution and reformulate this problem as a Partially Observable MDP (POMDP). However, classical hardness results suggest that learning in POMDPs can be (both computationally and statistically) intractable even in simple settings (Krishnamurthy et al., 2016). Recent results for learning POMDPs are not applicable to our problem for multiple reasons. First, they require special structures, such as _block MDPs_ (Du et al., 2019; Krishnamurthy et al., 2016) or _decodable MDPs_ (Efroni et al., 2022), that do not hold in our problem. Second, they usually rely on revisiting states, which is not feasible in our problem, since state evolution is unidirectional - the number of customers can only decrease, and hence we do not observe the same state twice. Finally, our results exploit the structure of our problem, which these works would ignore.

## 2 Model and Preliminaries

We now formally describe our model. Consider a finite continuous time horizon, whose length is normalized to 1. There are \(n\) customers with private valuations taken from a known set \(\{v_{i}\}_{i\in[k]}\) where \(v_{1}\geq\ldots\geq v_{k}\). There are \(n_{i}\) customers in the \(i\)-th valuation group, all having valuation \(v_{i}\). Customer \(j\) monitors the price according to an independent Poisson process \((N_{s}^{j})_{s\in[0,1]}\) with a homogeneous rate \(\lambda>0\). An _instance_ \(\mathcal{I}\) is specified by a tuple \((\lambda,\{n_{i}\}_{i\in[k]},\{v_{i}\}_{i\in[k]})\).

**Policy.** A pricing _policy_ is a stochastic process \(X=(X_{t})_{t\in[0,1]}\) taking values in the price set \(V:=\{v_{i}\}_{i\in[k]}\). A policy is required to be _non-anticipating_, i.e., the price depends only on the "history". Formally, this means that \(X\) is adapted to the filtration \((\mathcal{F}_{t})\) where \(\mathcal{F}_{t}=\sigma(\{N_{s}^{j}:j\in[n],s\in[0,t]\})\).

**Customer Behavior.** Each customer \(j\) makes a purchase the _first_ time the observed price is less than or equal to her valuation \(v_{j}\).
To formalize this, we suppress \(j\) for now and let \((Y_{\ell})_{\ell=1,2,\ldots}\) be i.i.d. exponential random variables with mean \(1/\lambda\), representing the time lags between the monitoring events of this customer. Under this notation, \(T_{\ell}:=\sum_{i=1}^{\ell}Y_{i}\) is the \(\ell\)-th time at which the price is monitored by the customer. If the price is ever at most the valuation when monitored, i.e., if \(\{\ell\geq 1:X_{T_{\ell}}\leq v\}\neq\emptyset\), then a purchase is made at time \(T_{L}\) where \(L:=\min\{\ell\geq 1:X_{T_{\ell}}\leq v\}.\) The customer immediately leaves the market once a purchase is made. We can now formally define the revenue.

**Definition 2.1** (Revenue).: Let \(X=(X_{s})_{s\in[0,1]}\) be a policy. For each customer \(j\in[n]\), let \(\tau_{j}\in[0,1]\) be the time when customer \(j\) makes a purchase, and set \(\tau_{j}=\infty\) if she never purchases. Then, the (random) _revenue_ is \(R_{X}:=\sum_{j\in[n]}X_{\tau_{j}}\cdot\mathbf{1}(\tau_{j}\leq 1)\).

A compelling class of policies is the class of non-adaptive policies, where the prices are determined upfront, regardless of the purchase events. These policies are widely applied in practice due to their simplicity and effectiveness; see, e.g., [11].

**Definition 2.2** (Non-adaptive Policy).: A policy \((X_{s})_{s\in[0,1]}\) is _non-adaptive_ if for any \(s\), the random variable \(X_{s}\) is a constant.

### Optimization Under Known Demand

When the sizes of the valuation groups are known, the problem is relatively easy to handle, at least in the non-adaptive setting. We first show that the price sequence in any optimal non-adaptive policy is non-increasing over time. A policy with this property is often referred to as a _markdown_ policy in revenue management.

**Proposition 2.3** (Price Monotonicity).: _Suppose \((X_{s})_{s\in[0,1]}\) is an optimal non-adaptive policy. Then, \(X_{s}\geq X_{t}\) almost surely (a.s.) whenever \(0\leq s<t\leq 1\)._

This structural result follows from a simple swapping argument. Suppose the price sequence is not non-increasing, say, the price is \(p_{L}\) in some interval \([t-\varepsilon,t]\) and increases to \(p_{H}\) in \([t,t+\varepsilon]\), where \(\varepsilon>0\). We show that the expected revenue does not decrease if we swap the prices \(p_{H},p_{L}\) in the two intervals. To see this, note that customers with valuations lower than \(p_{H}\) are not affected by this swap, since they can only buy the product at price \(p_{L}\) within the time interval \([t-\varepsilon,t+\varepsilon]\). On the other hand, we can argue that if a customer has a valuation of at least \(p_{H}\), then after the swap she is more likely to purchase at price \(p_{H}\).

We will therefore restrict our attention to non-adaptive markdown policies subsequently. Each policy in this class can be specified by a sequence \((t_{1},\cdots,t_{k})\), where the policy selects the price \(v_{i}\) from time \(t_{i}\) to \(t_{i+1}\). Conveniently, we have a closed-form formula for the expected revenue of any non-adaptive markdown policy.

**Proposition 2.4** (Expected Revenue of Markdown Policy).: _For any instance \(\mathcal{I}=(\lambda,\{n_{i}\}_{i=1}^{k},\{v_{i}\}_{i=1}^{k})\) and non-adaptive markdown policy \(\pi=(t_{i})_{i\in[k]}\), define the revenue function \(\mathrm{Rev}(\pi,\mathcal{I})\) as_

\[\sum_{i\in[k]}n_{i}\sum_{j:i\leq j\leq k}v_{j}e^{-\lambda(t_{j}-t_{i})}\left(1-e^{-\lambda(t_{j+1}-t_{j})}\right),\]

_where \(t_{k+1}:=1\). Then,_

\[\mathbb{E}[R_{\pi}]=\mathrm{Rev}(\pi,\mathcal{I}).\]

Each term in the inner summation corresponds to the expected revenue from a customer with valuation \(v_{i}\) purchasing in the \(j\)-th time interval. The term \(e^{-\lambda(t_{j}-t_{i})}\) is the probability that a customer of valuation \(v_{i}\) remains in the market until time \(t_{j}\), and \(1-e^{-\lambda(t_{j+1}-t_{j})}\) is the probability that the customer makes a purchase during the \(j\)-th interval, given that she is still in the market.
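Proposition 2.4 is easy to check numerically; below is a minimal sketch that evaluates the closed form and verifies it against a direct simulation of the pool model (the instance and parameters are illustrative):

```python
import numpy as np

def rev_closed_form(t, v, n, lam=5.0):
    """Expected revenue of the markdown policy (t_1,...,t_k), Prop. 2.4."""
    k = len(v)
    tt = list(t) + [1.0]
    return sum(n[i] * v[j] * np.exp(-lam * (tt[j] - tt[i]))
               * (1 - np.exp(-lam * (tt[j + 1] - tt[j])))
               for i in range(k) for j in range(i, k))

def rev_simulated(t, v, n, lam=5.0, trials=10_000, seed=0):
    """Monte-Carlo estimate of the same expected revenue."""
    rng = np.random.default_rng(seed)
    tt = np.array(list(t) + [1.0])
    total = 0.0
    for _ in range(trials):
        for i, group_size in enumerate(n):
            for _ in range(group_size):
                s = rng.exponential(1 / lam)      # first monitoring time
                while s < 1.0:
                    j = np.searchsorted(tt, s, side="right") - 1
                    if v[j] <= v[i]:              # price at or below valuation
                        total += v[j]
                        break
                    s += rng.exponential(1 / lam)
    return total / trials

t, v, n = [0.0, 0.4, 0.7], [1.0, 0.6, 0.3], [2, 3, 5]
print(rev_closed_form(t, v, n))  # the two values agree up to sampling noise
print(rev_simulated(t, v, n))
```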
Monotonicity enables us to compute an optimal non-adaptive policy.

**Theorem 2.5** (Optimal Non-adaptive Policy).: _There is a polynomial time algorithm that computes an optimal non-adaptive policy for any instance \(\mathcal{I}=(\lambda,\{n_{i}\}_{i=1}^{k},\{v_{i}\}_{i=1}^{k})\)._

So far, we have shown that our problem is tractable if the instance is known. In Sections 3 and 4 we consider the scenario where the instance is unknown.

## 3 Competitive Non-adaptive Policy

For new products, the seller usually has only incomplete knowledge of the true model. An important class of policies is non-adaptive policies, i.e., policies that predetermine the price trajectory regardless of realized purchases. Non-adaptive policies are widely applied in the real world due to their operational simplicity; see Section 5 of [12]. In this section, we consider how to compute a _non-adaptive_ policy given only the monitoring rate and the price space. We provide a _complete_ settlement of this setting by presenting an algorithm that computes a non-adaptive policy guaranteeing a best-possible \(1/k\) fraction of the optimal revenue. For any instance \(\mathcal{I}\), denote by \(\mathrm{OPT}(\mathcal{I})\) the optimal revenue achievable by any non-adaptive policy.

**Theorem 3.1** (Competitive Ratio Lower Bound).: _There is an algorithm that takes as input the price space \(\{v_{i}\}_{i\in[k]}\) and the monitoring rate \(\lambda\), and computes in polynomial time a non-adaptive policy \(\pi\) such that for any instance \(\mathcal{I}=(\lambda,\{n_{i}\}_{i=1}^{k},\{v_{i}\}_{i=1}^{k})\), we have_

\[\frac{\mathrm{Rev}(\pi,\mathcal{I})}{\mathrm{OPT}(\mathcal{I})}\geq\frac{1}{k}.\]

We outline the proof and defer the details to the appendix. A natural idea is to write \(\mathrm{Rev}(\pi,\mathcal{I})/\mathrm{OPT}(\mathcal{I})\) as a function \(f(t_{1},\ldots,t_{k};n_{1},\ldots,n_{k})\) and then solve a _bilevel_ program

\[\mathrm{(BP1)}\qquad\max_{t_{1},\ldots,t_{k}}\min_{n_{1},\ldots,n_{k}}f(t_{1},\ldots,t_{k};n_{1},\ldots,n_{k}),\]
\[\mathrm{such\ that}\quad 0\leq t_{i}\leq t_{j}\leq 1,\ \forall i<j,\ i,j\in[k].\]

However, this approach fails since most results on bilevel optimization assume certain structures such as concavity-convexity, but our \(f\) is neither convex in the \(n_{i}\)'s nor concave in the \(t_{i}\)'s.

### Upper Bounding the Optimal Revenue

An alternative idea is to find a closed-form formula for the denominator for any given \((n_{i})\)'s, and thereby reduce the bilevel problem to a single-level problem. However, this approach does not work either, since finding a closed-form solution for \(\operatorname{OPT}(\mathcal{I})\) is a formidable task. To circumvent this, we introduce the following upper bound on \(\operatorname{OPT}(\mathcal{I})\).
## 3 Competitive Non-adaptive Policy
For new products, the seller usually has only incomplete knowledge about the true model. An important class of policies is that of non-adaptive policies, i.e., policies that predetermine the price trajectory regardless of realized purchases. Non-adaptive policies are widely applied in the real world due to their operational simplicity; see Section 5 of [12]. In this section, we consider how to compute a _non-adaptive_ policy given only the monitoring rate and the price space. We provide a _complete_ settlement of this setting by presenting an algorithm that computes a non-adaptive policy guaranteeing a best-possible \(1/k\) fraction of the optimal revenue. For any instance \(\mathcal{I}\), denote by \(\mathrm{OPT}(\mathcal{I})\) the optimal revenue achievable by any non-adaptive policy. **Theorem 3.1** (Competitive Ratio Lower Bound).: _There is an algorithm that takes as input the price space \(\{v_{i}\}_{i\in[k]}\) and the monitoring rate \(\lambda\), and computes in polynomial time a non-adaptive policy \(\pi\) such that for any instance \(\mathcal{I}=(\lambda,\{n_{i}\}_{i=1}^{k},\{v_{i}\}_{i=1}^{k})\), we have_ \[\frac{\mathrm{Rev}(\pi,\mathcal{I})}{\mathrm{OPT}(\mathcal{I})}\geq\frac{1}{k}.\] We outline the proof and defer the details to the appendix. A natural idea is to write \(\mathrm{Rev}(\pi,\mathcal{I})/\mathrm{OPT}(\mathcal{I})\) as a function \(f(t_{1},\ldots,t_{k};n_{1},\ldots,n_{k})\) and then solve a _bilevel_ program \[\mathrm{(BP1)}\quad\max_{t_{1},\ldots,t_{k}}\min_{n_{1},\ldots,n_{k}}f(t_{1},\ldots,t_{k};n_{1},\ldots,n_{k}),\] \[\mathrm{such\ that}\quad 0\leq t_{i}\leq t_{j}\leq 1,\forall i<j,\ i,j\in[k].\] However, this approach fails since most results on bilevel optimization assume certain structures such as concavity-convexity, but our \(f\) is neither convex in the \(n_{i}\)'s nor concave in the \(t_{i}\)'s.
### Upper Bounding the Optimal Revenue
An alternative idea is to find a closed-form formula for the denominator for any given \((n_{i})\)'s, and reduce the bilevel problem to a single-level problem. However, this approach does not work either, since finding a closed-form expression for \(\operatorname{OPT}(\mathcal{I})\) is a formidable task. To circumvent this, we introduce the following upper bound on \(\operatorname{OPT}(\mathcal{I})\). **Lemma 3.2** (Upper Bound on \(\operatorname{OPT}(\mathcal{I})\)).: _For any instance \(\mathcal{I}=(\lambda,\{n_{i}\}_{i=1}^{k},\{v_{i}\}_{i=1}^{k})\), we define \(\operatorname{UB}(\mathcal{I}):=\sum_{i\in[k]}n_{i}v_{i}\cdot(1-e^{-\lambda}).\) Then, for any policy \(\pi\), we have_ \[\mathbb{E}[R_{\pi}]\leq\operatorname{UB}(\mathcal{I}).\] To see this, note that if a customer has valuation \(v\), then the maximum expected revenue from this customer is at most \(v(1-e^{-\lambda})\), which is attained by the policy that always selects price \(v\). The expression \(\operatorname{UB}(\mathcal{I})\) is simply the sum of this upper bound over all customers. On the other hand, it should be noted that \(\operatorname{UB}(\mathcal{I})\) can be much greater than \(\operatorname{OPT}(\mathcal{I})\). In fact, \(\operatorname{UB}(\mathcal{I})\) is attained by a _personalized_ policy, i.e., prices for different customers may differ, whereas \(\operatorname{OPT}(\mathcal{I})\) is defined over _non-personalized_ policies.
### Linearization
With this upper bound, we next focus on the bilevel optimization problem where \(\operatorname{OPT}(\mathcal{I})\) is replaced with \(\operatorname{UB}(\mathcal{I})\). Explicitly, we consider \[(\text{BP2})\quad\max_{t_{1},\ldots,t_{k}}\min_{n_{1},\ldots,n_{k}}\frac{\operatorname{Rev}(\pi,\mathcal{I})}{\operatorname{UB}(\mathcal{I})},\] \[\text{such that}\quad 0\leq t_{i}\leq t_{j}\leq 1,\forall i<j,\ i,j\in[k].\] The above bilevel problem is still not readily solvable since \(\operatorname{Rev}(\pi,\mathcal{I})\) and \(\operatorname{UB}(\mathcal{I})\) are both _non-linear_ functions. To circumvent this, we consider a _linear surrogate_ for each of them, motivated by Taylor's expansion. **Definition 3.3** (Linear Surrogate).: For any instance \(\mathcal{I}=(\lambda,\{n_{i}\}_{i\in[k]},\{v_{i}\}_{i\in[k]})\) and non-adaptive policy \(\pi=(t_{i})\), we define the linear surrogates of \(\operatorname{UB}(\mathcal{I})\) and \(\operatorname{Rev}(\pi,\mathcal{I})\) as \[\operatorname{UB}^{\prime}(\mathcal{I}):=\sum_{i\in[k]}n_{i}v_{i}\lambda,\qquad\operatorname{Rev}^{\prime}(\pi,\mathcal{I}):=\sum_{i\in[k]}n_{i}\sum_{j\in[k];j\geq i}\lambda v_{j}\left(t_{j+1}-t_{j}\right).\] We show that this linearization only decreases the objective in (BP2). Thus, a lower bound on the linearized bilevel program implies a lower bound on (BP2). **Lemma 3.4** (Linearization Reduces the Objective).: _For any instance \(\mathcal{I}=(\lambda,\{n_{i}\}_{i=1}^{k},\{v_{i}\}_{i=1}^{k})\) and non-adaptive policy \(\pi\), we have_ \[\frac{\operatorname{Rev}(\pi,\mathcal{I})}{\operatorname{UB}(\mathcal{I})}\geq\frac{\operatorname{Rev}^{\prime}(\pi,\mathcal{I})}{\operatorname{UB}^{\prime}(\mathcal{I})}.\] To see why this is true, observe that the function \(h(x):=\frac{1-e^{-x}}{x}\) is decreasing in \(x\). For any positive \(x\leq y\), we have \((1-e^{-x})/x\geq(1-e^{-y})/y\), which rearranges to \[\frac{1-e^{-x}}{1-e^{-y}}\geq\frac{x}{y}.\]
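Lemma 3.4 is also easy to sanity-check numerically (this is an illustration, not a proof; all names are ours, and we re-define a compact `rev` so the snippet is self-contained): on randomly drawn instances, the exact ratio should dominate the linearized one.

```python
import math, random

def rev(lam, n, v, t):
    """Rev(pi, I) from Proposition 2.4, with t_{k+1} = 1."""
    k, ts = len(v), list(t) + [1.0]
    return sum(n[i] * v[j] * math.exp(-lam * (ts[j] - ts[i]))
               * (1 - math.exp(-lam * (ts[j + 1] - ts[j])))
               for i in range(k) for j in range(i, k))

def rev_lin(lam, n, v, t):
    """Linear surrogate Rev'(pi, I) from Definition 3.3."""
    k, ts = len(v), list(t) + [1.0]
    return sum(n[i] * lam * v[j] * (ts[j + 1] - ts[j])
               for i in range(k) for j in range(i, k))

random.seed(1)
for _ in range(5):
    lam, k = random.uniform(0.5, 5), 3
    v = sorted([random.uniform(1, 10) for _ in range(k)], reverse=True)
    n = [random.randint(1, 20) for _ in range(k)]
    t = sorted([0.0] + [random.random() for _ in range(k - 1)])
    ub = sum(ni * vi for ni, vi in zip(n, v)) * (1 - math.exp(-lam))
    ub_lin = sum(ni * vi for ni, vi in zip(n, v)) * lam
    # Lemma 3.4 predicts the first ratio is at least the second.
    print(rev(lam, n, v, t) / ub, ">=", rev_lin(lam, n, v, t) / ub_lin)
```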
### Reducing to a Linear Program
With Lemma 3.4, we now further simplify the bilevel program (BP2) by replacing the objective with the ratio between the linearized functions. This results in the following bilevel program: \[(\text{BP3})\quad\max_{t_{1},\ldots,t_{k}}\min_{n_{1},\ldots,n_{k}}\frac{\sum_{i\in[k]}n_{i}\sum_{j\in[k];j\geq i}v_{j}\left(t_{j+1}-t_{j}\right)}{\sum_{i}n_{i}v_{i}},\] \[\text{such that}\quad 0\leq t_{i}\leq t_{j}\leq 1,\forall i<j,\ i,j\in[k].\] We construct an optimal solution to (BP3) by reduction to a linear program (LP). Observe that the inner minimum is always achieved by a vector with exactly one non-zero entry. More precisely, it is given by \(n_{i}=n\cdot\mathbf{1}(i=i^{*})\) where \[i^{*}=\arg\min\left\{\frac{\sum_{j=i}^{k}v_{j}(t_{j+1}-t_{j})}{v_{i}}:i\in[k]\right\}.\] (For simplicity, we assume \(i^{*}\) is unique; this is not essential to the analysis.) Thus, (BP3) can be reformulated as \[\max_{(t_{i}),c}\quad c\] \[\text{such that}\quad c\leq\frac{\sum_{j\in[k];j\geq i}v_{j}\left(t_{j+1}-t_{j}\right)}{v_{i}},\,\forall i\in[k],\] \[0\leq t_{i}\leq t_{i+1}\leq 1,\forall i\in[k].\] One can easily verify that the optimum is attained when all the inequalities are binding. In this case, the optimal solution \((t_{i}^{*})\) satisfies \[t_{i+1}^{*}-t_{i}^{*}=\left(1-\frac{v_{i+1}}{v_{i}}\right)(1-t_{k}^{*}),\quad\forall i<k. \tag{1}\] This solves to \[t_{k}^{*}=1-\frac{1}{k-\sum_{1\leq i\leq k-1}\frac{v_{i+1}}{v_{i}}}.\] Finding \(t_{i}^{*}\) for \(i<k\) can then be done by backward substitution using equation (1). The resulting performance guarantee is given by \[\mathrm{CR}(v_{1},\ldots,v_{k})=1-t_{k}^{*}=\frac{1}{k-\sum_{i=1}^{k-1}v_{i+1}/v_{i}}.\] So far, we have a performance guarantee for fixed \(v_{1},\ldots,v_{k}\). Next, we characterize the worst-case performance guarantee over all \(v_{i}\)'s, i.e., the worst-case competitive ratio. By a simple calculation, one can verify that \(\mathrm{CR}(v_{1},\ldots,v_{k})\) is at least \(1/k\) for any \(v_{1},\ldots,v_{k}\).
### Competitive Ratio for Small Aspect Ratio
Note that as \(k\) grows, the above result gets weaker and weaker. This motivates us to employ a core set for the valuation levels. More precisely, let \(a>0\) and \(b\) be the minimum and maximum of all the \(v_{i}\) respectively. For any \(\varepsilon>0\), we can partition the interval \([a,b]\) into subintervals \([a(1+\varepsilon)^{j-1},a(1+\varepsilon)^{j})\) for \(j=1\) to \(\log(b/a)/\log(1+\varepsilon)\). Further, we compute the non-adaptive policy based on the valuation set \(\{a(1+\varepsilon)^{j-1}\}\) for \(j=1\) to \(\log(b/a)/\log(1+\varepsilon)\), and derive another competitive ratio bound using these. **Theorem 3.5** (Competitive Ratio Lower Bound).: _For any instance \(\mathcal{I}=(\lambda,\{n_{i}\}_{i=1}^{k},\{v_{i}\}_{i=1}^{k})\) where \(\{n_{i}\}_{i=1}^{k}\) is unknown to the seller, we can compute in polynomial time a non-adaptive policy \(\pi=(t_{1},\cdots,t_{k+1})\) such that_ \[\frac{\mathrm{Rev}(\pi,\mathcal{I})}{\mathrm{OPT}(\mathcal{I})}\geq\frac{1}{1+\log(v_{1}/v_{k})}.\] Since the optimal revenue on the whole valuation set, \(\mathrm{OPT}(\mathcal{I})\), is at most \((1+\varepsilon)\) times the optimal revenue on the core set, the competitive ratio we derive on the core set is at least \[\frac{1}{(1+\varepsilon)(k-\sum_{i=1}^{k-1}v_{i+1}/v_{i})},\] where \(k=\log(v_{1}/v_{k})/\log(1+\varepsilon)\) for the core set, and \(v_{i+1}/v_{i}=1/(1+\varepsilon)\) for \(i\in[k-1]\). Plugging in these expressions for \(k\) and \(v_{i+1}/v_{i}\), the competitive ratio on the core set is \[\frac{1}{1+\varepsilon\log(v_{1}/v_{k})/\log(1+\varepsilon)}.\] Note that \(\varepsilon/\log(1+\varepsilon)\) is increasing in \(\varepsilon\) and tends to \(1\) as \(\varepsilon\) goes to \(0\); therefore, the competitive ratio is at least \[\frac{1}{1+\varepsilon\log(v_{1}/v_{k})/\log(1+\varepsilon)}\geq\frac{1}{1+\log(v_{1}/v_{k})}.\]
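Since \(1-t_{k}^{*}\) is known in closed form, the increments in equation (1) determine all the switch times. The sketch below (function names are ours, and we take \(t_{1}^{*}=0\), consistent with the closed form above) computes \((t_{i}^{*})\) and the guarantee \(\mathrm{CR}(v_{1},\ldots,v_{k})\), and also builds the geometric core set used in Theorem 3.5.

```python
import math

def competitive_policy(v):
    """Switch times (t_1*, ..., t_k*) from equation (1), taking t_1* = 0,
    together with the guarantee CR(v_1,...,v_k) = 1/(k - sum v_{i+1}/v_i)."""
    k = len(v)
    cr = 1.0 / (k - sum(v[i + 1] / v[i] for i in range(k - 1)))
    t = [0.0]
    for i in range(k - 1):   # t_{i+1} - t_i = (1 - v_{i+1}/v_i) * (1 - t_k*)
        t.append(t[-1] + (1.0 - v[i + 1] / v[i]) * cr)
    return t, cr

def core_set(a, b, eps):
    """Geometric grid {a(1+eps)^(j-1)} covering [a, b], as in Section 3.4."""
    m = math.ceil(math.log(b / a) / math.log(1 + eps))
    return [a * (1 + eps) ** j for j in range(m + 1)]

prices = sorted(core_set(1.0, 16.0, 0.5), reverse=True)  # decreasing order
t_star, cr = competitive_policy(prices)
print(prices, t_star, cr)  # cr is at least 1/k for any decreasing price vector
```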
### Upper-Bounding the Competitive Ratio
We also show that the above lower bound of \(1/k\) is the best possible: no algorithm can achieve a fraction larger than \(1/k\) of the optimal revenue. In Theorem 3.6, we demonstrate that for any non-adaptive policy \(\pi\), there exists an instance on which the policy achieves at most a \(\frac{1}{k}+\varepsilon\) fraction of the optimal revenue. **Theorem 3.6** (Upper Bound on Competitive Ratio).: _For any integer \(k>0\), \(\varepsilon>0\) and non-adaptive policy \(\pi\), there exists an instance \(\mathcal{I}=\mathcal{I}_{\varepsilon,k}\) such that_ \[\frac{\mathrm{Rev}(\pi,\mathcal{I})}{\mathrm{OPT}(\mathcal{I})}\leq\frac{1}{k}+\varepsilon.\] For small \(\lambda\), the error from linearization is negligible. For any integer \(k>0\), \(\varepsilon>0\) and non-adaptive policy \(\pi\), we only need to construct \(\{v_{i}\}_{i=1}^{k}\) such that the competitive ratio \(\mathrm{CR}(v_{1},\ldots,v_{k})=1/(k-\sum_{i=1}^{k-1}v_{i+1}/v_{i})\) goes to \(1/k\). Consider a geometric sequence with \(v_{1}=1\) and \(v_{i+1}=\phi v_{i}\) for \(i\in[k-1]\). For any \(\varepsilon>0\), there exists a \(\phi\) such that \(1/(k-\sum_{i=1}^{k-1}v_{i+1}/v_{i})\leq\frac{1}{k}+\varepsilon\), i.e., the ratio \(1/k\) is tight.
## 4 Low-Regret Adaptive Policy
Now we consider adaptive policies in the presence of unknown demand. Specifically, we only assume knowledge of the total number of customers \(n\), the price levels, and the monitoring rate \(\lambda\), but not the number of customers \(n_{i}\) in each valuation group. Our policies aim to simultaneously learn the demand and optimize pricing decisions given a finite time horizon. As is standard in the demand learning literature, we will analyze the _regret_ of our policy. We consider the optimal non-adaptive policy as our benchmark and denote its expected revenue under instance \(\mathcal{I}\) as \(\mathrm{OPT}(\mathcal{I})\). Thus we define the worst-case regret as follows. **Definition 4.1** (Worst-case Regret).: For a policy \(\pi\), we define its worst-case regret as \[\mathrm{Regret}(\pi):=\sup_{\mathcal{I}}\left\{\mathrm{OPT}(\mathcal{I})-\mathrm{Rev}(\pi,\mathcal{I})\right\}\] where the supremum is taken over all instances \(\mathcal{I}\) in which the total number of customers \(n\), price levels \(\{v_{i}\}_{i=1}^{k}\), and monitoring rate \(\lambda\) are fixed. The demand at each price level \(n_{i}\) may vary arbitrarily subject to the constraint \(\sum_{i\in[k]}n_{i}=n\). As the main result of this section, we present an adaptive policy with sublinear regret in both \(n\) and \(k\). **Theorem 4.2** (Adaptive Policy with Sublinear Regret).: _There exists an adaptive policy \(\pi^{\mathrm{LTE}}\) that does not know the demand at each price level and satisfies \(\mathrm{Regret}(\pi^{\mathrm{LTE}})=\widetilde{O}(k^{3/4}\cdot n^{3/4})\)._ Our regret bound is higher than the optimal \(\tilde{\Theta}(\sqrt{kn})\) regret bound (Auer et al., 2002) for the stream model (which is equivalent to \(k\)-armed bandits). This is because in the stream model, the effect of exploration is _local_ in the sense that what the seller does in an interval of time only affects the customers arriving in that interval. In contrast, the effect of an action in our model is global: regardless of how long an action lasts, it can potentially affect \(\Omega(n)\) customers. Therefore, it is reasonable not to expect the same order of regret as in the stream model. Our policy is formally described below in Algorithm 1. The policy operates in two phases.
Initially, we explore each of the price levels \(v_{1},v_{2},\ldots,v_{k-1}\) for fixed intervals of length \(s_{1},s_{2},\ldots,s_{k-1}\). When exploring the \(i\)-th price level, we keep track of the realized demand \(D_{i}\), which we use to construct estimates \(\{\hat{n}_{i}\}_{i\in[k]}\) of the original demand. From there we construct an estimated instance \(\hat{\mathcal{I}}=(\lambda,\{\hat{n}_{i}\}_{i\in[k]},\{v_{i}\}_{i\in[k]})\), and compute an optimal non-adaptive policy for this instance on a shortened horizon of length \(1-s_{\text{sum}}\), where \(s_{\text{sum}}=\sum_{i=1}^{k-1}s_{i}\) is the total exploration time.
```
Data: Partial instance (n, {v_i}_{i=1}^k, λ), exploration times (s_1, s_2, ..., s_{k-1})
Result: Policy π^LTE
// Learning phase
for i = 1, 2, ..., k-1 do
    Use price v_i for time s_i
    Observe sales D_i
end for
// Construct estimates {n̂_i}_{i=1}^k
Define the function q(x) = 1 - exp(-λx)
for i = 1, 2, ..., k-1 do
    n̂_i ← D_i / q(s_i) - Σ_{j<i} (n̂_j - D_j)
end for
n̂_k ← n - Σ_{i<k} n̂_i
// Earning phase
Î ← (λ, {n̂_i}_{i∈[k]}, {v_i}_{i∈[k]})
s_sum ← Σ_{i=1}^{k-1} s_i
Find an optimal non-adaptive policy t̂_1, ..., t̂_k for Î on the time interval [0, 1 - s_sum]
for i = 1, 2, ..., k do
    Use price v_i during times [t̂_i + s_sum, t̂_{i+1} + s_sum]
end for
```
**Algorithm 1** Learn-then-Earn Policy
### Debiasing the Demand
As is standard in MAB, we aim to construct _unbiased_ estimates of the model parameters. In Algorithm 1, we use price \(v_{1}\) for time \(s_{1}\) and track the (random) demand \(D_{1}\) during this period. Recall that each customer monitors the price with Poisson rate \(\lambda\), so \(D_{1}\sim\text{Binomial}(n_{1},q(s_{1}))\) where \(q(x):=1-\exp(-\lambda x)\). Thus, \(D_{1}/q(s_{1})\) is an unbiased estimate of \(n_{1}\). However, when we explore prices \(v_{2},v_{3},\ldots,v_{k-1}\), the situation is more complicated. There may still be active customers (i.e., customers who have not exited the market) with valuation \(v_{1}\) that purchase during this time, _confounding_ the observed demands \(D_{2},D_{3},\ldots\) in future stages. As the pivotal step, we develop a novel unbiased estimator that overcomes this issue. Starting with \(\hat{n}_{1}=D_{1}/q(s_{1})\), for each \(i=2,3,\ldots,k-1\) we recursively define \[\hat{n}_{i}=\frac{D_{i}}{q(s_{i})}-\sum_{j<i}(\hat{n}_{j}-D_{j}), \tag{2}\] and we set \(\hat{n}_{k}=n-\sum_{i<k}\hat{n}_{i}\). The first part is similar to the naive estimator we used for \(\hat{n}_{1}\), while the second part aims to remove the confounding effect of customers at higher valuations. We show that this estimator is unbiased. **Lemma 4.3** (Unbiasedness).: _Let \(s_{1},s_{2},\ldots,s_{k-1}\) be the lengths of each exploration period and \(D_{1},D_{2},\ldots,D_{k-1}\) be the realized demands. For \(i=1,\ldots,k\), we have \(\mathbb{E}[\hat{n}_{i}]=n_{i}\)._ As a quick sketch, we show this by induction on \(i<k\). The base case is obvious. For the inductive case \(1<i<k\), we observe that conditioned on \(D_{j}\) for \(j<i\), we have \(D_{i}\sim\text{Bin}(\sum_{j\leq i}n_{j}-\sum_{j<i}D_{j},q(s_{i}))\). Using this we can show \(\mathbb{E}[\hat{n}_{i}]=n_{i}-\sum_{j<i}\mathbb{E}[\hat{n}_{j}-n_{j}]\), which equals \(n_{i}\) under the inductive hypothesis.
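To build intuition for Lemma 4.3, the following sketch (entirely our own illustration) simulates the learning phase of Algorithm 1 and averages the debiased estimates over many runs; by unbiasedness, the empirical means should be close to the true group sizes.

```python
import math, random

def learning_phase(n_true, lam, s):
    """Simulate the learning phase and return the debiased estimates (2).

    n_true : true group sizes n_1, ..., n_k
    lam    : monitoring rate lambda
    s      : exploration times s_1, ..., s_{k-1}"""
    k = len(n_true)
    q = lambda x: 1.0 - math.exp(-lam * x)
    active = list(n_true)          # still-active customers in each group
    n_hat, D = [], []
    for i in range(k - 1):         # explore the i-th price level
        d = 0
        for j in range(i + 1):     # groups whose valuation is affordable
            buys = sum(random.random() < q(s[i]) for _ in range(active[j]))
            active[j] -= buys      # buyers leave the market
            d += buys
        D.append(d)
        # equation (2): debias against leftover higher-valuation customers
        n_hat.append(d / q(s[i]) - sum(n_hat[j] - D[j] for j in range(i)))
    n_hat.append(sum(n_true) - sum(n_hat))   # n_hat_k = n - sum_{i<k} n_hat_i
    return n_hat

random.seed(2)
runs = [learning_phase([30, 50, 20], lam=4.0, s=[0.05, 0.05]) for _ in range(4000)]
means = [sum(r[i] for r in runs) / len(runs) for i in range(3)]
print(means)   # approximately [30, 50, 20] by Lemma 4.3
```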
### The Case of Two Price Levels
We demonstrate the main ideas by showing a regret bound of \(\tilde{O}(n^{3/4})\) in the two-price case. We specify an exploration time \(s\in[0,1]\), then set the price \(X_{t}=v_{1}\) for all \(t\leq s\). Let \(D\) be the demand (number of sales) observed in this period. As discussed in Section 4.1, we use \(\hat{n}_{1}=D/q(s)\) and \(\hat{n}_{2}=n-\hat{n}_{1}\) as unbiased estimates of \(n_{1}\) and \(n_{2}\). Using these, we construct the estimated instance \(\hat{\mathcal{I}}=(\lambda,\{\hat{n}_{1},\hat{n}_{2}\},\{v_{1},v_{2}\})\), compute a non-adaptive policy \(\hat{\pi}\) achieving revenue \(\text{OPT}(\hat{\mathcal{I}})\) for the remaining time horizon \(1-s\), and follow it. We decompose the regret into two quantities that we bound separately. In addition to \(\hat{\mathcal{I}}\) as defined above, define \(\mathcal{I}^{\prime}=(\lambda,\{n_{1}-D,n_{2}\},\{v_{1},v_{2}\})\) as the instance which remains after observing the demand \(D\). We decompose the regret as follows. **Lemma 4.4** (Regret Decomposition).: _Define \(\eta_{1}=|\mathbb{E}[\text{Rev}(\hat{\pi},\mathcal{I}^{\prime})]-\mathbb{E}[\text{OPT}(\hat{\mathcal{I}})]|\) and \(\eta_{2}=|\mathbb{E}[\text{OPT}(\hat{\mathcal{I}})]-\text{OPT}(\mathcal{I})|\). Then,_ \[\text{Regret}(\pi^{\text{LTE}})\leq\eta_{1}+\eta_{2}.\] The proof follows straightforwardly by noting that \(\mathbb{E}[\mathrm{Rev}(\hat{\pi},\mathcal{I}^{\prime})]\) is a lower bound on the revenue of our policy, since it only accounts for the revenue in the _earning_ phase. We will show that for suitable \(s\), both terms above are \(\tilde{O}(n^{3/4})\). To this end, we first show that \(\eta_{1}\) grows linearly in \(s\). For this we use two observations. First, our estimates are unbiased and the revenue is linear in the size of each valuation group. Second, the impact of the exploration phase (which has length \(s\)) is linear in \(s\). **Lemma 4.5** (Analysis of \(\eta_{1}\)).: _We have \(\eta_{1}=O(\lambda nv_{1}s)\)._ To bound \(\eta_{2}\), we use concentration inequalities to show that our estimates are close to the target values with high probability. **Lemma 4.6** (Analysis of \(\eta_{2}\)).: _For our policy \(\pi^{\mathrm{LTE}}\), we have \(\eta_{2}=O(v_{1}\sqrt{n\log(n)}/\lambda s)+o(1)\)._ At a high level, we apply Hoeffding's inequality to \(D\sim\mathrm{Binomial}(n_{1},q(s))\), and combine this with the approximation \(q(s)=1-\exp(-\lambda s)\approx\lambda s\) for small \(\lambda s\). The following lemma states that if two functions are point-wise close, then their maximums are also close. This essentially follows from the triangle inequality. **Lemma 4.7**.: _Let \(f,g\) be real-valued functions defined on any set \(\mathcal{X}\). If for all \(x\in\mathcal{X}\), we have \(|f(x)-g(x)|\leq\epsilon\), then \(|\max_{x}f(x)-\max_{x}g(x)|\leq 3\epsilon\)._ Lemma 4.6 then follows by choosing the functions \(f=\mathrm{Rev}(\cdot,\hat{\mathcal{I}})\) and \(g=\mathrm{Rev}(\cdot,\mathcal{I})\), and choosing \(\varepsilon\) to be the bound implied by Hoeffding's inequality. Now we complete the analysis for the two-price case. From Lemma 4.4, Lemma 4.5 and Lemma 4.6 we have \[\mathrm{Regret}(\pi^{\mathrm{LTE}})\leq O\left(\lambda nv_{1}s+\frac{v_{1}\sqrt{n\log(n)}}{\lambda s}\right)+o(1).\] The \(\tilde{O}(n^{3/4})\) bound follows by taking \(s=\tilde{\Theta}(n^{-1/4}/\lambda)\).
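For completeness, here is the standard balancing computation behind this choice of \(s\) (our own unpacking of the last step, ignoring constants). Setting the two terms equal, \[\lambda nv_{1}s=\frac{v_{1}\sqrt{n\log n}}{\lambda s}\iff s^{2}=\frac{\sqrt{n\log n}}{\lambda^{2}n}\iff s=\frac{(\log n)^{1/4}}{\lambda\,n^{1/4}}=\tilde{\Theta}\left(\frac{n^{-1/4}}{\lambda}\right),\] and substituting back into either term gives \(\lambda nv_{1}s=v_{1}\,n^{3/4}(\log n)^{1/4}=\tilde{O}(n^{3/4})\).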
### Extending to \(k\) Price Levels
We briefly sketch how we extend the analysis from two price levels to \(k\) price levels. Our current analysis for the two-price setting only leads to a bound that depends linearly on \(k\). To achieve a sublinear dependence on \(k\), we need to be more careful in our analysis of the _total_ error in our estimates \(\hat{n}_{i}\). Due to the dependencies that exist between our estimates \(\hat{n}_{i}\), we cannot directly apply concentration inequalities to control the total error. Instead, we employ a more careful analysis, showing that the sequence \(Z_{i}=\sum_{j\leq i}(\hat{n}_{j}-n_{j})-\alpha_{i}\) is a supermartingale for an appropriate choice of \(\alpha_{i}>0\). Then, we apply the Azuma-Hoeffding inequality to obtain a bound on the total error that is sublinear in \(k\). Using this in the rest of our analysis leads to the \(\tilde{O}(k^{3/4}n^{3/4})\) bound on the regret. We defer the details to the appendix.
## 5 Future Work
This work opens up a wealth of new directions and open problems.
1. Lower bounds for the adaptive setting: Known techniques for deriving regret lower bounds for MAB turn out to be ineffective for our problem, and new proof strategies would have to be developed.
2. Unknown \(\lambda\): In reality, the monitoring rate \(\lambda\) may be unknown and must also be learned online. It is not clear how to generalize our LTE policy to handle unknown \(\lambda\).
3. Inventory constraint: The problem becomes substantially harder if the inventory is finite, which caps our learning process.
4. New arrivals: In reality, there may be new arrivals apart from the initial group of customers, making the problem significantly harder. For example, in this case the monotonicity result no longer holds.
2304.14843
Gain-Loss Hedging and Cumulative Prospect Theory
Two acts are comonotonic if they yield high payoffs in the same states of nature. The main purpose of this paper is to derive a new characterization of Cumulative Prospect Theory (CPT) through simple properties involving comonotonicity. The main novelty is a concept dubbed gain-loss hedging: mixing positive and negative acts creates hedging possibilities even when acts are comonotonic. This allows us to clarify in which sense CPT differs from Choquet expected utility. Our analysis is performed under the simpler case of (piece-wise) constant marginal utility which allows us to clearly separate the perception of uncertainty from the evaluation of outcomes.
Lorenzo Bastianello, Alain Chateauneuf, Bernard Cornet
2023-04-28T13:30:43Z
http://arxiv.org/abs/2304.14843v1
# Gain-Loss Hedging and Cumulative Prospect Theory ###### Abstract Two acts are comonotonic if they yield high payoffs in the same states of nature. The main purpose of this paper is to derive a new characterization of Cumulative Prospect Theory (CPT) through simple properties involving comonotonicity. The main novelty is a concept dubbed gain-loss hedging: mixing positive and negative acts creates hedging possibilities even when acts are comonotonic. This allows us to clarify in which sense CPT differs from Choquet expected utility. Our analysis is performed under the simpler case of (piece-wise) constant marginal utility which allows us to clearly separate the perception of uncertainty from the evaluation of outcomes. Keywords: Cumulative Prospect Theory, Comonotonicity, Gain-loss hedging, Sipos integral, Choquet integral. JEL Classification Number: D81.
## 1 Introduction
When making everyday decisions, economic agents are often confronted with uncertainty. For instance, one can think of a decision maker (DM) who needs to choose how to allocate her wealth between two different portfolios of assets, or a firm that has to decide whether to invest in an innovative technology or in a traditional one. The most popular model used under risk and uncertainty is the expected utility model. This model, proposed first by Bernoulli at the beginning of the XVIII century, was axiomatized by de Finetti [6], Savage [13] and von Neumann and Morgenstern [27]. However, empirical evidence has shown that expected utility does not provide a good description of DMs' actual choices. Early examples are the famous paradoxes of Allais [1] and Ellsberg [9]. One of the most prominent and most successful alternatives to expected utility theory is cumulative prospect theory (CPT) of Tversky and Kahneman [21]. The aim of this paper is twofold: \((i)\) we provide a new mathematical characterization of the CPT functional under the simplifying assumption of (piece-wise) constant marginal utility _à la_ Yaari [26]; \((ii)\) we use the characterization of the previous point to obtain a novel preference axiomatization of CPT. Consider acts as functions from a state space \(S\) to the set of real numbers. Thus, given an act \(f:S\to\mathbb{R}\), \(f(s)\) can be interpreted as the amount of money or consumption good that a DM obtains if the state turns out to be \(s\). A central role is played by comonotonic acts. Loosely speaking, two acts are comonotonic if they are positively correlated. Mixing two comonotonic acts does not provide a possible hedge against uncertainty. This idea was exploited in the seminal papers of Schmeidler [14], [15] to extend expected utility to Choquet expected utility. One advantage of CPT over the Choquet model is that it allows one to disentangle the behavior of DMs in the domain of gains from the one in the domain of losses, i.e. when outcomes are respectively above or below a certain reference point (in our case the reference point is naturally taken equal to \(0\)). This difference in behavior can be decomposed into two components. The first one is called loss-aversion and says that "losses loom larger than gains" (Tversky and Kahneman [21]). Mathematically, it means that losses are multiplied by a constant \(\lambda>1\). The second one is usually called sign dependence and says that the attitude toward uncertainty (mathematically represented by a capacity) is different for gains and for losses.
We take this behavior as a starting point for both the mathematical characterization of CPT and its axiomatization. The intuition behind our properties is that adding comonotonic acts can still provide some hedge if those acts are of opposite signs and have non-disjoint supports. We call this property gain-loss hedging. We describe here the two main properties that we use in Section 3.1 to characterize mathematically the CPT functional. The first property is well-known and postulates comonotonic independence (separately) for gains and for losses. Comonotonic acts do not provide a possible hedge against uncertainty and therefore adding them should not change the preferences of the DM. Take three acts \(f,g\), and \(h\), all in the domain of gains or all in the domain of losses, such that \(h\) is comonotonic with \(f\) and \(g\). Then our condition requires that if \(f\) and \(g\) are indifferent, then adding \(h\) to both of them does not change a DM's preferences, since in both situations \(h\) neither increases nor reduces uncertainty. The second property, which we call gain-loss hedging, represents the main behavioral novelty of the paper. The key idea is that adding an act above the reference point to an act below the reference point may provide a hedge against uncertainty unless these acts have disjoint supports. To exemplify, suppose that there are two states of the world \(S=\{s_{1},s_{2}\}\) and that a DM with a linear utility function over outcomes is indifferent between the assets \(f=(20,0)\) (\(f\) is the act that pays \(20\) if \(s_{1}\) is realized and \(0\) otherwise) and \(g=(10,10)\). Consider now the act \(h=(0,-10)\), which has disjoint support with \(f\) but not with \(g\). When the DM evaluates \(f+h=(20,-10)\) and \(g+h=(10,0)\), she may perceive \(f+h\) as more uncertain than \(g+h\) and therefore she may prefer \(g+h\). Note that indifference between \(f\) and \(g\) followed by a strict preference for \(g+h\) over \(f+h\) is precluded by the expected utility model (with the utility function being the identity). More interestingly, this preference pattern would be a paradox even for the more general Choquet expected utility model of Schmeidler [15] (with the utility function being the identity). The Choquet model excludes any possible hedging through mixing of comonotonic acts. In this example, however, act \(h\) is comonotonic with both acts \(f\) and \(g\) and therefore no possible hedging would be envisioned by the Choquet model. Therefore \(h\) is a possible hedge against uncertainty when added to \(g\) because gains and losses balance out one another, and not because of comonotonicity. We elaborate more on this idea in Example 1. In Section 3.2, we give a preference axiomatization of the CPT model with piecewise linear utility. We do not assume the Anscombe and Aumann [2] framework, and our axioms appeal only to simple properties related to comonotonicity. Moreover, we propose a new and simple axiom that can be used to elicit the coefficient of loss-aversion \(\lambda\). In order to derive a CPT representation of preferences, we use the mathematical characterization of Section 3.1. In a sense, our paper parallels, in the context of prospect theory, the work of Schmeidler [14], [15] on the Choquet integral. Empirical evidence not only supports sign-dependence, but further suggests that agents are uncertainty averse for gains and uncertainty seeking for losses; see for instance Wakker [23], Section 12.7 for a review.
Section 3.3 provides testable axioms that characterize those opposite behaviors. Finally, we investigate when uncertainty aversion for gains is symmetric to uncertainty seeking for losses. Behaviorally, this happens if a DM who is indifferent between an act \(f\) and a monetary outcome \(\alpha\) is also indifferent between \(-f\) and \(-\alpha\). In this case we prove that weights for gains and losses are dual with respect to each other and that CPT reduces to a Sipos integral, see Sipos [18]. This result clarifies the relation of CPT with the Sipos integral that was first noticed by Starmer and Sugden [19] (see also Wakker [23] and Kothiyal _et al._[12]). Of course, there are several axiomatizations of CPT available in the literature. The concept of comonotonicity and the fact that acts are rank-ordered are crucial, see Diecidue and Wakker [8]. The very first axiomatization is provided in the seminal paper of Tversky and Kahneman [21] and relies on comonotonic independence and a property called double matching. See also Trautmann and Wakker [20] for a recent characterization using these axioms in a (reduced) Anscombe and Aumann [2] framework. Wakker and Tversky [24] pair comonotonicity with trade-off consistency (see also Chateauneuf and Wakker [4] for the case of risk). The comonotonic sure thing principle approach (or a weakening of it called tail independence) is developed in Chew and Wakker [5], Zank [28] and Wakker and Zank [25]. The paper closest to ours is the one of Schmidt and Zank [16]. The authors characterize CPT through an axiom called independence of common increments for comonotonic acts. Interestingly, they obtain a piecewise linear utility function (with a kink at the reference point), as in our axiomatization. We refer the reader to the introductory section of Schmidt and Zank [16] for a detailed discussion about the advantages of adopting piece-wise linear utility. The rest of the paper is organized as follows. Section 2 introduces the framework, the mathematical notations and the behavioral models that we will consider. Section 3 is divided into three subsections and contains our main results. Section 3.1 presents the mathematical characterization of the CPT and Sipos functionals, Section 3.2 provides a behavioral characterization of CPT and Section 3.3 discusses the DM's attitude towards uncertainty. Section 4 concludes. All proofs are gathered in the Appendix.
## 2 Framework
Let \(S\) be a set of states of the world endowed with a \(\sigma\)-algebra \(\mathcal{A}\). Elements of \(\mathcal{A}\) are called _events_. We denote by \(\mathcal{F}\) the set of all bounded, real-valued, \(\mathcal{A}\)-measurable functions over \(S\), i.e. \(\mathcal{F}=\{f:S\to\mathbb{R}\,|\,f\text{ is bounded and }\mathcal{A}\text{-measurable}\}\). A function \(f\in\mathcal{F}\) is called an _act_. An act can be interpreted as an asset that pays a monetary outcome in \(\mathbb{R}\) that depends on the realization of the state of the world. We denote the _positive part_ of an act \(f\in\mathcal{F}\) by \(f^{+}=f\lor 0\) and the _negative part_ by \(f^{-}=(-f)\lor 0\). Note that both positive and negative parts are greater than or equal to \(0\) (several papers studying prospect theory instead use the symbol \(f^{-}\) to denote \(f\wedge 0\)). The set \(\mathcal{F}^{+}=\{f\in\mathcal{F}|f(s)\geq 0,\,\forall s\in S\}\) is the set of positive acts; the set of negative acts \(\mathcal{F}^{-}\) is defined analogously. Two acts \(f,g\in\mathcal{F}\) have the _same sign_ if either \(f,g\in\mathcal{F}^{+}\) or \(f,g\in\mathcal{F}^{-}\).
We say that two acts are of _opposite sign_ if one of them is positive and the other is negative. The _support_ of an act \(f\in\mathcal{F}\) is the set \(supp(f)=\{s\in S|f(s)\neq 0\}\). Two acts \(f,g\in\mathcal{F}\) are _comonotonic_ if for all \(s,t\in S\), \((f(s)-f(t))(g(s)-g(t))\geq 0\). Let \(A\subseteq S\); \(1_{A}\) is the _indicator function_ of the set \(A\), i.e. \(1_{A}(s):=\begin{cases}1&\text{ if }s\in A\\ 0&\text{ if }s\in A^{c}\end{cases}\). If \(\alpha\in\mathbb{R}\), then \(\alpha 1_{A}\) denotes the act which pays \(\alpha\) in every state \(s\in A\). A _(normalized) capacity_ \(v\) on the measurable space \((S,\mathcal{A})\) is a set function \(v:\mathcal{A}\mapsto\mathbb{R}\) such that \(v(\emptyset)=0,\,v(S)=1\) and for all \(A,B\in\mathcal{A},\,A\subseteq B\Rightarrow v(A)\leq v(B)\). If \(v\) is a capacity, we define its _conjugate_ by \(\hat{v}(A)=1-v(A^{c})\) for all \(A\in\mathcal{A}\). A capacity \(v:\mathcal{A}\mapsto\mathbb{R}\) is _convex (concave)_ if, for all \(A,B\in\mathcal{A}\), \(v(A\cup B)+v(A\cap B)\geq(\leq)v(A)+v(B)\). Given a capacity \(v\) on \((S,\mathcal{A})\), the _Choquet integral_ of \(f\in\mathcal{F}\) with respect to \(v\) is the functional \(C:\mathcal{F}\to\mathbb{R}\) defined by \[C(f)=\int_{S}f\,dv:=\int_{-\infty}^{0}\left(v(\{f\geq t\})-1\right)dt+\int_{0}^{+\infty}v(\{f\geq t\})\,dt.\] In the following we will remove the subscript \(S\) from the integral sign whenever the domain of integration is clear. Given a capacity \(v\) on \((S,\mathcal{A})\), the _Sipos integral_ (see Sipos [18]) of \(f\in\mathcal{F}\) with respect to \(v\) is the functional \(\check{S}:\mathcal{F}\to\mathbb{R}\) defined as \[\check{S}(f)=\int f^{+}dv-\int f^{-}dv\] where the two integrals are Choquet integrals. The following lemma gives an alternative formulation of the Sipos integral in which the conjugate capacity is used to evaluate the negative part of a function. Moreover, it clarifies the relation between the Choquet integral and the Sipos integral. **Lemma 1**.: _Let \(v\) be a capacity and \(\hat{v}\) its conjugate. Then the following holds:_ * \(\check{S}(f)=\int f^{+}dv+\int-f^{-}d\hat{v}\)_;_ * \(C(f)=\int f^{+}dv+\int-f^{-}dv=\int f^{+}dv-\int f^{-}d\hat{v}\)_._ The main object of this paper is the _(piecewise linear) Cumulative Prospect Theory (CPT)_ functional \(CPT:\mathcal{F}\rightarrow\mathbb{R}\). It is a generalization of both the Choquet and Sipos integrals. Consider two capacities \(v^{+}\), \(v^{-}\) and a real number \(\lambda>0\); then the _(piecewise linear) CPT_ functional \(CPT:\mathcal{F}\rightarrow\mathbb{R}\) is defined by \[CPT(f)=\int f^{+}dv^{+}-\int\lambda f^{-}dv^{-}.\] A preference relation \(\succsim\) over \(\mathcal{F}\) is a complete and transitive binary relation with non-empty strict part. As usual, \(f\succsim g\) means "\(f\) is preferred to \(g\)". We denote by \(\succ\) and \(\sim\) the strict preference and indifference parts of \(\succsim\). A functional \(I:\mathcal{F}\rightarrow\mathbb{R}\) _represents_ \(\succsim\) if for all \(f,g\in\mathcal{F}\), \(f\succsim g\) if and only if \(I(f)\geq I(g)\).
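When \(S\) is finite, all of the functionals above reduce to finite sums and are easy to compute. The following Python sketch is our own illustration (capacities are encoded as dictionaries on subsets of \(S\); this encoding and all function names are assumptions of the sketch): it implements the Choquet integral via the standard rank-ordered formula, and the Sipos and piecewise linear CPT functionals on top of it.

```python
def choquet(f, v):
    """Choquet integral of f (dict state -> payoff) w.r.t. a normalized
    capacity v (dict frozenset -> value, with v[emptyset]=0 and v[S]=1).

    Sorts payoffs increasingly and weights successive increments by the
    capacity of the corresponding upper level sets."""
    states = sorted(f, key=f.get)           # states ordered by payoff
    total, prev = 0.0, 0.0
    for idx, s in enumerate(states):
        upper = frozenset(states[idx:])     # upper level set {f >= f(s)}
        total += (f[s] - prev) * v[upper]
        prev = f[s]
    return total

def sipos(f, v):
    """Sipos integral: Choquet of the positive part minus Choquet of the
    negative part, both w.r.t. the same capacity v."""
    f_plus = {s: max(x, 0.0) for s, x in f.items()}
    f_minus = {s: max(-x, 0.0) for s, x in f.items()}
    return choquet(f_plus, v) - choquet(f_minus, v)

def cpt(f, v_plus, v_minus, lam):
    """Piecewise linear CPT: gains weighted by v_plus, losses scaled by
    lam and weighted by v_minus."""
    f_plus = {s: max(x, 0.0) for s, x in f.items()}
    f_minus = {s: max(-x, 0.0) for s, x in f.items()}
    return choquet(f_plus, v_plus) - lam * choquet(f_minus, v_minus)

# Two-state demo: an act paying 5 on s1 and -5 on s2.
S = ["s1", "s2"]
v = {frozenset(): 0.0, frozenset({"s1"}): 0.4,
     frozenset({"s2"}): 0.4, frozenset(S): 1.0}
f = {"s1": 5.0, "s2": -5.0}
print(choquet(f, v), sipos(f, v), cpt(f, v, v, 2.0))
```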
## 3 Main results
This section contains our two main results. The first one, Theorem 2, characterizes mathematically the CPT functional. The second result, Theorem 5, studies which behavioral axioms a preference relation should satisfy in order to be represented by a CPT functional.
### The CPT functional
We start with a seminal theorem of Schmeidler [14], who provided a characterization of the Choquet functional. Before presenting the result we recall that a functional \(I:\mathcal{F}\rightarrow\mathbb{R}\) is _monotonic_ if \(f\geq g\Rightarrow I(f)\geq I(g)\), where \(f\geq g\) means \(f(s)\geq g(s)\) for all \(s\in S\). Moreover, \(I\) satisfies _comonotonic additivity_ if, whenever \(f\) and \(g\) are comonotonic, then \(I(f+g)=I(f)+I(g)\). **Theorem 1**.: (Schmeidler [14]) _Let \(I:\mathcal{F}\rightarrow\mathbb{R}\) be a given functional with \(I(1_{S})=1\). Then the following are equivalent._ 1. \((a)\) _\(I\) is monotonic; \((b)\) \(I\) satisfies comonotonic additivity._ 2. _\(I\) is a Choquet integral._ The CPT functional generalizes the Choquet functional by relaxing comonotonic additivity. More specifically, comonotonic additivity will be retained only for comonotonic acts of the same sign and for (comonotonic) acts of opposite sign with disjoint supports. The following is our first main result. **Theorem 2**.: _Let \(I:\mathcal{F}\rightarrow\mathbb{R}\) be a given functional satisfying \(I(1_{S})=1\). Then the following are equivalent._ 1. \((a)\) _\(I\) is monotonic;_ \((b)\) _\(I\) satisfies comonotonic additivity on \(\mathcal{F}^{+}\) and \(\mathcal{F}^{-}\) and for acts \(f,g\) of opposite sign such that \(supp(f)\cap supp(g)=\emptyset\)._ 2. _\(I\) is a CPT functional._ Consider item \((i)\) of both Theorem 1 and Theorem 2. Note that part (b) of Theorem 1 implies (b) of Theorem 2, as acts with opposite sign and disjoint supports are comonotonic. This relaxation not only characterizes a functional that is more general than the Choquet integral, but it also gives some important insights from a behavioral point of view. Recall that comonotonic additivity is a weakening of full-fledged additivity, a property that would force the functional \(I\) to be linear, and hence an expectation. The behavioral intuition behind comonotonic additivity is that adding two comonotonic acts does not permit possible hedging against choices of nature. Relaxing comonotonic additivity allows us to uncover more sophisticated attitudes towards uncertainty and more subtle forms of hedging. The first remarkable property of the CPT functional is that it differentiates agents' behavior in the domain of gains (i.e. \(\mathcal{F}^{+}\)) from the one in the domain of losses (i.e. \(\mathcal{F}^{-}\)). The outcome at which behavior changes, namely the monetary outcome \(0\), is called the _reference point_ (in this paper, the reference point is exogenously given and normalized to \(0\) for convenience; we could have chosen any other reference point \(r\in\mathbb{R}\). Schmidt and Zank [17] provide axioms to make the reference point endogenous). Comonotonic additivity is preserved whenever the acts under consideration are both above or both below the reference point. Comonotonic additivity over \(\mathcal{F}^{+}\) and \(\mathcal{F}^{-}\) weakens a condition already well known in the literature called cosigned independence. Two acts \(f,g\in\mathcal{F}\) are _sign-comonotonic_ or simply _cosigned_ if they are comonotonic and there exists no \(s\in S\) such that \(f(s)>0\) and \(g(s)<0\), see Wakker and Tversky [24] and Trautmann and Wakker [20].
One of the main contributions of the present paper lies in the second comonotonic additivity requirement that characterizes the \(CPT\) functional, namely that \(f,g\) of opposite sign such that \(supp(f)\cap supp(g)=\emptyset\) implies \(CPT(f+g)=CPT(f)+CPT(g)\). This means that comonotonic additivity can fail if we have \(f,g\) of opposite sign and \(supp(f)\cap supp(g)\neq\emptyset\) (we underline again that such acts are comonotonic). The behavioral intuition behind this requirement is that adding the positive and negative parts of two acts can provide a hedge against possible choices of nature even when the acts under consideration are comonotonic. We call this property _gain-loss hedging_ (we thank Peter Wakker for suggesting this terminology). This hedging possibility is not considered, for instance, in the Choquet model, where the only way to hedge is to add two non-comonotonic acts. The following example provides more details for the particular case in which CPT reduces to a Sipos integral, i.e. \(\lambda=1\) and \(v^{+}=v^{-}\). **Example 1**.: _Let \(S=\{s_{1},s_{2},s_{3}\}\) and consider a CPT functional with \(\lambda=1\) and \(v=v^{+}=v^{-}\) (i.e. a Sipos integral). Let \(v\) be defined by \(v(\{s_{1}\})=\frac{2}{3}\), \(v(\{s_{2}\})=\frac{1}{3}\), \(v(\{s_{1},s_{3}\})=1\) and \(v(\{s_{2},s_{3}\})=\frac{2}{3}\) (the values of \(v\) on the remaining events play no role in the computations below). Consider now the following acts on \(S\)._ \[\begin{array}{c|ccc}&s_{1}&s_{2}&s_{3}\\ \hline f&3&4&4\\ g&0&11&0\\ h&-3&0&-1\\ -h&3&0&1\\ f+h&0&4&3\\ g+h&-3&11&-1\end{array}\] _Acts \(f,g,h\) are comonotonic, but \(supp(g)\cap supp(h)=\emptyset\) while \(supp(f)\cap supp(h)\neq\emptyset\). Let \(\succsim_{S}\) be the preference relation induced by the \(\check{S}\) functional, i.e. \(f\succsim_{S}g\Leftrightarrow\check{S}(f)\geq\check{S}(g)\), and \(\succsim_{C}\) the one induced by the \(C\) functional (both functionals \(\check{S}\) and \(C\) are defined in Section 2). We have_ \[\check{S}(f)=C(f)=3+(4-3)\frac{2}{3}=\frac{11}{3}\] \[\check{S}(g)=C(g)=0+(11-0)\frac{1}{3}=\frac{11}{3}\] _and therefore \(f\sim_{S}g\) and \(f\sim_{C}g\). Moreover, since \(h\) is comonotonic with \(f\) and \(g\), by comonotonic additivity \(f+h\sim_{C}g+h\) (one can actually verify that \(C(f+h)=C(f)+C(h)=\frac{7}{3}=C(g)+C(h)=C(g+h)\)). However, we can notice that the act \(f+h\) looks much "smoother" than \(g+h\) and moreover \(f+h\geq 0\), since gains balance losses. This intuition is captured by the preference relation induced by the Sipos integral, as_ \[\check{S}(f+h)=0+(3-0)\frac{2}{3}+(4-3)\frac{1}{3}=\frac{7}{3}\] \[\check{S}(g+h)=C(g)-C(-h)=\frac{11}{3}-\left(0+(1-0)1+(3-1)\frac{2}{3}\right)=\frac{4}{3}\] _and therefore \(f+h\succ_{S}g+h\)._
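The computations in Example 1 can be replayed mechanically with the helpers sketched in Section 2; for self-containment, the sketch below re-defines them using exact rational arithmetic and assigns arbitrary (monotone) values to the two events of \(S\) that the example never uses.

```python
from fractions import Fraction as Fr

def choquet(f, v):
    """Choquet integral via the rank-ordered increment formula."""
    states = sorted(f, key=f.get)
    total, prev = Fr(0), Fr(0)
    for idx, s in enumerate(states):
        total += (f[s] - prev) * v[frozenset(states[idx:])]
        prev = f[s]
    return total

def sipos(f, v):
    return (choquet({s: max(x, Fr(0)) for s, x in f.items()}, v)
            - choquet({s: max(-x, Fr(0)) for s, x in f.items()}, v))

S = ["s1", "s2", "s3"]
v = {frozenset(): Fr(0), frozenset(S): Fr(1),
     frozenset({"s1"}): Fr(2, 3), frozenset({"s2"}): Fr(1, 3),
     frozenset({"s3"}): Fr(1, 3),                    # arbitrary (unused)
     frozenset({"s1", "s2"}): Fr(2, 3),              # arbitrary (unused)
     frozenset({"s1", "s3"}): Fr(1),
     frozenset({"s2", "s3"}): Fr(2, 3)}

f = {"s1": Fr(3), "s2": Fr(4), "s3": Fr(4)}
g = {"s1": Fr(0), "s2": Fr(11), "s3": Fr(0)}
h = {"s1": Fr(-3), "s2": Fr(0), "s3": Fr(-1)}
fh = {s: f[s] + h[s] for s in S}
gh = {s: g[s] + h[s] for s in S}

print(sipos(f, v), sipos(g, v))        # 11/3, 11/3 -> f ~ g
print(sipos(fh, v), sipos(gh, v))      # 7/3, 4/3   -> f+h preferred to g+h
print(choquet(fh, v), choquet(gh, v))  # both 7/3   -> Choquet: f+h ~ g+h
```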
Example 1 shows that gain-loss hedging is an interesting behavioral feature of CPT and of Sipos integrals. Adding positive and negative acts with supports that are not disjoint can provide a hedge even when the acts involved are comonotonic. This happens because gains compensate losses. In the next section we provide a new behavioral foundation of CPT taking this observation as a starting point. Example 1 also shows that preferences represented by Sipos integrals are rich enough to entail gain-loss hedging behaviors. It is therefore interesting to mathematically characterize Sipos integrals. Theorem 3 shows that a symmetry condition pins down a CPT functional as a Sipos integral. **Theorem 3**.: _A CPT functional is a Sipos integral if and only if \(CPT(-f)=-CPT(f)\) for all \(f\in\mathcal{F}\)._ Theorem 3 says that CPT reduces to a Sipos integral if and only if the condition \(CPT(-f)=-CPT(f)\) for all \(f\in\mathcal{F}\) is satisfied. This is an interesting result, as this condition is a strong one. As an example, if \(C\) is a Choquet functional then \(C(-f)=-C(f)\) for all \(f\in\mathcal{F}\) if and only if the capacity \(v\) equals its conjugate \(\hat{v}\), and therefore it is additive on events \(\{A,A^{c}\}\).
### A behavioral characterization of CPT
In this section we provide a preference axiomatization of CPT. We recall that a preference relation \(\succsim\) over \(\mathcal{F}\) is a complete and transitive binary relation with non-empty strict part. The first axiom is a continuity axiom. A.1 Continuity. The sets \(\{\alpha\in\mathbb{R}|\alpha 1_{S}\succsim f\}\) and \(\{\alpha\in\mathbb{R}|f\succsim\alpha 1_{S}\}\) are closed for all \(f\in\mathcal{F}\). Note that the axiom only requires comparing acts with constants. This dispenses us from formulating topological assumptions on the set of acts \(\mathcal{F}\). The second axiom is a monotonicity property. A.2 Monotonicity. Let \(f,g\in\mathcal{F}\) be such that \(f\geq g\). Then \(f\succsim g\). Consider now the well-known comonotonic independence axiom (Chateauneuf [3], Schmeidler [15]). It says that if two acts \(f\) and \(g\) are indifferent to each other, then adding a comonotonic act \(h\) to both of them does not change the DM's preferences. The idea behind this condition is that adding comonotonic acts does not provide any possible hedge against uncertainty. A.C Comonotonic Independence. Let \(f,g,h\in\mathcal{F}\) be such that \(h\) is comonotonic with \(f\) and with \(g\). Then \(f\sim g\) implies \(f+h\sim g+h\). Preferences satisfying A.1, A.2 and A.C are represented by a Choquet integral. We present this result in the next theorem. **Theorem 4**.: (Chateauneuf [3], Schmeidler [15]) _Let \(\succsim\) be a preference relation over \(\mathcal{F}\). Then the following are equivalent._ 1. \(\succsim\) _satisfies A.1, A.2 and A.C._ 2. _There exists a (unique) capacity \(v\) such that \(\succsim\) is represented by a Choquet functional._ However, as Example 1 shows, Comonotonic Independence may be too strong, as it does not take into account (gain-loss) hedging possibilities that arise when adding positive and negative acts with non-disjoint supports. The following two axioms, A.3 and A.4, are both implied by Comonotonic Independence. They are at the heart of our behavioral characterization, and they generalize A.C in two directions. First, axiom A.3 allows for different attitudes towards uncertainty in the domain of gains and in the domain of losses. Second, axiom A.4 takes into account possible gain-loss hedging opportunities that arise in situations like the one of Example 1. A.3 Comonotonic Independence for Gains and Losses. Let \(f,g,h\in\mathcal{F}^{+(-)}\) be such that \(h\) is comonotonic with \(f\) and \(g\). Then \(f\sim g\) implies \(f+h\sim g+h\). A.4 \(\lambda\)-Disjoint Independence. There exists \(\lambda>0\) such that for all \(f\in\mathcal{F}^{+}\) and \(g\in\mathcal{F}^{-}\) such that \(supp(f)\cap supp(g)=\emptyset\), \(f\sim\alpha 1_{S}\) and \(g\sim\beta 1_{S}\): 1. if \(\alpha+\lambda\beta\geq 0\) then \(f+g\sim(\alpha+\lambda\beta)1_{S}\); 2. if \(\alpha+\lambda\beta<0\) then \(f+g\sim\left(\frac{\alpha+\lambda\beta}{\lambda}\right)1_{S}\).
Axiom A.4 represents the main behavioral novelty. To better understand it, note that it is implied by the following (stronger) axiom. A.4\({}^{*}\) Disjoint Independence. For all \(f\in\mathcal{F}^{+}\) and \(g\in\mathcal{F}^{-}\) such that \(supp(f)\cap supp(g)=\emptyset\) and such that \(f\sim\alpha 1_{S}\) and \(g\sim\beta 1_{S}\), one has \(f+g\sim(\alpha+\beta)1_{S}\). It is easy to see that A.4\({}^{*}\) is precisely A.4 with \(\lambda=1\). A.4\({}^{*}\) requires that the act \(f+g\) be evaluated as the sum of the constant equivalents of \(f\) and \(g\). In the general case we can have \(\lambda\neq 1\), and in this case A.4 says that the constant equivalent of \(f+g\) depends on the sign of \(\alpha+\lambda\beta\). The interpretation for the case of loss-aversion, \(\lambda>1\), is the following. The DM outweighs losses by a factor of \(\lambda\) and considers \(\lambda\beta\) instead of \(\beta\) _tout court_. If \(\alpha+\lambda\beta>0\) then the DM feels "overall in the domain of gains" and the certainty equivalent of \(f+g\) is positive and such that \(\alpha>0\) is balanced by \(\lambda\beta<\beta<0\), i.e. the certainty equivalent \(\beta\) of losses is outweighed by a factor of \(\lambda\). If \(\alpha+\lambda\beta<0\) then the DM feels "overall in the domain of losses" and in this case the certainty equivalent of \(f+g\) is negative and equal to \(\beta<0\) plus \(\frac{\alpha}{\lambda}>0\), i.e. the certainty equivalent \(\alpha\) of the positive part is decreased by a factor of \(\lambda\) (since \(\frac{\alpha}{\lambda}<\alpha\)). Importantly, \(\lambda\) can be determined in the lab: take \(f\in\mathcal{F}^{+}\) and \(g\in\mathcal{F}^{-}\) such that \(supp(f)\cap supp(g)=\emptyset\), and ask for the certainty equivalents \(\alpha\), \(\beta\) and \(\gamma\) of \(f\), \(g\) and \(f+g\) respectively. If \(\gamma=\alpha+\beta\), there is no loss-aversion or loss-seeking. If \(\gamma\neq\alpha+\beta\), then if \(\gamma>0\) we have \(\lambda=\frac{\gamma-\alpha}{\beta}\) and if \(\gamma<0\) we have \(\lambda=\frac{\alpha}{\gamma-\beta}\) (see the sketch below). There is a lively debate on whether loss-aversion is a real phenomenon or not, with results on both sides; see for instance Gal and Rucker [10] and Gächter, Johnson and Herrmann [11]. We hope therefore that A.4 could be helpful to elicit loss-aversion in a setting in which individuals' preferences are represented by the CPT functional with piece-wise constant marginal utility. When A.C is replaced by A.3 and A.4, we obtain a characterization of the CPT functional. The following is our second main result. **Theorem 5**.: _Let \(\succsim\) be a preference relation over \(\mathcal{F}\). Then the following are equivalent._ 1. \(\succsim\) _satisfies A.1, A.2, A.3 and A.4._ 2. _There exist two (unique) capacities \(v^{+}\), \(v^{-}\) and a real number \(\lambda>0\) such that \(\succsim\) is represented by a CPT functional._ Note that if we replace A.4 with A.4\({}^{*}\) in Theorem 5, we obtain a CPT functional with \(\lambda=1\), i.e. loss-neutrality.
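The elicitation procedure just described is mechanical; here is a minimal sketch (the function name is ours) mapping the three observed certainty equivalents to the implied \(\lambda\).

```python
def elicit_lambda(alpha, beta, gamma):
    """Implied loss-aversion coefficient from certainty equivalents of
    f (alpha >= 0), g (beta <= 0) and f+g (gamma), per axiom A.4.

    Returns 1.0 when gamma = alpha + beta (no loss-aversion or -seeking)."""
    if gamma == alpha + beta:
        return 1.0
    if gamma > 0:
        return (gamma - alpha) / beta
    return alpha / (gamma - beta)

# A loss-averse pattern: f ~ 10, g ~ -4, but f+g ~ 2 (less than 10 - 4).
print(elicit_lambda(10.0, -4.0, 2.0))   # lambda = (2-10)/(-4) = 2.0
```

One can check consistency with A.4 in this example: with \(\lambda=2\), \(\alpha+\lambda\beta=10-8=2=\gamma\).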
### Attitude towards uncertainty
As we already said above, a remarkable property of CPT is that (unlike the Choquet functional) it makes it possible to disentangle DMs' attitude towards uncertainty in the domain of gains from the one in the domain of losses. This is made possible since an act is evaluated through the sum of two Choquet integrals, with respect to a capacity \(v^{+}\) for gains and a different one \(v^{-}\) for losses. Experimental evidence shows that DMs are uncertainty averse for gains and uncertainty seeking for losses. Loosely speaking, uncertainty aversion (seeking) means that agents prefer situations in which objective probabilities of events are (not) available. In our framework, objective probabilities are not available at all; therefore, the only acts that involve no uncertainty are the constant acts. Intuitively, in our purely subjective setting, an uncertainty averse (seeking) DM would prefer acts that are "as close (far) as possible" to constant acts. We capture this idea with the two following axioms. A.3' Let \(f,g,h\in\mathcal{F}^{+}\) be such that \(h\) is comonotonic with \(g\). Then \(f\sim g\Rightarrow f+h\succsim g+h\). A.3" Let \(f,g,h\in\mathcal{F}^{-}\) be such that \(h\) is comonotonic with \(f\). Then \(f\sim g\Rightarrow f+h\succsim g+h\). Axiom A.3' captures the intuition that DMs are uncertainty averse in the domain of gains \(\mathcal{F}^{+}\). Consider three acts \(f,g,h\in\mathcal{F}^{+}\) such that \(f\sim g\) and \(h\) is comonotonic with \(g\). Then adding (the potentially non-comonotonic act) \(h\) to \(f\) increases the appreciation of \(f\), since \(h\) may be a hedge against \(f\), while at the same time it decreases the appreciation of \(g\), since uncertainty may be higher. To exemplify, let \(A\in\mathcal{A}\) and consider \(f=10\cdot 1_{A}+5\cdot 1_{A^{c}}\), \(g=5\cdot 1_{A}+10\cdot 1_{A^{c}}\) and \(h=0\cdot 1_{A}+5\cdot 1_{A^{c}}\). Then \(f+h=10\cdot 1_{S}\) is a constant act while \(g+h=5\cdot 1_{A}+15\cdot 1_{A^{c}}\) is even more uncertain than \(g\). A DM who dislikes uncertainty would clearly prefer \(f+h\) to \(g+h\). Axiom A.3" can be interpreted similarly, but in this case the DM is willing to increase the perceived uncertainty. Notice that similar conditions were proposed by Chateauneuf [3]; see also Wakker [22]. The following theorem shows that if a DM is uncertainty averse for gains and uncertainty seeking for losses then the capacities appearing in the CPT functional are both convex. **Theorem 6**.: _Let \(\succsim\) be a preference relation over \(\mathcal{F}\). Then the following are equivalent._ 1. \(\succsim\) _satisfies A.1, A.2, A.3', A.3", and A.4._ 2. _There exist two (unique) convex capacities \(v^{+}\), \(v^{-}\) and \(\lambda>0\), such that \(\succsim\) is represented by a CPT functional._ Note that the CPT functional can be rewritten (using Lemma 2 in the Appendix) as \[CPT(f)=\int f^{+}dv^{+}+\int-\lambda f^{-}d\hat{v}^{-}. \tag{3.1}\] If one is using this formulation, then Theorem 6 implies that the conjugate capacity \(\hat{v}^{-}\) is concave. We conclude this section by providing a testable axiom that characterizes symmetric attitudes around the reference point with respect to uncertainty. An interesting question is in fact to understand when one has \(v^{-}=v^{+}\) in the CPT functional (or, equivalently, if one is using formulation (3.1), when one gets \(v^{-}=\hat{v}^{+}\)). Note that if \(\lambda=1\) Theorem 3 applies and one gets a Sipos integral. Consider the following axiom. A.5 Gain-Loss Symmetry. Let \(f\in\mathcal{F}\) and \(\alpha\in\mathbb{R}\). Then \(f\sim\alpha 1_{S}\) if and only if \(-f\sim-\alpha 1_{S}\). Axiom A.5 says that if a DM is indifferent between an (uncertain) act \(f\) and a sure amount \(\alpha\), then she should stay indifferent between \(-f\) and \(-\alpha\). The intuition is that the DM sees \(f\) and \(-f\) as symmetric with respect to the reference point \(0\), and therefore evaluates them through the symmetric sure amounts \(\alpha\) and \(-\alpha\).
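As a numerical illustration of Theorem 3 (and of the bite of A.5), one can check the condition \(CPT(-f)=-CPT(f)\) directly: it holds under the Sipos specification (\(\lambda=1\), \(v^{+}=v^{-}\)) for any capacity, and fails for a generic loss-averse CPT functional. The helpers below mirror the sketch from Section 2; all names and the particular capacities are our own choices.

```python
from fractions import Fraction as Fr

def choquet(f, v):
    """Choquet integral via the rank-ordered increment formula."""
    states = sorted(f, key=f.get)
    total, prev = Fr(0), Fr(0)
    for i, s in enumerate(states):
        total += (f[s] - prev) * v[frozenset(states[i:])]
        prev = f[s]
    return total

def cpt(f, v_plus, v_minus, lam):
    pos = {s: max(x, Fr(0)) for s, x in f.items()}
    neg = {s: max(-x, Fr(0)) for s, x in f.items()}
    return choquet(pos, v_plus) - lam * choquet(neg, v_minus)

S = ["s1", "s2"]
v_plus  = {frozenset(): Fr(0), frozenset({"s1"}): Fr(1, 4),
           frozenset({"s2"}): Fr(1, 4), frozenset(S): Fr(1)}
v_minus = {frozenset(): Fr(0), frozenset({"s1"}): Fr(1, 2),
           frozenset({"s2"}): Fr(1, 2), frozenset(S): Fr(1)}

f = {"s1": Fr(3), "s2": Fr(-2)}
neg_f = {s: -x for s, x in f.items()}

# Sipos specification: lambda = 1, identical capacities -> symmetry holds.
print(cpt(f, v_plus, v_plus, Fr(1)), -cpt(neg_f, v_plus, v_plus, Fr(1)))
# Generic CPT with loss-aversion (lambda = 2): symmetry fails.
print(cpt(f, v_plus, v_minus, Fr(2)), -cpt(neg_f, v_plus, v_minus, Fr(2)))
```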
The following theorem offers a behavioral characterization of the Sipos integral. **Theorem 7**.: _Let \(\succsim\) be a preference relation over \(\mathcal{F}\). Then the following are equivalent._ 1. \(\succsim\) _satisfies A.1, A.2, A.3, A.4 and A.5._ 2. _There exists a (unique) capacity \(v\) such that \(\succsim\) is represented by the Sipos integral._
## 4 Conclusion
We provided an axiomatic analysis of CPT with piece-wise linear utility. This allowed us to focus on (sign-dependent) attitudes towards uncertainty. First, we mathematically characterized the CPT functional by weakening the comonotonic additivity property of the Choquet integral. We also gave conditions under which CPT reduces to a Sipos integral. Then we gave an axiomatic characterization of CPT. The main novelty is a gain-loss hedging property: gains and losses balance each other out and provide a hedge against uncertainty. Moreover, we introduced an axiom that offers a way to easily elicit the coefficient of loss-aversion in the case of piece-wise linear utility. Finally, we characterized uncertainty aversion for gains and uncertainty seeking for losses, and we showed that these attitudes are symmetric with respect to the reference point if and only if CPT is a Sipos integral.
## Appendix
We begin with an elementary lemma. The proof is given for the sake of completeness. **Lemma 2**.: _Let \(\hat{v}(A)=1-v(A^{c})\) and \(f\in\mathcal{F}^{+}\) or \(f\in\mathcal{F}^{-}\). Then \(-\int fdv=\int-fd\hat{v}\)._ Proof.: Let \(f\in\mathcal{F}^{+}\). Using \(v(\{f\geq t\})=1-\hat{v}(\{f<t\})\) and the substitution \(u=-t\), one has \[-\int fdv=-\int_{0}^{\infty}v(s\in S|f(s)\geq t)\,dt\] \[=-\int_{0}^{\infty}[1-\hat{v}(s\in S|f(s)<t)]\,dt\] \[=-\int_{0}^{-\infty}[1-\hat{v}(s\in S|f(s)<-u)](-du)\] \[=\int_{-\infty}^{0}[\hat{v}(s\in S|-f(s)>u)-1]\,du\] \[=\int-fd\hat{v},\] where in the last equality we use that \(\hat{v}(\{-f>u\})\) and \(\hat{v}(\{-f\geq u\})\) coincide for all but countably many \(u\) (the integrand is monotone in \(u\)), so the two integrals agree. The case \(f\in\mathcal{F}^{-}\) can be treated similarly. Proof of Lemma 1.: To prove the first point we just need to apply Lemma 2. In fact, noticing that \(f^{-}\in\mathcal{F}^{+}\), we have \(\check{S}(f)=\int f^{+}dv-\int f^{-}dv=\int f^{+}dv+\int-f^{-}d\hat{v}\). For the second point, note that \(f=f^{+}+(-f^{-})\) and that \(f^{+}\) and \(-f^{-}\) are comonotonic. Then by the comonotonic additivity of the Choquet integral stated in Theorem 1, we have \[\int fdv=\int f^{+}+(-f^{-})dv=\int f^{+}dv+\int-f^{-}dv.\] Note that by Lemma 2 one can also write \(\int fdv=\int f^{+}dv-\int f^{-}d\hat{v}\). Proof of Theorem 2.: \((i)\Rightarrow(ii)\). We start with an auxiliary lemma. **Lemma 3**.: _For all \(\alpha>0\), for all \(f\in\mathcal{F}^{+}\cup\mathcal{F}^{-}\), \(I(\alpha f)=\alpha I(f)\). Moreover, for \(\alpha>0\) and \(f\in\mathcal{F}^{+}\), \(I(f+\alpha 1_{S})=I(f)+\alpha\)._ Proof of Lemma 3.: The proof is standard. Returning to the proof of Theorem 2, let \(v^{+}(A)=I(1_{A})\); then, following the same proof as Schmeidler [14], one can show that for all \(f\in\mathcal{F}^{+}\), \(I(f)=\int fdv^{+}\). Now let \(\lambda:=-I(-1_{S})\). By comonotonic additivity of \(I\), \(I(0)=0\). By monotonicity of \(I\), \(I(-1_{S})\leq I(0)=0\). Then \(\lambda\geq 0\). Define for all \(A\in\mathcal{A}\), \[v(A)=-\frac{I(-1_{A})}{\lambda}.\] We have \(v(\emptyset)=0\) and \(v(S)=1\). Take \(A\subseteq B\) so that \(-1_{A}\geq-1_{B}\). Since \(I\) is monotonic, \(I(-1_{A})\geq I(-1_{B})\) and therefore \(v(A)\leq v(B)\). This shows that \(v\) is a capacity. Define \(\bar{v}^{-}\) as the conjugate capacity of \(v\), meaning that for all \(A\in\mathcal{A}\), \[\bar{v}^{-}(A)=1-v(A^{c}).\] We will show that for all simple \(f\in\mathcal{F}^{-}\), \(I(f)=\int\lambda fd\bar{v}^{-}\).
Let \(f\in\mathcal{F}^{-}\) be defined as \[f=x_{1}1_{A_{1}}+\cdots+x_{n}1_{A_{n}}\] where \(\{A_{1},\ldots,A_{n}\}\) is a partition of \(S\) and \(x_{1}\leq\cdots\leq x_{n}\leq 0\). Note that we can rewrite \(f\) as \[f=(0-x_{n})(-1_{S})+(x_{n}-x_{n-1})(-1_{A_{n-1}\cup\cdots\cup A_{1}})+\cdots+(x_{3}-x_{2})(-1_{A_{2}\cup A_{1}})+(x_{2}-x_{1})(-1_{A_{1}}).\] Define \[h_{i}=(x_{i+1}-x_{i})(-1_{A_{i}\cup\cdots\cup A_{1}})\] with the convention that \(x_{n+1}=0\). We have that \[f=\sum_{i=1}^{n}h_{i}.\] We now show that \(h_{i}\) is comonotone with \(\sum_{k=i+1}^{n}h_{k}\). Let \(s,t\in S\) be such that \(s\in A_{i}\cup\cdots\cup A_{1}\) and \(t\in(A_{i}\cup\cdots\cup A_{1})^{c}\), and suppose \(t\in A_{l}\) for \(l>i\). Then \(h_{i}(s)-h_{i}(t)=x_{i}-x_{i+1}\leq 0\) and \(\sum_{k=i+1}^{n}h_{k}(s)-\sum_{k=i+1}^{n}h_{k}(t)=x_{i+1}-x_{l}\leq 0\), hence \((h_{i}(s)-h_{i}(t))\left(\sum_{k=i+1}^{n}h_{k}(s)-\sum_{k=i+1}^{n}h_{k}(t)\right)\geq 0\). If \(s,t\in A_{i}\cup\cdots\cup A_{1}\) or \(s,t\in(A_{i}\cup\cdots\cup A_{1})^{c}\) the previous product is \(0\). This shows that the functions \(h_{i}\) and \(\sum_{k=i+1}^{n}h_{k}\) are comonotone. Since \(h_{i}\) and \(\sum_{k=i+1}^{n}h_{k}\) are negative, by comonotonic additivity on \(\mathcal{F}^{-}\) we have \[I(f)=I(h_{1}+\sum_{i=2}^{n}h_{i})=I(h_{1})+I(h_{2}+\sum_{i=3}^{n}h_{i})=\cdots=\sum_{i=1}^{n}I(h_{i}).\] Note that by Lemma 3 and by the definition of \(\bar{v}^{-}\) we have \[I(h_{i})=(x_{i+1}-x_{i})\lambda(\bar{v}^{-}(A_{i+1}\cup\cdots\cup A_{n})-1).\] Therefore \[I(f)=\sum_{i=1}^{n-1}I(h_{i})+I(h_{n})\] \[=\lambda\left[\sum_{i=1}^{n-1}(x_{i+1}-x_{i})\bar{v}^{-}(A_{i+1}\cup\cdots\cup A_{n})-\sum_{i=1}^{n-1}(x_{i+1}-x_{i})\right]+(x_{n+1}-x_{n})(-\lambda)\] \[=\lambda\left[\sum_{i=1}^{n-1}(x_{i+1}-x_{i})\bar{v}^{-}(A_{i+1}\cup\cdots\cup A_{n})-x_{n}+x_{1}\right]+\lambda x_{n}\] \[=\lambda\left[x_{1}+\sum_{i=1}^{n-1}(x_{i+1}-x_{i})\bar{v}^{-}(A_{i+1}\cup\cdots\cup A_{n})\right]\] \[=\lambda\int fd\bar{v}^{-}=\int\lambda fd\bar{v}^{-}.\] Notice that every bounded function can be approximated by a sequence of step functions as in Schmeidler [14]. This shows that for all \(f\in\mathcal{F}^{-}\), \(I(f)=\int\lambda fd\bar{v}^{-}\). Let now \(f\in\mathcal{F}\) and notice that \(f=f^{+}+(-f^{-})\) and moreover \(supp(f^{+})\cap supp(f^{-})=\emptyset\). Hence \[I(f)=I(f^{+}+(-f^{-}))=I(f^{+})+I(-f^{-})=\int f^{+}dv^{+}+\int\lambda(-f^{-})d\bar{v}^{-}.\] Let \(\hat{v}^{-}\) be the conjugate capacity of \(\bar{v}^{-}\), i.e. \(\hat{v}^{-}(A)=1-\bar{v}^{-}(A^{c})\) for all \(A\in\mathcal{A}\). Then by Lemma 2 one has \[\int-\lambda f^{-}d\bar{v}^{-}=-\int\lambda f^{-}d\hat{v}^{-}.\] Defining \(v^{-}=\hat{v}^{-}\) concludes the "\(\Rightarrow\)" part of the proof. \((ii)\Rightarrow(i)\). We prove (a). Suppose \(f\geq g\). Then \(f^{+}\geq g^{+}\) and \(g^{-}\geq f^{-}\). It is well known that the Choquet integral is monotonic. Hence \(I(f)=\int f^{+}dv^{+}-\int\lambda f^{-}dv^{-}\geq\int g^{+}dv^{+}-\int\lambda g^{-}dv^{-}=I(g)\). We prove (b). Let \(f,g\) be comonotonic and such that \(f,g\geq 0\) (the case \(f,g\leq 0\) is similar). Then \((f+g)^{+}=f+g=f^{+}+g^{+}\) and \((f+g)^{-}=0=f^{-}=g^{-}\). Therefore \[I(f+g)=\int(f+g)^{+}dv^{+}=\int f^{+}dv^{+}+\int g^{+}dv^{+}=\\ \int f^{+}dv^{+}-\int\lambda f^{-}dv^{-}+\int g^{+}dv^{+}-\int\lambda g^{-}dv^{-}=I(f)+I(g).\] Finally, we prove the part of (b) concerning acts of opposite sign with disjoint supports. Let \(f\), \(g\) be of opposite sign (for instance \(f\geq 0\) and \(g\leq 0\)) and such that \(supp(f)\cap supp(g)=\emptyset\).
Notice that \((f+g)^{+}=f=f^{+}\), \((f+g)^{-}=-g=g^{-}\), and \(f^{-}=0\), \(g^{+}=0\). Therefore \[I(f+g)=\int(f+g)^{+}dv^{+}-\int\lambda(f+g)^{-}dv^{-}=\int f^{+} dv^{+}-\int\lambda g^{-}dv^{-}=\\ \int f^{+}dv^{+}-\int\lambda f^{-}dv^{-}+\int g^{+}dv^{+}-\int \lambda g^{-}dv^{-}=I(f)+I(g)\] which completes the proof. Proof of Theorem 3.: \((i)\Rightarrow(ii)\) Let \(CPT\) be a Sipos integral. Then \(\lambda=1\) and \(v^{+}=v^{-}\) and hence \[CPT(-f)=\int(-f)^{+}dv-\int(-f)^{-}dv=\int f^{-}dv-\int f^{+}dv=-CPT(f)\] \((ii)\Rightarrow(i)\) Note that \(\lambda=1\) since \(-\lambda=CPT(-1_{S})=-CPT(1_{S})=-1\). Let \(A\in\mathcal{A}\) and consider \(f=1_{A}\). Then \[CPT(-f)=0-\int 1_{A}dv^{-}=-v^{-}(A)\text{ and }-CPT(f)=-\int 1_{A}dv^{+}=-v^{+}(A)\] Therefore \[CPT(-f)=-CPT(f)\Leftrightarrow v^{-}(A)=v^{+}(A).\] Since this must be true for all \(A\in\mathcal{A}\), \(v^{-}=v^{+}\) and the CPT functional is a Sipos integral. Proof of Theorem 5.: \((ii)\Rightarrow(i)\) We only prove A.4. Let \(\lambda>0\) be the coefficient of the CPT functional. Fix \(f\) and \(g\) s.t. \(f\in\mathcal{F}^{+}\) and \(g\in\mathcal{F}^{-}\) and s.t. \(supp(f)\cap supp(g)=\emptyset\). Suppose \(f\sim\alpha 1_{S}\) and \(g\sim\beta 1_{S}\). Note that \(\alpha\geq 0\) and \(\beta\leq 0\). Therefore \[CPT(f) =CPT(\alpha 1_{S})\Leftrightarrow CPT(f)=\alpha\] \[CPT(g) =CPT(\beta 1_{S})\Leftrightarrow CPT(g)=-\lambda\int(\beta 1_{S})^{-}dv^{-}=- \lambda(-\beta)=\lambda\beta.\] Moreover since \(f\) and \(g\) have opposite signs and have disjoint supports we have \[CPT(f+g)=CPT(f)+CPT(g)=\alpha+\lambda\beta.\] Now, if \(\alpha+\lambda\beta>0\), \(CPT((\alpha+\lambda\beta)1_{S})=\alpha+\lambda\beta\), and since CPT represents \(\succsim\), \(f+g\sim(\alpha+\lambda\beta)1_{S}\). If \(\alpha+\lambda\beta<0\), \(CPT\left(\frac{\alpha+\lambda\beta}{\lambda}1_{S}\right)=-\lambda\int\left( \frac{\alpha+\lambda\beta}{\lambda}\right)^{-}dv^{-}=-\lambda\frac{-\alpha- \lambda\beta}{\lambda}=\alpha+\lambda\beta\). Therefore \(f+g\sim\frac{\alpha+\lambda\beta}{\lambda}1_{S}\). \((i)\Rightarrow(ii)\) First, note that for all \(f\in\mathcal{F}^{+}\), \(f=f^{+}\) and for all \(f\in\mathcal{F}^{-}\), \(f=-f^{-}\). By A.1 and A.2, one can prove that for all \(f\in\mathcal{F}^{+}\) there exists a unique \(\alpha_{f^{+}}\geq 0\) s.t. \[f^{+}\sim\alpha_{f^{+}}1_{S}.\] Let \(\lambda>0\) be the constant given by Axiom A.4. Then again by A.1 and A.2 for all \(f\in\mathcal{F}^{-}\) there exists a unique \(\alpha_{-f^{-}}\leq 0\) s.t. \[-f^{-}\sim\left(\frac{\alpha_{-f^{-}}}{\lambda}\right)1_{S}.\] Define \(I:\mathcal{F}\rightarrow\mathbb{R}\) as \[I(f)=I(f^{+})+I(-f^{-})\] where \(I(f^{+})=\alpha_{f^{+}}\) and \(I(-f^{-})=\alpha_{-f^{-}}\). Note that \(f^{+}\sim I(f^{+})1_{S}\) and \(-f^{-}\sim\left(\frac{I(-f^{-})}{\lambda}\right)1_{S}\). Moreover \(I(1_{S})=1\) by Monotonicity. We will prove that \(I\) satisfies the conditions of Theorem 2 and it is therefore a CPT functional. **Step 1**.: _Fix \(f\in\mathcal{F}\), then \(I(f)\geq 0\) implies \(f\sim I(f)1_{S}\) and \(I(f)<0\) implies \(f\sim\frac{I(f)}{\lambda}1_{S}\)._ Proof.: Let \(f\in\mathcal{F}\). * Case 1: \(I(f)\geq 0\). Note that \(f=f^{+}+(-f^{-})\) and by definition \(f^{+}\sim I(f^{+})1_{S}\) and \(-f^{-}\sim\frac{I(-f^{-})}{\lambda}1_{S}\). Moreover \(I(f^{+})+\lambda\frac{I(-f^{-})}{\lambda}=I(f)\geq 0\), hence by A.4 and by the definition of \(I(f)\) \[f=f^{+}+(-f^{-})\sim\left(I(f^{+})+\lambda\frac{I(-f^{-})}{\lambda}\right)1_{ S}=I(f)1_{S}.\] * Case 2: \(I(f)<0\).
Then reasoning as before and applying A.4 we get \[f=f^{+}+(-f^{-})\sim\left(\frac{I(f^{+})+\lambda\frac{I(-f^{-})}{\lambda}}{ \lambda}\right)1_{S}=\left(\frac{I(f^{+})+I(-f^{-})}{\lambda}\right)1_{S}= \frac{I(f)}{\lambda}1_{S}.\] **Step 2**.: \(I\) _is monotone._ Proof.: Let \(f,g\in\mathcal{F}\) be such that \(f\geq g\). Then \(f^{+}\geq g^{+}\) and \(-f^{-}\geq-g^{-}\). Then by Monotonicity \(f^{+}\succsim g^{+}\) and \(-f^{-}\succsim-g^{-}\). Then by Step 1, \(I(f^{+})1_{S}\sim f^{+}\succsim g^{+}\sim I(g^{+})1_{S}\) and \(\frac{I(-f^{-})}{\lambda}1_{S}\sim-f^{-}\succsim-g^{-}\sim\frac{I(-g^{-} )}{\lambda}1_{S}\). Monotonicity implies \(I(f^{+})\geq I(g^{+})\) and \(I(-f^{-})\geq I(-g^{-})\). Summing up we obtain \(I(f)\geq I(g)\). **Step 3**.: \(I\) _satisfies comonotonic additivity over \(\mathcal{F}^{+}\) and \(\mathcal{F}^{-}\)._ Proof.: We prove comonotonic additivity over \(\mathcal{F}^{-}\), the proof for \(\mathcal{F}^{+}\) can be done in a similar way. Take \(f,g\in\mathcal{F}^{-}\) s.t. \(f\) and \(g\) are comonotone. By Step 1, \(f\sim\frac{I(f)}{\lambda}1_{S}\) and \(g\sim\frac{I(g)}{\lambda}1_{S}\). Since constant acts are comonotone with all other acts and \(\frac{I(f)}{\lambda},\frac{I(g)}{\lambda}\leq 0\), by A.3 one gets \(f+g\sim\frac{I(f)}{\lambda}1_{S}+g\) and \(g+\frac{I(f)}{\lambda}1_{S}\sim\frac{I(g)}{\lambda}1_{S}+\frac{I(f)}{\lambda}1 _{S}\). Since \(f+g\in\mathcal{F}^{-}\), by Step 1, \(f+g\sim\frac{I(f+g)}{\lambda}1_{S}\). Therefore \(\frac{I(f+g)}{\lambda}1_{S}\sim\left(\frac{I(f)}{\lambda}+\frac{I(g)}{\lambda} \right)1_{S}\), and Monotonicity implies \(I(f+g)=I(f)+I(g)\). **Step 4**.: _For all \(f\in\mathcal{F}^{+(-)}\) and \(g\in\mathcal{F}^{-(+)}\) s.t. \(supp(f)\cap supp(g)=\emptyset\), \(I(f+g)=I(f)+I(g)\)._ Proof.: Fix \(f\in\mathcal{F}^{+}\) and \(g\in\mathcal{F}^{-}\) s.t. \(supp(f)\cap supp(g)=\emptyset\). Define \(h=f+g\) and note that \(h^{+}=f\) and \(-h^{-}=g\). Therefore by definition of \(I\), \(I(f+g)=I(h)=I(h^{+})+I(-h^{-})=I(f)+I(g)\). **Step 5**.: \(I\) _represents \(\succsim\) over \(\mathcal{F}\) (i.e. \(f\succsim g\Leftrightarrow I(f)\geq I(g)\))._ Proof.: Fix \(f,g\in\mathcal{F}\). Then we have to consider 4 cases. * Case 1: \(I(f),I(g)\geq 0\). Using Step 1 and Monotonicity \(I(f)1_{S}\sim f\succsim g\sim I(g)1_{S}\Leftrightarrow I(f)\geq I(g)\). * Case 2: \(I(f),I(g)\leq 0\). Using Step 1 and Monotonicity \(\frac{I(f)}{\lambda}1_{S}\sim f\succsim g\sim\frac{I(g)}{\lambda}1_{S} \Leftrightarrow I(f)\geq I(g)\), since \(\lambda>0\). * Case 3: \(I(f)\geq 0>I(g)\). Using Step 1 and Monotonicity \(I(f)1_{S}\sim f\succsim g\sim\frac{I(g)}{\lambda}1_{S}\Leftrightarrow I(f) \geq I(g)\). Note that in this case we cannot have \(g\succsim f\). * Case 4: \(I(g)\geq 0>I(f)\). This is the same as Case 3. Since \(I(1_{S})=1\), Steps 2, 3 and 4 prove that \(I\) satisfies condition \((i)\) of Theorem 2 and therefore \(I\) is a CPT functional. Moreover Step 5 shows that \(I\) represents \(\succsim\). Therefore the proof is complete. Proof of Theorem 6.: \((i)\Rightarrow(ii)\) Note that A.3' and A.3" imply A.3. Hence Theorem 5 applies and \(I\) is represented by a CPT functional. It is left to show that \(v^{+}\) and \(v^{-}\) are convex. We only show convexity of \(v^{-}\). Fix \(A,B\in\mathcal{A}\) and note that \(CPT(-1_{A})=-\lambda v^{-}(A)=CPT(-v^{-}(A)1_{S})\) and a similar statement holds for \(B\in\mathcal{A}\). Therefore \(-1_{A}\sim-v^{-}(A)1_{S}\) and \(-1_{B}\sim-v^{-}(B)1_{S}\).
Since \(-1_{B}\) is comonotonic with \(-v^{-}(A)1_{S}\), by A.3" \(-v^{-}(A)1_{S}-1_{B}\succsim-1_{A}-1_{B}\). Moreover since \(-v^{-}(A)1_{S}\) is comonotonic with both \(-1_{B}\) and \(-v^{-}(B)1_{S}\) by A.3" we get \(-1_{B}-v^{-}(A)1_{S}\sim-v^{-}(B)1_{S}-v^{-}(A)1_{S}\). Therefore \[-v^{-}(B)1_{S}-v^{-}(A)1_{S}\sim-1_{B}-v^{-}(A)1_{S}\succsim-1_{A}-1_{B}\] Note that \(-1_{A}-1_{B}=-1_{A\cup B}-1_{A\cap B}\) and since \(1_{A\cup B}\) and \(1_{A\cap B}\) are comonotonic, \[CPT(-1_{A\cup B}-1_{A\cap B}) =-\int\lambda(-1_{A\cup B}-1_{A\cap B})^{-}dv^{-}\] \[=-\lambda\left(\int 1_{A\cup B}dv^{-}+\int 1_{A\cap B}dv^{-}\right)\] \[=-\lambda[v^{-}(A\cup B)+v^{-}(A\cap B)].\] Therefore \(-\lambda[v^{-}(A)+v^{-}(B)]=CPT(-v^{-}(A)1_{S}-v^{-}(B)1_{S})\geq CPT(-1_{A \cup B}-1_{A\cap B})=-\lambda[v^{-}(A\cup B)+v^{-}(A\cap B)]\) which implies \(v^{-}(A)+v^{-}(B)\leq v^{-}(A\cup B)+v^{-}(A\cap B)\), i.e. \(v^{-}\) is convex. \((ii)\Rightarrow(i)\) Left to the reader. Proof of Theorem 7.: \((i)\Rightarrow(ii)\) Since \(\succsim\) satisfies A.1, A.2, A.3 and A.4, it can be represented by a CPT functional \(I\) by Theorem 5. Hence for all \(f\in\mathcal{F}\), \(f\sim I(f)1_{S}\) and \(-f\sim I(-f)1_{S}\). Notice that by A.5 one has also \(-f\sim-I(f)1_{S}\) and hence A.2 implies \(I(-f)=-I(f)\). By Theorem 3, \(I\) is a Sipos integral. \((ii)\Rightarrow(i)\) Left to the reader.
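Although not part of the argument above, the conjugation identity of Lemma 2, used repeatedly throughout this appendix, is easy to sanity-check numerically on a finite state space. The following is a minimal Python sketch of ours: the capacity \(v(A)=(|A|/|S|)^{2}\) is an arbitrary illustrative choice, and the `choquet` function implements the asymmetric Choquet integral via the standard layer formula.

```python
import itertools

S = (0, 1, 2)

def choquet(f, v):
    """Asymmetric Choquet integral of f : S -> R w.r.t. a capacity v,
    via the layer formula x_(1)*v(S) + sum_i (x_(i) - x_(i-1)) * v({f >= x_(i)})."""
    xs = sorted(set(f.values()))
    total = xs[0] * v[frozenset(S)]
    for lo, hi in zip(xs, xs[1:]):
        total += (hi - lo) * v[frozenset(s for s in S if f[s] >= hi)]
    return total

def conjugate(v):
    """Conjugate capacity v_hat(A) = 1 - v(A^c)."""
    return {A: 1 - v[frozenset(S) - A] for A in v}

# An illustrative (non-additive) capacity: v(A) = (|A|/|S|)^2.
v = {frozenset(A): (len(A) / len(S)) ** 2
     for r in range(len(S) + 1) for A in itertools.combinations(S, r)}

f = {0: 0.3, 1: 1.0, 2: 2.5}            # a nonnegative simple act, f in F^+
neg_f = {s: -x for s, x in f.items()}
print(-choquet(f, v), choquet(neg_f, conjugate(v)))   # both -0.7777...
```

Replacing \(v\) by any other monotone set function normalized to \(v(\emptyset)=0\) and \(v(S)=1\) leaves the check intact, since Lemma 2 requires nothing beyond these properties.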
2306.02858
Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
We present Video-LLaMA a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video. Video-LLaMA bootstraps cross-modal training from the frozen pre-trained visual and audio encoders and the frozen LLMs. Unlike previous works that complement LLMs to process the visual or audio signals only, Video-LLaMA enables video comprehension by tackling two challenges: (1) capturing the temporal changes in visual scenes, (2) integrating audio-visual signals. To counter the first challenge, we propose a Video Q-former to assemble a pre-trained image encoder into our video encoder and introduce a video-to-text generation task to learn video-language correspondence. For the second challenge, we leverage ImageBind, a universal embedding model aligning multiple modalities, as the pre-trained audio encoder and introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module. To align the output of both visual and audio encoders with LLM's embedding space, we first train Video-LLaMA on massive video/image-caption pairs and then tune our model with visual-instruction datasets of moderate amount but higher quality. We found Video-LLaMA shows the ability to perceive and comprehend video content and generate meaningful responses grounded in the visual and auditory information presented in the videos.
Hang Zhang, Xin Li, Lidong Bing
2023-06-05T13:17:27Z
http://arxiv.org/abs/2306.02858v4
# Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ###### Abstract We present Video-LLaMA, a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video. Video-LLaMA bootstraps cross-modal training from the frozen pre-trained visual & audio encoders and the frozen LLMs. Unlike previous vision-LLMs that focus on static image comprehensions such as MiniGPT-4 (Zhu et al., 2023) and LLaVA (Liu et al., 2023), Video-LLaMA mainly tackles two challenges in video understanding: (1) capturing the temporal changes in visual scenes, (2) integrating audio-visual signals. To counter the first challenge, we propose a Video Q-former to assemble the pre-trained image encoder into our video encoder and introduce a video-to-text generation task to learn video-language correspondence. For the second challenge, we leverage ImageBind (Girdhar et al., 2023), a universal embedding model aligning multiple modalities as the pre-trained audio encoder, and introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module. To align the output of both visual & audio encoders with LLM's embedding space, we train Video-LLaMA on massive video/image-caption pairs as well as visual-instruction-tuning datasets of moderate amount but higher quality. We found Video-LLaMA showcases the ability to perceive and comprehend video content, generating meaningful responses that are grounded in the visual and auditory information presented in the videos. This highlights the potential of Video-LLaMA as a promising prototype for audio-visual AI assistants. ## 1 Introduction Large Language Models (LLMs) (Chowdhery et al., 2022; Bai et al., 2022; OpenAI, 2023) trained on massive amounts of textual data are the most impressive breakthroughs of AI since 2022. LLM-enabled general-purpose AI assistants have demonstrated a remarkable capability of understanding and following user intentions and instructions. Despite their success, most users are only allowed to interact with LLMs via text-based conversations. Obviously, text-only human-computer interaction is suboptimal for a powerful AI assistant. In order to further explore the potential of LLMs, many researchers attempt to endow LLMs with visual understanding capability (Tsimpoukelli et al., 2021; Alayrac et al., 2022; Wang et al., 2022, 2023; Li et al., 2022; Wang et al., 2022; Li et al., 2023; Xu et al., 2023; Huang et al., 2023; Zhang et al., 2023). Among these efforts, BLIP-2 (Li et al., 2023) bootstraps vision-language pre-training from frozen pre-trained image encoders and frozen language decoders. It has received increasing attention for its compute-efficiency and the flexibility of leveraging readily-available instruction-following LLMs (e.g., FLAN-T5 (Chung et al., 2022) and Vicuna (Chiang et al., 2023)). Based on BLIP-2, Zhu et al. (2023); Liu et al. (2023); Ye et al. (2023) conduct initial attempts to introduce vision foundation models as plugins of LLMs to accommodate image input. In these frameworks, the BLIP-style cross-modal pre-training connects the LLMs well with vision foundation models. Moreover, the intrinsic property of each unimodal pre-trained model, especially the instruction-following capability of LLMs, is preserved during vision-language pre-training. Therefore, these works empower LLMs to support both text-based conversations and image-grounded conversations.
On the other hand, as another popular form of content on social media platforms, video has not yet been integrated into such chat systems. The reason probably lies in the difficulty of accurately understanding non-static visual scenes. Besides, mitigating the modality gap between video and text, which typically requires the processing of both visual signals and audio signals, is more challenging than that between image and text. In this work, to fill this gap, we investigate the possibility of building multi-modal LLMs that support the input of video and allow users to chat with computers around the user-uploaded video, which is usually composed of multiple video frames and audio. Instead of employing external perception models to convert visual/auditory signals to textual signals (Shen et al., 2023; Li et al., 2023), we choose to build an end-to-end model that can handle the data from multiple modalities within one single framework. Specifically, we adopt the idea of BLIP-2 (Li et al., 2023) to guarantee the efficiency of cross-modal pre-training. To explicitly capture the change of visual scenes in the video, we use a pre-trained visual encoder to separately compute frame representations. Then, we introduce a frame embedding layer to inject temporal information and a video Q-Former to generate visual query tokens. As for the audio signals from the video, we additionally leverage a pre-trained audio encoder as well as an audio Q-former to learn reasonable auditory query embeddings (see the right part of Figure 1). To align textual output with video, we devise multi-branch cross-modal pre-training to learn the vision-language correspondence and the audio-language correspondence. For vision-language correspondence, we first pre-train the vision-related components on a large-scale video caption dataset with a video-clips-to-text generation task. To enhance the understanding of static visual concepts, we also add image-caption data into this pre-training stage. Then, we further fine-tune these components on a video-based conversation dataset to execute visual instruction tuning. For the alignment between the audio encoder and language decoder, we further pre-train the audio-related components on an audio caption dataset with an audio-to-text generation task. For the audio-language correspondence, we leverage ImageBind (Girdhar et al., 2023) as an encoder, which performs exceptionally well in aligning different modalities to a common embedding space. Given the limited availability of audio-text data, we utilize vision-text data to train the audio-related components. These components learn to align the common embedding space provided by ImageBind with the embedding space of LLMs. Despite not being explicitly trained with audio-text data, Video-LLaMA exhibits a remarkable zero-shot audio understanding capability during inference. In summary, our contributions are as follows: \(\bullet\) We propose Video-LLaMA, a multi-modal large language model that achieves video-grounded conversations between humans and computers by connecting a language decoder with off-the-shelf unimodal pre-trained models. \(\bullet\) To empower LLMs with video understanding capability, we propose a multi-branch cross-modal pre-training framework to achieve both vision-language alignment and audio-language alignment. \(\bullet\) We open-source the entire codebase for pre-training and fine-tuning as well as the model weights of all the variants of Video-LLaMA5.
We also prepare the demos for video-grounded conversation67. Footnote 5: [https://github.com/DAMO-NLP-SG/Video-LLaMA](https://github.com/DAMO-NLP-SG/Video-LLaMA) Footnote 6: [https://huggingface.co/spaces/DAMO-NLP-SG/Video-LLaMA](https://huggingface.co/spaces/DAMO-NLP-SG/Video-LLaMA) Footnote 7: [https://modelscope.cn/studios/damo/video-llama/summary](https://modelscope.cn/studios/damo/video-llama/summary) ## 2 Method Video-LLaMA aims to empower frozen LLMs with the capability of understanding both visual and auditory content in videos. As shown in Figure 1, we design two branches, namely Vision-Language Branch and Audio-Language Branch, to respectively transform the video frames and audio signals into query representations that are compatible with the textual inputs of LLMs. In this section, we first introduce the overall architecture and the building blocks of each branch. Then, we delineate the procedures of the proposed multi-branch cross-modal pre-training and audio-visual instruction tuning. Figure 1: Overall architecture of Video-LLaMA. ### Architecture #### 2.1.1 Vision-Language Branch The Vision-Language Branch is designed for enabling the LLMs to understand visual inputs. As shown in the left part of Figure 1, it is composed of a frozen pre-trained image encoder to extract features from video frames, a position embedding layer to inject temporal information into video frames, a video Q-former to aggregate frame-level representations, and a linear layer to project the output video representations into the same dimension as the text embeddings of LLMs. Given a video consisting of \(N\) frames, the image encoder will first map each frame/image into \(K_{f}\) image embedding vectors, yielding video frame representations \(\mathbf{V}=[\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{N}]\) where \(\mathbf{v}_{i}\in\mathbb{R}^{K_{f}\times d_{f}}\) is the set of \(d_{f}\)-dimensional image embeddings corresponding to the \(i\)-th frame. Since the frame representations \(\mathbf{v}_{i}\) from the frozen image encoder are computed without considering any temporal information, we further apply position embeddings as the indicator of temporal information to the representations from different frames. Then, we feed the position-encoded frame representations to Video Q-former, which shares the same architecture as the Query Transformer (Q-Former) in BLIP-2 (Li et al., 2023a), to obtain \(k_{V}\) video embedding vectors of dimension \(d_{v}\) as the representation \(\hat{\mathbf{v}}\in\mathbb{R}^{k_{V}\times d_{v}}\) of the video. To adapt the video representations to the input of LLMs, we add a linear layer to transform the video embedding vectors into the video query vectors. The video query vectors are of the same dimension as the text embeddings of LLMs. In the forward pass, they will be concatenated to input text embeddings as a _video soft prompt_ and guide the frozen LLMs to generate text conditioned on the video content. As for the implementation of the Vision-Language Branch, we utilize the pre-trained vision component of BLIP-2 (Li et al., 2023a) as the frozen visual encoder, which includes a ViT-G/14 from EVA-CLIP (Fang et al., 2022) and a pre-trained Q-former. The remaining components, including the position embedding layer, Video Q-former, and Linear layer, are randomly initialized and optimized to connect the output of the frozen visual encoder to the frozen LLMs. #### 2.1.2 Audio-Language Branch To deal with the auditory content of the given video, we introduce the Audio-Language Branch.
Concretely, it consists of a pre-trained audio encoder to compute features given a short segment of original audio, a position embedding layer to inject temporal information into audio segments, an audio Q-former to fuse the features of different audio segments, and a linear layer to map the audio representation into the embedding space of LLMs. In practice, we utilize the pre-trained ImageBind (Girdhar et al., 2023) as the audio encoder. We first uniformly sample \(M\) segments of 2-second short audio clips from the video, then convert each 2-second audio clip into spectrograms using 128 mel-spectrogram bins. After obtaining the spectrogram list of input audio, the audio encoder will map each spectrogram into a dense vector. So the generated audio representation of the given video can be denoted as \(A=[a_{1},a_{2},...,a_{M}]\). Similar to Video Q-Former, the Audio Q-former injects temporal information by adding learnable positional embeddings to audio segments. It then generates fixed-length audio features by computing the interaction across the position-encoded audio segments. Audio Q-Former adopts the same architecture as Q-Former. It projects the variable-length audio representation list \(A\) into a fixed-length sequence \(\hat{\mathbf{A}}\in\mathbb{R}^{K_{a}\times d_{a}}\), where \(K_{a}\) is the number of audio embedding vectors and \(d_{a}\) is the dimension of each vector. Finally, we employ a linear layer to map audio features to the embedding space of the LLM. ### Multi-branch Cross-Modal Training We train the vision-language and audio-language branches separately. In the first stage, large-scale vision-caption datasets are used for training, and in the second stage, high-quality instruction-following datasets are used for fine-tuning. The image is treated as a one-frame video. #### 2.2.1 Training of Vision-Language Branch For pre-training the vision-language branch, we utilized Webvid-2M (Bain et al., 2021), a large-scale dataset of short videos with textual descriptions sourced from stock footage sites. Moreover, we employed the image caption dataset CC595k, which is sourced from CC3M (Sharma et al., 2018) and filtered by Liu et al. (2023). We adopt a video-to-text generation task during the pre-training stage, i.e., given the representation of a video, prompting the frozen LLM to generate the corresponding text description. We find that a significant portion of textual descriptions is insufficient to reflect the entire content of the videos. Therefore, the visual semantics in the videos are not fully aligned with the textual semantics in the video descriptions. Nevertheless, this stage aimed to utilize a vast amount of data and enable video features to contain as much visual knowledge as possible. We left the abilities of vision-text alignment and instruction-following for the next stage. After the pre-training stage, the model can generate content about information in the video, but its ability to follow instructions has decreased. Therefore, in the second stage, we fine-tuned the model using high-quality instruction data. We integrated the image-detail-description dataset from MiniGPT-4 (Zhu et al., 2023), the image-instruction dataset from LLaVA (Liu et al., 2023), and the video-instruction dataset from Video-Chat (Li et al., 2023b). After fine-tuning, Video-LLaMA exhibited remarkable abilities in following instructions and comprehending images and videos.
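Before turning to the training of the audio branch, the data flow of the Vision-Language Branch of Section 2.1.1 can be made concrete with a minimal PyTorch sketch. This is our illustration rather than the released implementation: the real Video Q-Former reuses the full BLIP-2 Q-Former stack, whereas a single cross-attention layer stands in for it here, and all dimensions (ViT feature width, Q-Former width, LLM embedding size, number of queries) are placeholder assumptions.

```python
import torch
import torch.nn as nn

class VisionLanguageBranchSketch(nn.Module):
    """Frame features -> temporal position embeddings -> Video Q-Former
    (approximated by cross-attention from learnable queries) -> linear
    projection into the LLM embedding space (the 'video soft prompt')."""

    def __init__(self, d_frame=1408, d_q=768, d_llm=4096,
                 n_query=32, max_frames=32):
        super().__init__()
        self.frame_pos = nn.Embedding(max_frames, d_frame)   # temporal info
        self.proj_in = nn.Linear(d_frame, d_q)
        self.video_query = nn.Parameter(torch.randn(n_query, d_q))
        self.cross_attn = nn.MultiheadAttention(d_q, 8, batch_first=True)
        self.proj_out = nn.Linear(d_q, d_llm)                # to LLM space

    def forward(self, frame_feats):
        # frame_feats: (B, N, K_f, d_frame) from the frozen image encoder
        B, N, K, _ = frame_feats.shape
        pos = self.frame_pos(torch.arange(N))                # (N, d_frame)
        x = frame_feats + pos[None, :, None, :]              # inject frame order
        x = self.proj_in(x.reshape(B, N * K, -1))            # flatten all frames
        q = self.video_query.expand(B, -1, -1)
        v_hat, _ = self.cross_attn(q, x, x)                  # (B, n_query, d_q)
        return self.proj_out(v_hat)                          # video soft prompt

# Two videos, 8 frames each, K_f = 32 patch embeddings per frame (all assumed).
sketch = VisionLanguageBranchSketch()
print(sketch(torch.randn(2, 8, 32, 1408)).shape)             # [2, 32, 4096]
```

The output tensor plays the role of \(\hat{\mathbf{v}}\) after the linear layer: a fixed number of query vectors in the LLM's embedding dimension, ready to be concatenated to the text embeddings in the forward pass.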
#### 2.2.2 Training of Audio-Language Branch Training the audio-language branch directly using audio-text data is highly challenging due to the rarity of such data. The objective of the learnable parameters in the audio-language branch is to align the output embedding of the frozen audio encoder with the embedding space of the LLM. Given the scarcity of audio-text data, we employ a workaround strategy to achieve this objective. ImageBind, which is used as our audio encoder, has a remarkable ability to align different modalities' embeddings to one common space, demonstrating impressive performance on cross-modal retrieval and generation tasks. In light of the scarcity of audio-text data and the abundance of visual-text data, we train the audio-language branch using visual-text data, following the same data and process as the vision branch. Thanks to the shared embedding space provided by ImageBind, Video-LLaMA exhibits the ability to comprehend audio during inference, even though the audio interface has never been trained on audio data. ## 3 Related Works **Large Language Models**: Large language models (LLMs) (Black et al., 2022; Scao et al., 2022; OpenAI, 2023; Tsimpoukelli et al., 2021) have demonstrated remarkable language understanding and reasoning abilities, enabling the generation of high-quality natural language text across various domains, including articles, conversations, stories, and poetry. LLMs have already sparked a technological revolution and have been widely applied in different applications. Moreover, a series of open-source large models, such as LLaMA (Touvron et al., 2023), BLOOM (Scao et al., 2022) and OPT (Zhang et al., 2022), have greatly promoted technological advancement and made outstanding contributions to the NLP community. Building upon the foundation of these impressive LLMs, researchers have further extended their capabilities and developed excellent models for various NLP tasks. Examples include Vicuna (Chiang et al., 2023) and Baize (Xu et al., 2023a). Our work is based on these powerful LLMs and provides plug-and-play plugins that empower them with the capability of comprehending both visual and auditory content in videos. **Multi-modal Large Language Models**: Researchers have been actively exploring the use of LLMs for processing multi-modal problems (Gao et al., 2023; Li et al., 2023b). Existing approaches can be categorized into two main groups. The first category involves employing LLMs as controllers and utilizing existing multi-modal models as tools. In this approach, when receiving the user's text instruction, the LLM recognizes the user's intention and makes decisions about which tools to call. It then generates comprehensive responses by incorporating the results obtained from these off-the-shelf multi-modal models. Examples include Visual ChatGPT (Wu et al., 2023), HuggingGPT (Shen et al., 2023), and AudioGPT (Huang et al., 2023a). The second category focuses on training fundamental large-scale multimodal models. The key idea of this line of work is to align other modal pre-trained models to textual LLMs. For instance, Flamingo (Alayrac et al., 2022a) utilizes a perceiver resampler and a gated cross-attention layer to connect a frozen image encoder and LLM. BLIP-2 (Li et al., 2023a) introduces a Q-Former to map learned image queries to the textual embedding space of LLMs. LLaVA (Liu et al., 2023), mPLUG-owl (Ye et al., 2023) and MiniGPT-4 (Zhu et al., 2023) develop instruction-following image-LLMs using image-instruction-following datasets.
Video-Chat (Li et al., 2023b) extends image encoders, enabling large models to understand visual content in videos. PandaGPT (Su et al., 2023) utilizes multimodal encoders from ImageBind, trained exclusively on image-instruction pairs, to enable large models to understand six modalities. Our work falls into the second category, where we train fundamental models to comprehend both the visual and auditory content in videos. ## 4 Limitations Although Video-LLaMA has demonstrated impressive abilities in understanding both visual and auditory content in videos, it is still an early-stage prototype and has some limitations, including: (1) Limited perception capacities: Video-LLaMA's performance is hindered by the quality and scale of the current training dataset. We are actively constructing a high-quality audio-video-text alignment dataset to enhance the model's perception capabilities. (2) Limited ability to handle long videos. Long videos (such as movies and TV shows) contain a large volume of information and impose higher demands on computational resources. This challenge remains a crucial issue that the research community is actively working to address. (3) Hallucination. Video-LLaMA inherits the hallucination problem from the frozen LLMs. Future advancements in more powerful LLMs are expected to alleviate this issue. We will continue to address these challenges and look forward to developing a more powerful language model for video understanding. ## 5 Examples In this section, we show some cases to demonstrate Video-LLaMA's multi-modal instruction-following capability in video/audio/image-grounded conversations. (1) Audio-visual integration perception ability. Figure 2 and Figure 3 show Video-LLaMA's unique ability to comprehend auditory and visual information simultaneously. The videos in both cases contain audio. In each conversation, we pose two questions related to visual and auditory content respectively. If the model could only receive one modality, it would be unable to answer both of these questions. However, we can observe that Video-LLaMA accurately responds to both visual and auditory questions in both cases. (2) The ability to perceive and understand static images. Figure 4 and Figure 5 show Video-LLaMA's ability to perceive and understand pictures. In Figure 4, not only does Video-LLaMA accurately describe the main content of the image, but it also associates it with the friendly interaction between a dog and a human. Figure 5 demonstrates Video-LLaMA's ability to understand the concept of "unusual" and specifically describe the unusual scene. (3) The ability of common-knowledge concept recognition. Figure 6 and Figure 7 demonstrate Video-LLaMA's remarkable capacity for recognizing common-knowledge concepts in visual signals. Video-LLaMA successfully recognizes famous landmarks and characters and can engage in common-sense question-answering. (4) The ability to capture temporal dynamics in videos. Figure 8 and Figure 9 illustrate the capability of Video-LLaMA to identify actions over time. It successfully describes the actions being performed by the girl and the direction of the moving boat.
Figure 2: A case in which Video-LLaMA answers the questions based on the background sound and visual content of the video.
Figure 3: A case showing Video-LLaMA's ability to identify the sound of applause in a video and infer the positive response from the audience. Additionally, it infers that a man is playing the saxophone on stage based on the visual content.
Figure 4: A case where Video-LLaMA provides a detailed description of the static image content.
Figure 5: A case demonstrating Video-LLaMA's ability to comprehend static images.
Figure 6: A case demonstrating Video-LLaMA's ability to recognize famous landmarks.
Figure 7: A case showing Video-LLaMA's ability to recognize renowned characters and participate in video-grounded question answering.
Figure 8: A case where Video-LLaMA provides a detailed description of the visual content in a dynamic video.
Figure 9: A case showing Video-LLaMA's ability to identify actions over time.
2304.05796
On diagrams of algebras
We present a proof of the formula (given in Lurie's Higher Algebra) for the operad governing diagrams of operad algebras. We believe that our proof corrects a flaw in the original argument.
Vladimir Hinich
2023-04-12T12:17:50Z
http://arxiv.org/abs/2304.05796v1
# On diagrams of algebras ###### Abstract We present a proof of the formula [L.HA], 2.4.3.18 for the operad governing \(K\)-diagrams of \(\mathcal{O}\)-algebras. The original proof, in our opinion, contained a gap. ## 1 Introduction This is a note about \(\infty\)-operads. In this note we will use the word "category" to denote \(\infty\)-categories and "operad" to denote an \(\infty\)-operad as defined by Lurie in [L.HA], Section 2. To work in a well-defined context, we decided to accept quasicategories as a model for \(\infty\)-categories; but all our constructions are presented in an \(\infty\)-categorical language, as it is described in [H.EY], Section 2, so that they make sense in any model. The term "conventional categories" stands for those categories whose spaces of morphisms are equivalent to sets. In this note \(\mathtt{Cat}\) denotes the category of small categories, \(\mathit{Fin}_{*}\) is the category of finite pointed sets and an operad is a functor \(p:\mathcal{O}\to\mathit{Fin}_{*}\) satisfying the standard properties of Definition 2.1.1.10 of [L.HA]. In particular, \(\mathtt{Com}=\mathit{Fin}_{*}\) is the operad for commutative algebras. The category of operads \(\mathtt{Op}\) is defined as the subcategory of \(\mathtt{Cat}_{/\mathit{Fin}_{*}}\), spanned by the operads, with the arrows preserving cocartesian liftings of the inerts. It can also be defined as a Bousfield localization as follows. Let \(\mathtt{Cat}^{+}_{/\mathit{Fin}_{*}^{\natural}}\) be the category of marked categories over \(\mathit{Fin}_{*}\) endowed with the standard marking (inert arrows are marked). Then \(\mathtt{Op}\) identifies with the full subcategory of \(\mathtt{Cat}^{+}_{/\mathit{Fin}_{*}^{\natural}}\) spanned by the operads with the inerts as the marked arrows. The full embedding \(R:\mathtt{Op}\rightarrow\mathtt{Cat}^{+}_{/\mathit{Fin}_{*}^{\natural}}\) admits a left adjoint \[L:\mathtt{Cat}^{+}_{/\mathit{Fin}_{*}^{\natural}}\rightarrow\mathtt{Op},\] so that \(\mathtt{Op}\) becomes the Bousfield localization of \(\mathtt{Cat}^{+}_{/\mathit{Fin}_{*}^{\natural}}\) with respect to the equivalence determined by \(L\) (called the operadic equivalence). ## 2 An operad for diagrams of algebras Let \(\mathcal{O}\) be an operad and \(K\) be a category. The functor assigning to any operad \(\mathcal{C}\) the category \(\operatorname{Fun}(K,\operatorname{\mathtt{Alg}}_{\mathcal{O}}(\mathcal{C}))\) is represented by an operad that we denoted by \(\mathcal{O}_{K}\) in [H.EY], 2.10.5(3). This is the operad governing \(K\)-diagrams of \(\mathcal{O}\)-algebras. By definition, \(\mathcal{O}_{K}\) is an operad endowed with an operadic equivalence \[\gamma:K\times\mathcal{O}\to\mathcal{O}_{K},\] where \(K\times\mathcal{O}\) is considered as a marked category over \(\mathit{Fin}_{*}^{\natural}\), where an arrow \((\alpha,\beta)\) in \(K\times\mathcal{O}\) is marked iff \(\alpha\) is an equivalence and \(\beta\) is inert. The operad \(\mathcal{O}_{K}\) is equivalent, by [L.HA], 2.4.3.18, to \(K^{\sqcup}\times_{\mathtt{Com}}\mathcal{O}\). Unfortunately, the proof of this important fact is based on an incorrect Remark 2.4.3.6. In Section 3 below we present an alternative proof of this equivalence. Recall the definition of \(K^{\sqcup}\), [L.HA], 2.4.3.1. Define \(\Gamma^{*}\) as the (conventional) category of pairs \((I_{*},i)\) with \(I_{*}\in\mathit{Fin}_{*}\) and \(i\in I\), with the arrows \((I_{*},i)\to(J_{*},j)\) given by arrows \(I_{*}\to J_{*}\) carrying \(i\) to \(j\).
The functor \(\pi:\Gamma^{*}\to\mathit{Fin}_{*}\) carries \((I_{*},i)\) to \(I_{*}\). For \(K\in\mathtt{Cat}\), we define \(K^{\sqcup}\) as a category over \(\mathtt{Com}=\mathit{Fin}_{*}\) representing the functor \[B\mapsto\operatorname{Map}(B\times_{\mathit{Fin}_{*}}\Gamma^{*},K).\] The fiber of \(K^{\sqcup}\) at \(I_{*}\in\mathtt{Com}\) is \(K^{I}\); an arrow in \(K^{\sqcup}\) over \(\alpha:I_{*}\to J_{*}\) from \(x:I\to K\) to \(y:J\to K\) is given by a collection of arrows \(x(i)\to y(j)\) for all pairs \((i,j)\in I\times J\) with \(\alpha(i)=j\). #### 2.1.1. In the case when \(K\) is a conventional category, \(K^{\sqcup}\) is a conventional operad. Its colors are the objects of \(K\) and an operation from \(\{x_{i}\}\) to \(y\) is given by a collection of arrows \(x_{i}\to y\). The composition of operations is defined in an obvious way. #### 2.1.2. The natural map \(\gamma:K\times\mathtt{Com}\to K^{\sqcup}\) is given by the projection \[(K\times\mathtt{Com})\times_{\mathtt{Com}}\Gamma^{*}=K\times\Gamma^{*}\to K.\] Given an operad \(\mathcal{O}\), we obtain, by the base change, the map \[\gamma_{\mathcal{O}}:K\times\mathcal{O}\to K^{\sqcup}\times_{\mathtt{Com}} \mathcal{O}. \tag{1}\] We see \(K\times\mathcal{O}\) as an object of the category \(\mathtt{Cat}^{+}_{/\mathit{Fin}_{*}^{\natural}}\). The aim of this note is the proof of the following result. **2.1.3 Theorem**.: \(\gamma_{\mathcal{O}}\) _is an operadic equivalence._ The proof of the theorem is presented in Section 3. First of all, we verify it in the case when \(K\) is a conventional category and \(\mathcal{O}\) is an operad in sets. Then we use the general fact that the (generalized) tensor product of operads commutes with colimits in each argument, and a presentation of categories (resp., operads) as colimits of conventional categories (resp., operads). ## 3 Proof of Theorem 2.1.3 ### Conventional setting Let \(K\) be a conventional category and let \(\mathcal{O}\) be an operad in sets. We will verify that the map \(\gamma_{\mathcal{O}}:K\times\mathcal{O}\to K^{\sqcup}\times_{\mathtt{Com}} \mathcal{O}\) is an operadic equivalence. The \(K\times\mathcal{O}\)-algebras in an operad \(\mathcal{C}\) are, by definition, functors \(K\to\mathtt{Alg}_{\mathcal{O}}(\mathcal{C})\). We have to verify that the fiber product \(K^{\sqcup}\times_{\mathtt{Com}}\mathcal{O}\) is the operad governing such algebras. In the case of conventional \(K,\mathcal{O}\) the governing operad \(\mathcal{O}_{K}\) can be easily described. Its colors are the pairs \((k,x)\) where \(k\in K\) and \(x\) is a color of \(\mathcal{O}\). An operation \[(k_{1},x_{1}),\ldots(k_{n},x_{n})\longrightarrow(k,x)\] is given by an \(n\)-ary operation \((x_{1},\ldots,x_{n})\longrightarrow x\) in \(\mathcal{O}\), together with a collection of arrows \(k_{i}\to k\) in \(K\). Therefore, \(\mathcal{O}_{K}\) in this case is precisely \(K^{\sqcup}\times_{\mathtt{Com}}\mathcal{O}\). ### The case \(\mathcal{O}=\mathtt{Com}\), \(K\) arbitrary We know that the map \(\gamma:K\times\mathtt{Com}\to K^{\sqcup}\) is an operadic equivalence if \(K\) is a conventional category. Since both the source and the target of \(\gamma\) carry colimits in the first argument into colimits in \(\mathtt{Cat}^{+}_{/\mathit{Fin}^{\natural}_{*}}\), and since any category is a colimit of conventional categories (even of categories \([n]\)), we deduce that \(\gamma\) is always an operadic equivalence. From now on we will write \(\mathtt{Com}_{K}\) for \(K^{\sqcup}\).
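As a toy illustration of 2.1.1 (ours, not needed in the sequel), take \(K=[1]=\{0\to 1\}\). The colors of \(\mathtt{Com}_{[1]}=[1]^{\sqcup}\) are \(0\) and \(1\), and the set of \(n\)-ary operations \((x_{1},\ldots,x_{n})\to y\) is \[\prod_{i=1}^{n}\operatorname{Hom}_{[1]}(x_{i},y),\] so such an operation exists, and is then unique, precisely when \(x_{i}\leq y\) for all \(i\): for instance, \((0,0,1)\to 1\) is an operation of \(\mathtt{Com}_{[1]}\), while there are no operations \((0,1)\to 0\).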
### The general case The operad \(\mathtt{Com}_{K}\) is flat by [H.EY], 2.12.5(3) 1, so the target of \(\gamma_{\mathcal{O}}\) preserves colimits in the second argument. Therefore, Theorem 2.1.3 follows from the following result saying that any operad \(\mathcal{O}\) can be presented, as a colimit in \(\mathtt{Cat}^{+}_{/\mathit{Fin}^{\natural}_{*}}\), of a functor with values in conventional operads. Footnote 1: Note that this claim is independent of parts 1,2 of loc. cit., which are based on the equivalence we establish in this paper. #### 3.3.1. Proposition _For any operad \(\mathcal{O}\in\mathtt{Op}\) there exists a diagram \(f:B\to\mathtt{Op}\) with values in conventional operads, and an equivalence_ \[R(\mathcal{O})=\operatorname{colim}(R\circ f) \tag{2}\] _in \(\mathtt{Cat}^{+}_{/\mathit{Fin}^{\natural}_{*}}\), where \(R:\mathtt{Op}\to\mathtt{Cat}^{+}_{/\mathit{Fin}^{\natural}_{*}}\) is the standard embedding._ We are going to construct the required presentation for \(\mathcal{O}\), using the equivalence of two descriptions of the category of operads presented in [HM]. ## 4 Proof of 3.3.1 ### Dendroidal description of Op The dendroidal description of \(\infty\)-operads was initiated and developed by Ieke Moerdijk and collaborators. It is based on the category of trees \(\Omega\). Dendroidal sets, models of \(\infty\)-operads, were initially defined as presheaves of sets on \(\Omega\). In this note we use the version presented in [HM]: we use the category of forests \(\Phi\) instead of \(\Omega\) and presheaves of spaces instead of the original [MW] presheaves of sets. #### 4.1.1. Denote by \(\mathtt{Op}(\mathtt{Set})\) the category of operads in sets. Recall [HM] that the category of forests \(\Phi\) is defined as the full subcategory of \(\mathtt{Op}(\mathtt{Set})\) spanned by the operads \(o(F)\) where \(F\) is a finite disjoint union of trees and \(o(F)\) denotes the colored operad in sets whose colors correspond to edges of \(F\) and operations are freely generated by the corollas of \(F\). The dendroidal version of the category of operads, \(\mathtt{DOp}\), is defined as the full subcategory of \(P(\Phi)\) spanned by the presheaves satisfying a certain analog of Segal and completeness conditions, see [HM], 2.2.3. The (Lurie) category of operads \(\mathtt{Op}\) is a subcategory of \(\mathtt{Cat}_{/Fin_{*}}\); the latter identifies with a full subcategory of \(P(\mathbb{F})\) where \(\mathbb{F}:=\Delta_{/Fin_{*}}\) denotes the category of simplices in \(\mathit{Fin}_{*}\). The functor \(\omega:\mathbb{F}=\Delta_{/Fin_{*}}\to\Phi\) assigns to a simplex \(A:[n]\to\mathit{Fin}_{*}\), interpreted as a level forest, the forest \(\omega(A)\) obtained by forgetting the level information; see the detailed explanation in [HM], 3.1.1. The following result is proven in [HM], Theorem 3.1.4. **4.1.2 Theorem**.: _The functor \(\omega^{*}:P(\Phi)\to P(\mathbb{F})\) carries \(\mathtt{DOp}\) to \(\mathtt{Op}\) and \(\lambda:=\omega^{*}|_{\mathtt{DOp}}:\mathtt{DOp}\to\mathtt{Op}\) is an equivalence._ By the original definition, \(\mathtt{Op}\) is defined as the subcategory of fibrous objects in \(\mathtt{Cat}_{/Fin_{*}}\). This implies that \(\mathtt{Op}\) is a Bousfield localization of \(\mathtt{Cat}^{+}_{/Fin_{*}^{\natural}}\): there is a localization functor \(L:\mathtt{Cat}^{+}_{/Fin_{*}^{\natural}}\to\mathtt{Op}\) admitting a fully faithful right adjoint \(R\).
As a consequence of 4.1.2, \(\mathtt{Op}\) is also presented as a Bousfield localization of \(P(\Phi)\): there is a localization functor \(L^{\prime}:P(\Phi)\to\mathtt{Op}\) admitting a fully faithful right adjoint \(R^{\prime}\). Note [HM], 2.3.1, that the category \(\mathtt{Cat}_{/Fin_{*}}\) is a localization of \(P(\mathbb{F})\), with a fully faithful right adjoint \(G\) whose image consists of presheaves satisfying a version of Segal and completeness properties. #### 4.2.1. Given an operad \(\mathcal{O}\in\mathtt{Op}\), its image \(R^{\prime}(\mathcal{O})\in P(\Phi)\) has a canonical presentation as the colimit of the composition \[\bar{\rho}:\Phi_{/\mathcal{O}}\overset{p}{\to}\Phi\overset{Y}{\to}P(\Phi),\] where \(p\) is the obvious projection and \(Y\) is the Yoneda embedding. This defines a colimit diagram \(\bar{\rho}^{\triangleright}:(\Phi_{/\mathcal{O}})^{\triangleright}\to P(\Phi)\) whose essential image belongs to \(R^{\prime}(\mathtt{Op})\). It therefore factors uniquely through \[\rho^{\triangleright}:(\Phi_{/\mathcal{O}})^{\triangleright}\to\mathtt{Op}.\] By construction, the composition \(R^{\prime}\circ\rho^{\triangleright}\) is a colimit diagram in \(P(\Phi)\). The functor \(\omega^{*}:P(\Phi)\to P(\mathbb{F})\), defined as in [HM], preserves colimits. Therefore, the composition \(\omega^{*}\circ R^{\prime}\circ\rho^{\triangleright}\) is a colimit diagram in \(P(\mathbb{F})\). We will deduce from this that the composition \(R\circ\rho^{\triangleright}:(\Phi_{/\mathcal{O}})^{\triangleright}\to\mathtt{Cat}^{+}_{/Fin^{\natural}_{*}}\) is also a colimit diagram. This is done as follows. The diagram (3), where \(G^{+}\) denotes the forgetful functor, is commutative by Theorem 4.1.2. Therefore, the composition of \(R\circ\rho^{\triangleright}\) with the forgetful functor \(G\circ G^{+}\) is equivalent to \(\omega^{*}\circ R^{\prime}\circ\rho^{\triangleright}\), so it defines a colimit diagram. We will deduce from this that \(R\circ\rho^{\triangleright}\) is a colimit diagram. We will be using the following simple observation. **4.2.2 Lemma**.: _Let \(G:\mathcal{C}\to\mathcal{D}\) be a full embedding having a left adjoint \(F:\mathcal{D}\to\mathcal{C}\). Let \(f:K^{\triangleright}\to\mathcal{C}\) be a diagram such that \(G\circ f\) is a colimit diagram. Then \(f\) is also a colimit diagram._ There is an adjoint pair of functors \[\mathrm{Fun}([1],\mathtt{Cat})\rightleftarrows\mathtt{Cat}^{+},\] describing \(\mathtt{Cat}^{+}\) as a Bousfield localization of \(\mathrm{Fun}([1],\mathtt{Cat})\), with the right adjoint functor carrying \(\mathcal{C}^{\natural}=(\mathcal{C},\mathcal{C}^{\circ})\) to the embedding \(\mathcal{C}^{\circ}\to\mathcal{C}\), and the left adjoint carrying \(\mathcal{C}^{\prime}\to\mathcal{C}\) to the marked category \((\mathcal{C},\mathcal{C}^{\circ})\) where \(\mathcal{C}^{\circ}\) is spanned by all arrows generated by equivalences and the images of arrows in \(\mathcal{C}^{\prime}\). This gives a recipe for calculating colimits in \(\mathtt{Cat}^{+}\): one calculates a colimit in \(\mathrm{Fun}([1],\mathtt{Cat})\) and then applies the localization functor. Thus, in order to prove that \(f:K^{\triangleright}\to\mathtt{Cat}^{+}\) is a colimit diagram in \(\mathtt{Cat}^{+}\), it is sufficient to verify that the compositions \(p_{i}\circ f:K^{\triangleright}\to\mathtt{Cat}\) are colimit diagrams, where \(p_{0},p_{1}:\mathtt{Cat}^{+}\to\mathtt{Cat}\) carry \((\mathcal{C},\mathcal{C}^{\circ})\) to \(\mathcal{C}^{\circ}\) and \(\mathcal{C}\), respectively.
#### 4.2.3. End of the proof We know that the composition of \(R\circ\rho^{\triangleright}\) with \(G\circ G^{+}\) is a colimit diagram. By Lemma 4.2.2 this implies that the composition \(G^{+}\circ R\circ\rho^{\triangleright}\) is a colimit diagram. This means that \(p_{1}\circ R\circ\rho^{\triangleright}\) is a colimit diagram. It remains to verify that \(p_{0}\circ R\circ\rho^{\triangleright}\) is a colimit diagram. The inert part of \(R(\lambda(\mathcal{O}))\), for any \(\mathcal{O}\in\mathsf{DOp}\), is canonically determined by the space \(\mathcal{O}(\eta)\) where \(\eta\in\Phi\) is the unit tree. Evaluation at \(\eta\), \(\eta^{*}:P(\Phi)\to\mathcal{S}\), carries the colimit diagram \(\bar{\rho}^{\triangleright}\) to a colimit diagram of spaces, which obviously yields a colimit diagram in \(\mathtt{Cat}_{/Fin_{*}}\) equivalent to \(p_{0}\circ R\circ\rho^{\triangleright}\). This proves Proposition 3.3.1.
2303.01231
Robust Hicksian Welfare Analysis under Individual Heterogeneity
Welfare effects of price changes are often estimated with cross-sections; these do not identify demand with heterogeneous consumers. We develop a theoretical method addressing this, utilizing uncompensated demand moments to construct local approximations for compensated demand moments, robust to unobserved preference heterogeneity. Our methodological contribution offers robust approximations for average and distributional welfare estimates, extending to price indices, taxable income elasticities, and general equilibrium welfare. Our methods apply to any cross-section; we demonstrate them via UK household budget survey data. We uncover an insight: simple non-parametric representative agent models might be less biased than complex parametric models accounting for heterogeneity.
Sebastiaan Maes, Raghav Malhotra
2023-03-01T10:57:56Z
http://arxiv.org/abs/2303.01231v3
# Price Changes and Welfare Analysis: ###### Abstract Measuring the welfare impact of price changes on consumers is pivotal in economic analyses. Researchers often measure these impacts with cross-sectional data, where every consumer is observed only once. The representative agent (RA) approach, which assumes all observations stem from a single agent, may lead to biased estimates when agents' preferences are heterogeneous. We show how to use the higher moments of demand to improve these estimates. In fact, the variance alone captures much of the bias in the RA approach. Our approach also enables inference on the distribution of welfare changes. We then leverage our approach to obtain conditions moments of demand must satisfy to arise from a population of utility maximizers and deliver a characterization of rationality for the two-good case. Using the UK Household Budget Survey, we apply our methodology to estimate the welfare impact of a 10% transport price increase and find that the RA approach understates the welfare impact by 27.2%. **Keywords**: nonparametric welfare analysis, individual heterogeneity, compensating variation, exact consumer surplus, deadweight loss **JEL classification**: C14, C31, D11, D12, D63, H22, I31 Introduction Measuring the welfare impact of price changes on consumers is crucial in many settings, for example, for policy evaluation of tax reforms and trade liberalization. The ideal way to estimate this impact would be a long panel on individuals' consumption choices. Long panel data can be used to estimate individual demand functions and preferences and identify the welfare impact.1 However, the data typically available to measure such impacts takes the form of cross-sections, meaning it is only possible to observe one consumption bundle for every consumer. This presents a theoretical challenge since only a single point of individuals' entire demand function is observed. Footnote 1: Note that for a single individual, the demand function pins down the utility function, making such computations possible (Hurwicz and Uzawa, 1971). The standard representative agent (RA) approach assumes that all observations come from the same individual (e.g., see Hausman, 1981; Vartia, 1983). Under this assumption, the average demand function coincides with the RA. This approach works well if the data comes from individuals with similar preferences. However, it might misstate welfare impacts when there is considerable preference heterogeneity.2 Footnote 2: Allowing for heterogeneous preferences is essential in empirical applications since traditional microeconometric models typically only explain a small part of the variation in consumer demand. Moreover, interpreting average demand as a representative consumer is only justified under restrictive assumptions (Jerison, 1994; Lewbel, 2001). We present a method to improve upon the RA approach when only cross-sectional data is available. We make a methodological contribution by deriving the relationship between the conditional moments of demand and the Slutsky equation.3 This allows us to use the information in cross-sectional data about income effects. Knowledge of income effects can be used for welfare calculations, as they show how much individuals need to be compensated for a price change. Footnote 3: The moments of demand are conditioned on prices and income. 
The techniques developed in this paper can also be used to test stochastic rationalizability.4 We can characterize rationalizable cross-sectional distributions of demand locally in the two-good case. Our method is computationally feasible, and we use it to construct a semi-decidable test of rationality. With more than two goods, we can test negative semidefiniteness of compensated demand but not symmetry. Footnote 4: See McFadden and Richter (1991) and McFadden (2005) for a thorough treatment of stochastic rationalizability. Perhaps surprisingly, we find that even just the first and second moments of demand can be used to construct tests of rationality and provide tight estimates of average welfare changes. By contrast, the first conditional moment of demand carries no empirical content locally (for a detailed review, see Rizvi, 2006). Returning to the analysis of welfare impacts, we show that one problem with the RA approach is that it weights all individuals' marginal propensity to consume equally. Consider two individuals who have different demands and different marginal propensities to consume. Correctly computing the welfare impact involves weighting each individual's marginal propensity by the amount they demand. Thus, the approach may produce substantial bias in the welfare impact. In practice, we find that a large amount of the bias incurred by the RA approach is captured by the covariance between the amount demanded and the marginal propensity to consume, which allows us to (locally) re-weight the marginal propensities. Under standard assumptions, this covariance can be estimated using the variance of demand, which can, in turn, be estimated from cross-sectional data. It delivers the average direction of the bias caused by the RA approach. In other words, the RA approach assumes this covariance to be zero. We show that our bias correction can improve welfare estimates significantly, especially when the analyst has no accurate a priori knowledge of the magnitude of the income effects. Besides average welfare, our results also enable inference on the distribution of welfare under general forms of preference heterogeneity. This enables a distributional assessment of the impact of price changes. To demonstrate our approach's usefulness, we estimate the effect of a 10% increase in transport prices on consumer welfare. We collect data on households' consumption bundles and income from 14 waves of the UK Household Budget Survey (2006-2019), and on prices from the Office for National Statistics. Our results suggest that the RA approach significantly underestimates the welfare impact, as our estimate of the welfare impact is 27.2% higher. Moreover, this bias is larger for individuals with low disposable income. We perform a similar exercise for food and housing and find even larger effects for the latter. Related Literature. The literature on the RA approach begins with Hausman (1981) and Vartia (1983). Hausman and Newey (1995) obtain point estimates for a representative consumer using nonparametric regression. Foster and Hahn (2000) and Blundell, Browning, and Crawford (2003) give conditions under which these point estimates are first-order approximations to the true welfare impact. More recently, Hausman and Newey (2016) show that the average welfare impact is not point-identified from cross-sectional data. However, they demonstrate that if income effects are bounded, observationally equivalent models' average welfare estimates are close.
They show how to compute worst-case bounds on these effects.5 We strengthen their insight that this non-identification result generally has limited empirical consequence for welfare analysis by providing the best possible point estimates and tightening their bounds. Footnote 5: While their method is robust, it can lead to wide bounds when the analyst has no prior knowledge of the magnitude of the income effect. Several papers attempt to provide bounds that account for preference heterogeneity in this tradition. Making stronger assumptions on preferences, Schlee (2007) shows that the estimates from the RA approach can act as an upper bound for the true value. Other papers provide bounds by means of revealed preference inequalities. Exploiting the weak axiom of stochastic revealed preference, Cosaert and Demuynck (2018) derive bounds for a sample of heterogeneous consumers observed repeatedly. Kitamura and Stoye (2019) carry out a similar analysis in the case of random utilities. Chambers and Echenique (2021) provide bounds by characterizing allocations which cannot be rejected as Pareto optimal. Kang and Vasserman (2022) study settings where only a few aggregate demand bundles are observed and assess the additional power that assumptions relating to the curvature of demand provide. These bounding approaches typically deliver wide bounds, which might limit their usefulness for policy analysis. Hoderlein and Vanhems (2018) deliver point estimates under the assumption that demand is monotonic in scalar unobserved heterogeneity. The identifying assumption is restrictive, however, as it implies that the relative position of an individual in the conditional distribution of demand is unchanged when prices or income change. This is unrealistic when the marginal propensity to consume varies widely across individuals. Moreover, their results are only applicable to settings with two goods. In the discrete choice literature, Dagsvik and Karlstrom (2005), de Palma and Kilani (2011), and Bhattacharya (2015, 2018) show that the distribution of the compensating variation can be written in terms of choice probabilities. These choice probabilities are point-identified from cross-sectional data, even when heterogeneity is unrestricted.6 Footnote 6: However, if choice is ordered, identification breaks down due to a lack of relative price variation. Since continuous choice under a linear budget constraint can be seen as a limiting case of ordered discrete choice, this finding is consistent with the non-identification result in Hausman and Newey (2016). Our results on stochastic rationalizability are related to the literature that derives observable restrictions on demand. In the many-good case, Hoderlein and Stoye (2014, 2015) and Dette, Hoderlein, and Neumeyer (2016) derive and test restrictions on marginal quantiles of demand. In a related exercise, Hoderlein (2011) uses techniques similar to ours to bound the proportion of individuals in a population who could satisfy rationality. An advantage of our approach is that it can also be employed when researchers do not observe the entire demand distribution but only some coarse moments.7 Kitamura and Stoye (2018) provide tests based on revealed preference inequalities for finitely many demand distributions at different prices. By contrast, our results assume differentiable demands but are valid at the population level. Footnote 7: Moreover, our moment-based results scale naturally to the many-good case, whereas the quantile-based approach does not.
We also view our results as challenging the intuition behind the Sonnenschein-Mantel-Debreu theorem (Sonnenschein, 1973; Mantel, 1974; Debreu, 1974), which suggests that rationality imposes no restrictions on aggregate demand.8 In the specific case addressed by Chiappori and Ekeland (1999), where the authors fix nominal incomes and vary prices, we find that rationality imposes restrictions on higher demand moments.9 Importantly, we show that just the first two moments of demand already contain empirical content.

Footnote 8: Conditions on the variance of demand (the second moment) can guarantee that aggregate demand obeys the so-called _law of demand_; we tackle the inverse problem.

## 2 Illustrative Example

The aim of this paper is to measure the welfare impact of a price change, specifically the compensating variation (CV). Assume that preferences in a population are indexed by \(\omega\in\Omega\) and drawn from a distribution \(F\). A researcher observes the consumption bundles (\(q_{i}\)) of a sample of individuals \(i=1,\ldots,k\), and the prices of the goods they buy (\(p_{i}\)). The researcher observes each individual's choices only once. Note that the researcher can infer individuals' income levels from their consumption and the prices they face (\(y_{i}=q_{i}\cdot p_{i}\)). We illustrate the intuition behind our results by considering the setting where both uncompensated and compensated demand are linear in price.

Individual Welfare. Let \(q^{\omega}(p,y)\) denote uncompensated demand and \(h^{\omega}(p,u)\) compensated demand for a given type \(\omega\), where \(u\) denotes utility. Assume that both demands are linear in price. The welfare impact of a price change from \(p_{0}\) to \(p_{1}\) at income \(y\) is measured by the CV:

\[CV^{\omega}(p_{0},p_{1},y)=\int_{p_{0}}^{p_{1}}h^{\omega}(p,u)dp.\]

Because \(h^{\omega}(p,u)\) is linear, we can rewrite the CV as follows:

\[\begin{split} CV^{\omega}(p_{0},p_{1},y)&=\int_{p_{0}}^{p_{1}}\left[h^{\omega}(p_{0},u)+(p-p_{0})\frac{\partial h^{\omega}(p_{0},u)}{\partial p}\right]dp\\ &=\Delta p\,h^{\omega}(p_{0},u)+\frac{(\Delta p)^{2}}{2}\frac{\partial h^{\omega}(p_{0},u)}{\partial p}\end{split} \tag{1}\]

where \(\Delta p=p_{1}-p_{0}\). Observe that \(h^{\omega}(p_{0},u)=q^{\omega}(p_{0},y)\). Moreover, Slutsky's equation allows us to decompose the change in compensated demand into two terms, the substitution effect and the income effect (IE):

\[\underbrace{\frac{\partial h^{\omega}(p,u)}{\partial p}}_{\text{substitution effect (SE)}}=\underbrace{\frac{\partial q^{\omega}(p,y)}{\partial p}}_{\text{price effect (PE)}}+\underbrace{q^{\omega}(p,y)\frac{\partial q^{\omega}(p,y)}{\partial y}}_{\text{income effect (IE)}}. \tag{2}\]

Hence we can rewrite (1) as follows:

\[CV^{\omega}(p_{0},p_{1},y)=\Delta p\,\underbrace{q^{\omega}(p_{0},y)}_{\text{initial demand}}+\frac{(\Delta p)^{2}}{2}\left[\underbrace{\frac{\partial q^{\omega}(p_{0},y)}{\partial p}}_{\text{price effect (PE)}}+\underbrace{q^{\omega}(p_{0},y)\frac{\partial q^{\omega}(p_{0},y)}{\partial y}}_{\text{income effect (IE)}}\right]. \tag{3}\]

The RA Approach. We assume the analyst has a large enough sample to estimate \(M_{1}(p,y)=\mathbb{E}_{\omega}[q^{\omega}\mid p,y]\) as a function of price and income. Suppose there were a single preference type \(\omega^{*}\); its demand \(q^{\omega^{*}}(p,y)\) would equal \(M_{1}(p,y)\).
Making this assumption lets us compute the hypothetical welfare change \(CV^{\omega^{*}}(p_{0},p_{1},y)\) from Expression (3):

\[\begin{split}\mathbb{E}[CV^{\omega}(p_{0},p_{1},y)]&=CV^{\omega^{*}}(p_{0},p_{1},y)\\ &=\Delta pM_{1}+\frac{(\Delta p)^{2}}{2}\left[\frac{\partial M_{1}}{\partial p}+M_{1}\frac{\partial M_{1}}{\partial y}\right]\end{split} \tag{4}\]

The RA approach assumes there is a representative agent \(\omega^{*}\) and uses (4) to estimate the welfare impact of a price change.10

Footnote 10: In this approach, the variation in demand conditional on prices and income is thought to be generated by measurement error. See, for example, Hausman and Newey (1995).

Correcting for Preference Heterogeneity. The issue with the RA approach is that it fails to account for preference heterogeneity: individuals may differ substantially in their unobserved preferences at every price and income level in the data. As we will see, if there is significant heterogeneity in the population's income responses, the RA approach may lead to significantly biased estimates. Lemma 1, which follows from Theorem 1 in the next section, quantifies this bias.

**Lemma 1**.: In the linear case, the bias in the RA approach is:

\[\overline{CV}^{*}-\overline{CV}_{RA}=\frac{(\Delta p)^{2}}{2}\text{Cov}\left(q^{\omega},\frac{\partial q^{\omega}}{\partial y}\right).\]

The bias in the RA approach is proportional to the covariance between demand and the marginal propensity to consume. The following example delivers intuition for this result. The RA approach implicitly assumes this covariance is zero; accounting for it can considerably improve welfare estimates.11

Footnote 11: Note that demand conditional on prices and income is a degenerate random variable in the RA approach.

Example. To understand this bias, consider a setting with two individuals who have different preferences: Ann (A) and Betty (B). Let their demands be \(q^{A},q^{B}\), so that \(M_{1}=\frac{q^{A}+q^{B}}{2}\). The bias in the RA approach stems from the misspecification of income effects in Equation (4). In the RA approach, the analyst estimates the average income effect assuming it stems from one individual, which gives

\[\overline{IE}_{RA}=M_{1}\frac{\partial M_{1}}{\partial y}=\frac{1}{2}\left(\overline{q}\frac{\partial q^{A}}{\partial y}+\overline{q}\frac{\partial q^{B}}{\partial y}\right).\]

However, if we conducted the analysis individual by individual, the average income effect would be

\[\overline{IE}^{*}=\frac{1}{2}\left(IE^{A}+IE^{B}\right)=\frac{1}{2}\left(q^{A}\frac{\partial q^{A}}{\partial y}+q^{B}\frac{\partial q^{B}}{\partial y}\right).\]

Note that the difference stems from how the income derivative \(\frac{\partial q}{\partial y}\) is weighted across individuals. Table 1 summarizes these weights for the two approaches. Now suppose that Ann and Betty's demands are such that \(q^{A}<q^{B}\) and \(\frac{\partial q^{A}}{\partial y}<\frac{\partial q^{B}}{\partial y}\), which means that \(\text{Cov}(q,\frac{\partial q}{\partial y})>0\). Since \(q^{A}<\overline{q}<q^{B}\), in the RA approach, the weight on Betty is too small and the weight on Ann too large. As \(\frac{\partial q^{A}}{\partial y}<\frac{\partial q^{B}}{\partial y}\), there is too much weight on small values of \(\frac{\partial q}{\partial y}\), which biases the RA approach downwards.12

Footnote 12: In the opposite case, where \(\text{Cov}\left(q,\frac{\partial q}{\partial y}\right)<0\), the RA approach is biased upwards.
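To make Lemma 1 concrete, the following sketch (our illustration; the demand levels and marginal propensities are made up, not taken from the paper) computes the two income-effect terms for the Ann/Betty example and verifies that their gap matches the covariance expression:

```python
import numpy as np

# Hypothetical demand levels and marginal propensities to consume for
# Ann and Betty at a fixed budget set; all numbers are illustrative.
dp = 0.5                       # price change
q = np.array([2.0, 6.0])       # q^A < q^B
dq_dy = np.array([0.1, 0.4])   # dq^A/dy < dq^B/dy

# Average income-effect term, computed individual by individual.
ie_true = np.mean(q * dq_dy)

# RA approach: the average propensity is weighted by average demand.
ie_ra = np.mean(q) * np.mean(dq_dy)

# Lemma 1: the bias in the CV equals (dp^2 / 2) * Cov(q, dq/dy).
bias_lemma = (dp**2 / 2) * np.cov(q, dq_dy, bias=True)[0, 1]
bias_direct = (dp**2 / 2) * (ie_true - ie_ra)

print(f"true avg IE: {ie_true:.3f}, RA avg IE: {ie_ra:.3f}")
print(f"bias via Lemma 1: {bias_lemma:.4f}, directly: {bias_direct:.4f}")
# Both print 0.0375: with Cov > 0 the RA approach is biased downward.
```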
## 3 Conceptual Framework

Our conceptual framework allows for unobserved heterogeneity in preferences to be unrestricted. For ease of exposition, we suppress all _observed_ individual characteristics; all results in this paper can be thought of as conditional on these covariates.

### Consumer Demand

We consider the standard model of utility maximization under a linear budget constraint. Let \(\Omega\) denote the universe of preference types. Every preference type \(\omega\in\Omega\) can be considered an individual with preferences over bundles of \((k+1)\) goods \(\mathbf{q}\). We assume the set of bundles is compact and convex and denote it as \(\mathcal{Q}\subseteq\mathbb{R}^{k+1}_{++}\). Preferences are assumed to be representable by smooth, strictly quasi-concave utility functions \(u^{\omega}:\mathcal{Q}\rightarrow\mathbb{R}\). This formulation allows utility functions to differ arbitrarily across individuals. Prices are denoted \(\mathbf{p}\in\mathcal{P}\subset\mathbb{R}^{k+1}_{++}\) and income, \(y\in\mathcal{Y}\subset\mathbb{R}_{++}\). We call a pair \((\mathbf{p},y)\) a budget set.

Individual demand functions \(\mathbf{q}^{\omega}(\mathbf{p},y):\mathcal{P}\times\mathcal{Y}\rightarrow\mathcal{Q}\) arise from individuals maximizing their utility subject to a linear budget constraint,

\[\mathbf{q}^{\omega}(\mathbf{p},y)=\operatorname*{arg\,max}_{\mathbf{q}\in\mathcal{Q}:\,\mathbf{p}\cdot\mathbf{q}\leq y}u^{\omega}(\mathbf{q}).\]

These demand functions satisfy homogeneity of degree zero and Walras' law,

\[\mathbf{q}^{\omega}(\alpha\mathbf{p},\alpha y)=\mathbf{q}^{\omega}(\mathbf{p},y),\quad\forall\alpha\in\mathbb{R}_{+},\]
\[\mathbf{p}\cdot\mathbf{q}^{\omega}(\mathbf{p},y)=y,\]

for all budget sets. For every uncompensated (Marshallian) demand function \(\mathbf{q}^{\omega}\), there exists a compensated (Hicksian) demand function \(\mathbf{h}^{\omega}(\mathbf{p},u):\mathcal{P}\times\mathbb{R}\to\mathcal{Q}\) defined as

\[\mathbf{h}^{\omega}(\mathbf{p},u)=\operatorname*{arg\,min}_{\mathbf{q}\in\mathcal{Q}:\,u^{\omega}(\mathbf{q})\geq u}\mathbf{p}\cdot\mathbf{q}.\]

The Slutsky equation (5) provides the link between the two demand functions:

\[\frac{\partial}{\partial\mathbf{p}}\mathbf{q}^{\omega}(\mathbf{p},y)=\frac{\partial}{\partial\mathbf{p}}\mathbf{h}^{\omega}(\mathbf{p},u)-\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)\mathbf{q}^{\omega}(\mathbf{p},y)^{\intercal}. \tag{5}\]

The indirect utility function \(v^{\omega}:\mathcal{P}\times\mathcal{Y}\to\mathbb{R}\) is defined as

\[v^{\omega}(\mathbf{p},y)=\max_{\mathbf{q}\in\mathcal{Q}:\,\mathbf{p}\cdot\mathbf{q}\leq y}u^{\omega}(\mathbf{q}),\]

i.e., the utility level obtained at budget set \((\mathbf{p},y)\). The expenditure function \(e^{\omega}(\mathbf{p},u):\mathcal{P}\times\mathbb{R}\to\mathcal{Y}\) is defined as

\[e^{\omega}(\mathbf{p},u)=\min_{\mathbf{q}\in\mathcal{Q}:\,u^{\omega}(\mathbf{q})\geq u}\mathbf{p}\cdot\mathbf{q},\]

i.e., the minimum amount of income needed to achieve utility level \(u\) at prices \(\mathbf{p}\). The expenditure function and compensated demand are related by Shephard's lemma (6):

\[\frac{\partial}{\partial\mathbf{p}}e^{\omega}(\mathbf{p},u)=\mathbf{h}^{\omega}(\mathbf{p},u). \tag{6}\]

In the remainder of the paper, we will omit the demand and price for the \((k+1)\)st good using Walras' law. We assume that preference types are distributed according to some distribution \(F(\omega)\), which admits a density. We now state our main identifying assumption.
**Assumption 1**.: The distribution of unobserved heterogeneity is independent of prices and income:

\[F(\omega\mid\mathbf{p},y)=F(\omega).\]

The exogeneity of budget sets is a strong but standard assumption in the literature on nonparametric identification (e.g., see Hausman and Newey, 2016; Blomquist, Newey, Kumar, and Liang, 2021). To the best of our knowledge, theoretical results for cross-sections do not allow for general forms of endogeneity under general heterogeneity. Some forms of endogeneity can be mitigated by a control function approach (Blundell and Powell, 2003).

### Welfare Impact

Our main object of interest is the _compensating variation_ (CV), which quantifies the impact of price changes on individual welfare.13 It measures how much income an individual is willing to give up after the price change to be offered the initial price vector. Formally, for a price change from \(\mathbf{p}_{0}\) to \(\mathbf{p}_{1}\), it is defined as

Footnote 13: We focus on compensating variation (instead of equivalent variation) because this measure allows comparing different reforms and is measured in baseline prices.

\[\begin{split}CV^{\omega}(\mathbf{p}_{0},\mathbf{p}_{1},y)&=e^{\omega}(\mathbf{p}_{1},v^{\omega}(\mathbf{p}_{0},y))-e^{\omega}(\mathbf{p}_{1},v^{\omega}(\mathbf{p}_{1},y))\\ &=e^{\omega}(\mathbf{p}_{1},v^{\omega}(\mathbf{p}_{0},y))-y.\end{split}\]

When \(\mathbf{p}_{1}>\mathbf{p}_{0}\), we have that \(CV^{\omega}(\mathbf{p}_{0},\mathbf{p}_{1},y)>0\).14 Notice that the compensating variation is stochastic from the analyst's viewpoint because individuals' preference types cannot be observed. We let \(\Delta\mathbf{p}=\mathbf{p}_{1}-\mathbf{p}_{0}\).

Footnote 14: For expository clarity of our results, we deviate from the textbook definition of the CV by reversing its sign (e.g., see Mas-Colell, Whinston, and Green, 1995).

### Conditional Moments of Demand

In the two-good case, integrating out unobserved preference heterogeneity, we can express the \(n\)th (non-central) _conditional moment of demand_ as

\[\begin{split}M_{n}(\mathbf{p},y)&=\mathbb{E}_{\omega}[q^{\omega}(\mathbf{p},y)^{n}\mid\mathbf{p},y]\\ &=\int q^{\omega}(\mathbf{p},y)^{n}dF(\omega\mid\mathbf{p},y)\\ &=\int q^{\omega}(\mathbf{p},y)^{n}dF(\omega),\end{split} \tag{7}\]

since, by Walras' law, it suffices to consider scalar demand.15 The last equality follows from Assumption 1. Since these moments are conditional expectation functions, the set \(\{M_{n}(\mathbf{p},y)\}_{n=1}^{\infty}\) is nonparametrically identified from cross-sectional data.

Footnote 15: In the remainder of the paper, unless stated otherwise, expectations are always conditional on a budget set \((\mathbf{p},y)\): i.e., for a random variable \(z(\mathbf{p},y)\), we will write \(\mathbb{E}[z(\mathbf{p},y)]=\mathbb{E}[z(\mathbf{p},y)\mid\mathbf{p},y]\).

In the many-good case, one can express the \(n\)th conditional moment of demand by means of the symmetric \(n\)-tensor \(\mathbf{T}_{n}^{\omega}(\mathbf{p},y)\) with elements \(t_{i_{1},i_{2},\ldots,i_{n}}^{\omega}(\mathbf{p},y)=q_{i_{1}}^{\omega}(\mathbf{p},y)q_{i_{2}}^{\omega}(\mathbf{p},y)\ldots q_{i_{n}}^{\omega}(\mathbf{p},y)\), where \(i_{1},i_{2},\ldots,i_{n}\in\{1,2,\ldots,k\}\).
We define the _generalized tensor form_ of \(\mathbf{T}_{n}^{\omega}(\mathbf{p},y)\) with respect to a vector \(\mathbf{v}\in\mathbb{R}^{k}\) as the multilinear function

\[\mathbf{T}_{n}^{\omega}(\mathbf{p},y)(\underbrace{\mathbf{v}\times\mathbf{v}\times\cdots\times\mathbf{v}}_{n\text{ times}})=\sum_{i_{1},i_{2},\ldots,i_{n}=1}^{k}t_{i_{1},i_{2},\ldots,i_{n}}^{\omega}(\mathbf{p},y)v_{i_{1}}v_{i_{2}}\ldots v_{i_{n}}.\]

Again, by integrating out unobserved preference heterogeneity, we can express the \(n\)th (non-central) conditional moment of demand as

\[\begin{split}\mathbf{M}_{n}(\mathbf{p},y)&=\mathbb{E}\left[\mathbf{T}_{n}^{\omega}(\mathbf{p},y)\right]\\ &=\int\mathbf{T}_{n}^{\omega}(\mathbf{p},y)dF(\omega\mid\mathbf{p},y).\end{split} \tag{8}\]

We define a _moment sequence_ as the (possibly infinite) sequence \(\{\mathbf{M}_{i}(\mathbf{p},y)\}_{i=1}^{n}\) of the first \(n\) moments of demand.

### Rationalizability

Let \(\{\mathbf{a}_{i}(\mathbf{p},y)\}_{i=1}^{r}\) be a sequence where each \(\mathbf{a}_{i}(\mathbf{p},y):\mathcal{P}\times\mathcal{Y}\to\mathbb{R}^{k^{i}}\) is a function which maps budget sets to tensor forms of (weakly) increasing dimension. We say \(\{\mathbf{a}_{i}(\mathbf{p},y)\}_{i=1}^{r}\) is _rationalizable around_ \((\mathbf{p}_{0},y_{0})\) if there exists a universe of preference types \(\overline{\Omega}\) and a probability measure \(\overline{F}(\omega)\) over these types such that

\[\mathbf{a}_{i}(\mathbf{p},y)=\int\overline{\mathbf{T}}_{i}^{\omega}(\mathbf{p},y)d\overline{F}(\omega),\quad\forall i\leq r,\]

holds for an open set around the budget set \((\mathbf{p}_{0},y_{0})\), and \(\overline{\mathbf{T}}_{i}^{\omega}\) is generated by a rational demand function \(\overline{\mathbf{q}}^{\omega}\) for all \(\omega\in\overline{\Omega}\).16

Footnote 16: A demand function is called _rational_ when it obeys Slutsky symmetry and negative semidefiniteness, homogeneity of degree zero, and Walras' law. Technical conditions are relegated to Appendix A. In particular, we assume that the conditions for the dominated convergence theorem hold such that derivative and integral operators can be interchanged.

## 4 Approximations to Welfare Changes

We now formalize and extend the procedure that underpins our illustrative example. For ease of exposition, we focus on the two-good case; the results for the many-good case are relegated to Appendix B. In Section 4.1, we derive results for small price changes, where "triangles are good approximations." This allows us to obtain a first-order approximation of compensated demand in terms of observable objects, which then lets us derive a second-order approximation to all moments of the CV. In addition, we show that cross-sectional data is uninformative about higher-order approximations. Figure 1 gives a schematic overview of our main argument. In Section 4.2, we derive results for settings where price changes can be large, requiring us to move away from triangles. We allow demand to vary non-linearly in prices, but it remains linear in income. We demonstrate that a second-order approximation to the average CV is identified from cross-sectional data, but higher-order moments are not.

### Linearity in Price and Income

We show that the moments of the CV can be approximated up to second order from the conditional moments of demand.
[Figure 1: Schematic overview of our main argument]

The following lemma establishes a relation between (transformations of) income effects and the conditional moments of demand.17

Footnote 17: A full exploration of the informational content of the moments of demand is postponed to Section 5.

**Lemma 2**.: For every \(n\in\mathbb{N}_{+}\), it holds that

\[\mathbb{E}\left[(q^{\omega}(\mathbf{p},y))^{n-1}\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\right]=\frac{1}{n}\frac{\partial}{\partial y}M_{n}(\mathbf{p},y).\]

Proof.: Using the definition of the conditional moments in Expression (7), we know that

\[\frac{\partial}{\partial y}M_{n}(\mathbf{p},y)=\frac{\partial}{\partial y}\left(\int q^{\omega}(\mathbf{p},y)^{n}dF(\omega)\right).\]

Interchanging the derivative and integral operators gives us

\[\begin{split}\frac{\partial}{\partial y}M_{n}(\mathbf{p},y)&=\int\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)^{n}dF(\omega)\\ &=n\int q^{\omega}(\mathbf{p},y)^{n-1}\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)dF(\omega)\\ &=n\ \mathbb{E}\left[q^{\omega}(\mathbf{p},y)^{n-1}\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\right].\end{split}\]

We now use our knowledge of income effects to compute a linear approximation to compensated demand. We then appeal to Shephard's lemma to calculate the change in expenditure by integrating compensated demand. This yields a second-order approximation to the CV, summarized in the following theorem.

**Theorem 1**.: _The second-order approximation of the \(n\)th moment of the CV depends only on the \(n\)th and \((n+1)\)st conditional moment of demand.18_

Footnote 18: We let \(O\) denote Landau's big O.

\[\mathbb{E}[CV^{\omega}(p_{0},p_{1},y)^{n}]=(\Delta p)^{n}\left(M_{n}(\mathbf{p}_{0},y)+\frac{\Delta p}{2}\left[\frac{\partial M_{n}(\mathbf{p}_{0},y)}{\partial p}+\frac{n}{n+1}\frac{\partial M_{n+1}(\mathbf{p}_{0},y)}{\partial y}\right]+O((\Delta p)^{2})\right).\]

Proof.: For clarity of exposition, we only consider the case of the average CV. For the other moments, refer to Appendix C. Observe that by Shephard's lemma (6),

\[\frac{\partial}{\partial p}e^{\omega}(p,u)=h^{\omega}(p,u),\]

so that we can write the CV in terms of compensated demand,

\[CV^{\omega}(p_{0},p_{1},y)=\int_{0}^{1}h^{\omega}(p(t),v^{\omega}(\mathbf{p}_{0},y))dp(t),\]

for some continuous price path \(p:[0,1]\rightarrow\mathcal{P}\) with \(p(0)=p_{0}\) and \(p(1)=p_{1}\). Without loss of generality, we will assume the price path to be linear, i.e., \(p(t)=p_{0}+t\Delta p\), such that19

Footnote 19: The integral is path independent due to Slutsky symmetry.

\[CV^{\omega}(p_{0},p_{1},y)=\Delta p\int_{0}^{1}h^{\omega}(p_{0}+t\Delta p,v_{0}^{\omega})dt,\]

and therefore

\[\mathbb{E}[CV^{\omega}(p_{0},p_{1},y)]=\Delta p\int_{0}^{1}\mathbb{E}[h^{\omega}(p_{0}+t\Delta p,v_{0}^{\omega})]dt, \tag{9}\]

where \(v_{0}^{\omega}=v^{\omega}(\mathbf{p}_{0},y)\). We now combine the Slutsky equation (5) and Lemma 2 to derive the expectation of the price derivative of compensated demand. In particular, we have that

\[\begin{split}\mathbb{E}\left[\frac{\partial}{\partial p}h^{\omega}(\mathbf{p},u)\right]&=\mathbb{E}\left[\frac{\partial}{\partial p}q^{\omega}(\mathbf{p},y)+q^{\omega}(\mathbf{p},y)\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\right]\\ &=\frac{\partial}{\partial p}M_{1}(\mathbf{p},y)+\frac{1}{2}\frac{\partial}{\partial y}M_{2}(\mathbf{p},y),\end{split}\]

for every \((\mathbf{p},y)\in\mathcal{P}\times\mathcal{Y}\).
This allows us to derive a first-order approximation to average compensated demand around \(t=0\):

\[\mathbb{E}[h^{\omega}(p_{0}+t\Delta p,v_{0}^{\omega})]=M_{1}(\mathbf{p}_{0},y)+t\Delta p\left(\frac{\partial}{\partial p}M_{1}(\mathbf{p}_{0},y)+\frac{1}{2}\frac{\partial}{\partial y}M_{2}(\mathbf{p}_{0},y)\right)+O((\Delta p)^{2}).\]

Plugging this approximation into Expression (9) gives us

\[\begin{split}\mathbb{E}[CV^{\omega}(p_{0},p_{1},y)]&=\Delta p\int_{0}^{1}\left[M_{1}(\mathbf{p}_{0},y)+t\Delta p\left(\frac{\partial}{\partial p}M_{1}(\mathbf{p}_{0},y)+\frac{1}{2}\frac{\partial}{\partial y}M_{2}(\mathbf{p}_{0},y)\right)+O((\Delta p)^{2})\right]dt\\ &=\Delta pM_{1}(\mathbf{p}_{0},y)+\frac{(\Delta p)^{2}}{2}\left(\frac{\partial}{\partial p}M_{1}(\mathbf{p}_{0},y)+\frac{1}{2}\frac{\partial}{\partial y}M_{2}(\mathbf{p}_{0},y)\right)+O((\Delta p)^{3}).\end{split}\]

Specifically, the second-order approximation to the average CV only uses information from the conditional mean and variance, the first two conditional moments of demand.

This theorem tells us that the first two terms of the series expansion of every moment of the compensating variation can be identified in the neighbourhood of a budget set. In the following theorem, we show that, in a precise sense, this is the best approximation that can be obtained from cross-sectional data.

**Theorem 2**.: _The \(k\)th-order approximation of the \(n\)th moment of the CV for \(k\geq 3\) is not identified from the conditional moments of demand._

Proof.: For clarity of exposition, we only consider the case of the average CV and \(k=3\). Suppose the true series expansion of the CV at some budget set \((p_{1},y)\) can be written as

\[\mathbb{E}[CV^{\omega}(p_{0},p_{1},y)]=a_{0}+a_{1}\Delta p+a_{2}(\Delta p)^{2}+a_{3}(\Delta p)^{3}.\]

By extending the argument in the proof of Theorem 1, to recover \(a_{3}\), one must identify \(\mathbb{E}\left[D_{p}^{2}h^{\omega}(p,v_{1}^{\omega})\right]\), i.e., the expected second price derivative of compensated demand. By differentiating the identity \(h^{\omega}(p,u)=q^{\omega}(p,e^{\omega}(p,u))\) twice with respect to price, taking expectations, and interchanging differentiation and integration, one obtains that

\[\mathbb{E}[D_{p}^{2}h^{\omega}(p,v_{1}^{\omega})]=D_{p}^{2}M_{1}(\mathbf{p}_{0},y)+\frac{1}{2}D_{p,y}M_{2}(\mathbf{p}_{0},y)+\frac{1}{3}D_{y}^{2}M_{3}(\mathbf{p}_{0},y)-\mathbb{E}\left[q^{\omega}(\mathbf{p}_{0},y)\left(\frac{\partial}{\partial y}q^{\omega}(\mathbf{p}_{0},y)\right)^{2}\right].\]

As a direct consequence of Lemma 4 in the Appendix, the final term cannot be identified from cross-sectional data. That is, two observationally equivalent models can generate different values for \(\mathbb{E}[q^{\omega}(\mathbf{p}_{0},y)(\frac{\partial}{\partial y}q^{\omega}(\mathbf{p}_{0},y))^{2}]\). This implies that the third-order approximation of the average CV is also not identified.

**Remark 1**.: Akin to Hausman (1981), if the price of only one good changes, only knowledge of the demand for that good is needed, reducing the analysis from many goods to two.20

Footnote 20: See Appendix B.
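As a numerical sanity check on Theorem 1 (our sketch, not part of the paper), one can simulate a population of made-up linear demands \(q=a+bp+cy\), compute each type's exact CV from the closed form that appears later in the proof of Theorem 3, and compare the average against the second-order formula built only from \(M_{1}\), \(\partial M_{1}/\partial p\), and \(\partial M_{2}/\partial y\):

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
n = 5_000
p0, y, dp = 1.0, 10.0, 0.1

# Hypothetical linear demands q = a + b*p + c*y with constant income
# effect c; the coefficient ranges below are made up for illustration.
a = rng.uniform(1.0, 3.0, n)
b = rng.uniform(-1.0, -0.5, n)
c = rng.uniform(0.05, 0.25, n)
q0 = a + b * p0 + c * y  # demand at the initial budget set

# Exact CV per type via the closed form (13) in the proof of Theorem 3:
# CV = dp * int_0^1 exp(c*dp*(1-t)) * q(p(t), y) dt.
def exact_cv(ai, bi, ci):
    f = lambda t: np.exp(ci * dp * (1 - t)) * (ai + bi * (p0 + t * dp) + ci * y)
    return dp * quad(f, 0.0, 1.0)[0]

cv_exact = np.mean([exact_cv(ai, bi, ci) for ai, bi, ci in zip(a, b, c)])

# Second-order approximation from the first two conditional moments
# (computed here from the simulated types; in practice, estimated).
M1 = q0.mean()
dM1_dp = b.mean()              # d/dp E[q] = E[b]
dM2_dy = 2 * (q0 * c).mean()   # Lemma 2 with n = 2: d/dy E[q^2] = 2 E[q dq/dy]
cv_approx = dp * M1 + dp**2 / 2 * (dM1_dp + 0.5 * dM2_dy)

print(f"exact mean CV: {cv_exact:.6f}, 2nd-order approx: {cv_approx:.6f}")
# The two agree up to an O(dp^3) remainder.
```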
**Remark 2**.: It is no coincidence that \(\mathbb{E}[q^{\omega}(\mathbf{p},y)(\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y))^{2}]\) is not identified from cross-sectional data. Using the law of iterated expectations, we can write

\[\mathbb{E}\left[q^{\omega}(\mathbf{p},y)\left(\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\right)^{2}\right]=\mathbb{E}\left[q^{\omega}(\mathbf{p},y)\mathbb{E}\left[\left(\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\right)^{2}\mid q^{\omega}(\mathbf{p},y)\right]\right].\]

This highlights that \(\mathbb{E}[q^{\omega}(\mathbf{p},y)(\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y))^{2}]\) equals the (non-centered) covariance between the demand bundle and the second moment of the income effect at that demand bundle. Therefore, the failure to identify the third-order approximation of average welfare is due to cross-sectional data being uninformative about how the variance of the income effect varies across demand bundles. Direct application of Theorem 2.1 in Hoderlein and Mammen (2007) shows that in nonseparable models, cross-sectional data identifies local average structural derivatives (e.g., \(\mathbb{E}[\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\mid q^{\omega}(\mathbf{p},y)]\)) but not transformations of these local average structural derivatives (e.g., \(\mathbb{E}[(\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y))^{2}\mid q^{\omega}(\mathbf{p},y)]\)). This is why \(\mathbb{E}[D_{p}h^{\omega}(p,v_{1}^{\omega})]\) is identified, but \(\mathbb{E}[D_{p}^{2}h^{\omega}(p,v_{1}^{\omega})]\) is not. The same reasoning holds for the higher-order approximations, mutatis mutandis.

**Remark 3**.: Since cross-sectional data identifies the local average structural derivatives \(\mathbb{E}\left[\frac{\partial}{\partial p}q^{\omega}(\mathbf{p},y)\mid q^{\omega}(\mathbf{p},y)=\overline{q}\right]\) and \(\mathbb{E}[\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\mid q^{\omega}(\mathbf{p},y)=\overline{q}]\), the approximation for the average CV developed in Theorem 1 could be made conditional on a given demand bundle \(\overline{q}\). Formally, we have that

\[\begin{split}\mathbb{E}\left[\frac{\partial}{\partial p}h^{\omega}(\mathbf{p}_{0},y)\mid q^{\omega}(\mathbf{p}_{0},y)=\overline{q}\right]&=\mathbb{E}\left[\frac{\partial}{\partial p}q^{\omega}(\mathbf{p}_{0},y)+q^{\omega}(\mathbf{p}_{0},y)\frac{\partial}{\partial y}q^{\omega}(\mathbf{p}_{0},y)\mid q^{\omega}(\mathbf{p}_{0},y)=\overline{q}\right]\\ &=\mathbb{E}\left[\frac{\partial}{\partial p}q^{\omega}(\mathbf{p}_{0},y)\mid q^{\omega}(\mathbf{p}_{0},y)=\overline{q}\right]+\overline{q}\,\mathbb{E}\left[\frac{\partial}{\partial y}q^{\omega}(\mathbf{p}_{0},y)\mid q^{\omega}(\mathbf{p}_{0},y)=\overline{q}\right],\end{split}\]

where the right-hand side is identified. This expression shows that our method could be used to improve welfare estimates bundle by bundle if the entire demand model could be estimated nonparametrically. In practice, this may be very demanding on the data.

**Remark 4**.: Information on the income effects can also be used to construct informative bounds on changes in welfare. By the mean value theorem,

\[\mathbb{E}[CV^{\omega}(p_{0},p_{1},y)]=\Delta pM_{1}(\mathbf{p}_{0},y)+\frac{(\Delta p)^{2}}{2}\left(\frac{\partial}{\partial p}M_{1}(\mathbf{p}_{0},y)+\frac{1}{2}\frac{\partial}{\partial y}M_{2}(\mathbf{p}_{0},y)\right)+\frac{(\Delta p)^{3}}{6}D_{p}^{2}\mathbb{E}[h^{\omega}(\overline{p},v_{0}^{\omega})],\]

for some intermediate price \(\overline{p}\in[p_{0},p_{1}]\). If \(\Delta p>0\) and compensated demand is convex in prices, the second-order approximation yields a lower bound.21
Footnote 21: Specifications with convex compensated demands include linear and CES demand systems.

On the other hand, if \(\Delta p<0\) and compensated demand is convex in prices, our approximation acts as an upper bound. When the good is also normal, we have that

\[CV^{\omega}(p_{0},p_{1},y)\geq CS^{\omega}(p_{0},p_{1},y),\]

such that

\[\mathbb{E}[CV^{\omega}(p_{0},p_{1},y)]\geq\mathbb{E}[CS^{\omega}(p_{0},p_{1},y)],\]

where \(CS^{\omega}(p_{0},p_{1},y)=\Delta p\int_{0}^{1}q^{\omega}(p(t),y)dt\). Therefore, one can obtain two-sided bounds in this case. The above remark demonstrates that our approach can be leveraged beyond approximations, specifically to construct bounds.

**Remark 5**.: Using an insight similar to that in Lemma 2, it is possible to calculate average income elasticities nonparametrically. Let \(\eta^{\omega}(\mathbf{p},y)\) denote an individual's income elasticity at the budget set \((\mathbf{p},y)\). It holds that

\[\begin{split}\mathbb{E}\left[\eta^{\omega}(\mathbf{p},y)\right]&=\mathbb{E}\left[\frac{y}{q^{\omega}(\mathbf{p},y)}\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\right]\\ &=y\mathbb{E}\left[\frac{1}{q^{\omega}(\mathbf{p},y)}\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\right]\\ &=y\mathbb{E}\left[\frac{\partial}{\partial y}\log(q^{\omega}(\mathbf{p},y))\right]\\ &=y\frac{\partial}{\partial y}\mathbb{E}\left[\log(q^{\omega}(\mathbf{p},y))\right],\end{split} \tag{10}\]

which is identified from cross-sectional data.22

Footnote 22: This result is related to the work of Paluch, Kneip, and Hildenbrand (2012), who derive a connection between individual and aggregate income elasticities.

Knowledge of average income effects is useful within the context of the sufficient statistic approach, where income effects enter the first-order approximations when the price or tax schedule is nonlinear.23 In almost all of the literature, however, income effects are ignored by assuming that individuals have quasi-linear utilities.24 Expression (10) provides a means to test this assumption nonparametrically.

Footnote 24: Gruber and Saez (2002) conduct a parametric test and find evidence for economically insignificant income effects. Most of the subsequent literature has therefore ignored income effects altogether (e.g., see Burns and Ziliak (2016) and the references therein).

### Nonlinearity in Price (But Not Income)

The second-order approximation in the previous section works well if price changes are small or if demand is approximately linear in prices and income. In effect, the above approach only uses demand at one budget set and extrapolates linearly. However, we can compute more accurate welfare changes to accommodate large price changes. To do so, we use the method introduced by Hausman (1981) and Vartia (1983), who demonstrated that the CV can also be expressed as the solution to a first-order nonlinear ordinary differential equation (ODE). Let \(p(t):[0,1]\rightarrow\mathcal{P}\) be a continuous price path with \(p(0)=p_{0}\) and \(p(1)=p_{1}\). Further, define

\[s^{\omega}(t)=e^{\omega}(p(t),v_{0}^{\omega})-y,\quad t\in[0,1],\]

where \(v_{0}^{\omega}\) is the indirect utility at price \(p_{0}\) and income \(y\). Differentiating this expression with respect to \(t\) yields

\[\frac{\partial s^{\omega}(t)}{\partial t}=\frac{\partial}{\partial p}e^{\omega}(p(t),v_{0}^{\omega})\frac{\partial p(t)}{\partial t},\quad t\in[0,1]. \tag{11}\]
By Shephard's lemma (6), the right-hand side reduces to \(q^{\omega}(p(t),y+s^{\omega}(t))\frac{\partial p(t)}{\partial t}\), allowing us to write

\[\frac{\partial s^{\omega}(t)}{\partial t}=q^{\omega}(p(t),y+s^{\omega}(t))\frac{\partial p(t)}{\partial t},\quad t\in[0,1], \tag{12}\]

with boundary condition \(s^{\omega}(0)=0\). The CV is the solution to this equation at \(t=1\).25 If an individual's demand function is known, the change in welfare can therefore be calculated exactly.

Footnote 25: The solution to this ODE exists and is unique when individual demand \(q^{\omega}\) is Lipschitz in \(t\) and \(s\).

In this reformulation, exploiting knowledge of income effects at prices _along the path_ of the price change, and not just at the original price, can help improve our welfare estimates.

**Theorem 3**.: _Consider individual-specific income effects that are constant in prices and income: i.e., \(\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)=a_{1}^{\omega}\) for all \(\omega\in\Omega\) and \((\mathbf{p},y)\in\mathcal{P}\times\mathcal{Y}\). The average CV is identified up to second order,_

\[\mathbb{E}[CV^{\omega}(p_{0},p_{1},y)]=\Delta p\int_{0}^{1}M_{1}(p(t),y)dt+\frac{(\Delta p)^{2}}{2}\int_{0}^{1}\frac{\partial}{\partial y}M_{2}(p(t),y)(1-t)dt+O((\Delta p)^{3}),\]

_where \(p(t)=p_{0}+t\Delta p\) is the linear price path._

Proof.: Since the income effect is assumed to be constant in prices and income, we can write \(q^{\omega}(p(t),y+s(t))=q^{\omega}(p(t),y)+a_{1}^{\omega}s(t)\). Assuming a linear price path, Expression (12) therefore simplifies to the linear first-order ODE

\[\frac{\partial s^{\omega}(t)}{\partial t}=[q^{\omega}(p(t),y)+a_{1}^{\omega}s(t)]\Delta p,\]

which has the explicit solution

\[s^{\omega}(t)=\exp(a_{1}^{\omega}\Delta pt)\int_{0}^{t}\Delta p\exp(-a_{1}^{\omega}\Delta p\tau)q^{\omega}(p(\tau),y)d\tau,\]

such that

\[CV^{\omega}(p_{0},p_{1},y)=s^{\omega}(1)=\exp(a_{1}^{\omega}\Delta p)\int_{0}^{1}\Delta p\exp(-a_{1}^{\omega}\Delta pt)q^{\omega}(p(t),y)dt. \tag{13}\]

Given that \(\exp(x)=1+x+O(x^{2})\), we have that

\[\begin{split}CV^{\omega}(p_{0},p_{1},y)&=\Delta p\int_{0}^{1}\exp(a_{1}^{\omega}\Delta p(1-t))q^{\omega}(p(t),y)dt\\ &=\Delta p\int_{0}^{1}[1+a_{1}^{\omega}\Delta p(1-t)]q^{\omega}(p(t),y)dt+O((\Delta p)^{3})\\ &=\Delta p\int_{0}^{1}q^{\omega}(p(t),y)dt+(\Delta p)^{2}\int_{0}^{1}q^{\omega}(p(t),y)a_{1}^{\omega}(1-t)dt+O((\Delta p)^{3})\\ &=\Delta p\int_{0}^{1}q^{\omega}(p(t),y)dt+\frac{(\Delta p)^{2}}{2}\int_{0}^{1}\frac{\partial}{\partial y}(q^{\omega}(p(t),y))^{2}(1-t)dt+O((\Delta p)^{3}).\end{split} \tag{14}\]

Taking expectations on both sides leads to the expression

\[\mathbb{E}[CV^{\omega}(p_{0},p_{1},y)]=\Delta p\int_{0}^{1}M_{1}(p(t),y)dt+\frac{(\Delta p)^{2}}{2}\int_{0}^{1}\frac{\partial}{\partial y}M_{2}(p(t),y)(1-t)dt+O((\Delta p)^{3}).\]

**Remark 6**.: Under the assumptions of Theorem 3, our approximation acts as a lower bound for the average CV. This can be readily seen from the fact that \(\exp(x)\geq 1+x\); the second equality in Expression (14) can therefore be replaced by an inequality. Moreover, our estimate is always below the upper bound derived by Hausman and Newey (2016), as

\[\begin{split}CV_{B_{u}}^{\omega}&=\Delta p\int_{0}^{1}\exp(B_{u}\Delta p(1-t))q^{\omega}(p(t),y)dt\\ &\geq\Delta p\int_{0}^{1}[1+B_{u}\Delta p(1-t)]q^{\omega}(p(t),y)dt\\ &\geq\Delta p\int_{0}^{1}[1+a_{1}^{\omega}\Delta p(1-t)]q^{\omega}(p(t),y)dt.\end{split}\]
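The construction above is easy to operationalize. The sketch below (ours; the demand specification and parameter values are made-up illustrations) solves the Hausman-Vartia ODE (12) numerically for a single type and confirms the answer against the closed form (13), which applies because the income effect is constant:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# One hypothetical demand function q(p, y) = a + b*p + c*y.
a, b, c = 3.0, -0.8, 0.15
p0, p1, y = 1.0, 1.5, 10.0
dp = p1 - p0

def q(p, inc):
    return a + b * p + c * inc

# ODE (12): s'(t) = q(p(t), y + s(t)) * dp along p(t) = p0 + t*dp,
# with s(0) = 0; the CV is s(1).
sol = solve_ivp(lambda t, s: [q(p0 + t * dp, y + s[0]) * dp],
                (0.0, 1.0), [0.0], rtol=1e-10, atol=1e-12)
cv_ode = sol.y[0, -1]

# Closed form (13), valid here because the income effect c is constant.
cv_closed = dp * quad(lambda t: np.exp(c * dp * (1 - t)) * q(p0 + t * dp, y),
                      0.0, 1.0)[0]

print(f"CV via ODE:          {cv_ode:.8f}")
print(f"CV via formula (13): {cv_closed:.8f}")  # the two coincide
```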
**Remark 7**.: Unfortunately, higher moments of the CV cannot be approximated using an approach similar to Theorem 3. To see this, consider the second moment of the CV. Raising both sides of Expression (13) to the second power yields

\[(CV^{\omega}(p_{0},p_{1},y))^{2}=(\Delta p)^{2}\int_{0}^{1}\int_{0}^{1}\exp(a_{1}^{\omega}\Delta p(2-t-t^{\prime}))q^{\omega}(p(t),y)q^{\omega}(p(t^{\prime}),y)dtdt^{\prime}.\]

Even the zeroth-order expansion of the exponential functions gives, after taking expectations, a term that contains \(\mathbb{E}[q^{\omega}(p(t),y)q^{\omega}(p(t^{\prime}),y)]\). The covariance of individual demand at different prices is not identified from cross-sectional data unless demand is assumed to be a linear function of prices.

**Remark 8**.: If one drops the assumption that the income effect is price independent, one also loses identification of the second-order approximation. Let \(q^{\omega}(p(t),y+s(t))=q^{\omega}(p(t),y)+a_{1}^{\omega}(t)s(t)\). Arguments analogous to the proof of Theorem 3 give

\[CV^{\omega}(p_{0},p_{1},y)=\Delta p\int_{0}^{1}\exp\left(\Delta p\left(\int_{0}^{1}a_{1}^{\omega}(t^{\prime})dt^{\prime}-\int_{0}^{t}a_{1}^{\omega}(t^{\prime\prime})dt^{\prime\prime}\right)\right)q(p(t),y)dt.\]

The term \(\mathbb{E}\left[\int_{0}^{1}\int_{0}^{1}\int_{\tau_{2}}^{1}a_{1}^{\omega}(\tau_{1})a_{1}^{\omega}(\tau_{2})q^{\omega}(p(\tau_{3}),y)d\tau_{1}d\tau_{2}d\tau_{3}\right]\) is not identified from cross-sections.

The above two remarks loosely make the point that the approximation in Theorem 3 is "tight": one cannot allow for non-linearity in income or use similar techniques to construct approximations of higher-order moments from purely cross-sectional data.

**Remark 9**.: Information on average income effects can also be exploited to tighten the identified set provided by Hausman and Newey (2016). This set is derived by means of uniform bounds on individuals' income effects. Using Chebyshev inequalities, one can restrict the probability of extreme income effects from knowledge of these bounds and the observed average income effect. This, in turn, restricts the probability of extreme values for the CV. The resulting set is probabilistic in the sense that it comes along with a coverage probability for the true average CV to be within the set. Formally, let \(B^{\omega}(t,s)=\Delta p\frac{\partial}{\partial y}q^{\omega}(p(t),y+s)\) and let \(B^{\omega}_{u}=\sup_{t,s}B^{\omega}(t,s)\) and \(B^{\omega}_{l}=\inf_{t,s}B^{\omega}(t,s)\). Assuming income effects to be contained within \([\underline{B},\overline{B}]\) with \(\underline{B}\geq 0\), and using Chebyshev's inequality for bounded variables, we have that

\[\Pr[B^{\omega}_{u}\geq k]\geq\frac{\mathbb{E}[B^{\omega}_{u}]-k}{\overline{B}}\geq\frac{\sup_{t,s}\mathbb{E}[B^{\omega}(t,s)]-k}{\overline{B}},\]

and

\[\Pr[B^{\omega}_{l}\geq z]\leq\frac{\mathbb{E}[B^{\omega}_{l}]}{z}\leq\frac{\inf_{t,s}\mathbb{E}[B^{\omega}(t,s)]}{z},\]

where both right-hand sides are identified from cross-sectional data.
From Theorem 3 in Hausman and Newey (2016), we know that \(CV^{\omega}_{B_{l}}(p_{0},p_{1},y)\leq CV^{\omega}(p_{0},p_{1},y)\leq CV^{ \omega}_{B_{u}}(p_{0},p_{1},y)\) for \(\Delta p>0\), such that \[\mathbb{E}[CV^{\omega}] \geq\mathbb{E}[CV^{\omega}_{B_{l}}]\] \[=\Pr[B^{\omega}_{l}\geq z]\mathbb{E}[CV^{\omega}_{B_{l}}\mid B^{ \omega}_{l}\geq z]+\Pr[B^{\omega}_{l}<z]\mathbb{E}[CV^{\omega}_{B_{l}}\mid B^ {\omega}_{l}<z]\] \[\geq\Pr[B^{\omega}_{l}\geq z]\mathbb{E}[CV^{\omega}_{B_{l}}\mid B ^{\omega}_{l}=z]+\Pr[B^{\omega}_{l}<z]\mathbb{E}[CV^{\omega}_{B_{l}}\mid B^ {\omega}_{l}=\underline{B}],\] \[\mathbb{E}[CV^{\omega}] \leq\mathbb{E}[CV^{\omega}_{B_{u}}]\] \[=\Pr[B^{\omega}_{u}\geq k]\mathbb{E}[CV^{\omega}_{B_{u}}\mid B^ {\omega}_{u}\geq k]+\Pr[B^{\omega}_{u}<k]\mathbb{E}[CV^{\omega}_{B_{u}}\mid B ^{\omega}_{u}<k]\] \[\leq\Pr[B^{\omega}_{u}\geq k]\mathbb{E}[CV^{\omega}_{B_{u}}\mid B ^{\omega}_{u}=\overline{B}]+(1-\Pr[B^{\omega}_{u}\geq k])\mathbb{E}[CV^{ \omega}_{B_{u}}\mid B^{\omega}_{u}=k],\] where the dependence of the CV on prices and income is suppressed for notational clarity. By varying \(z\) and \(k\), these bounds can be computed for arbitrary degrees of statistical coverage. Note that by setting \(z=\overline{B}\) and \(k=\underline{B}\), one obtains the bounds of Hausman and Newey (2016) as a special case. ## 5 Conditional Moments and Rationality In this section, we study how the conditional moments of demand can be used to test the rationality of a population. Hurwicz and Uzawa (1971) provide well-known necessary and sufficient conditions for the integrability of demand. In the case where the analyst can observe conditional quantile demand functions, this problem has been studied by Dette, Hoderlein, and Neumeyer (2016) and Hausman and Newey (2016). We contribute to the literature by considering the empirical content of moments instead of quantiles. ### Two-good Case Assuming homogeneity of degree zero and Walras' law hold, the only remaining restriction is negative semidefiniteness, as symmetry holds trivially in the two-good case. In particular, for all types \(\omega\in\Omega\), and for all budget sets \((\mathbf{p},y)\in\mathcal{P}\times\mathcal{Y}\), \[\frac{\partial}{\partial p}q^{\omega}(\mathbf{p},y)+q^{\omega}(\mathbf{p},y) \frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\leq 0.\] This restriction can be rewritten in terms of the conditional moments of demand. 
Multiplying both sides by \(q^{\omega}(\mathbf{p},y)^{n}\) for some \(n\in\mathbb{N}\), we define

\[\Gamma_{n}^{\omega}(\mathbf{p},y)=q^{\omega}(\mathbf{p},y)^{n}\left[\frac{\partial}{\partial p}q^{\omega}(\mathbf{p},y)+q^{\omega}(\mathbf{p},y)\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\right],\]

and

\[\Gamma_{n}(\mathbf{p},y)=\mathbb{E}\left[\Gamma_{n}^{\omega}(\mathbf{p},y)\right].\]

Since for every type,

\[\Gamma_{n}^{\omega}(\mathbf{p},y)=\frac{1}{n+1}\frac{\partial}{\partial p}q^{\omega}(\mathbf{p},y)^{n+1}+\frac{1}{n+2}\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)^{n+2}\leq 0,\]

we have that

\[\begin{split}\Gamma_{n}(\mathbf{p},y)&=\int\left(\frac{1}{n+1}\frac{\partial}{\partial p}q^{\omega}(\mathbf{p},y)^{n+1}+\frac{1}{n+2}\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)^{n+2}\right)dF(\omega)\\ &=\frac{1}{n+1}\frac{\partial}{\partial p}M_{n+1}(\mathbf{p},y)+\frac{1}{n+2}\frac{\partial}{\partial y}M_{n+2}(\mathbf{p},y)\\ &\leq 0,\end{split}\]

where the second equality follows from interchanging integration and differentiation together with the definition of the conditional moments, and the inequality follows from the Slutsky term being pointwise nonpositive. This expression imposes a necessary restriction on every pair of consecutive moments. Notice that \(\Gamma_{n}(\mathbf{p},y)\) maps a budget set to a real number.

More generally, let \(\mathbb{Q}\left[\mathbb{R}\right]\) be the set of polynomials over the real numbers with rational coefficients that are positive on the support of demand \([0,\frac{y}{p}]\). For any polynomial \(\pi_{n}^{\omega}(\mathbf{p},y)=\sum_{i=0}^{n}a_{i}(q^{\omega}(\mathbf{p},y))^{i}\in\mathbb{Q}\left[\mathbb{R}\right]\), we define

\[\Lambda_{\pi_{n}}^{\omega}(\mathbf{p},y)=\pi_{n}^{\omega}(\mathbf{p},y)\left[\frac{\partial}{\partial p}q^{\omega}(\mathbf{p},y)+q^{\omega}(\mathbf{p},y)\frac{\partial}{\partial y}q^{\omega}(\mathbf{p},y)\right],\]

and

\[\Lambda_{\pi_{n}}(\mathbf{p},y)=\mathbb{E}\left[\Lambda_{\pi_{n}}^{\omega}(\mathbf{p},y)\right].\]

We can use the linearity of the expectation to compute \(\Lambda_{\pi_{n}}^{\omega}(\mathbf{p},y)\) and \(\Lambda_{\pi_{n}}(\mathbf{p},y)\):

\[\pi_{n}^{\omega}(\mathbf{p},y)=\sum_{i=0}^{n}a_{i}(q^{\omega}(\mathbf{p},y))^{i}\implies\Lambda_{\pi_{n}}^{\omega}(\mathbf{p},y)=\sum_{i=0}^{n}a_{i}\Gamma_{i}^{\omega}(\mathbf{p},y),\]

and

\[\Lambda_{\pi_{n}}(\mathbf{p},y)=\mathbb{E}[\Lambda_{\pi_{n}}^{\omega}(\mathbf{p},y)]=\sum_{i=0}^{n}a_{i}\Gamma_{i}(\mathbf{p},y).\]

This allows us to characterize demand in terms of moments of demand.

**Theorem 4**.: _In the two-good case, the following statements are equivalent:_

1. _A demand distribution can be generated by a rational population._
2. _For any polynomial_ \(\pi_{n}(\mathbf{p},y)\) _that is positive on the support of the distribution of demand at_ \((\mathbf{p},y)\)_, it holds that_ \(\Lambda_{\pi_{n}}(\mathbf{p},y)\leq 0\)_._

Proof.: For the \((1)\implies(2)\) part, note that for a rational population the Slutsky term is pointwise nonpositive; multiplying it by a polynomial that is positive on the support of demand preserves this sign, so the expectation \(\Lambda_{\pi_{n}}(\mathbf{p},y)\) is nonpositive. For the \((2)\implies(1)\) part, we proceed by contradiction. Hausman and Newey (2016) show that negativity of the quantile demand function characterizes rationalizability. Suppose \((2)\) holds, but negativity is contradicted at some quantile.
This would mean that there is some quantile \(\tau\in(0,1)\) and some quantile demand \(\widetilde{q}(\tau\mid\mathbf{p},y)=\inf\{q:\Pr[q^{\omega}(\mathbf{p},y)\leq q\mid\mathbf{p},y]\geq\tau\}\) such that

\[\frac{\partial}{\partial p}\widetilde{q}(\tau\mid\mathbf{p},y)+\widetilde{q}(\tau\mid\mathbf{p},y)\frac{\partial}{\partial y}\widetilde{q}(\tau\mid\mathbf{p},y)>0.\]

We can pick a sequence of polynomials \(\{\pi_{n}\}_{n=1}^{\infty}\) such that26

\[\pi_{n}\to\delta(\cdot-\widetilde{q}(\tau\mid\mathbf{p},y))\quad\text{as }n\to\infty,\]

where \(\delta\) is the Dirac delta function. Therefore, by continuity of \(\Lambda_{\pi_{n}}\), we have that

\[\Lambda_{\pi_{n}}(\mathbf{p},y)\to\frac{\partial}{\partial p}\widetilde{q}(\tau\mid\mathbf{p},y)+\widetilde{q}(\tau\mid\mathbf{p},y)\frac{\partial}{\partial y}\widetilde{q}(\tau\mid\mathbf{p},y)>0,\]

which means that beyond some finite \(n\in\mathbb{N}_{+}\), negativity must be contradicted. This would in turn contradict \((2)\), hence proving the theorem.

Footnote 26: To be precise, one should pick a set of sequences of polynomials that converge uniformly in a neighborhood of \((\mathbf{p},y)\), so that derivatives with respect to elements of \((\mathbf{p},y)\) are well-defined.

**Remark 10**.: The equivalence in Theorem 4 can be used to construct a semi-decidable test.27 Let \(\mathbb{Q}_{+}\left[\mathbb{R}\right]=\{\pi\in\mathbb{Q}\left[\mathbb{R}\right]\mid x\in[0,y/p]\implies\pi(x)\geq 0\}\) be the set of polynomials over the real numbers with rational coefficients that are nonnegative on the support \([0,y/p]\). Since the rational numbers are countable, so is the set \(\mathbb{Q}_{+}\left[\mathbb{R}\right]\); one can therefore pick an enumeration \(\{\pi_{n}\}_{n=1}^{\infty}\) of this set. A simple semi-decidable test would consist of the following iterative scheme at step \(n\):

Footnote 27: This test has the property that no rationalizable distribution is ever rejected, and all non-rationalizable distributions are eventually rejected.

1. If \(\Lambda(\pi_{n},\mathbf{p},y)\leq 0\), move to the \((n+1)\)st step;
2. If \(\Lambda(\pi_{n},\mathbf{p},y)>0\), stop and reject the distribution.

The first part follows directly from Theorem 4. The second part follows from the fact that if the distribution is not rationalizable, there exists some polynomial \(\pi\) which has a positive translation. Since \(\{\pi_{n}\}_{n=1}^{\infty}\) is countable, there must exist some \(n\) where \(\pi_{n}\) has a positive translation, leading to rejection.

**Remark 11**.: In the case where only the zeroth and first monomial translations can be computed (or, equivalently, only the first three moments can be observed), only linear polynomials enter the analysis, which makes testing much simpler. Denote the support of demand at budget set \((\mathbf{p},y)\) as \([q_{min},q_{max}]\) with \(0\leq q_{min}\leq q_{max}\leq y/p\). In terms of the first two translations, only four polynomials need to be checked: (i) \(1\); (ii) \(x\); (iii) \(-q_{min}+x\); and (iv) \(q_{max}-x\). This translates to the conditions:

\[\begin{split}\Lambda(1,\mathbf{p},y)&\leq 0,\\ \Lambda(x,\mathbf{p},y)&\leq 0,\\ -q_{min}\Lambda(1,\mathbf{p},y)+\Lambda(x,\mathbf{p},y)&\leq 0,\\ q_{max}\Lambda(1,\mathbf{p},y)-\Lambda(x,\mathbf{p},y)&\leq 0.\end{split}\]

This means that in addition to monomial negativity, only \(q_{max}\Lambda(1,\mathbf{p},y)\leq\Lambda(x,\mathbf{p},y)\leq q_{min}\Lambda(1,\mathbf{p},y)\) needs to be checked. Figure 2 shows the admissible set of solutions shaded in red.

[Figure 2: Test for rationality based on the first three conditional moments]
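A minimal sketch of this simplified test, assuming the two translations \(\Lambda(1,\mathbf{p},y)\) and \(\Lambda(x,\mathbf{p},y)\) and the support bounds have already been estimated (the numbers below are placeholders, not estimates from any data):

```python
def rationality_check(lam0, lam1, q_min, q_max, tol=1e-8):
    """Remark 11: check the four linear-polynomial conditions.

    lam0 = Lambda(1, p, y) = dM1/dp + (1/2) dM2/dy   (zeroth translation)
    lam1 = Lambda(x, p, y) = (1/2) dM2/dp + (1/3) dM3/dy  (first translation)
    q_min, q_max: bounds of the support of demand at the budget set.
    Returns True if the moment data is not rejected.
    """
    return all([
        lam0 <= tol,                   # polynomial 1
        lam1 <= tol,                   # polynomial x
        -q_min * lam0 + lam1 <= tol,   # polynomial -q_min + x
        q_max * lam0 - lam1 <= tol,    # polynomial  q_max - x
    ])

# Placeholder values standing in for estimated translations.
print(rationality_check(lam0=-0.4, lam1=-0.9, q_min=1.0, q_max=4.0))  # True
```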
### Many-good Case

We now consider the case where we have multiple goods. From the Slutsky equation (5), we have that

\[\mathbb{E}\left[\frac{\partial}{\partial p}\mathbf{h}^{\omega}(\mathbf{p},u)\right]=\mathbb{E}\left[\frac{\partial}{\partial p}\mathbf{q}^{\omega}(\mathbf{p},y)\right]+\mathbb{E}\left[\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)(\mathbf{q}^{\omega}(\mathbf{p},y))^{\intercal}\right]. \tag{15}\]

Without imposing Slutsky symmetry, \(\mathbb{E}\left[\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)(\mathbf{q}^{\omega}(\mathbf{p},y))^{\intercal}\right]\) is not automatically identified from the first two conditional moments of demand. This is because the second-moment matrix \(\mathbf{M}_{2}\) is symmetric, which entails a loss of "degrees of freedom". This is different from the two-good case, where there is no loss of information because symmetry holds trivially.

**Proposition 1**.: Without Slutsky symmetry being imposed,

\[\mathbb{E}\left[\frac{\partial}{\partial p}\mathbf{q}^{\omega}(\mathbf{p},y)+\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)(\mathbf{q}^{\omega}(\mathbf{p},y))^{\intercal}\right]\]

is not identified from the first two moments of demand.

Proof.: For simplicity, we consider the case of two goods plus a numeraire. Firstly, observe that the first part of the expectation, namely \(\mathbb{E}\left[\frac{\partial}{\partial p}\mathbf{q}^{\omega}(\mathbf{p},y)\right]\), is identified because it is simply the price derivative of the first moment of demand. From the definition of the second conditional moment,

\[\mathbf{M}_{2}(\mathbf{p},y)=\mathbb{E}\begin{bmatrix}(q_{1}^{\omega}(\mathbf{p},y))^{2}&q_{1}^{\omega}(\mathbf{p},y)q_{2}^{\omega}(\mathbf{p},y)\\ q_{1}^{\omega}(\mathbf{p},y)q_{2}^{\omega}(\mathbf{p},y)&(q_{2}^{\omega}(\mathbf{p},y))^{2}\end{bmatrix},\]

which is a symmetric matrix. However, one needs to identify

\[\mathbb{E}\begin{bmatrix}q_{1}^{\omega}(\mathbf{p},y)\frac{\partial}{\partial y}q_{1}^{\omega}(\mathbf{p},y)&q_{1}^{\omega}(\mathbf{p},y)\frac{\partial}{\partial y}q_{2}^{\omega}(\mathbf{p},y)\\ q_{2}^{\omega}(\mathbf{p},y)\frac{\partial}{\partial y}q_{1}^{\omega}(\mathbf{p},y)&q_{2}^{\omega}(\mathbf{p},y)\frac{\partial}{\partial y}q_{2}^{\omega}(\mathbf{p},y)\end{bmatrix}.\]

Even though the diagonal terms of the matrix are pinned down, the off-diagonal terms cannot be identified because the information in the variance is redundant. In particular, we can identify

\[\mathbb{E}\left[\frac{\partial}{\partial y}(q_{1}^{\omega}(\mathbf{p},y)q_{2}^{\omega}(\mathbf{p},y))\right]=\mathbb{E}\left[q_{1}^{\omega}(\mathbf{p},y)\frac{\partial}{\partial y}q_{2}^{\omega}(\mathbf{p},y)+\frac{\partial}{\partial y}q_{1}^{\omega}(\mathbf{p},y)q_{2}^{\omega}(\mathbf{p},y)\right],\]

but not the terms on the right-hand side separately. This means there can exist different models that disagree on the value of \(\mathbb{E}[\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)(\mathbf{q}^{\omega}(\mathbf{p},y))^{\intercal}]\) but are observationally equivalent in terms of mean and variance.

This result shows that if we remain agnostic about rationality, the above term is not identified from the first two moments of demand. However, if we assume that individuals satisfy Slutsky symmetry, this exactly identifies the Slutsky terms. This leads to the following theorem (which appeared earlier as Lemma 3).
**Theorem 5**.: _If individuals obey Slutsky symmetry, the first two moments identify the average Slutsky matrix \(\mathbb{E}\left[\frac{\partial}{\partial p}\mathbf{h}^{\omega}(\mathbf{p},u)\right]\)._

Proof.: Using the definition of the conditional moments in Expression (7), we know that

\[\frac{\partial}{\partial y}\mathbf{M}_{2}(\mathbf{p},y)=\frac{\partial}{\partial y}\left(\int\mathbf{q}^{\omega}(\mathbf{p},y)(\mathbf{q}^{\omega}(\mathbf{p},y))^{\intercal}dF(\omega)\right).\]

Interchanging the derivative and integral operators gives us

\[\begin{split}\frac{\partial}{\partial y}\mathbf{M}_{2}(\mathbf{p},y)&=\int\frac{\partial}{\partial y}(\mathbf{q}^{\omega}(\mathbf{p},y)(\mathbf{q}^{\omega}(\mathbf{p},y))^{\intercal})dF(\omega)\\ &=\int\left[\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)(\mathbf{q}^{\omega}(\mathbf{p},y))^{\intercal}+\mathbf{q}^{\omega}(\mathbf{p},y)\left(\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)\right)^{\intercal}\right]dF(\omega).\end{split}\]

From the Slutsky equation (5), we have that

\[\frac{\partial}{\partial p}\mathbf{h}^{\omega}(\mathbf{p},u)=\frac{\partial}{\partial p}\mathbf{q}^{\omega}(\mathbf{p},y)+\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)(\mathbf{q}^{\omega}(\mathbf{p},y))^{\intercal},\]

which is symmetric due to Slutsky symmetry. Adding this equation to its transpose yields

\[2\frac{\partial}{\partial p}\mathbf{h}^{\omega}(\mathbf{p},u)=\frac{\partial}{\partial p}\mathbf{q}^{\omega}(\mathbf{p},y)+\left(\frac{\partial}{\partial p}\mathbf{q}^{\omega}(\mathbf{p},y)\right)^{\intercal}+\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)(\mathbf{q}^{\omega}(\mathbf{p},y))^{\intercal}+\mathbf{q}^{\omega}(\mathbf{p},y)\left(\frac{\partial}{\partial y}\mathbf{q}^{\omega}(\mathbf{p},y)\right)^{\intercal},\]

such that

\[\mathbb{E}\left[\frac{\partial}{\partial p}\mathbf{h}^{\omega}(\mathbf{p},u)\right]=\frac{1}{2}\left[\frac{\partial}{\partial p}\mathbf{M}_{1}(\mathbf{p},y)+\left(\frac{\partial}{\partial p}\mathbf{M}_{1}(\mathbf{p},y)\right)^{\intercal}+\frac{\partial}{\partial y}\mathbf{M}_{2}(\mathbf{p},y)\right].\]

**Remark 12**.: Proposition 1, together with the above theorem, implies that Slutsky symmetry is untestable from the first two conditional moments of demand. To see this, note that several values of \(\mathbb{E}[q^{\omega}\frac{\partial}{\partial y}q^{\omega}]\) agree with a given mean-variance system, but only one such value arises from a symmetric system. Therefore there must be asymmetric systems that agree with the mean-variance data, alongside the symmetric systems constructed above. This renders symmetry untestable.

**Remark 13**.: Theorem 5 shows that two symmetric models that generate the same conditional mean and variance of demand (e.g., see the example in Lemma 4) have the same average substitution. Therefore, the first two moments pin down average substitution under Slutsky symmetry. Figure 3 provides a Venn diagram of these results.

[Figure 3: Symmetry and rationalizability]

**Remark 14**.: Assuming Slutsky symmetry, one can test negative semidefiniteness of the population based on the first two moments of demand. If they are rationalizable,

\[\mathbf{P}(\mathbf{p},y)=\frac{\partial}{\partial p}\mathbf{M}_{1}(\mathbf{p},y)+\frac{1}{2}\frac{\partial}{\partial y}\mathbf{M}_{2}(\mathbf{p},y)\]

must be negative semidefinite. This follows from the fact that \(\frac{\partial}{\partial p}\mathbf{h}^{\omega}(\mathbf{p},u)\) is negative semidefinite and \(\mathbf{P}(\mathbf{p},y)+\mathbf{P}(\mathbf{p},y)^{\intercal}=2\,\mathbb{E}\left[\frac{\partial}{\partial p}\mathbf{h}^{\omega}(\mathbf{p},u)\right]\).28

Footnote 28: Note that for a square matrix \(\mathbf{A}\) it holds that \(\mathbf{v}^{\intercal}(\mathbf{A}+\mathbf{A}^{\intercal})\mathbf{v}=2\mathbf{v}^{\intercal}\mathbf{A}\mathbf{v}\).
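The test in Remark 14 reduces to an eigenvalue check once \(\partial\mathbf{M}_{1}/\partial\mathbf{p}\) and \(\partial\mathbf{M}_{2}/\partial y\) have been estimated; a minimal sketch with made-up matrices (not estimates):

```python
import numpy as np

def slutsky_nsd_test(dM1_dp, dM2_dy, tol=1e-8):
    """Remark 14: P = dM1/dp + 0.5 * dM2/dy must be negative semidefinite
    under Slutsky symmetry. Since v'Pv = v'Sv for S = (P + P')/2, it
    suffices to check the eigenvalues of the symmetrized matrix."""
    P = dM1_dp + 0.5 * dM2_dy
    S = 0.5 * (P + P.T)
    return bool(np.all(np.linalg.eigvalsh(S) <= tol))

# Illustrative 2x2 derivative matrices (two goods plus a numeraire).
dM1_dp = np.array([[-0.60,  0.10],
                   [ 0.20, -0.50]])
dM2_dy = np.array([[ 0.40,  0.10],
                   [ 0.10,  0.30]])
print(slutsky_nsd_test(dM1_dp, dM2_dy))  # True for these numbers
```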
Akin to the two-good case, we have similar restrictions on the higher moments of demand. The difference is that the monomial translation of any moment is now a tensor form. The following theorem provides necessary conditions for the moments to be generated by a demand system.

**Theorem 6**.: _For any \(n\in\mathbb{N}_{+}\), the following \((n+1)\)-tensor form is negative semidefinite:29_

\[n^{-1}\frac{\partial}{\partial p}\mathbf{M}_{n}+(n+1)^{-1}\frac{\partial}{\partial y}\mathbf{M}_{n+1}.\]

Footnote 29: We say a tensor form \(\mathbf{T}_{n}^{\omega}\) is _negative semidefinite_ if

\[\mathbf{T}_{n}^{\omega}(\underbrace{\mathbf{v}\times\mathbf{v}\times\cdots\times\mathbf{v}}_{n\text{ times}})=\sum_{i_{1},i_{2},\ldots,i_{n}=1}^{k}t_{i_{1},i_{2},\ldots,i_{n}}^{\omega}v_{i_{1}}v_{i_{2}}\ldots v_{i_{n}}\leq 0,\quad\forall\mathbf{v}\in\mathbb{R}^{k}.\]

Proof.: The proof is relegated to Appendix C.

Notice that the form is of order \(n+1\) because differentiating an \(n\)-form with respect to price increases the order of the form by one.

**Remark 15**.: Because the restriction in Theorem 6 is a test of negative semidefiniteness (and not of symmetry), any small perturbation of a finite and rationalizable moment sequence is itself also rationalizable. This is because negative semidefiniteness is an open condition.

**Remark 16**.: Finally, the restrictions in Theorems 4 and 6 do not depend on the levels of the moments, but only on their changes with respect to prices and income. This leads to two fundamental properties of these restrictions. First, if there is additively separable i.i.d. measurement error in the observed demands, these restrictions can still be estimated consistently. Second, none of our restrictions depend on statistical constraints on moments, such as non-negativity (for even moments) or Chebyshev-type tail inequalities.

## 6 Empirical Illustration

We now apply our results to consumer data from cross-sectional household budget surveys. In Section 6.1, we first describe our data and lay out the estimation procedure. Section 6.2 outlines the price changes and compares our results with those of the RA approach.

### Data and Estimation

The data we employ consists of repeated cross-sections of household budget surveys from the UK. In particular, we use 14 waves from the Expenditure and Food Survey (2006-2007) and the Living Costs and Food Survey (2008-2019). These contain detailed observations on households' expenditures, income, and demographic characteristics. Price data is collected from the Office for National Statistics. We aggregate households' expenditures into four broad categories: (i) food, (ii) housing, (iii) transportation, and (iv) other nondurables and services. "Food" consists of expenditures on food and non-alcoholic beverages. "Housing" encompasses goods and services for the use, maintenance, water supply, and heating of the household's dwelling. "Transport" covers the purchase of vehicles and expenditures on maintenance and fuel, passenger transport services, and courier services.
"Other nondurables and services" encompasses expenditures on alcohol, tobacco, clothing and footwear, communication, recreation, and restaurants and hotels.30 Footnote 30: More details on the construction of these categories is provided in Appendix E. To facilitate nonparametric estimation, we impose some sample restrictions. We drop households with zero expenditures on rent or transportation. We also remove those households with budget shares outside the 2nd-98th percentile range for at least one of the four categories. To further reduce the influence of outliers, we trim households with total expenditures and disposable income outside the 2nd-98th percentile range. Our final estimation sample consists of 12,494 households; descriptive statistics are relegated to Appendix E. To apply our method for average welfare, we need to estimate the first two conditional moments of demand. The conditional means and variances are modelled semiparametrically using partially linear kernel regression (Racine and Li, 2004).31 This specification is flexible in budget sets and avoids the curse of dimensionality. In particular, for every category \(k\) we have Footnote 31: As a consequence of Remark 1, it suffices to model the mean and variance for every category separately if only a single price is changed at a time. \[\mathbb{E}[q_{k}^{\omega}(\mathbf{p},y,w)^{n}]=h_{kn}(\mathbf{p},y)+\mathbf{ \delta}_{kn}^{\prime}\mathbf{d},\quad n=1,2, \tag{16}\] where \(h_{kn}\) is a nonparametric function in prices and income, and \(\mathbf{\delta}_{kn}\) is a vector that captures the impact of household characteristics \(\mathbf{d}\).32 The latter controls for observed heterogeneity and consists of the number of adults, number of children, number of retired, and number of earners in the household. Footnote 32: The category “other nondurables and services” will be treated as the numeraire. We calculate the required price and income derivatives on the basis of the estimated moment functions in Expression (16). Following Paluch, Kneip, and Hildenbrand (2012), we test whether our results are sensitive to outlying values of these derivatives. We find that trimming derivatives outside the 5th-95th percentile range does not qualitatively change our results. A disadvantage of our price data is that it contains does not contain household-level variation. To increase cross-sectional price variation, we make use of Stone-Lewbel price indices (Hoderlein and Mihaleva, 2008). These household-specific indices make use of the variability in expenditures on nested commodities. For every category we will use the variation in budget shares of commodities one COICOP level lower.33 Footnote 33: The Classification of Individual Consumption According to Purpose (COICOP) harmonizes the classification of household expenditures across countries. To ensure tractability, we will assume that the within-category preferences over these nested commodities are Cobb-Douglas.34 Footnote 34: The assumption of Cobb-Douglas preferences delivers a simple expression for the price indices. Denote \(q_{ikj}\) the demand of individual \(i\) for good \(j\) in category \(k\) and let \(p_{kj}\) be the price of this good. 
A disadvantage of our price data is that it does not contain household-level variation. To increase cross-sectional price variation, we make use of Stone-Lewbel price indices (Hoderlein and Mihaleva, 2008). These household-specific indices exploit the variability in expenditures on nested commodities. For every category, we use the variation in budget shares of commodities one COICOP level lower.33 To ensure tractability, we assume that the within-category preferences over these nested commodities are Cobb-Douglas.34

Footnote 33: The Classification of Individual Consumption According to Purpose (COICOP) harmonizes the classification of household expenditures across countries.

Footnote 34: The assumption of Cobb-Douglas preferences delivers a simple expression for the price indices.

Denote by \(q_{ikj}\) the demand of individual \(i\) for good \(j\) in category \(k\), and let \(p_{kj}\) be the price of this good. The price index for the \(i\)th individual for category \(k\) becomes

\[p_{ik}=\left(\prod_{j=1}^{n_{k}}\overline{w}_{kj}^{-\overline{w}_{kj}}\right)^{-1}\prod_{j=1}^{n_{k}}\left(\frac{p_{kj}}{w_{ikj}}\right)^{w_{ikj}},\]

where \(w_{ikj}=\frac{q_{ikj}p_{kj}}{\sum_{j=1}^{n_{k}}q_{ikj}p_{kj}}\) and \(\overline{w}_{kj}=\frac{1}{n}\sum_{i=1}^{n}w_{ikj}\). Notice that in this approach, the between-category preferences remain arbitrarily flexible.
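A minimal sketch of this household-specific index for a single category follows; the quantity and price arrays are hypothetical inputs.

```python
import numpy as np

def stone_lewbel_index(q, p):
    """Cobb-Douglas Stone-Lewbel price index for one category.

    q : (n, nk) quantities of the nk nested commodities, n households.
    p : (nk,)   prices of the nested commodities.
    Returns an (n,) vector of household-specific indices p_ik.
    Assumes strictly positive budget shares for every household.
    """
    expend = q * p                                   # spending per commodity
    w = expend / expend.sum(axis=1, keepdims=True)   # budget shares w_ikj
    w_bar = w.mean(axis=0)                           # average shares
    norm = 1.0 / np.prod(w_bar ** (-w_bar))          # normalization term
    return norm * np.prod((p / w) ** w, axis=1)
```

In the application, this computation would be repeated for each of the four expenditure categories, using the commodities one COICOP level below.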
### 6.2 Empirical Results

We focus on the welfare impact of six distinct scenarios: a 5 or 10% increase in the price of either food, housing, or transportation. In every scenario, the prices of the other goods are kept constant. To allow for meaningful interpersonal welfare comparisons, we fix the vector of baseline prices at the sample mean for every household.

Table 2 shows the estimates for the average CV using the RA approach and our approach. As the demands for all goods are normal, the estimate from the RA approach lies below our estimate in each of the six scenarios. This is especially true for housing and transportation, where the relative bias can be as high as 17.9% (5% increase for transportation) or 27.2% (10% increase for transportation). These differences imply that the RA approach may significantly understate the welfare cost of the price increases.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
category & \multicolumn{2}{c}{\(\mathbb{E}[CV]\)} \\
\cline{2-3}
 & RA approach & our approach \\
\hline
_5\% price increase_ & & \\
\hline
food & 3.19 & 3.22 \\
housing & 11.31 & 12.38 \\
transportation & 4.31 & 5.08 \\
\hline
_10\% price increase_ & & \\
\hline
food & 7.06 & 7.20 \\
housing & 30.75 & 35.00 \\
transportation & 10.66 & 13.57 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Average welfare impact of a 5 and 10% price increase

Moreover, as depicted in Figure 4, this relative bias has a distributional dimension. For food, the bias is larger for households with substantial weekly incomes. For transportation, the bias is highest for households with a weekly disposable income of around 300 pounds. In the Appendix, we provide more insight into these distributional patterns by plotting the variance in consumption with respect to income. It is shown there that the error between both approaches is proportional to this variance under common parametric specifications (e.g., the Almost Ideal Demand System).

Figure 4: Relative deviation in average welfare by income level (5% price increases)

## 7 Conclusion

In this paper, we introduce novel methods to approximate welfare changes caused by price changes. To do so, we show that the conditional moments of demand contain information about the distribution of individuals' income effects. We use this information to conduct more accurate counterfactual exercises in applied welfare analysis. We also demonstrate that better approximations cannot be obtained from cross-sections. Furthermore, we show that the conditional moments of demand carry empirical content and can be used to test individual rationality, in particular the negative semidefiniteness of the Slutsky matrix.

Going forward, there is room for future work in at least two directions. First, it would be interesting to understand what additional power short panels on consumers would give for estimating the above counterfactuals; first steps in this direction have been made by Crawford (2019) and Cooprider, Hoderlein, and Alexander (2022). Second, concerning stochastic rationalizability, it remains a wide-open question whether Slutsky symmetry can be tested on cross-sectional data and, if so, how to construct such tests. Whether symmetry carries any empirical content at the level of cross-sections is in itself a very interesting question.